Abstract
A systematic analysis of methods for computing the trajectories of solid-phase particles applied in modern astrophysics codes designed for modeling gas–dust circumstellar disks has been carried out for the first time. We consider the motion of grains whose velocities are determined mainly by the gas drag, that is, for which the stopping time (the relaxation time of the dust velocity to the gas velocity) $t_{\rm stop}$ is less than or comparable to the rotation period. The methods are analyzed from the point of view of their suitability for computing the motions of small bodies, including dust grains less than 1 $\mu$m in size, which are strongly coupled to the gas. Two test problems with analytical solutions are considered. Fast first-order accurate methods that make it possible to avoid additional restrictions on the time step size $\tau$ due to gas drag in computations of the motion of grains of any size are presented. For the conditions of a circumstellar disk, the error in the velocity computations obtained when using some stable methods becomes unacceptably large when the time step size is $\tau>t_{\rm stop}$. For the radial migration of bodies that drift along nearly Keplerian orbits, an asymptotic approximation, sometimes called the short friction time approximation or drift flux model, gives a relative error of the radial-velocity computations equal to $St^{2}$, where $St$ is the Stokes number, the ratio of the stopping time of the body to some fraction of the rotation period (the dynamical time scale) in the disk.

Analysis of Methods for Computing the Trajectories of Dust Particles in a Gas–Dust Circumstellar Disk

O.P. Stoyanovskaya (Boreskov Institute of Catalysis, Lavrentieva 5, 630090 Novosibirsk, Russia; [email protected]), E.I. Vorobyov (Institute of Astrophysics, University of Vienna, Vienna, Austria; [email protected]), V.N.
Snytnikov (Boreskov Institute of Catalysis, Lavrentieva 5, 630090 Novosibirsk, Russia; [email protected])

DOI: 10.1134/S1063772917120071

Keywords: protoplanetary disks, two-phase medium, two-fluid approach, splitting method

1 Introduction
Molecular clouds, in which stars form with their circumstellar disks, are composed of 98–99% hydrogen and helium. All the remaining elements are contained in solid-phase dust particles, some of which go into forming planets in the circumstellar disks. The fraction of solid-phase particles in these disks relative to hydrogen and helium can change appreciably in time and space. In the process of forming planets from dust particles, bodies of ever larger sizes form along the way, up to multi-kilometer planetesimals and protoplanets. During the migration of bodies through a disk, the growth and fragmentation of solid-phase particles can change the gas-dynamical regimes for the interaction of individual bodies and dust particles with the ambient gas. The change in these regimes, from free-molecular interactions with the gas flowing around bodies to flow in which the gas acts as an approximately continuous medium, is especially important for modeling accreting protoplanetary disks in their early massive stages associated with the growth of dust particles. Currently, modeling of processes in gas–dust disks, especially two-phase circumstellar disks, is usually based on a gas-dynamical approximation (see, for example, [7, 26]). In this approximation, with a two-fluid approach, the dynamics of the dust-grain cloud is calculated separately from the gas dynamics using gas-dynamical equations (e.g., [25, 5, 19, 2, 3, 6]). In a number of cases, it is convenient to represent the mean velocity of the medium and, separately, the relative velocity between the gas and bodies using a gas-dynamical system of equations for a two-phase medium of dust and gas (e.g., [13, 11]). This is sometimes called a one-fluid approach.
The initial systems of equations for the one-fluid and two-fluid approaches are mathematically equivalent (e.g., [11]), and the way of describing the system is determined by the chosen numerical method. We note separately that, in a number of astrophysical problems, it is possible to consider particles with different velocities located at one point in space; in other words, an intersection of particle trajectories is allowed in the mathematical model for the dust subsystem. Lagrangian–Eulerian methods (e.g., Particle-in-Cell [8]) and fully Lagrangian methods (e.g., Smoothed Particle Hydrodynamics [15] or the full Lagrangian method of Osiptsov) have been developed for such problems. Such approaches can be used to simulate the collective dynamics of large bodies in circumstellar disks (e.g., [21]). All these approaches for a two-phase medium require the integration over time of the equation for the velocity of the bodies or the relative velocity of the gas and bodies, which is influenced by aerodynamic drag, gravitation, and other forces. A computational difficulty arises when the typical time for the aerodynamic forces (the time over which the velocity of the dust relative to the gas is appreciably changed by drag between the dust and gas) is several orders of magnitude shorter than the typical time for the action of other forces (other names for the former time scale are the settling time, the time for relaxation of the dust velocity to the gas velocity, and the stopping time). For explicit methods, the presence of several time scales in the system means that the time step must be smaller than the minimum typical time, while the integration must be carried out over an interval several times longer than the maximum typical time scale.
For problems where each time step is computationally expensive (two- and three-dimensional equations, taking into account self-gravity or magnetic fields together with other physical–chemical processes), explicit methods for integrating the dynamics of small dust consume unacceptably large computational resources. One approach to resolving this problem is a transition to an asymptotic approximation [9], known as the short friction time approximation or drift flux model. In this approximation, the velocities of the bodies and the gas are linked by a simple algebraic relation, which we derive in Section 3.1. Section 9 presents a derivation of the necessary condition for the applicability of this approximation, which demonstrates that it provides correct results only for dust of a limited size. On the other hand, simulation results (e.g., [4]) and observations indicate the growth of dust sizes from 1 $\mu$m to 1–100 cm or more over the first 10 million years of the evolution of a circumstellar disk. Therefore, universal algorithms enabling the integration of the equations of motion for dust with sizes from micrometers to 1–10 m are of interest when modeling the dynamics of a gas–dust disk over long time scales. Note that calculation of the orbits of larger bodies, whose velocities are determined mainly by gravitation rather than gas drag, requires high-order accurate methods able to reproduce the trajectories of the bodies over a large number of orbits. Here, we limit our consideration to numerical methods for bodies for which the time for the relaxation of the dust velocity to the gas velocity $t_{\rm stop}$ is shorter than or comparable to the orbital period (dynamical time scale). We present for the first time a systematic analysis of the numerical schemes for computing the trajectories of dust particles applied in modern astrophysical problems.
We have obtained for the first time expressions for the local errors of the approximations used in the schemes considered as functions of the parameters of the problem and of the computational path, and have determined theoretically the orders of these approximations. We present recommendations for choosing methods for computing the trajectories of dust particles based on our results. The paper has the following structure. Section 2 presents the regimes of aerodynamic drag in a circumstellar disk. Section 3 describes the general approaches used to construct fast numerical schemes. The methods we tested are presented in Section 4. Sections 5 and 7 describe test problems with analytical solutions, and the results of estimating the actual accuracy of these methods are presented in Sections 6 and 8. Section 9 presents a derivation of the necessary condition for the applicability of the short friction time approximation, and Section 10 summarizes our results.

2 Regimes of aerodynamic drag in a circumstellar disk
The aerodynamic force exerted by a gas with density $\rho$ on a spherical body with radius $a$ has the form $$\mathbf{F}_{\rm drag}=-\frac{1}{2}C_{D}\pi a^{2}\rho\|\mathbf{v}-\mathbf{u}\|(\mathbf{v}-\mathbf{u}),$$ (1) where $\mathbf{u}$ is the velocity of the gas, $\mathbf{v}$ is the velocity of the body, and $C_{D}$ is the dimensionless drag coefficient, whose value depends on the size of the body and the parameters of the flow around it [24]: $$C_{D}=\begin{cases}\displaystyle\frac{8c_{s}}{3\|\mathbf{u}-\mathbf{v}\|},&a<\displaystyle\frac{9}{4}\lambda,\ \textrm{Epstein regime},\\ 24R_{e}^{-1},&R_{e}<1,\ \textrm{linear Stokes regime},\\ 24R_{e}^{-0.6},&1<R_{e}<800,\\ 0.44,&R_{e}>800,\end{cases}$$ (2) where $\lambda$ is the mean free path of molecules in the gas, $R_{e}$ is the Reynolds number, and $c_{s}$ is the sound speed in the gas. Bodies whose radii satisfy $$a<\displaystyle\frac{9}{4}\lambda$$ (3) undergo collisions with individual molecules of the gas.
This is called the Epstein or free-molecular flow regime. Larger bodies interact with the gas as if it were a continuous medium; this is the Stokes regime, in which the drag coefficient depends on the Reynolds number $R_{e}$. A schematic of the interaction of bodies with gas in the Epstein and Stokes regimes is presented in Fig. 1. Let us now estimate the size of a body that can be described using the Epstein regime in a circumstellar disk. Following [23], we adopt the characteristic dependence of the disk surface density on radius $\Sigma=\Sigma_{0}\left(\displaystyle\frac{r}{r_{0}}\right)^{-1}$ ($\Sigma=\int^{H}_{-H}\rho dz$, where $\rho$ is the gas volume density in the disk and $H$ is the disk height). Let us consider a disk that extends from 1 to 100 AU. We then obtain $\Sigma_{0}=300$ g/cm${}^{2}$ at $r_{0}=10$ AU for a disk mass of $M_{\rm disc}=0.2M_{\odot}$ and $\Sigma_{0}=30$ g/cm${}^{2}$ at $r_{0}=10$ AU for a disk mass of $M_{\rm disc}=0.02M_{\odot}$. The resulting surface-density profiles are presented in the right panel of Fig. 2. The mean free path of a gas molecule is determined by $\lambda=\displaystyle\frac{m_{\rm H_{2}}}{\rho\sigma}$, where $m_{\rm H_{2}}$ is the mass of a hydrogen molecule and $\sigma$ is the cross section for elastic collisions between hydrogen molecules. Setting $H=0.1r$, $\rho=\displaystyle\frac{\Sigma}{H}$, and using the values $m_{\rm H_{2}}=3.32\times 10^{-24}$ g and $\sigma=7\times 10^{-16}$ cm${}^{2}$, we can use (3) to determine the maximum size of particles in the disk that interact with the gas in the free-molecular flow (Epstein) regime. The derived particle sizes at various disk radii are presented in the left panel of Fig. 2. We can see that bodies with radii less than 1 m interact with the ambient gas in the Epstein regime in the outer part of an axially symmetric disk (distances of more than 10 AU from the protostar).
Moreover, the formation of self-gravitating clumps at a radius of 100 AU, whose densities exceed the background density by more than a factor of 100, still makes it possible to use the Epstein drag regime for bodies with sizes up to $\approx$1 m. Thus, in the outer part of a disk, the flow of gas around solid bodies with sizes from those of grains to those of boulders (and, in the case of low-mass disks, to the sizes of planetesimals) can be described in the free-molecular flow (Epstein) regime. In this approach, the acceleration $\mathbf{g}_{\rm drag}$ experienced by a body interacting with the gas can be written $$\mathbf{g}_{\rm drag}=\frac{\mathbf{u}-\mathbf{v}}{t_{\rm stop}},$$ (4) where $t_{\rm stop}$ is the stopping time of the particle, which is given by $$t_{\rm stop}=\displaystyle\frac{m_{\rm s}\|\mathbf{u}-\mathbf{v}\|}{\|\mathbf{F}_{\rm drag}\|},$$ (5) and $m_{\rm s}$ is the mass of the particle, $m_{\rm s}=4\pi\rho_{\rm s}a^{3}/3$, where $\rho_{\rm s}$ is the density of the substance of which the dust consists. By virtue of (1) and (2), we obtain $$t_{\rm stop}=\frac{a\rho_{\rm s}}{\rho c_{s}}.$$ (6) For a circumstellar disk, $\displaystyle\frac{c_{\rm s}}{v_{\rm K}}=\frac{H}{r}$, where $v_{\rm K}=\displaystyle\sqrt{\frac{GM}{r}}$ is the Keplerian velocity, $M$ is the mass of the star, and $G$ is the gravitational constant. Then $$t_{\rm stop}=\frac{a\rho_{\rm s}}{\Sigma\Omega_{K}},$$ (7) where $\displaystyle\Omega_{K}=\frac{v_{K}}{r}$ is the Keplerian frequency, which characterizes the dynamical time scale for the motion of a particle in a Keplerian orbit in the circumstellar disk. We characterize the coupling of dust to gas in a circumstellar disk due to aerodynamic drag using the Stokes number, or dimensionless stopping time, $$St=t_{\rm stop}\Omega_{K},$$ (8) which takes the form $St=\displaystyle\frac{a\rho_{\rm s}}{\Sigma}$ in the Epstein regime. Thus, the smaller the radius of a body, the stronger it is tied to the gas ($St<1$).
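As an illustrative sketch (not code from the paper), Eqs. (7) and (8) can be evaluated directly. The constant values below are assumptions for illustration: cgs physical constants and the grain density $\rho_{\rm s}=2.2$ g/cm$^3$ used later in the text.

```python
import math

# Physical constants in cgs units (assumed illustrative values)
G = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33        # solar mass, g
AU = 1.496e13           # astronomical unit, cm

def omega_K(r_au, M=M_SUN):
    """Keplerian frequency Omega_K = v_K / r = sqrt(G*M/r^3)."""
    r = r_au * AU
    return math.sqrt(G * M / r**3)

def t_stop_epstein(a_cm, Sigma, r_au, rho_s=2.2):
    """Stopping time from Eq. (7): t_stop = a*rho_s / (Sigma*Omega_K)."""
    return a_cm * rho_s / (Sigma * omega_K(r_au))

def stokes_number(a_cm, Sigma, rho_s=2.2):
    """Stokes number in the Epstein regime, Eq. (8): St = a*rho_s / Sigma."""
    return a_cm * rho_s / Sigma

# Example: at Sigma = 100 g/cm^2, a 1 micron grain is very strongly
# coupled to the gas (St << 1), while a 10 m boulder is not (St >> 1).
st_small = stokes_number(1e-4, 100.0)
st_large = stokes_number(1e3, 100.0)
```

By construction, `t_stop_epstein(...) * omega_K(...)` reproduces `stokes_number(...)`, mirroring the relation $St = t_{\rm stop}\Omega_{K}$.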
Setting $\rho_{\rm s}=2.2$ g/cm${}^{3}$ and using (3), we can estimate the maximum Stokes number for a body in a circumstellar disk interacting with the gas in the Epstein regime. The resulting values are presented in the central panel of Fig. 2; we can see that, under the disk conditions considered, bodies interacting with the gas in the free-molecular flow regime can both experience strong coupling with the gas ($St\ll 1$) and move essentially independently of the gas ($St\gg 1$). Note that $St\ll 1$ does not mean that the dust velocity coincides with the gas velocity, i.e., that dust drift is absent.

3 Methods for computing the dust velocity
To understand the requirements on numerical algorithms used to compute the dynamics of solid bodies in a disk, we consider a model equation of motion for a single body in a medium with friction. For simplicity, we analyze the equation for one velocity component (the transition to a system of ordinary differential equations for the three velocity components is straightforward): $$\frac{dv}{dt}=g+g_{\rm drag},$$ (9) where $g_{\rm drag}=\displaystyle\frac{u-v}{t_{\rm stop}}$ is the acceleration of the body due to friction in the gas, $g$ is its acceleration due to non-aerodynamic forces, and $u(t)$ is the gas velocity as a function of time. An explicit first-order accurate method for this equation has the form $$\displaystyle\frac{v^{n+1}-v^{n}}{\tau}=g^{n}+\displaystyle\frac{u^{n}-v^{n}}{t_{\rm stop}},$$ (10) where $v^{n}$ is the velocity of the body at the current time $t$ (at time step $n$) and $v^{n+1}$ is the velocity of the body at the following time $t+\tau$ (at time step $n+1$). If $g^{n}=0$ and $u^{n}=0$, Eq.
(10) acquires the form $$v^{n+1}=\left(1-\displaystyle\frac{\tau}{t_{\rm stop}}\right)v^{n}=\left(1-\displaystyle\frac{\tau}{t_{\rm stop}}\right)^{n+1}v_{0}.$$ (11) It follows from (11) that, for $v^{n+1}$ to remain bounded for any $n$, it is necessary that $\left|1-\displaystyle\frac{\tau}{t_{\rm stop}}\right|<1$, which is equivalent to $$\tau<2t_{\rm stop}.$$ (12) Let us consider the effect of this constraint on the time step size. Suppose a grain has a radius of 1 $\mu$m; for a massive disk in which $\Sigma=100$ g/cm${}^{2}$ at $r=20$ AU, we find from (7) that $t_{\rm stop}\approx 100$ s. When integrating the equations of the disk dynamics (usually over ten or more orbital periods of the outer part of the disk), the time step size is determined from the Courant condition $\tau<\displaystyle\frac{\Delta r}{v}$, where $\Delta r$ is the step for discretization of the spatial derivatives (the size of a grid cell, or the smoothing length in Smoothed Particle Hydrodynamics). Let us estimate the typical size of this step. Taking the inner disk radius to be $r=1$ AU and dividing a full rotation into 256 cells in cylindrical coordinates, we obtain $$\tau=\displaystyle\frac{\Delta r}{v_{\rm K}}=\frac{2\pi r}{256v_{\rm K}}\approx 1.23\times 10^{5}\ \textrm{s}.$$ (13) Satisfying the condition (12) would then require a factor of about 1000 more time steps. Experience with solving stiff problems indicates that constraints on the time step size can be eased appreciably if an implicit scheme is used, such as the simplest version of the implicit first-order accurate scheme: $$\displaystyle\frac{v^{n+1}-v^{n}}{\tau}=g^{n+1}+\displaystyle\frac{u^{n+1}-v^{n+1}}{t_{\rm stop}}.$$ (14) In the general case, $g^{n+1}$, $u^{n+1}$, and $v^{n+1}$ are unknown, and computing them using such a scheme requires iterations.
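The stability bound (12) for the explicit scheme (10) is easy to demonstrate numerically. Below is a minimal sketch with arbitrary dimensionless values of $\tau$ and $t_{\rm stop}$ (not the disk parameters above):

```python
def explicit_step(v, u, g, t_stop, tau):
    """One step of the explicit scheme (10)."""
    return v + tau * (g + (u - v) / t_stop)

def integrate_explicit(v0, t_stop, tau, nsteps, u=0.0, g=0.0):
    """Iterate (10) with g = 0 and u = 0, i.e., the recursion (11)."""
    v = v0
    for _ in range(nsteps):
        v = explicit_step(v, u, g, t_stop, tau)
    return v

# tau < 2*t_stop: the velocity decays toward the gas velocity
v_stable = integrate_explicit(v0=1.0, t_stop=1.0, tau=0.5, nsteps=100)
# tau > 2*t_stop: |1 - tau/t_stop| > 1 and the iteration diverges
v_unstable = integrate_explicit(v0=1.0, t_stop=1.0, tau=3.0, nsteps=100)
```

With $\tau/t_{\rm stop}=0.5$ the amplification factor per step is $0.5$ and the velocity decays; with $\tau/t_{\rm stop}=3$ the factor is $-2$ and the velocity grows without bound, exactly as the condition (12) predicts.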
In the case of the motion of a body in a plane or in space, a matrix inversion is required at each iteration. Therefore, modern simulations of the dynamics of gas–dust disks have tended to use the faster approaches considered below. Section 3.1 is concerned with an approach involving a simplification of the initial problem, and Section 3.2 with numerical methods for solving the entire problem.

3.1 Short friction time approximation
Suppose we require the solution of a system of equations of motion of a grain with velocity $v$ in gas with velocity $u$. We consider the case when the gas influences the dust but, due to its low mass concentration, the dust does not affect the velocity of the gas: $$\left\{\begin{array}[]{lcl}\displaystyle\frac{dv}{dt}=g+\frac{u-v}{t_{\rm stop}},\\ \displaystyle\frac{du}{dt}=g_{u},\\ \end{array}\right.$$ (15) where $g_{u}$ is the acceleration acting on a gaseous volume in the vicinity of the solid body. Subtracting the second equation from the first and multiplying by the Stokes number $St$ yields the equivalent system $$\left\{\begin{array}[]{lcl}\displaystyle St\frac{d(v-u)}{dt}=St(g-g_{u})+\Omega(u-v),\\ \displaystyle\frac{du}{dt}=g_{u}.\\ \end{array}\right.$$ (16) When $St\ll 1$ (a small body, for which the stopping time is much shorter than the dynamical time), the first equation of this system has the solution $v\approx u$ to within terms of order $St$. The next approximation is obtained if we take into account that $St\displaystyle\frac{d(v-u)}{dt}$ is of second order of smallness in $St$ relative to the other terms. Therefore, to within first-order accuracy in $St$, we have $$\left\{\begin{array}[]{lcl}St(g-g_{u})+\Omega(u-v)=0,\\ \displaystyle\frac{du}{dt}=g_{u}.\\ \end{array}\right.$$ (17) It follows that $$v=g_{\rm rel}t_{\rm stop}+u,$$ (18) where $g_{\rm rel}=g-g_{u}$ is the difference between the accelerations acting on the gas and on the body. The physical meaning of Eq.
(18) is the use of the "steady-state" velocity of the particles relative to the gas in the computations, that is, the use of a quantity constructed based on the assumption that the relative velocity between the gas and bodies is constant when the forces are constant. This method for computing the velocity of a small grain is economical and natural for complex disk models, and was applied in [9, 25]. The necessary condition for the applicability of this approximation for particles migrating along nearly Keplerian orbits is derived in Section 9.

3.2 Methods for Integrating the Entire Problem
3.2.1 Regularization
The idea behind this method is to replace the explicit computation of the aerodynamic acceleration by the regularized acceleration $g^{n}_{\rm drag}=\displaystyle\frac{u^{n}-v^{n}}{t_{\rm stop}+\tau}$. This regularization method is used, for example, in [25]. We will show in Section 6.1 that a correct choice of the numerical scheme makes it possible to obtain the correct asymptotic radial drift velocity for bodies with arbitrary $t_{\rm stop}$.

3.2.2 Quasianalytical integration
The idea behind this method is that the stiff term in the equation of motion depends linearly on the velocity of the dust relative to the gas. The velocity of a grain after a time $\tau$ can then be calculated not with a finite-difference method, but from the analytical solution of the Cauchy problem for a linear equation: $$\left\{\begin{array}[]{lcl}\displaystyle\frac{dv}{dt}=g^{num}+\frac{u^{num}-v}{t_{\rm stop}},\\ v|_{t=0}=v^{n},\\ \end{array}\right.$$ (19) where $g^{num}$ and $u^{num}$ are constants. Substituting $g^{num}=g^{n}$ and $u^{num}=u^{n}$ into the system (19) then yields $$v^{n+1}=(g^{n}t_{\rm stop}+u^{n})+(v^{n}-g^{n}t_{\rm stop}-u^{n})e^{-\tau/t_{\rm stop}}.$$ (20) This approach was used, for example, in [13, 14, 18, 20].
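A sketch of one step of the quasianalytical update (20). Since (20) is the exact solution of (19) for frozen $g$ and $u$, two consecutive steps of size $\tau$ must coincide with one step of size $2\tau$; the parameter values below are hypothetical, chosen only for illustration:

```python
import math

def quasianalytic_step(v, u, g, t_stop, tau):
    """Quasianalytical update, Eq. (20): exact exponential relaxation
    toward the terminal velocity g*t_stop + u over a step of length tau."""
    v_term = g * t_stop + u
    return v_term + (v - v_term) * math.exp(-tau / t_stop)

# For constant g and u the scheme is exact, so composing steps is consistent:
v_two = quasianalytic_step(quasianalytic_step(1.0, 0.0, 0.5, 1.0, 0.3),
                           0.0, 0.5, 1.0, 0.3)
v_one = quasianalytic_step(1.0, 0.0, 0.5, 1.0, 0.6)
```

The agreement of `v_two` and `v_one` reflects the property exploited in Section 6: for $g={\rm const}$ this scheme reproduces the analytical solution regardless of the step size.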
3.2.3 Mixed layer scheme
In the mixed layer scheme, the aerodynamic acceleration is computed with the velocity of the body taken from the new time step, as in an implicit scheme, and with the gas velocity taken from the previous time step: $$g^{n}_{\rm drag}=\frac{u^{n}-v^{n+1}}{t_{\rm stop}}.$$ (21) Based on the approaches described, a large number of schemes can be constructed for Eq. (9) that are unconditionally stable and provide bounded solutions for any $\tau$. Apart from one-step schemes, in which the right-hand side of the equation is treated as a single operator, the technique of operator splitting with respect to physical processes is often used in simulations of complex physical systems. We restrict our consideration to schemes that (a) have already been used in published computations and (b) can be realized with a minimum number of arithmetic operations. The aim of our study is to show that unconditionally stable schemes that have been applied in practice, and which require the same number of arithmetic operations, can differ substantially in the accuracy of the obtained solutions; another aim is to identify methods that are optimal from this point of view.

4 Tested schemes
4.1 Short Friction Time Approximation
An asymptotic value for the velocity of the particles relative to the gas is specified in the computations. In this approximation, the dust velocity is taken to be $$v=g_{\rm rel}t_{\rm stop}+u,$$ (22) where $g_{\rm rel}=g-g_{u}$ is the difference between the accelerations acting on the gas and on the body.

4.2 Mixed Layer Scheme
This scheme has the form $$\displaystyle\frac{v^{n+1}-v^{n}}{\tau}=\frac{u^{n}-v^{n+1}}{t_{\rm stop}}+g^{n}.$$ (23) This scheme was used, for example, in [5].

4.3 Quasianalytical Scheme without Operator Splitting
Setting $g^{num}=g^{n}$ and $u^{num}=u^{n}$ in the system (19) yields $$v^{n+1}=(g^{n}t_{\rm stop}+u^{n})+(v^{n}-g^{n}t_{\rm stop}-u^{n})e^{-\tau/t_{\rm stop}}.$$ (24) This scheme was used, for example, in [17].
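The mixed layer scheme (23) can be resolved explicitly for $v^{n+1}$, which makes a step as cheap as an explicit one while remaining stable for any $\tau$. A minimal sketch with illustrative numbers (not taken from the paper):

```python
def mixed_layer_step(v, u, g, t_stop, tau):
    """One step of the mixed layer scheme (23), resolved for v^{n+1}:
    (v^{n+1} - v^n)/tau = (u^n - v^{n+1})/t_stop + g^n."""
    return (v + tau * g + tau * u / t_stop) / (1.0 + tau / t_stop)

# Even with tau >> t_stop the update relaxes v toward the correct
# terminal velocity g*t_stop + u instead of diverging.
v_new = mixed_layer_step(v=5.0, u=0.2, g=0.1, t_stop=1e-6, tau=1.0)
```

Multiplying numerator and denominator by $t_{\rm stop}/\tau$ shows that $v^{n+1}\to u^{n}+g t_{\rm stop}$ as $\tau/t_{\rm stop}\to\infty$, which is the behavior established analytically in Section 6.1.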
4.4 Schemes Based on Operator Splitting with Respect to Physical Processes
We tested schemes based on this method that are of first-order accuracy in time (e.g., [14, 25]). The first stage of a splitting scheme is finding the velocity of an individual body due to the non-aerodynamic (e.g., gravitational) acceleration, $$\displaystyle\frac{v^{n+1/2}-v^{n}}{\tau}=g,$$ (25) and the second stage is the correction of the grain velocity due to the gas drag: $$\displaystyle\frac{v^{n+1}-v^{n+1/2}}{\tau}=g_{\rm drag}.$$ (26) Stages (25) and (26) do not commute; when their order is changed, we obtain the scheme $$\displaystyle\frac{v^{n+1/2}-v^{n}}{\tau}=g_{\rm drag},$$ (27) $$\displaystyle\frac{v^{n+1}-v^{n+1/2}}{\tau}=g.$$ (28) Here, $v^{n+1/2}$ is the intermediate velocity of the grain. We will refer to (25), (26) as the scheme with direct operator order and to (27), (28) as the scheme with reverse operator order.

4.4.1 Regularization
The regularization method with the direct operator order has the form $$\left\{\begin{array}[]{lcl}\displaystyle\frac{v^{n+1/2}-v^{n}}{\tau}=g^{n},\\ \displaystyle\frac{v^{n+1}-v^{n+1/2}}{\tau}=\frac{u^{n}-v^{n+1/2}}{t_{\rm stop}+\tau},\end{array}\right.$$ (29) and with the reverse operator order has the form $$\left\{\begin{array}[]{lcl}\displaystyle\frac{v^{n+1/2}-v^{n}}{\tau}=\frac{u^{n}-v^{n}}{t_{\rm stop}+\tau},\\ \displaystyle\frac{v^{n+1}-v^{n+1/2}}{\tau}=g^{n}.\end{array}\right.$$ (30) A regularization method combined with the operator splitting technique was used, for example, in [25].
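The difference between the operator orders in (29) and (30) shows up directly in the stiff limit $t_{\rm stop}\ll\tau$: the direct order relaxes the velocity toward the gas velocity, while the reverse order leaves a spurious contribution $\tau g$. A sketch with hypothetical values chosen only to expose the difference:

```python
def reg_direct_step(v, u, g, t_stop, tau):
    """Regularization with direct operator order, scheme (29)."""
    v_half = v + tau * g                              # gravity stage
    return v_half + tau * (u - v_half) / (t_stop + tau)  # regularized drag stage

def reg_reverse_step(v, u, g, t_stop, tau):
    """Regularization with reverse operator order, scheme (30)."""
    v_half = v + tau * (u - v) / (t_stop + tau)       # regularized drag stage
    return v_half + tau * g                           # gravity stage

# In the stiff limit t_stop << tau the two orderings disagree:
v_dir = reg_direct_step(v=5.0, u=0.2, g=0.1, t_stop=1e-9, tau=1.0)   # ~ u
v_rev = reg_reverse_step(v=5.0, u=0.2, g=0.1, t_stop=1e-9, tau=1.0)  # ~ u + tau*g
```

The value `v_rev` retains the term $\tau g$ independently of $t_{\rm stop}$, which is exactly the artifact discussed for the scheme (30) in Section 6.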
4.4.2 Quasianalytical integration
This method with the direct operator order has the form $$\left\{\begin{array}[]{lcl}\displaystyle\frac{v^{n+1/2}-v^{n}}{\tau}=g^{n},\\ v^{n+1}=u^{n}+(v^{n+1/2}-u^{n}){\rm e}^{-\tau/t_{\rm stop}},\end{array}\right.$$ (31) and with the reverse operator order has the form $$\left\{\begin{array}[]{lcl}v^{n+1/2}=u^{n}+(v^{n}-u^{n}){\rm e}^{-\tau/t_{\rm stop}},\\ \displaystyle\frac{v^{n+1}-v^{n+1/2}}{\tau}=g^{n}.\end{array}\right.$$ (32) Quasianalytical integration combined with the operator splitting technique with respect to physical processes was used, for example, in [13, 14].

5 Test 1. The DUSTYBOX equation and exact analytical solution
We consider the slow migration of grains toward a protostar in a protoplanetary gaseous disk whose pressure decreases with radius. We suppose that the gaseous disk is close to equilibrium, when its radial velocity is close to zero and its azimuthal velocity $u_{\varphi}$ is close to the Keplerian velocity $v_{\rm K}=\sqrt{\displaystyle\frac{GM}{r}}$, but does not reach it. Let a dust particle move along its orbit with velocity $v_{\varphi}$. We assume that the gravitational field acting on the particle is created purely by the protostar, not the disk. In this case, the particle experiences the radial acceleration $g=\displaystyle\frac{v_{\varphi}^{2}}{r}-\frac{MG}{r^{2}}$. If the motion is such that the radial velocity of the particle is much lower than the Keplerian velocity, this acceleration, which depends on the radius in the general case, can be taken to be constant. This leads to a model for the linear motion of a body under the action of this constant acceleration and the drag of gas moving with a constant velocity $u$. This is a simplified model that does not describe the transformation of the azimuthal velocity into radial velocity, but it represents a non-trivial test for the numerical schemes (23), (29)–(32).
This test is currently known as DUSTYBOX [10] and has been applied since 1995 [15]: $$\left\{\begin{array}[]{lcl}\displaystyle\frac{dv}{dt}=g+\frac{u-v}{t_{\rm stop}},\\ v|_{t=0}=v_{0}.\end{array}\right.$$ (33) In our case, $v$ is the radial velocity of the migrating grains, $g$ is the constant radial acceleration of the grains, and $u$ is the radial velocity of the gas. The analytical solution of this problem has the form $$v=(gt_{\rm stop}+u)+(v_{0}-gt_{\rm stop}-u){\rm e}^{-t/t_{\rm stop}}.$$ (34) It is clear that $v\rightarrow(gt_{\rm stop}+u)$ when $t\gg t_{\rm stop}$, which coincides with the terminal velocity expressed by the condition (22).

6 Results of testing the methods using DUSTYBOX
We took 100 dust particles with sizes from 1 $\mu$m to 1 m and placed them at a distance $r=20$ AU from a protostar with mass $M=1M_{\odot}$. For simplicity, we assumed that $u=0$ and $\Sigma(20\ {\rm AU})=100$ g/cm${}^{2}$, with $g=-0.001\displaystyle\frac{v^{2}_{\rm K}}{r}$ and $v_{0}=0.01v_{\rm K}$. For each of the particles, we carried out the integration of (33) over 1000 rotations, i.e., over the time interval $[0;T]=[0;2000\pi\Omega_{\rm K}^{-1}]$, using the time step $\tau$ given by (13). Since $T\gg t_{\rm stop}$ for the entire range of particle sizes, this test can be used to estimate how close the computed stationary velocity is to the analytical value (34). For convenience, all the constants needed to determine the parameters of the problem (33) are presented in Table 1, and the parameters and initial conditions are presented in Table 2, in cgs units and the "astronomical" units indicated in the table. The ratios of the computed speed $v$ to the Keplerian speed $v_{\rm K}$ at the radius 20 AU are presented in Fig. 3. Regularization with direct operator order (29) produces radial migration rates close to the analytical values for all considered values of $t_{\rm stop}$.
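As an independent check (a sketch, not the authors' code, with simplified dimensionless parameters rather than the disk values of Tables 1 and 2), the stationary velocity of the DUSTYBOX problem can be compared against the analytical solution (34) for the mixed layer scheme (23) in the stiff regime $\tau\gg t_{\rm stop}$:

```python
import math

def dustybox_analytic(t, v0, u, g, t_stop):
    """Analytical solution (34): relaxation toward g*t_stop + u."""
    v_term = g * t_stop + u
    return v_term + (v0 - v_term) * math.exp(-t / t_stop)

def integrate_mixed_layer(v0, u, g, t_stop, tau, nsteps):
    """Integrate the mixed layer scheme (23) for nsteps steps."""
    v = v0
    for _ in range(nsteps):
        v = (v + tau * g + tau * u / t_stop) / (1.0 + tau / t_stop)
    return v

# Stiff case: tau = 1000 * t_stop; the scheme still lands on the
# correct stationary velocity g*t_stop + u = -5e-4.
v_num = integrate_mixed_layer(v0=1.0, u=0.0, g=-0.5, t_stop=1e-3, tau=1.0, nsteps=50)
v_exact = dustybox_analytic(t=50.0, v0=1.0, u=0.0, g=-0.5, t_stop=1e-3)
```

Despite violating the explicit stability bound (12) by three orders of magnitude, the computed stationary velocity matches the analytical one, in line with the test results for the schemes (23) and (29) described here.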
However, this same method with reverse operator order yields radial migration rates for particles with sizes less than 1 cm that substantially exceed the analytical values when $t_{\rm stop}<\tau$. It follows from Fig. 3 (upper right panel) that, when the scheme (30) is used, all bodies with sizes less than 1 cm experience the same gas drag and move in the disk like centimeter-sized particles. On the contrary, quasianalytical integration with direct operator order (31) appreciably underestimates the migration rates of bodies with sizes less than 1 cm, essentially coupling their motion stiffly to the gas. Quasianalytical integration with reverse operator order (32) yields the same computational artefact as regularization with reverse operator order (30), namely, an artificial enhancement of the migration rates of small bodies. We also found that the mixed layer scheme without operator splitting (23) accurately reproduces the migration rates of bodies of any size; these results essentially coincide with those of the scheme (29) shown in the upper left panel of Fig. 3. This test is not meaningful for the scheme (24), since its numerical solution exactly coincides with the analytical solution when $g={\rm const}$.

6.1 Approximation Error
To understand why the numerical results for the schemes (30)–(32) do not coincide with the analytical results, we transformed all the tested schemes to a "reduced" form in which $v^{n+1}$ is a function of the values from the previous time step. The results of this transformation are presented in Table 3. In this form, regularization with direct operator order (29) and the mixed layer scheme (23) are identical, explaining the coincidence of the results of these computations. Moreover, the results for the schemes (29) and (23) approach the terminal velocity constructed from the previous time step, $u^{n}+gt_{\rm stop}$, as $t_{\rm stop}\rightarrow 0$.
On the contrary, regularization with reverse operator order (30) yields the velocity $g\tau+u^{n}$, and the quasianalytical approach with reverse operator order (32) the velocity $g\tau$, as $t_{\rm stop}\rightarrow 0$. Since neither of these values depends on $t_{\rm stop}$, the computed rate ends up being the same for grains of different sizes. The quasianalytical scheme with direct operator order (31) yields velocities that depend on the grain size for small $t_{\rm stop}$, but these values are appreciably underestimated relative to the analytical results. We also estimated how the error of the velocity computation in the problem (33) depends on the time step $\tau$ for the methods that reproduce the body velocities with high accuracy. If $v^{n}$ is the exact solution of (33), we can expand $v^{n+1}$ in a Taylor series in $\tau$: $$v^{n+1}=v^{n}+\tau\left(\displaystyle\frac{dv}{dt}\right)^{n}+\displaystyle\frac{\tau^{2}}{2}\left(\displaystyle\frac{d^{2}v}{dt^{2}}\right)^{n}+\cdots+\displaystyle\frac{\tau^{k}}{k!}\left(\displaystyle\frac{d^{k}v}{dt^{k}}\right)^{n}+\cdots$$ (35) Because $v^{n}$ satisfies (33), $$\left(\displaystyle\frac{dv}{dt}\right)^{n}=g+\displaystyle\frac{u^{n}-v^{n}}{t_{\rm stop}},\ \ \left(\displaystyle\frac{d^{k}v}{dt^{k}}\right)^{n}=\frac{(-1)^{k-1}}{t_{\rm stop}^{k-1}}\left(\displaystyle\frac{dv}{dt}\right)^{n}.$$ (36) This means that the Taylor-series expansion of the solution has the form $$v^{n+1}=v^{n}+\tau\left(g+\displaystyle\frac{u^{n}-v^{n}}{t_{\rm stop}}\right)-\displaystyle\frac{\tau^{2}}{2t_{\rm stop}}\left(g+\displaystyle\frac{u^{n}-v^{n}}{t_{\rm stop}}\right)+O(\tau^{3}).$$ (37) Subtracting from (37) the numerical velocity obtained for the schemes (29) and (23), $$v^{n+1}=\left(v^{n}\displaystyle\frac{t_{\rm stop}}{\tau}+gt_{\rm stop}+u^{n}\right)\displaystyle\frac{\tau}{\tau+t_{\rm stop}},$$ (38) yields the error $\varepsilon$ for the velocity
$$\varepsilon=\frac{1}{2}\left(v^{n}-u^{n}-gt_{\rm stop}\right)\frac{\tau^{2}(\tau-t_{\rm stop})}{t^{2}_{\rm stop}(\tau+t_{\rm stop})}+\ldots+\frac{(-1)^{k-1}\tau^{k}}{k!\,t^{k}_{\rm stop}}\left(v^{n}-u^{n}-gt_{\rm stop}\right)+\ldots$$ (39)
It is clear from (39) that all terms in the error are proportional to $(v^{n}-u^{n}-gt_{\rm stop})$. Since it follows from (34) that $v=u+gt_{\rm stop}$ when $\frac{\tau}{t_{\rm stop}}\gg 1$, all terms in the error tend to zero. This means that the schemes (29) and (23) have an infinite approximation order for small values of $t_{\rm stop}$. We confirmed this in computational experiments, by carrying out the integration of the problem (33) over the same time interval, $[0;2000\pi\Omega_{K}^{-1}]$, but with time steps of $2\tau$ and $4\tau$. We found the difference between the numerical and analytical solutions relative to the analytical solution at the final time $T=2000\pi\Omega_{K}^{-1}$. The results for the scheme (23) are presented in Table 4. The approximation error is close to "machine" accuracy, with no increase in the error when the integration step is increased. The second column of Table 3 presents the main error terms for the methods (23)-(29). The main error terms for the schemes (30)-(32) depend quadratically on $\tau$. This means that these schemes are first-order accurate over the entire integration interval. We confirmed this using computational experiments analogous to those described in the previous section. The computational results for the scheme (30) are presented in Table 5. For bodies of all sizes in all the computations for this scheme, the error in the solution increases by a factor of two when $\tau$ is increased by this same factor. The deviation of the computed solution from the exact solution for these methods is inversely proportional to $t^{2}_{\rm stop}$; therefore, the maximum errors are expected for bodies with smaller sizes, consistent with the results presented in Fig. 3 and Table 5.
It follows from this table that the relative error in the solution does not exceed 10% only for large bodies, for which $\tau\leq 0.1t_{\rm stop}$; when $\tau=t_{\rm stop}$, the error in the velocity is 100%. This means that, although the schemes (30)-(32) do not require a restriction on the step size from the point of view of stability, and are first-order accurate in time, applying these schemes requires a constraint on the step size $\tau$ from the point of view of accuracy. The accuracy-imposed restriction $\tau\leq 0.1t_{\rm stop}$ for the scheme (30) is a factor of 20 stricter than the stability-imposed restriction for the explicit method (10), $\tau\leq 2t_{\rm stop}$.

7 Test 2. Simple model for the radial migration of a grain in a disk. Solution of a linearized system as an approximation to the solution of the original system

In this section, we consider a test problem with an approximate analytical solution in which the non-aerodynamical acceleration $g$ depends non-linearly on the velocity of the body. This represents a simple model for the radial migration of bodies in a circumstellar disk. Weidenschilling [24] published a qualitative analysis of this problem in 1977, which established that meter-size bodies have the maximum migration velocities toward the protostar, leading to their falling into the protostar over several hundred years.
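The order of magnitude of this classical infall time can be recovered with a back-of-the-envelope estimate. The sketch below (an illustration with assumed parameter values, not from the article) uses the maximum quasistationary drift speed $|v_{r}|=\eta v_{\rm K}/2$, reached at $St=1$, with an illustrative $\eta=0.005$ at $1$ AU; the crude estimate $t\sim r/|v_{r}|$ gives a time of order $10^{2}$ yr, consistent with the several-hundred-year time scale (the exact value depends on $\eta$ and the starting radius).

```python
import math

# Illustrative parameters (assumed, not taken from the article):
AU_KM = 1.496e8      # kilometers per astronomical unit
YEAR_S = 3.156e7     # seconds per year
ETA = 0.005          # fractional pressure support of the gas, 0 < eta << 1
r_au = 1.0           # starting orbital radius in AU

v_k = 2.0 * math.pi * AU_KM / YEAR_S   # Keplerian speed at 1 AU, km/s (~29.8)
v_drift = ETA * v_k / 2.0              # maximum quasistationary drift, at St = 1
t_infall = r_au * AU_KM / v_drift / YEAR_S   # crude r / |v_r| estimate, in years
print(v_k, v_drift, t_infall)
```

This is only a scaling estimate; the full drift solution is derived in the equations that follow.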
The equations of motion of a solid body in a disk (subject to the gravitational field of the star and gas drag) written in polar coordinates have the form (e.g., [12, 1, 22]):
$$\left\{\begin{array}{l}\displaystyle\frac{dr}{dt}=v_{r},\\ \displaystyle\frac{dv_{r}}{dt}=\frac{v_{\varphi}^{2}}{r}-\frac{GM}{r^{2}}-\frac{v_{r}-u_{r}}{t_{\rm stop}},\\ \displaystyle\frac{dv_{\varphi}}{dt}=-\frac{v_{r}v_{\varphi}}{r}-\frac{v_{\varphi}-u_{\varphi}}{t_{\rm stop}}.\end{array}\right.$$ (40)
Here, $r,v_{r},v_{\varphi}$ are the orbital radius and the radial and azimuthal velocities of the body, $u_{r},u_{\varphi}$ are the radial and azimuthal velocities of the gas, and $M$ is the mass of the central body. The following Cauchy problem can be posed for this system:
$$\left\{\begin{array}{l}\left.r\right|_{t=0}=r_{0},\\ \left.v_{r}\right|_{t=0}=0,\\ \left.v_{\varphi}\right|_{t=0}=v_{K}=\sqrt{\displaystyle\frac{GM}{r_{0}}}.\end{array}\right.$$ (41)
Following [1], we assumed that (1) the gas disk in which the dust particles move is in equilibrium, i.e., $u_{r}=0$; (2) the gas pressure decreases from the center to the periphery, i.e., the configuration of the gas disk brings about the equilibrium angular velocity $u^{2}_{\varphi}=(1-\eta)v^{2}_{\rm K}$, where $0<\eta\ll 1$.
We took a dust particle to move in a nearly Keplerian orbit and simplified the initial system (40) using the following approximations:
$$\frac{d}{dt}(rv_{\varphi})\approx v_{r}\frac{d}{dr}(rv_{\rm K})=\frac{1}{2}v_{r}v_{\rm K},$$ (42)
$$v_{\varphi}+v_{\rm K}\approx 2v_{\rm K},$$ (43)
$$\frac{1}{\sqrt{1-\eta}}\approx 1+\frac{\eta}{2},\ \ \eta\sqrt{1-\eta}\approx\eta.$$ (44)
Due to (42), the third differential equation in (40) becomes an algebraic equation, and the second becomes a linear equation in $v_{r},\ v_{\varphi}$ by virtue of (43):
$$\left\{\begin{array}{l}\displaystyle\frac{dr}{dt}=v_{r},\\ \displaystyle\frac{dv_{r}}{dt}=\frac{2v_{\rm K}}{r}(v_{\varphi}-v_{\rm K})-\frac{v_{r}}{t_{\rm stop}},\\ \displaystyle\frac{1}{2}v_{r}v_{\rm K}=-r\frac{v_{\varphi}-u_{\varphi}}{t_{\rm stop}}.\end{array}\right.$$ (45)
Applying (44) transforms the system (45) to the form:
$$\left\{\begin{array}{l}\displaystyle\frac{dr}{dt}=v_{r},\\ \displaystyle\frac{dv_{r}}{dt}=-\eta\frac{v^{2}_{\rm K}}{r}+\frac{2v_{\rm K}}{r}(v_{\varphi}-u_{\varphi})-\frac{v_{r}}{t_{\rm stop}},\\ \displaystyle\frac{1}{2}v_{r}v_{\rm K}=-r\frac{v_{\varphi}-u_{\varphi}}{t_{\rm stop}}.\end{array}\right.$$ (46)
We can obtain a quasistationary solution of the system (46) (satisfying the condition $\frac{dv_{r}}{dt}=0$) by eliminating $(v_{\varphi}-u_{\varphi})$ from the second equation:
$$\frac{v_{r}}{v_{K}}=-\frac{\eta}{St(r)+St(r)^{-1}}.$$ (47)
As far as we are aware, the relation (47) was first presented in [16]. A relation for the case of a non-equilibrium gas disk, when $u_{r}\neq 0$, can be found in [22]. Integrating (47) over time, we obtain
$$\eta t=t_{\rm stop}\ln\left(\frac{r_{0}}{r}\right)+\frac{r_{0}^{3}-r^{3}}{3GMt_{\rm stop}},$$ (48)
$$v_{\varphi}=u_{\varphi}-\frac{v_{r}St(r)}{2}.$$ (49)
If $t_{\rm stop}\Omega\ll 1$, Eq.
(48) acquires the form
$$r=(r_{0}^{3}-3GM\eta t_{\rm stop}t)^{1/3}.$$ (50)
We do not know of any proof that the solutions of the system (40) and the transformed system (46) are close when $\eta\ll 1$; however, our numerous numerical computations show that the formulas (47)-(50) are a good approximation to the solution of the initial system (40). The closeness of the numerical solution of (40) and the analytical solution (47) has also been shown in [5, 17].

8 Numerical solution of the non-linear system

We solved the Cauchy problem (40)-(41) using the following two-step, first-order accurate scheme. The first step is finding the dust velocity using one of the methods (23), (24) or (29). At the second step, we used the resulting velocities to determine the coordinates of the particle:
$$\left\{\begin{array}{l}\displaystyle\frac{r^{n+1}-r^{n}}{\tau}=v_{r}^{n+1},\\ \displaystyle\frac{\varphi^{n+1}-\varphi^{n}}{\tau}=v_{\varphi}^{n+1},\end{array}\right.$$ (51)
where $r^{n},\varphi^{n}$ are the radius and angle of the particle at the current time step (at time $t$) and $r^{n+1},\varphi^{n+1}$ are the coordinates of the particle at the next time step (at time $t+\tau$). The system (40) without the terms corresponding to the action of gas drag has the invariant $v_{r}^{2}+v_{\varphi}^{2}-\displaystyle\frac{2GM}{r}={\rm const}$. This system with the initial conditions (41) has a stationary analytical solution ($r=r_{0},\ v_{r}=0,\ v_{\varphi}=v_{\rm K}$). It is known that the following problem arises when this system is integrated numerically (for example, using an explicit method): a diverging, spiral trajectory is obtained instead of a closed, circular orbit. Preserving the correct form of the trajectories requires reducing the time step size; the larger the number of orbits undergone by the body, the smaller the step that must be used.
Symplectic schemes, such as leap-frog schemes, have been developed to avoid this restriction on the time step size while preserving the correct form of the trajectories. However, Bai and Stone [2] showed that a symplectic leap-frog scheme requires additional constraints on the time step size in the integration of the dynamics of small bodies that are strongly tied to the gas. Thus, the choice of a scheme enabling computations with the maximum time step is determined by the need to find the optimum compromise between two factors: the requirement on the time step size from the point of view of preserving the orbit geometry and from the point of view of modeling the dynamics of small bodies. In our current study, we have solved a simpler problem. We verified that the use of the scheme (23)+(51) with a step $\tau$ determined by the condition (13) enables preservation of the circular orbit of a body at a distance of 20 AU from the protostar over 1000 orbits, i.e., over about $10^{5}$ years. Further, we have compared all the methods only from the point of view of the possibility of integrating the dynamics of bodies of arbitrarily small size without reducing the time step size relative to that indicated by the condition (13).

8.1 Reconstruction of the Velocity Field

The aim of this section is to show that a group of methods having high accuracy when solving the linear equation (33) also displays high accuracy for the non-linear problem modeling the radial migration of bodies in a disk. We took 20 particles for which we specified the stopping time $t_{\rm stop}=10^{a}\Omega^{-1}_{\rm K}$, where $a$ takes values in the interval $[-6;2]$. We placed the particles at the radius $r_{0}=20$ AU from a protostar with mass $M=1M_{\odot}$ in a disk with $u_{\varphi}=0.995v_{\rm K}$ and $u_{r}=0$. Under these conditions, the range of Stokes numbers corresponds to bodies with radii from $10^{-6}$ m to 100 m. We numerically solved the Cauchy problem (40)-(41) for each particle.
We carried out the integration over 15 orbits of the bodies at a distance of 20 AU, that is, over the interval $[0,T]=[0,30\pi\Omega^{-1}_{\rm K}]$, with a constant time step size identical for all the bodies, determined by the relation (13). We computed the values $\frac{v_{r}}{v_{\rm K}(r)}$ for the resulting numerical solutions at $t=T$, which we then compared with the analytical values given by (47). Figure 4 presents the radial drift velocity of the bodies as a function of the Stokes number computed using the mixed layer scheme (23)+(51). The numerical and analytical solutions coincide for all values of $\frac{t_{\rm stop}}{\tau}$; i.e., this method accurately conveys the loss of angular momentum of a body due to the gas drag, independent of the size of the body. We obtained a similar coincidence of the numerical and analytical results for the schemes (29)+(51) and (24)+(51). The dependence of the radial velocity of the bodies on the Stokes number presented in Fig. 4 illustrates the result of [24], known since 1977, that the highest migration rates are displayed by bodies for which $St=1$ (under the conditions considered, these are bodies with radii of 1 m). We can also see that the radial drift speeds of meter-size bodies exceed those of micron-size particles by nearly six orders of magnitude. This means that, in the absence of inhomogeneities in the disk that delay the rapid radial migration of meter-size bodies, such bodies will fall into the protostar on a time scale of several hundred years.

8.2 Migration of a Ring

The aim of this section is to show that a group of methods displaying high accuracy in the solution of the linear problem (33) correctly conveys the motion of a ring of dust particles on the scales of the disk and over the characteristic time scale of the disk dynamics for the entire range of particles considered.
We will also show that the schemes (30)+(51) and (31)+(51) appreciably distort the migration speed of a ring when $\tau>t_{\rm stop}$. As in Section 8.1, we considered a disk around a protostar with mass $M=1M_{\odot}$, with $u_{\varphi}=0.995v_{\rm K},\ u_{r}=0$. We took 400 particles and distributed them uniformly in a ring [18 AU; 20 AU]. We specified the same stopping time for all the particles, which we kept constant over the entire integration time:
$$t_{\rm stop}=St\,\Omega^{-1}_{\rm K}(r_{0}),$$ (52)
where $r_{0}=20$ AU and $St=2\times 10^{-3}$. According to (7), under the conditions of the massive disk described in Section 2, this stopping time corresponds to an initial radius of the bodies of $2-3$ mm. The migration of bodies in the inner region of the disk in a regime in which the stopping time remains constant implies that the radius of the solid-phase particles grows in accordance with a law that depends on the surface density and radius. We carried out the integration with a constant time step size determined by (13) over 1300 orbits of the outer boundary of the ring; i.e., $T=2600\pi\Omega^{-1}_{\rm K}(r_{0})$, or 116 000 yrs. Figure 5 presents the results of the computations obtained using the methods (23)+(51), (30)+(51) and (31)+(51). It is clear that the particles whose trajectories were computed using the mixed layer scheme are located inside a ring whose boundaries are in agreement with the analytical values found using (48). We obtained a similar agreement between the numerical and analytical results for the schemes (29)+(51) and (24)+(51). We showed in Section 5 that quasianalytical integration with direct operator order (31) underestimates the drift velocities of small bodies. We can see in Fig. 5 (middle panel) that the computed positions of the particles deviate from the exact values by more than $2$ AU over 116 000 yrs. It is clear from Fig. 3 (lower left panel) that this deviation will be substantially larger for smaller bodies.
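The agreement between the accurate schemes and the analytical drift (47) can be reproduced with a compact sketch. The code below is a hedged illustration, not the authors' implementation: it integrates the system (40) in dimensionless units ($GM=r_{0}=1$, so $\Omega_{\rm K}(r_{0})=1$) with an explicit update of the gravitational and inertial terms, an implicit (regularized) drag update in the spirit of the schemes (23)/(29), and the coordinate update (51); the value $\eta=0.01$ of the pressure-support parameter is illustrative. Even with $\tau>t_{\rm stop}$, the measured drift agrees with (47), and the analytical drift curve peaks at $St=1$.

```python
import math

GM = 1.0              # code units: r0 = v_K(r0) = Omega_K(r0) = 1
ETA = 0.01            # illustrative pressure-gradient parameter, 0 < eta << 1

def v_K(r):
    return math.sqrt(GM / r)

def advance(r, vr, vphi, t_stop, tau):
    """One step of system (40): explicit gravity/inertia, implicit drag,
    then the coordinate update (51); a sketch in the spirit of (23)/(29)."""
    ur, uphi = 0.0, math.sqrt(1.0 - ETA) * v_K(r)   # equilibrium gas disk
    ar = vphi ** 2 / r - GM / r ** 2                # non-drag accelerations
    aphi = -vr * vphi / r
    vr = (vr + tau * ar + tau * ur / t_stop) / (1.0 + tau / t_stop)
    vphi = (vphi + tau * aphi + tau * uphi / t_stop) / (1.0 + tau / t_stop)
    return r + tau * vr, vr, vphi

St = 0.01
t_stop = St                      # since Omega_K(r0) = 1
tau = 2.0 * math.pi / 200.0      # tau > t_stop: no drag-imposed restriction
r, vr, vphi = 1.0, 0.0, v_K(1.0)
for _ in range(2000):            # about 10 orbits
    r, vr, vphi = advance(r, vr, vphi, t_stop, tau)

drift_num = vr / v_K(r)
drift_ana = -ETA / (St + 1.0 / St)     # quasistationary solution (47)
print(drift_num, drift_ana)

# The analytical drift (47) is fastest at St = 1 (meter-size bodies in the text).
grid = [10.0 ** a for a in range(-6, 3)]
fastest = min(grid, key=lambda s: -ETA / (s + 1.0 / s))
print(fastest)
```

The drag substep has the terminal velocity as its exact fixed point for any $\tau$, which is why the drift is recovered without a drag-imposed time step restriction.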
Another example of a numerical artefact that arises when the scheme (31) is used is presented in [14] (Fig. 4), where the vertical settling of dust onto the plane of the disk is calculated. On the contrary, regularization with reverse operator order (30) substantially overestimates the migration rates of bodies. The right-hand panel of Fig. 5 shows that particles whose orbital radii should have shifted from $18$ to $11$ AU have practically reached the protostar. Figure 6 shows the migration of a ring of particles with radii of approximately $1$ cm (left panel), $1$ m (middle panel), and $100$ m (right panel) over times of $42\,700$, $1420$ and $85\,400$ yrs, respectively. The parameters of the disk are the same as in Fig. 5. We carried out all these computations using the mixed layer scheme. The numerical solutions are close to the analytical solutions obtained using (48) for the entire range of particle sizes. In agreement with Fig. 4, the migration speeds are maximal for meter-size particles ($St\approx 1$ for the conditions of the massive disk from Section 2). The ring expands in the course of migration toward the protostar for bodies with $St\ll 1$, and narrows for bodies with $St\gg 1$.

9 Short Friction Time Approximation. Necessary and Sufficient Conditions for its Applicability

We estimated the conditions under which the short friction time approximation correctly conveys the velocities of dust migrating in the disk along nearly Keplerian orbits. We applied (22) to solve (40)+(41) for the conditions of the gaseous disk described in Section 7, and obtained $g_{rel,r}=-\frac{1}{\rho}\frac{dp}{dr}$, $g_{rel,\varphi}=\frac{v_{r}v_{\varphi}}{r}$.
Because the gaseous disk is in equilibrium with $u^{2}_{\varphi}=v^{2}_{\rm K}(1-\eta)$, we find that $-\frac{1}{\rho}\frac{dp}{dr}=-\eta\frac{GM}{r^{2}}$, so that
$$v_{r}=-\eta\frac{v^{2}_{\rm K}}{r}t_{\rm stop}=-\eta v_{K}St,\ \ v_{\varphi}=\frac{v_{r}v_{\varphi}}{r}t_{\rm stop}+v_{\rm K}\sqrt{1-\eta}.$$ (53)
On the other hand, according to (47), $v_{r}=-\frac{\eta v_{\rm K}}{St+St^{-1}}$. The difference between the velocities $v_{r}$ from (53) and (47), relative to the velocity (47), then acquires the form $\left|1-\displaystyle\frac{St}{\displaystyle\frac{1}{St+1/St}}\right|=St^{2}$. Thus, the relation
$$St^{2}=\varepsilon,$$ (54)
where $\varepsilon$ is the relative error in the computation of the radial velocity, determines a necessary condition for the applicability of the short friction time approximation. That is, for particles moving in a circumstellar disk along nearly Keplerian orbits, the short friction time approximation makes it possible to obtain a solution with a relative error below 1% ($\varepsilon<0.01$) only if
$$St<0.1\ \ \textrm{or}\ \ t_{\rm stop}<0.1\Omega^{-1}.$$ (55)
We confirmed with our simulations that violation of the condition (55) leads to appreciable deviations of the numerical solution from the analytical solution. Here, we computed the migration of a ring of particles similar to that described in Section 8.2, but using the short friction time approximation (53) instead of the mixed layer scheme (23). We specified the stopping times of the bodies using the formula (52), with the values $St=0.01$ and $St=1$ at $r_{0}=20$ AU, which corresponds to sizes of the modeled bodies from 1 cm to 1 m. We used a time step size $\tau$ determined from (13) with $r=1$ AU. The results of these computations are presented in the left and right panels of Fig. 7. The computed positions of particles with sizes of 1 cm ($St=0.01$, Eq.
(55) is satisfied) are located inside the ring determined by the analytical formula (48) after 30 000 yrs. The application of the short friction time approximation yields appreciably overestimated migration rates toward the center for bodies 1 m in size ($St=1$, so the condition (55) is violated). A substantially stricter condition for the applicability of the short friction time approximation, $t_{\rm stop}<\tau$, is presented in [25] (see the text following formula (7) in that study). This condition implies that this approximation can be applied to model only particles whose sizes are a factor of 20-100 smaller than those admitted by the condition (55). The essence of this stricter constraint is illustrated in Fig. 8, which shows the time dependence of the relative velocity between the dust and gas. If the initial velocity of the dust is far from equilibrium (black curve) and $\tau\ll t_{\rm stop}$, the dust velocity at time $\tau$ will differ substantially from its equilibrium value. Since the relative velocity of the dust is simply replaced by its equilibrium value, $g_{rel}t_{\rm stop}$, in the short friction time approximation, it is clear that introducing this underestimated dust velocity at earlier times can lead to appreciable deviations of the numerical solution from the analytical solution. In practice, this condition is not necessary if there are no mechanisms maintaining strongly non-equilibrium velocities of the dust in the system. In this case (red curve), the velocity at time $\tau$ essentially coincides with $g_{rel}t_{\rm stop}$, so that the numerical solutions are independent of the time step size $\tau$. We also demonstrated that this independence of the computational results of the time step size $\tau$ is indeed realized in practice, by repeating the simulations with the initial value $St=0.01$, but using $\tau=10^{-3}t_{\rm stop}$. The results are presented in the central panel of Fig. 7.
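The identity behind the condition (54)-(55) can be checked directly. In the sketch below (an illustration, with velocities written in units of $\eta v_{\rm K}$), the relative deviation of the short-friction-time radial velocity (53) from the quasistationary velocity (47) evaluates to exactly $St^{2}$, so a 1% error indeed requires $St<0.1$.

```python
def sft_relative_error(St):
    """Relative deviation of the short-friction-time radial velocity (53)
    from the quasistationary velocity (47), in units of eta * v_K."""
    v_quasi = -1.0 / (St + 1.0 / St)   # Eq. (47)
    v_sft = -St                        # Eq. (53)
    return abs(v_sft - v_quasi) / abs(v_quasi)

# The deviation equals St**2; the 1% threshold of (55) corresponds to St < 0.1.
for St in (0.01, 0.1, 1.0):
    print(St, sft_relative_error(St), St ** 2)
```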
It is clear that the particles have moved in strict agreement with the analytical predictions.

10 Conclusion

We have considered methods for integrating the equations of motion of dust grains stiffly coupled to the gas in a circumstellar disk, in particular, for the case when a body interacts with the gas in the Epstein (free molecular flow) regime. Various methods were compared from the point of view of their applicability to computations for bodies of arbitrarily small size (from 1 $\mu$m to 10 m), whose velocities are mainly determined by the gas drag. Since the accurate reproduction of the trajectories of bodies with $St>10$ over a large number of orbits requires high-order accurate methods, the results presented here refer only to bodies with $St<10$. When choosing a method for integrating the equations of motion, it is necessary to take into consideration the following factors.

• The explicit method (10) can be used only to compute the dynamics of large bodies, for which $\tau<2t_{\rm stop}$. Otherwise, this scheme is unstable.

• The fastest computational approach to modeling the radial drift of small particles is the short friction time approximation (22). The relative error in the computation of the radial velocity in this method is $St^{2}$, which shows that this method can be applied only to bodies satisfying the condition $St<0.1$. In contrast to the earlier studies [25, 2], we have shown that this approach can be applied independent of the relative sizes of $\tau$ and $t_{\rm stop}$, if there are no mechanisms in the system maintaining a non-equilibrium relative velocity between the gas and the bodies.

• Several stable and fast schemes that require similar computational resources are available for the solution of the equations of motion of bodies with Stokes numbers $St<10$.
We have shown that the mixed layer scheme (23), regularization with direct operator order (29), and quasianalytical integration (24) accurately reproduce the variations of the angular momenta of arbitrarily small bodies. All these methods can be used to integrate the equations of motion of bodies with time steps determined by the Courant condition for the solution of the gas-dynamical equations. These methods are optimal for obtaining accurate results with the minimum number of arithmetic operations.

• The operator splitting technique with respect to physical processes, which is often used in astrophysical codes, leads to a substantial deterioration of the accuracy achieved in practice by the stable and fast schemes listed above for small bodies, $t_{\rm stop}<\tau$. Changing the order of the operators in a scheme leads to the same result. Analysis of the errors in the solutions of the linear problem DUSTYBOX and the corresponding numerical tests has enabled us to determine the accuracy of the schemes obtained in practice. Simulations with the operator splitting technique with respect to physical processes satisfying the condition $t_{\rm stop}>\tau$ are admissible with any operator order, and coincide with the computations without this splitting. Our analysis of the errors and test results revealed the following artefacts. Regularization with reverse operator order (30) overestimates the angular-momentum losses of bodies whose stopping times are shorter than the time step size $\tau$. The combination of quasianalytical integration with operator splitting yields strongly distorted solutions when $\tau>t_{\rm stop}$. The angular-momentum losses are underestimated when direct operator order (31) is used, and overestimated when reverse operator order (32) is used. Achieving a computational accuracy of 10% with these methods requires the use of a smaller time step size than is required to provide stability in an explicit scheme.

Acknowledgments

We thank S. Nayakshin, Ya.
Pavlyuchenkov, V. Akimkin, and E. Lashina for discussions of this study. This work was supported by the Russian Foundation for Basic Research (grant 16-07-00916) and a grant of the President of the Russian Federation (MK-5915.2016.1), the Ministry of Education and Science of the Russian Federation (grant 3.5602.2017/BCh) and the Austrian Agency for International Mobility and Cooperation in Education, Science and Research (OEAD). References [1] P. J. Armitage. Lecture notes on the formation and early evolution of planetary systems. ArXiv Astrophysics e-prints, January 2007. [2] X.-N. Bai and J. M. Stone. Particle-gas Dynamics with Athena: Method and Convergence. ApJS, 190:297–310, October 2010. [3] L. Barrière-Fouchet, J.-F. Gonzalez, J. R. Murray, R. J. Humble, and S. T. Maddison. Dust distribution in protoplanetary disks. Vertical settling and radial migration. AAp, 443:185–194, November 2005. [4] F. Brauer, C. P. Dullemond, and T. Henning. Coagulation, fragmentation and radial motion of solid particles in protoplanetary disks. AAp, 480:859–877, March 2008. [5] S.-H. Cha and S. Nayakshin. A numerical simulation of a ’Super-Earth’ core delivery from 100 to 8 au. MNRAS, 415:3319–3334, August 2011. [6] N. Cuello, J.-F. Gonzalez, and F. C. Pignatale. Effects of photophoresis on the dust distribution in a 3D protoplanetary disc. MNRAS, 458:2140–2149, May 2016. [7] T. J. Haworth, J. D. Ilee, D. H. Forgan, S. Facchini, D. J. Price, D. M. Boneberg, R. A. Booth, C. J. Clarke, J.-F. Gonzalez, M. A. Hutchison, I. Kamp, G. Laibe, W. Lyra, F. Meru, S. Mohanty, O. Panić, K. Rice, T. Suzuki, R. Teague, C. Walsh, P. Woitke, and Community authors. Grand Challenges in Protoplanetary Disc Modelling. PASA, 33:e053, October 2016. [8] R. W. Hockney and J. W. Eastwood. Computer Simulation Using Particles. 1981. [9] A. Johansen and H. Klahr. Dust Diffusion in Protoplanetary Disks by Magnetorotational Turbulence. ApJ, 634:1353–1371, December 2005. [10] G. Laibe and D. J. Price. 
DUSTYBOX and DUSTYWAVE: two test problems for numerical simulations of two-fluid astrophysical dust-gas mixtures. MNRAS, 418:1491–1497, December 2011. [11] G. Laibe and D. J. Price. Dust and gas mixtures with multiple grain species - a one-fluid approach. MNRAS, 444:1940–1956, October 2014. [12] L.D. Landau and E.M. Lifshitz. Fluid Mechanics. Vol. 6 (2nd ed.). Butterworth-Heinemann, 1987. [13] P. Lorén-Aguilar and M. R. Bate. Two-fluid dust and gas mixtures in smoothed particle hydrodynamics: a semi-implicit approach. MNRAS, 443:927–945, September 2014. [14] P. Lorén-Aguilar and M. R. Bate. Two-fluid dust and gas mixtures in smoothed particle hydrodynamics II: an improved semi-implicit approach. MNRAS, 454:4114–4119, December 2015. [15] J. J. Monaghan and A. Kocharyan. SPH simulation of multi-phase flow. Computer Physics Communications, 87:225–235, May 1995. [16] Y. Nakagawa, M. Sekiya, and C. Hayashi. Settling and growth of dust particles in a laminar phase of a low-mass solar nebula. Icarus, 67:375–390, September 1986. [17] S. Nayakshin and R.J. Humphries. Accretion of pebbles onto gas giant planets at wide separations. MNRAS, in preparation. [18] L. Pan and P. Padoan. Turbulence-induced Relative Velocity of Dust Particles. I. Identical Particles. ApJ, 776:12, October 2013. [19] W. K. M. Rice, G. Lodato, J. E. Pringle, P. J. Armitage, and I. A. Bonnell. Accelerated planetesimal growth in self-gravitating protoplanetary discs. MNRAS, 355:543–552, December 2004. [20] G. P. Rosotti, A. Juhasz, R. A. Booth, and C. J. Clarke. The minimum mass of detectable planets in protoplanetary discs and the derivation of planetary masses from high-resolution observations. MNRAS, 459:2790–2805, July 2016. [21] V. N. Snytnikov and O. P. Stoyanovskaya. Clump formation due to the gravitational instability of a multiphase medium in a massive protoplanetary disc. MNRAS, 428:2–12, January 2013. [22] T. Takeuchi and D. N. C. Lin. Radial Flow of Dust Particles in Accretion Disks. 
ApJ, 581:1344–1355, December 2002. [23] E. I. Vorobyov. Embedded Protostellar Disks Around (Sub-)Solar Protostars. I. Disk Structure and Evolution. ApJ, 723:1294–1307, November 2010. [24] S. J. Weidenschilling. Aerodynamics of solid bodies in the solar nebula. MNRAS, 180:57–70, July 1977. [25] Z. Zhu, R. P. Nelson, R. Dong, C. Espaillat, and L. Hartmann. Dust Filtration by Planet-induced Gap Edges: Implications for Transitional Disks. ApJ, 755:6, August 2012. [26] I. N. Ziglina and A. B. Makalkin. Gravitational instability in the dust layer of a protoplanetary disk: Interaction of solid particles with turbulent gas in the layer. Solar System Research, 50:408–425, November 2016.
SUPERCONDUCTORS WITH MESOSCOPIC PHASE SEPARATION A. J. Coleman, E. P. Yukalova and V. I. Yukalov Department of Mathematics and Statistics, Queen’s University, Kingston, Ontario K7L 3N6, Canada Abstract A model of superconductivity is proposed taking into account repulsive particle interactions, mesoscopic phase separation, and softening of the crystalline lattice. These features are typical of many high-temperature superconductors. The main results obtained for the model are: (i) phase separation is possible only if repulsive forces play a significant role; (ii) the critical temperature as a function of the superconducting phase fraction can have non-monotonic behaviour; (iii) superconductivity is possible in heterophase systems even when it would be forbidden in pure samples. These results are in agreement with experiments. I Introduction In the standard theory of superconductivity one employs a reduced Hamiltonian involving only the attractive part of an effective particle interaction, responsible for the existence of superconductivity, and omitting the repulsive part of the interaction as irrelevant. This approach was adopted in the original paper of Bardeen, Cooper and Schrieffer [1]. The consideration of the Coulomb interaction leads to a renormalization of the coupling constant [1,2], which can be effectively included in the formula for the superconducting critical temperature [3]. In ordinary metals, the averaged Coulomb interaction is small compared to the coupling parameter mediated by phonon exchange. A detailed discussion of the reasons why the repulsive interaction may be neglected has been given in a review by Leggett [4]. These arguments concern the direct influence of repulsive interactions. However, repulsive interactions can also affect superconductivity indirectly. An obvious example is that all interactions are included in the definition of the electronic energy bands and the corresponding density of states [5,6].
An interaction that is not directly responsible for the pairing mechanism can, nevertheless, drastically strengthen the order by providing a high density of states at the Fermi surface, thus ensuring a large value of the effective coupling parameter, which, in turn, leads to a higher superconducting transition temperature. Such an increase of the density of states can occur, for instance, when a van Hove singularity is located almost precisely at the Fermi level [7]. It is necessary to keep in mind that it is the Coulomb interactions, and not the pairing interactions, that define the spatial structure of matter. The electrical properties change drastically from one structure to another [5-8]. Moreover, the stability or instability of a crystalline (and chemical) structure is closely connected with superconductivity. This was noticed long before the discovery of the high-temperature superconductors. For example, Testardi [9] analysed a large number of superconducting A-15 compounds and concluded that superconductivity and structural instability are really related: the higher the superconducting critical temperature, the more probable is the occurrence of structural instability. Indeed, most superconductors with $T_{c}\sim 20$ K clearly exhibit instability. The observation that a higher superconducting $T_{c}$ is almost always related to increased instability received strong confirmation after the historic discovery of high-$T_{c}$ superconductors by Bednorz and Müller [10]. Possibly the most careful, critical, material systematics for high-$T_{c}$ superconductors has been given by Phillips [11]. He stresses that if we review the known high-$T_{c}$ superconducting materials, we are unavoidably confronted with mechanical instabilities as a factor which always accompanies a high $T_{c}$, whether in intermetallic or oxide-based superconductors.
This occurs so often that we would be hard put to dismiss the repeated appearance of such instabilities as mere coincidence. According to Phillips [11], there is no room for doubt that it is lattice instabilities (and nothing else) that produce high-$T_{c}$ superconductivity just as much in cuprates as in intermetallic compounds. The main result of structural instabilities is an essential softening of the lattice. This softening is manifested in various precursor effects occurring near the superconducting transition temperature, such as anomalies in elastic moduli, in strain parameters, and in sound velocity [9,11]. The softening of phonon modes can be observed by infrared, Raman, and neutron scattering experiments [11]. Nuclear gamma-resonance studies [12,13] also show that a dramatic lattice softening occurs at $T_{c}$ for high-temperature superconductors, the most noticeable result of which is an anomalous sagging of the Mössbauer factor. The review of experimental data also provides evidence that the lattice softening and the increase in $T_{c}$ are both associated with the formation of phase-precursor clusters with a structural (spatial or chemical) composition different from that of the basic component [9,11]; the linear size of the clusters is of the order of $10$-$100\,\AA$. Such mixed structures exist in many ceramic superconductors, as has been observed by using pulsed-neutron scattering, synchrotron x-ray powder diffraction, nuclear magnetic resonance, nuclear quadrupole resonance, and nuclear gamma resonance [5,11,14]. For instance, the low-temperature orthorhombic and high-temperature tetragonal structures coexist in oxide superconductors in a region around $T_{c}$ [15-17]. A more detailed discussion of these and other experiments can be found in a recent review [18].
The relation between the existence of soft vibrational modes and the appearance of clusters of mesoscopic size has also been demonstrated by molecular-dynamics computer simulations for metastable glassy models [19,20]. The clusters of one phase structure inside another, occurring near a phase transition point, are randomly distributed in space and often, being unstable, change with time. To emphasize the randomness of their distribution, they can be called heterostructural, or heterophase, fluctuations, and to stress their mesoscopic sizes, they can be named mesoscopic structural fluctuations [18]. Since structural characteristics are intimately related to electronic and conducting properties, the existence of structural fluctuations implies the appearance of spatial fluctuations in the superconducting properties. Mesoscopic structural fluctuations are accompanied by large energy fluctuations which favour a stochastic separation of a sample into superconducting and normal regions [18,21]. There is clear experimental evidence that the superconducting cuprates display a nanoscale phase separation into insulating and superconducting nanodomains [7]. This is in agreement with some simple models [7,22-24] predicting that such a phase separation can be thermodynamically favourable. Recent experiments with high-temperature superconductors confirm that only part of a given sample - often less than a few percent - is in a superconducting phase (see [25-27] and references therein). In the present paper we suggest a way of taking into account three mutually interrelated factors: Coulomb interaction, phase separation, and lattice softening; all of these factors are very important in high-temperature superconductors. To clearly emphasize the influence of these factors as such, we shall use the standard approximations accepted for superconductors.
We give a detailed analysis of the superconducting critical temperature as a function of parameters related to the attractive and repulsive interactions and to the superconducting phase fraction. Everywhere below we use the system of units with $\hbar\equiv k_{B}\equiv 1$. II Phase separation In order to take phase separation into account, we must admit such a possibility from the beginning, and then the corresponding phase probabilities must be defined in a self-consistent way. Assume that a phase separation has occurred, so that our sample consists of regions occupied by two different phases numbered by the index $\nu=1,2$, the phase regions being randomly distributed in space. Assign the index $\nu=1$ to the superconducting phase and $\nu=2$ to the normal phase. For each phase distribution, or phase configuration, the sample is nonuniform, which greatly complicates the consideration. However, assuming that each phase distribution is random, we may average observable quantities over these phase configurations. As a result, we come to a system described by a renormalized Hamiltonian that allows the use of the usual techniques, supplemented with additional equations for the phase probabilities and conditions of stability. The procedure of averaging over configurations has been expounded, in all necessary detail, in earlier papers [28-31] and discussed in a recent review [18]. Therefore, we shall not repeat it here, especially since the mathematical foundation is quite long and tedious even though the final result is rather simple.
The averaging over phase configurations, in the case of a random mixture of two phases, leads to the definition of a renormalized Hamiltonian $$\bar{H}=H_{1}\oplus H_{2},$$ (1) which is a direct sum of the terms $$H_{\nu}=w_{\nu}H_{\nu}^{kin}+w_{\nu}^{2}H_{\nu}^{int},$$ (2) in which $H_{\nu}^{kin}$ is a kinetic-energy operator, including external fields, if any, and $H_{\nu}^{int}$ is an interaction-energy operator. The renormalizing factor $w_{\nu}$ is the geometric probability of the $\nu$-phase, that is, the ratio of the average volume occupied by the $\nu$-phase to the total volume of the sample. The Hamiltonian (2) is defined on the space $${\cal Y}={\cal Y}_{1}\otimes{\cal Y}_{2},$$ (3) which is a tensor product of Hilbert spaces, where ${\cal Y}_{\nu}$ is the space of states typical of the $\nu$-phase. When $\nu=1$ corresponds to a superconducting phase and $\nu=2$ to a normal phase, the state spaces are chosen so that the so-called anomalous averages calculated with the states from ${\cal Y}_{1}$ are not identically zero, while those calculated with the states from ${\cal Y}_{2}$ are zero. In other words, the gaps related to the anomalous averages satisfy the condition $$\Delta_{1}\not\equiv 0,\qquad\Delta_{2}\equiv 0.$$ (4) This condition can be interpreted in several ways. For example, we can always posit that each Hilbert space of microstates, ${\cal Y}_{\nu}$, is restricted to functions which are invariant under a symmetry group yielding the desired properties for the order parameters [32]. Then the order parameters are just the gaps in (4), and the corresponding microstates can, for example, be chosen as the BCS wave functions [1,33,34] for the superconducting phase and the usual Slater determinants for the normal phase.
Another way to interpret the choice of the order parameters (4) is to connect them, and the values of the anomalous averages, with the off-diagonal long-range order of reduced density matrices [35]. The largest eigenvalues of the latter are also known [35-37] to be directly related to this off-diagonal order. A more detailed discussion of these relations can be found in the reviews [38,39]. Such relations between microscopic and macroscopic characteristics can be described in a general way by introducing the notion of order indices [40,41]. If ${\hat{\rho}}_{n}^{N}$ is an $n$-particle reduced density matrix for a system of $N$ particles, then the order indices are defined as $$\alpha_{n}\equiv\lim_{N\rightarrow\infty}\frac{\log||{\hat{\rho}}_{n}^{N}||}{\log N}.$$ It has been shown [41] that the index $\alpha_{n}$ satisfies the restriction $$0\leq\alpha_{n}\leq n\qquad(0\leq n\leq N).$$ Different thermodynamic phases correspond to different sets of order indices, so that a $\nu$-phase is characterized by a set $\{\alpha_{n}^{(\nu)}\}$. Necessary and sufficient conditions that the phase with $\nu=1$ is superconducting and the phase with $\nu=2$ is normal are given by $$\alpha_{2n}^{(1)}=n,\qquad\alpha_{2n}^{(2)}<n.$$ (5) Thus, there are several well-developed methods for defining thermodynamic phases. What we still need to define is how to find the geometric probabilities $w_{\nu}$ of the corresponding phases when considering a mixture described by the Hamiltonian (1). According to the general procedure [18], the phase probabilities are given by the minimization of the thermodynamic potential $$f=-\frac{T}{N}\ln\mathrm{Tr}\,e^{-\beta\bar{H}}\qquad(\beta T\equiv 1)$$ (6) under the condition $$w_{1}+w_{2}=1,\qquad 0\leq w_{\nu}\leq 1,$$ (7) where $T$ is temperature.
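The limit defining the order index $\alpha_{n}$ can be illustrated numerically. The sketch below is purely illustrative (it is not from the paper): it fabricates density-matrix norms obeying $||\hat{\rho}_{2}^{N}||\sim cN$, the off-diagonal long-range-order scaling characteristic of a superconducting phase, and shows the finite-$N$ estimator $\log||\hat{\rho}_{2}^{N}||/\log N$ approaching the limiting value $\alpha_{2}=1$.

```python
import numpy as np

# Illustrative sketch (hypothetical numbers, not from the paper):
# estimate alpha_n = lim_{N->inf} log||rho_n^N|| / log N from the
# large-N scaling of the reduced-density-matrix norm.

def order_index(N, norm):
    """Finite-N estimate of the order index alpha_n."""
    return np.log(norm) / np.log(N)

Ns = np.array([1e3, 1e6, 1e9])   # increasing particle numbers
norms = 0.25 * Ns                # assumed scaling ||rho_2^N|| = c*N with c = 0.25
alphas = order_index(Ns, norms)  # approx. 0.80, 0.90, 0.93, tending to 1
```

For a normal phase, where $||\hat{\rho}_{2}^{N}|| = O(1)$, the same estimator tends to zero, so the finite values of the order indices distinguish the two phases in accordance with condition (5).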
The normalization condition (7) can be taken into account explicitly by using the notation $$w_{1}\equiv w,\qquad w_{2}\equiv 1-w.$$ (8) Then $w$ satisfies the equations $$\frac{\partial f}{\partial w}=0,\qquad\frac{\partial^{2}f}{\partial w^{2}}>0.$$ (9) The first of the equations in (9) is equivalent to $$\langle\frac{\partial\bar{H}}{\partial w}\rangle=0.$$ (10) Substituting the Hamiltonian (1) and introducing the notation $$K_{\nu}\equiv\langle H_{\nu}^{kin}\rangle,\qquad U_{\nu}\equiv\langle H_{\nu}^{int}\rangle,$$ (11) we obtain the equation $$w=\frac{2U_{2}+K_{2}-K_{1}}{2(U_{1}+U_{2})}$$ (12) for the probability of the superconducting phase. In the case of thermodynamic phases with equal average densities, the probabilities $w_{\nu}$ coincide with the phase concentrations, defined as the ratios of the number of particles in each phase to the total number of particles. The average densities of the superconducting and of the normal phase are practically the same. Therefore we may speak of (12) as the superconducting phase concentration. The inequality in (9) shows when phase separation in a sample is thermodynamically advantageous, as compared to a pure superconducting system.
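Equation (12) is a simple ratio of averaged energies, and it is easy to exercise numerically. The following minimal sketch (with made-up energies; the numbers are not from the paper) evaluates the superconducting fraction $w$:

```python
# Minimal numerical sketch of Eq. (12),
#   w = (2*U2 + K2 - K1) / (2*(U1 + U2)),
# with hypothetical kinetic (K_nu) and interaction (U_nu) energies.

def superconducting_fraction(K1, K2, U1, U2):
    """Superconducting phase probability w from Eq. (12)."""
    return (2.0 * U2 + K2 - K1) / (2.0 * (U1 + U2))

# Hypothetical energies in arbitrary units: attractive kinetic parts,
# repulsive (positive) interaction parts.
w = superconducting_fraction(K1=-1.0, K2=-0.4, U1=0.8, U2=0.7)
# w = (1.4 + 0.6) / 3.0 = 2/3: two thirds of the sample is
# superconducting, and the normal fraction is 1 - w = 1/3.
```

Note that $w$ lands in $[0,1]$ only when the energies satisfy the bounds (14) discussed below; outside those bounds the mixed state does not exist.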
Taking account of (6), we find $$\langle\frac{\partial^{2}\bar{H}}{\partial w^{2}}\rangle>\beta\langle\left(\frac{\partial\bar{H}}{\partial w}\right)^{2}\rangle.$$ (13) In accordance with (7) and (12), we have $$-2U_{1}\leq K_{1}-K_{2}\leq 2U_{2}.$$ (14) Substituting the Hamiltonian $\bar{H}$, defined in (1) and (2), into the left-hand side of (13), we get $$\langle\frac{\partial^{2}\bar{H}}{\partial w^{2}}\rangle=2(U_{1}+U_{2}).$$ Using this and noticing that the right-hand side of (13) is always non-negative, we obtain the inequality $$U_{1}+U_{2}>0.$$ (15) Thus, phase separation is possible only when there are sufficiently strong repulsive interactions in the system for (15) to be satisfied. In other words, inequality (15) is a necessary condition for phase separation. This general result is in agreement with Hubbard-model calculations for the copper-oxide superconductors [42], according to which it is necessary to include a sufficiently strong nearest-neighbour Coulomb repulsion in order to produce phase separation. A physical system will remain in a mixed state, with phase separation, as long as this is thermodynamically favourable compared with the pure phases. The boundary, in the space of thermodynamic parameters, between mixed and pure phases is the set of points at which a necessary condition for phase stability, such as one of the conditions (13) or (14), is violated. Experiment supports the usual assumption that the transformation of a pure phase into a mixed phase begins with the appearance of nuclei of a competing phase within the pure phase. This process can be called nucleation and the point in the phase diagram at which this occurs, a nucleation point. At such a point, some thermodynamic and dynamic characteristics may display a singularity.
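The necessary conditions (14) and (15) can be packaged into a single check. The sketch below (hypothetical numbers, not from the paper) tests whether a given set of averaged energies admits a mixed, phase-separated state at all:

```python
# Sketch of the necessary conditions for phase separation:
#   (15): U1 + U2 > 0  (sufficiently strong repulsion), and
#   (14): -2*U1 <= K1 - K2 <= 2*U2  (equivalent to 0 <= w <= 1 in Eq. (12)).

def phase_separation_allowed(K1, K2, U1, U2):
    """True when both necessary conditions (14) and (15) hold."""
    repulsion_ok = U1 + U2 > 0                     # condition (15)
    bounds_ok = -2.0 * U1 <= K1 - K2 <= 2.0 * U2   # condition (14)
    return repulsion_ok and bounds_ok

ok = phase_separation_allowed(K1=-1.0, K2=-0.4, U1=0.8, U2=0.7)
# ok is True: a mixed state is admissible for these energies.
bad = phase_separation_allowed(K1=-1.0, K2=-0.4, U1=-0.2, U2=0.1)
# bad is False: U1 + U2 < 0, so condition (15) fails and
# phase separation is thermodynamically forbidden.
```

These are only necessary conditions; whether the mixed state is actually realized still requires the stability inequality (13) to hold for the full Hamiltonian.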
This may occur, for example, in the density-density response function, which is determined by the Fourier transform of the second-order density matrix. The behaviour of the thermodynamic and dynamic characteristics in the vicinity of such a phase transition is a separate problem which does not fall within the purview of the present paper. Rather, it is our object to delineate features which we regard as basic to any reasonable model of superconductors which exhibit mesoscopic phase separation. III Specification of Hamiltonian Let us now specify the Hamiltonian, defined in (1) and (2), keeping in mind that $H_{1}$ corresponds to the superconducting phase and $H_{2}$ to the normal phase. This implies that the corresponding order parameters and order indices must satisfy conditions (4) and (5), respectively. These conditions can also be formulated for anomalous averages defined through the field operators $\psi_{\nu s}(\vec{r})$, where $s$ denotes spin and $\vec{r}\in{\bf R}^{3}$. The anomalous averages $$\langle\psi_{1s}(\vec{r})\psi_{1s}(\vec{r}^{\prime})\rangle\not\equiv 0$$ (16) for the first representation are not zero, while those for the second representation vanish, $$\langle\psi_{2s}(\vec{r})\psi_{2s}(\vec{r}^{\prime})\rangle\equiv 0.$$ (17) As is obvious, the conditions (16) and (17) are directly related to the corresponding properties of symmetry of microscopic states [4,18] or to the values (5) of the order indices [40,41].
The kinetic part of the Hamiltonian (2) can be written in the form $$H_{\nu}^{kin}=\int\sum_{s}\psi^{\dagger}_{\nu s}(\vec{r})\left[K_{\nu}(\vec{r},\vec{r}^{\prime})-\mu\delta(\vec{r}-\vec{r}^{\prime})\right]\psi_{\nu s}(\vec{r}^{\prime})\,d\vec{r}\,d\vec{r}^{\prime},$$ (18) in which $K_{\nu}(\vec{r},\vec{r}^{\prime})$ is a kinetic kernel including external scalar fields, if any, and $\mu$ is the chemical potential. Take for the interaction part of (2) the general expression $$H_{\nu}^{int}=\frac{1}{2}\int\sum_{ss^{\prime}}\psi^{\dagger}_{\nu s}(\vec{r}_{1})\psi^{\dagger}_{\nu s^{\prime}}(\vec{r}_{2})V_{\nu}(\vec{r}_{1},\vec{r}_{2},\vec{r}_{3},\vec{r}_{4})\psi_{\nu s^{\prime}}(\vec{r}_{3})\psi_{\nu s}(\vec{r}_{4})\,d\vec{r}_{1}\,d\vec{r}_{2}\,d\vec{r}_{3}\,d\vec{r}_{4},$$ (19) where $V_{\nu}(\ldots)$ is an interaction vertex including all effective interactions, repulsive and attractive, direct and indirect. For the moment, we do not need to specify these interactions. However, as an example, we may think of the direct interaction as a repulsive Coulomb force with screening taken into account, and of the indirect interactions as mediated by an exchange of phonons or other bosons.
In order not to complicate our discussion with the consequences of a variety of other known factors on the properties of superconductors, we limit ourselves to isotropic matter. Thus, we leave aside anisotropy effects and the related van Hove singularities [7]. For an isotropic system we can use the expansion $$\psi_{\nu s}(\vec{r})=\frac{1}{\sqrt{V}}\sum_{k}c_{\nu s}(\vec{k})e^{i\vec{k}\vec{r}},$$ (20) in which $V$ is the total volume of the system. In what follows, to simplify the notation, we shall adopt the convention $$c_{1s}(\vec{k})\equiv c_{s}(\vec{k}).$$ (21) We shall then deal mainly with expressions corresponding to the superconducting phase, since an identical treatment can be given for the normal phase. The difference between the phases can be taken into account at the final stage by invoking conditions (16) and (17).
Substituting (20) in (18), and using the property $$\frac{1}{V}\int e^{i(\vec{k}-\vec{k}^{\prime})\vec{r}}\,d\vec{r}=\delta_{kk^{\prime}}\equiv\left\{\begin{array}{cc}1,&\vec{k}=\vec{k}^{\prime}\\ 0,&\vec{k}\neq\vec{k}^{\prime},\end{array}\right.$$ we have $$H_{1}^{kin}=\sum_{k,k^{\prime},s}\left[T_{1}(\vec{k},\vec{k}^{\prime})-\mu\delta_{kk^{\prime}}\right]c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k}^{\prime}),$$ (22) where the convention (21) is used and $$T_{\nu}(\vec{k},\vec{k}^{\prime})\equiv\frac{1}{V}\int e^{-i\vec{k}\vec{r}}K_{\nu}(\vec{r},\vec{r}^{\prime})e^{i\vec{k}^{\prime}\vec{r}^{\prime}}\,d\vec{r}\,d\vec{r}^{\prime}$$ (23) is the transport matrix.
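The orthogonality relation used above has an exact discrete analogue on a periodic grid, which is easy to verify numerically. The sketch below (illustrative only, not from the paper) checks that the allowed plane waves on a lattice of $V$ points reproduce the Kronecker delta $\delta_{kk^{\prime}}$:

```python
import numpy as np

# Discrete analogue of the orthogonality relation used before Eq. (22):
# (1/V) * sum_r exp[i(k - k')r] = delta_{kk'} on a periodic grid.

V = 8                                  # number of grid points (discrete "volume")
r = np.arange(V)                       # lattice sites
k = 2.0 * np.pi * np.arange(V) / V     # allowed wave numbers on the ring
waves = np.exp(1j * np.outer(k, r))    # row j is the plane wave e^{i k_j r}
overlap = waves @ waves.conj().T / V   # (1/V) * sum_r e^{i(k - k')r}
# overlap equals the identity matrix, i.e. delta_{kk'}
```

This is why the double integral in (18) collapses to the single sum over $\vec{k},\vec{k}^{\prime}$ in (22), with all the spatial dependence absorbed into the transport matrix (23).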
For (19) we get $$H_{1}^{int}=\frac{1}{2V}\sum_{k_{1}\ldots k_{4}}\sum_{ss^{\prime}}\Gamma_{1}(\vec{k}_{1},\vec{k}_{2},\vec{k}_{3},\vec{k}_{4})c^{\dagger}_{s}(\vec{k}_{1})c^{\dagger}_{s^{\prime}}(\vec{k}_{2})c_{s^{\prime}}(\vec{k}_{3})c_{s}(\vec{k}_{4})$$ (24) with the vertex $$\Gamma_{1}(\vec{k}_{1},\vec{k}_{2},\vec{k}_{3},\vec{k}_{4})\equiv\frac{1}{V}\int V_{1}(\vec{r}_{1},\vec{r}_{2},\vec{r}_{3},\vec{r}_{4})\exp\left\{-i\vec{k}_{1}\vec{r}_{1}-i\vec{k}_{2}\vec{r}_{2}+i\vec{k}_{3}\vec{r}_{3}+i\vec{k}_{4}\vec{r}_{4}\right\}d\vec{r}_{1}\,d\vec{r}_{2}\,d\vec{r}_{3}\,d\vec{r}_{4}.$$ (25) In the following step to simplify the Hamiltonian, we use a convenient approximation which embodies a possible Cooper pairing. This can be done in several equivalent ways. We may construct an approximating Hamiltonian [43], yielding the so-called Hartree-Fock-Bogolubov approximation.
The latter is widely known and is successfully used in various applications, including such exotic ones as heated rotating nuclei [44-46]. We may follow a variational approach, as in the Blatt quasi-chemical equilibrium theory [47]. Or we may use the fundamental ansatz for the many-body wave function, yielding what is called [38,39] the antisymmetrized geminal power. All these approaches are equivalent to one another [4,34,48,49]. To our mind, the main common idea lying behind these approaches can, in second-quantization language, be expressed by the operator approximation $$c^{\dagger}_{1}c^{\dagger}_{2}c_{3}c_{4}\approx c^{\dagger}_{1}c_{4}\langle c^{\dagger}_{2}c_{3}\rangle+\langle c^{\dagger}_{1}c_{4}\rangle c^{\dagger}_{2}c_{3}-\langle c^{\dagger}_{1}c_{4}\rangle\langle c^{\dagger}_{2}c_{3}\rangle-c^{\dagger}_{1}c_{3}\langle c^{\dagger}_{2}c_{4}\rangle-\langle c^{\dagger}_{1}c_{3}\rangle c^{\dagger}_{2}c_{4}+\langle c^{\dagger}_{1}c_{3}\rangle\langle c^{\dagger}_{2}c_{4}\rangle+c^{\dagger}_{1}c^{\dagger}_{2}\langle c_{3}c_{4}\rangle+\langle c^{\dagger}_{1}c^{\dagger}_{2}\rangle c_{3}c_{4}-\langle c^{\dagger}_{1}c^{\dagger}_{2}\rangle\langle c_{3}c_{4}\rangle,$$ (26) where, for compactness, we write $c_{n}$ instead of $c_{s_{n}}(\vec{k}_{n})$. The right-hand side of (26) may be called the operator antisymmetrized geminal power. Averaging (26), we obtain the familiar approximation for the correlation function $$\langle c^{\dagger}_{1}c^{\dagger}_{2}c_{3}c_{4}\rangle\cong\langle c^{\dagger}_{1}c_{4}\rangle\langle c^{\dagger}_{2}c_{3}\rangle-\langle c^{\dagger}_{1}c_{3}\rangle\langle c^{\dagger}_{2}c_{4}\rangle+\langle c^{\dagger}_{1}c^{\dagger}_{2}\rangle\langle c_{3}c_{4}\rangle$$ (27) of the Hartree-Fock-Bogolubov form.
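On a BCS-type state, which is Gaussian, the factorization (27) is not an approximation but an identity (Wick's theorem). This can be verified directly on the smallest nontrivial example: two fermionic modes in the paired state $|\psi\rangle=u|00\rangle+v|11\rangle$. The sketch below (illustrative only, not from the paper) represents the operators as matrices via a Jordan-Wigner construction and checks the identity numerically:

```python
import numpy as np

# Check the Hartree-Fock-Bogolubov factorization (27) on the two-mode
# BCS-type state |psi> = u|00> + v|11>, where it holds exactly.

c = np.array([[0., 1.], [0., 0.]])   # single-mode fermion annihilation operator
I2 = np.eye(2)
Z = np.diag([1., -1.])               # Jordan-Wigner sign string

c1 = np.kron(c, I2)                  # annihilator of mode 1
c2 = np.kron(Z, c)                   # annihilator of mode 2 (with JW sign)

u, v = 0.6, 0.8                      # u**2 + v**2 = 1
psi = u * np.eye(4)[:, 0] + v * np.eye(4)[:, 3]   # u|00> + v|11>

def ev(op):
    """Expectation value <psi| op |psi> (the state is real)."""
    return psi @ op @ psi

# Identify the indices (1,2,3,4) in (27) with the operators (c1, c2, c2, c1):
lhs = ev(c1.T @ c2.T @ c2 @ c1)
rhs = (ev(c1.T @ c1) * ev(c2.T @ c2)
       - ev(c1.T @ c2) * ev(c2.T @ c1)
       + ev(c1.T @ c2.T) * ev(c2 @ c1))
# lhs = rhs = v**2: the normal, exchange and anomalous contractions
# together reproduce the exact average.
```

The anomalous contraction $\langle c^{\dagger}_{1}c^{\dagger}_{2}\rangle\langle c_{3}c_{4}\rangle$ contributes $u^{2}v^{2}$ here; dropping it, as one would in a plain Hartree-Fock treatment, breaks the identity, which is precisely why this term carries the superconductivity.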
Applying (26) to (24), we present the latter as a sum $$H_{1}^{int}=H_{1}^{nor}+H_{1}^{sup}+B_{1},$$ (28) in which the first term $$H_{1}^{nor}=\frac{1}{V}\sum_{k_{1}\ldots k_{4}}\sum_{ss^{\prime}}\bar{\Gamma}_{1}(\vec{k}_{1},\vec{k}_{2},\vec{k}_{3},\vec{k}_{4})\left[c^{\dagger}_{s}(\vec{k}_{1})c_{s}(\vec{k}_{4})\langle c^{\dagger}_{s^{\prime}}(\vec{k}_{2})c_{s^{\prime}}(\vec{k}_{3})\rangle-c^{\dagger}_{s}(\vec{k}_{1})c_{s^{\prime}}(\vec{k}_{3})\langle c^{\dagger}_{s^{\prime}}(\vec{k}_{2})c_{s}(\vec{k}_{4})\rangle\right]$$ (29) has the normal Hartree-Fock form with $$\bar{\Gamma}_{\nu}(\vec{k}_{1},\vec{k}_{2},\vec{k}_{3},\vec{k}_{4})\equiv\frac{1}{2}\left[\Gamma_{\nu}(\vec{k}_{1},\vec{k}_{2},\vec{k}_{3},\vec{k}_{4})+\Gamma_{\nu}(\vec{k}_{2},\vec{k}_{1},\vec{k}_{4},\vec{k}_{3})\right].$$ (30) The second term in (28), that is, $$H_{1}^{sup}=\frac{1}{2V}\sum_{k_{1}\ldots k_{4}}\sum_{ss^{\prime}}\Gamma_{1}(\vec{k}_{1},\vec{k}_{2},\vec{k}_{3},\vec{k}_{4})\left[c^{\dagger}_{s}(\vec{k}_{1})c^{\dagger}_{s^{\prime}}(\vec{k}_{2})\langle c_{s^{\prime}}(\vec{k}_{3})c_{s}(\vec{k}_{4})\rangle+c_{s}(\vec{k}_{1})c_{s^{\prime}}(\vec{k}_{2})\langle c^{\dagger}_{s^{\prime}}(\vec{k}_{3})c^{\dagger}_{s}(\vec{k}_{4})\rangle\right],$$ (31) contains the anomalous averages and is, thus, responsible for superconductivity. The last term in (28) is $$B_{1}=\frac{1}{2V}\sum_{k_{1}\ldots k_{4}}\sum_{ss^{\prime}}\Gamma_{1}(\vec{k}_{1},\vec{k}_{2},\vec{k}_{3},\vec{k}_{4})\left[\langle c^{\dagger}_{s}(\vec{k}_{1})c_{s^{\prime}}(\vec{k}_{3})\rangle\langle c^{\dagger}_{s^{\prime}}(\vec{k}_{2})c_{s}(\vec{k}_{4})\rangle-\langle c^{\dagger}_{s}(\vec{k}_{1})c_{s}(\vec{k}_{4})\rangle\langle c^{\dagger}_{s^{\prime}}(\vec{k}_{2})c_{s^{\prime}}(\vec{k}_{3})\rangle-\langle c^{\dagger}_{s}(\vec{k}_{1})c^{\dagger}_{s^{\prime}}(\vec{k}_{2})\rangle\langle c_{s^{\prime}}(\vec{k}_{3})c_{s}(\vec{k}_{4})\rangle\right].$$ (32) The Hamiltonian defined by (1), together with (2), (18) and (19), describes a system in which the total momentum
and spin are conserved. Therefore $$\langle c^{\dagger}_{s}(\vec{k})c_{s^{\prime}}(\vec{k}^{\prime})\rangle=\delta_{kk^{\prime}}\delta_{ss^{\prime}}\langle c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k})\rangle,$$ $$\langle c^{\dagger}_{s}(\vec{k})c^{\dagger}_{s^{\prime}}(\vec{k}^{\prime})\rangle=\delta_{-kk^{\prime}}\delta_{-ss^{\prime}}\langle c^{\dagger}_{s}(\vec{k})c^{\dagger}_{-s}(-\vec{k})\rangle.$$ (33) If we denote the up and down spins as $s=\uparrow,\downarrow$, then in (33) for $s=\uparrow$ the notation $-s$ means $\downarrow$, and for $s=\downarrow$, $-s$ means $\uparrow$. For the normal average we introduce the notation $$n_{1}(\vec{k})\equiv\sum_{s}n_{1s}(\vec{k});\qquad n_{1s}(\vec{k})\equiv\langle c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k})\rangle,$$ (34) and for the anomalous average, $$\sigma_{1}(\vec{k})\equiv\langle c_{-s}(-\vec{k})c_{s}(\vec{k})\rangle.$$ (35) The function $n_{\nu}(\vec{k})$ is the momentum distribution of particles.
Since the transport matrix (23) and the vertex (25) do not depend on spin, we have $$n_{\nu}(\vec{k})=2n_{\nu s}(\vec{k}).$$ (36) The normal term (29), with the use of (33) and (34), can be written as $$H_{1}^{nor}=\sum_{kk^{\prime},s}M_{1}(\vec{k},\vec{k}^{\prime})c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k}^{\prime}),$$ (37) where $M_{\nu}$ is a mass operator with the kernel $$M_{\nu}(\vec{k},\vec{k}^{\prime})=\frac{1}{V}\sum_{p}\left[\bar{\Gamma}_{\nu}(\vec{k},\vec{p},\vec{p},\vec{k}^{\prime})-\frac{1}{2}\bar{\Gamma}_{\nu}(\vec{k},\vec{p},\vec{k}^{\prime},\vec{p})\right]n_{\nu}(\vec{p}).$$ (38) By using (33) and (35), the superconducting term (31) becomes $$H_{1}^{sup}=\frac{1}{2V}\sum_{kk^{\prime}p,s}\bar{\Gamma}_{1}(\vec{k},\vec{k}^{\prime},-\vec{p},\vec{p})\left[c^{\dagger}_{s}(\vec{k})c^{\dagger}_{-s}(\vec{k}^{\prime})\sigma_{1}(\vec{p})+c_{s}(\vec{k})c_{-s}(\vec{k}^{\prime})\sigma^{\dagger}_{1}(\vec{p})\right].$$ (39) And the scalar term (32) simplifies to $$B_{1}=-\frac{1}{2}\sum_{k}M_{1}(\vec{k},\vec{k})n_{1}(\vec{k})-\frac{1}{V}\sum_{kp}\Gamma_{1}(\vec{k},-\vec{k},-\vec{p},\vec{p})\sigma^{\dagger}_{1}(\vec{k})\sigma_{1}(\vec{p}).$$ (40) Notice that the necessity of an accurate and detailed analysis of all the transformations presented here is dictated by our main goal, which is to consider the role of all the terms of the Hamiltonian, not omitting any of them. As we have already shown, and shall see in what follows, all these terms, repulsive as well as attractive, play a crucial role for superconductors with phase separation. IV Gap Equation The sole approximation we have invoked so far is the operator geminal power form (26), which is equivalent to the Hartree-Fock-Bogolubov approximation or to the antisymmetrized geminal power approach. The structure of the resulting Hamiltonian is still too complex to permit specific conclusions. Actually, a group-theoretical analysis by Ozaki [50] of the Hartree-Fock-Bogolubov approximation enumerated 73 possible ordered states, including 26 superconducting states! The latter include non-Cooper pairing, when an electron pair has nonzero momentum, and various superconducting states coexisting with other nonsuperconducting orders. To make the problem tractable, we can resort to an approximation that restricts the space of microscopic states to a subspace satisfying some additional constraints [48,51,52].
In our case a natural such constraint is to consider only those quantum states that conserve momenta, that is, for which momentum conservation holds not only on average, as in (33), but for all operator combinations:
$$c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k}')=\delta_{kk'}\,c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k}),\qquad c^{\dagger}_{s}(\vec{k})c^{\dagger}_{-s}(\vec{k}')=\delta_{-kk'}\,c^{\dagger}_{s}(\vec{k})c^{\dagger}_{-s}(-\vec{k}).$$ (41)
Such a restricted space consists of BCS wave functions that are a particular kind of antisymmetric geminal power functions [34,53]. The restriction (41) makes it possible to greatly simplify all formulas. To this end, let us introduce the single-particle spectrum
$$\varepsilon_{\nu}(\vec{k})\equiv T_{\nu}(\vec{k},\vec{k}),$$ (42)
the diagonal mass operator
$$M_{\nu}(\vec{k})\equiv M_{\nu}(\vec{k},\vec{k})$$ (43)
and an effective interaction
$$J_{\nu}(\vec{k},\vec{p})\equiv\Gamma_{\nu}(\vec{k},-\vec{k},-\vec{p},\vec{p}).$$ (44)
Taking account of (41) and (42)-(44), we obtain for the kinetic term (22)
$$H_{1}^{kin}=\sum_{k,s}\left[\varepsilon_{1}(\vec{k})-\mu\right]c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k}),$$ (45)
for the normal term (37)
$$H_{1}^{nor}=\sum_{k,s}M_{1}(\vec{k})\,c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k}),$$ (46)
and for the superconducting term (39)
$$H_{1}^{sup}=\frac{1}{2V}\sum_{kp,s}J_{1}(\vec{k},\vec{p})\left[c^{\dagger}_{s}(\vec{k})c^{\dagger}_{-s}(-\vec{k})\sigma_{1}(\vec{p})+c_{s}(\vec{k})c_{-s}(-\vec{k})\sigma^{\dagger}_{1}(\vec{p})\right].$$ (47)
The scalar term (40) becomes
$$B_{\nu}=-\frac{1}{2}\sum_{k}M_{\nu}(\vec{k})n_{\nu}(\vec{k})-\frac{1}{V}\sum_{kp}J_{\nu}(\vec{k},\vec{p})\,\sigma^{\dagger}_{\nu}(\vec{k})\sigma_{\nu}(\vec{p}).$$ (48)
Finally, collecting all these terms, for the Hamiltonian of the superconducting phase, given in (2), we obtain the expression
$$H_{1}=w_{1}\sum_{k,s}\omega_{1}(\vec{k})\,c^{\dagger}_{s}(\vec{k})c_{s}(\vec{k})+w_{1}^{2}H_{1}^{sup}+w_{1}^{2}B_{1},$$ (49)
in which $H_{1}^{sup}$ and $B_{1}$ are defined by (47) and (48), and
$$\omega_{\nu}(\vec{k})\equiv\varepsilon_{\nu}(\vec{k})-\mu+w_{\nu}M_{\nu}(\vec{k})$$ (50)
plays the role of an effective spectrum, renormalized, as compared to the single-particle spectrum (42), by the presence of the mass operator (43).

The Hamiltonian (49) can be diagonalized by the Bogolubov canonical transformation
$$c_{s}(\vec{k})=u(\vec{k})\,a_{s}(\vec{k})+v(\vec{k})\,a^{\dagger}_{-s}(-\vec{k}),$$ (51)
in which both $c_{s}$ and $a_{s}$ satisfy the Fermi commutation relations. Diagonalization is achieved with
$$\left|u(\vec{k})\right|^{2}=\frac{1}{2}\left|1+\frac{\omega_{1}(\vec{k})}{E_{1}(\vec{k})}\right|,\qquad\left|v(\vec{k})\right|^{2}=\frac{1}{2}\left|1-\frac{\omega_{1}(\vec{k})}{E_{1}(\vec{k})}\right|,$$
leading to the Hamiltonian
$$H_{1}=w_{1}\sum_{k,s}E_{1}(\vec{k})\,a^{\dagger}_{s}(\vec{k})a_{s}(\vec{k})+w_{1}C_{1},$$ (52)
where the quasiparticle spectrum
$$E_{1}^{2}(\vec{k})\equiv\Delta_{1}^{2}(\vec{k})+\omega_{1}^{2}(\vec{k})$$ (53)
contains the gap
$$\Delta_{1}(\vec{k})\equiv-\frac{w_{1}}{V}\sum_{p}J_{1}(\vec{k},\vec{p})\,\sigma_{1}(\vec{p}).$$ (54)
For the scalar part of (52) we have
$$C_{\nu}\equiv\sum_{k}\left[\omega_{\nu}(\vec{k})-E_{\nu}(\vec{k})+\Delta_{\nu}(\vec{k})\sigma_{\nu}(\vec{k})-\frac{w_{\nu}}{2}M_{\nu}(\vec{k})n_{\nu}(\vec{k})\right].$$ (55)
With the diagonal Hamiltonian (52) it is straightforward to calculate the momentum distribution (34),
$$n_{1}(\vec{k})=1-\frac{\omega_{1}(\vec{k})}{E_{1}(\vec{k})}\tanh\frac{w_{1}E_{1}(\vec{k})}{2T},$$ (56)
and the anomalous averages (35),
$$\sigma_{1}(\vec{k})=\frac{\Delta_{1}(\vec{k})}{2E_{1}(\vec{k})}\tanh\frac{w_{1}E_{1}(\vec{k})}{2T}.$$ (57)
So, for the gap (54) we obtain the equation
$$\Delta_{1}(\vec{k})=-\frac{w_{1}}{V}\sum_{p}J_{1}(\vec{k},\vec{p})\,\frac{\Delta_{1}(\vec{p})}{2E_{1}(\vec{p})}\tanh\frac{w_{1}E_{1}(\vec{p})}{2T}.$$ (58)
Let us emphasize that in the case of a superconductor with phase separation, with which we are dealing, all the formulas obtained above involve the phase probabilities $w_{\nu}$ in an intricate way, and it is very important where and how the latter enter the expressions. Even though the technical approximations we have adopted are standard, it is essential to trace accurately the role of the phase probabilities, since it is these that distinguish a heterophase superconductor from a pure one. Also, we have to take carefully into account all interactions since, as has been shown, the presence of repulsive interactions is decisive for the phase separation itself. For the superconducting phase we need a nontrivial solution of (58).
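The chain (51)-(57) can be checked numerically. The sketch below, in Python, evaluates the Bogolubov coefficients and the averages (56), (57) for one hypothetical set of values of $\omega_{1}$, $\Delta_{1}$, $w_{1}$ and $T$ (illustrative numbers only, in common energy units), verifies that $|u|^{2}+|v|^{2}=1$, and confirms that putting $\Delta_{1}=0$ in (56) reproduces the Fermi-like form quoted below as (59).

```python
import math

# Illustrative (hypothetical) values of the effective spectrum, gap,
# phase probability and temperature, all in the same energy units.
omega, delta, w, T = 0.3, 0.5, 0.8, 0.1

# Quasiparticle spectrum (53): E^2 = Delta^2 + omega^2
E = math.hypot(delta, omega)

# Bogolubov coefficients diagonalizing (49); they must satisfy |u|^2 + |v|^2 = 1
u2 = 0.5 * (1 + omega / E)
v2 = 0.5 * (1 - omega / E)

# Momentum distribution (56) and anomalous average (57)
n = 1 - (omega / E) * math.tanh(w * E / (2 * T))
sigma = (delta / (2 * E)) * math.tanh(w * E / (2 * T))

# With Delta = 0, (56) must reduce to the Fermi-like form (59)
n_normal = 1 - math.tanh(w * omega / (2 * T))
n_fermi = 2 / (math.exp(w * omega / T) + 1)
```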
For the normal phase according to condition (17) instead of (56) we have $$n_{2}(\stackrel{{\scriptstyle\rightarrow}}{{k}})=1-\tanh\frac{w_{2}\omega_{2}(% \stackrel{{\scriptstyle\rightarrow}}{{k}})}{2T}=\frac{2}{\exp[w_{2}\omega_{2}(% \stackrel{{\scriptstyle\rightarrow}}{{k}})/T]+1},$$ (59) and instead of (57) and (58) we have $$\sigma_{2}(\stackrel{{\scriptstyle\rightarrow}}{{k}})=0,\qquad\Delta_{2}(% \stackrel{{\scriptstyle\rightarrow}}{{k}})=0,$$ (60) with the effective spectrum, $\;\omega_{2}(\stackrel{{\scriptstyle\rightarrow}}{{k}})\;$, given by (50). V Interaction Potentials It follows from the previous Section that all characteristics of the system can be calculated provided we know the mass operator (43) and the effective interaction (44). These quantities are not independent of each other since they are both defined, by means of (38) and (44), through the vertex (25). If the real - space vertex entering into the hamiltonian (19), describes an interaction between particles, which is, as usual, invariant with respect to permutation of particles and space reflections, then the momentum - space vertex (25) has the properties [2]: $$\Gamma_{\nu}(\stackrel{{\scriptstyle\rightarrow}}{{k}}_{1},\stackrel{{% \scriptstyle\rightarrow}}{{k}}_{2},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{% 3},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{4})=\Gamma_{\nu}(\stackrel{{% \scriptstyle\rightarrow}}{{k}}_{2},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{% 1},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{4},\stackrel{{\scriptstyle% \rightarrow}}{{k}}_{3})=$$ $$=\Gamma_{\nu}(-\stackrel{{\scriptstyle\rightarrow}}{{k}}_{1},-\stackrel{{% \scriptstyle\rightarrow}}{{k}}_{2},-\stackrel{{\scriptstyle\rightarrow}}{{k}}_% {3},-\stackrel{{\scriptstyle\rightarrow}}{{k}}_{4})=\Gamma_{\nu}(\stackrel{{% \scriptstyle\rightarrow}}{{k}}_{1},-\stackrel{{\scriptstyle\rightarrow}}{{k}}_% {3},-\stackrel{{\scriptstyle\rightarrow}}{{k}}_{2},\stackrel{{\scriptstyle% \rightarrow}}{{k}}_{4})=$$ 
$$=\Gamma_{\nu}(\stackrel{{\scriptstyle\rightarrow}}{{k}}_{3},\stackrel{{% \scriptstyle\rightarrow}}{{k}}_{4},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{% 1},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{2}).$$ With these properties, the symmetrized vertex (30) coincides with (25), $$\stackrel{{\scriptstyle-}}{{\Gamma}}_{\nu}(\stackrel{{\scriptstyle\rightarrow}% }{{k}}_{1},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{2},\stackrel{{% \scriptstyle\rightarrow}}{{k}}_{3},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{% 4})=\Gamma_{\nu}(\stackrel{{\scriptstyle\rightarrow}}{{k}}_{1},\stackrel{{% \scriptstyle\rightarrow}}{{k}}_{2},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{% 3},\stackrel{{\scriptstyle\rightarrow}}{{k}}_{4}).$$ It then follows from (38),(43) and (44) that $$M_{\nu}(\stackrel{{\scriptstyle\rightarrow}}{{k}})=\frac{1}{V}\sum_{p}n_{\nu}(% \stackrel{{\scriptstyle\rightarrow}}{{p}})\left[J_{\nu}(\stackrel{{% \scriptstyle\rightarrow}}{{k}},\stackrel{{\scriptstyle\rightarrow}}{{k}})-% \frac{1}{2}J_{\nu}(\stackrel{{\scriptstyle\rightarrow}}{{k}},\stackrel{{% \scriptstyle\rightarrow}}{{p}})\right].$$ (61) Thus, the mass operator (43) is completely defined by the effective interaction (44) by the relation (61). As we have emphasized, we have to keep in the Hamiltonian (19) all interactions, direct and indirect. Therefore, the vertex (25) can be written as a sum $\;\Gamma_{\nu}=\Gamma_{\nu}^{dir}+\Gamma_{\nu}^{ind}\;$ of direct and indirect interactions. Hence, the effective interaction (44) is a sum $$J_{\nu}(\stackrel{{\scriptstyle\rightarrow}}{{k}},\stackrel{{\scriptstyle% \rightarrow}}{{p}})=J_{\nu}^{dir}(\stackrel{{\scriptstyle\rightarrow}}{{k}},% \stackrel{{\scriptstyle\rightarrow}}{{p}})+J_{\nu}^{ind}(\stackrel{{% \scriptstyle\rightarrow}}{{k}},\stackrel{{\scriptstyle\rightarrow}}{{p}})$$ (62) of direct and indirect interactions. 
In principle, we could continue our analysis employing only the existence of the separation (62) of the effective interaction into two terms, without specifying their nature. For example, as interacting particles we could take electrons, atoms of ${}^{3}He$, or nucleons, choosing for the real-space direct interactions the Coulomb, Lennard-Jones or Yukawa potentials, respectively. The indirect interaction might be assumed to be mediated by a boson-exchange mechanism involving phonons, excitons or something else. However, we prefer to be more concrete: in what follows we shall treat the particles as electrons whose direct interaction is described by a screened Coulomb potential and whose indirect interaction is induced by phonon exchange. Generally, the screened Coulomb potential depends on the properties of the phase inside which the electrons interact, since the screening is described by an inverse dielectric function reflecting the features of matter [54,55]. The simplest form of screening is obtained by using the Thomas-Fermi approximation, which is equivalent to the static approximation for the Lindhard dielectric function [54,55]. Then, in real space, the screened Coulomb interaction of particles immersed in the $\nu$-phase has the form
$$\Phi_{\nu}(r)=\frac{e^{2}}{r}\,e^{-\kappa_{\nu}r},$$ (63)
in which $e$ is the charge and $\kappa_{\nu}^{-1}$ is a screening radius with
$$\kappa_{\nu}^{2}=\frac{4}{a_{B}}\left(\frac{3}{\pi}n_{\nu}\right)^{1/3}\qquad\left(a_{B}\equiv\frac{1}{me^{2}}\right),$$
where $m$ is the electron mass and $n_{\nu}$ is the electron density in the $\nu$-phase.
In momentum space, (63) yields
$$v_{\nu}(k)=\frac{4\pi e^{2}}{k^{2}+\kappa_{\nu}^{2}}.$$ (64)
Recall that the particular form (64) is not of great importance for our argument: for example, we could take for $v_{\nu}(k)$ the more general expression $(4\pi/k^{2})\,\varepsilon_{\nu}^{-1}(k,\omega_{\nu}^{exc}(k))$, in which $\varepsilon_{\nu}(k,\omega)$ is a dielectric function and $\omega_{\nu}^{exc}(k)$ a spectrum of elementary excitations in the $\nu$-phase. Also, for the dielectric function we could invoke a more refined approximation [55-60]. Here we have chosen the simple form (64) in order to illustrate how the screening can depend on the phase properties. In any case, whether $v_{\nu}(k)$ is given by (64) or by a more complicated expression, the part of (62) related to the direct interaction is
$$J_{\nu}^{dir}(\vec{k},\vec{p})=\int\Phi_{\nu}(r)\,e^{-i(\vec{k}-\vec{p})\cdot\vec{r}}\,d\vec{r}=v_{\nu}\left(|\vec{k}-\vec{p}|\right).$$ (65)
For the effective indirect interaction we take the one mediated by phonon exchange in the form [55]
$$J_{\nu}^{ind}(\vec{k},\vec{p})=-\frac{|\alpha_{\nu}|^{2}}{\omega_{\nu 0}^{2}-[\omega_{\nu}(\vec{k})-\omega_{\nu}(\vec{p})]^{2}},$$ (66)
where $\omega_{\nu 0}$ is a characteristic phonon frequency and $\alpha_{\nu}$ is the electron-phonon coupling taking account of screening,
$$\alpha_{\nu}=-\frac{4\pi iZe^{2}}{k_{\nu F}(1+\kappa_{\nu}^{2}/k_{\nu F}^{2})}\left(\frac{\rho_{\nu}}{M}\right)^{1/2},$$
in which $Z$ and $M$ are the ion charge and mass, respectively, $\rho_{\nu}$ is the density of ions in the $\nu$-phase, and $k_{\nu F}\approx(3\pi^{2}n_{\nu})^{1/3}$ is the Fermi momentum of an electron.

The presentation (62) of the effective interaction as a sum of direct and indirect interactions leads to a similar decomposition
$$M_{\nu}(\vec{k})=M_{\nu}^{dir}(\vec{k})+M_{\nu}^{ind}(\vec{k})$$ (67)
of the mass operator (61). The first term in (67) is related to (65), giving
$$M_{\nu}^{dir}(\vec{k})=n_{\nu}v_{\nu}(0)-\frac{1}{2V}\sum_{p}n_{\nu}(\vec{p})\,v_{\nu}(|\vec{k}-\vec{p}|),$$ (68)
while the second term in (67) is expressed through (66), yielding
$$M_{\nu}^{ind}(\vec{k})=-n_{\nu}\frac{|\alpha_{\nu}|^{2}}{\omega_{\nu 0}^{2}}-\frac{1}{2V}\sum_{p}n_{\nu}(\vec{p})\,J_{\nu}^{ind}(\vec{k},\vec{p}),$$ (69)
where
$$n_{\nu}\equiv\frac{1}{V}\sum_{p}n_{\nu}(\vec{p})$$ (70)
is the electron density in the $\nu$-phase. The chemical potential, as a function of temperature and density, is defined by the formula
$$N=-\left\langle\frac{\partial\bar{H}}{\partial\mu}\right\rangle=\sum_{\nu}w_{\nu}\int\langle\psi^{\dagger}_{\nu}(\vec{r})\psi_{\nu}(\vec{r})\rangle\,d\vec{r}$$ (71)
for the total number of electrons in the system.
Eq. (71), with notation (34) and (36), gives
$$N=\sum_{p}\left[w_{1}n_{1}(\vec{p})+w_{2}n_{2}(\vec{p})\right].$$ (72)
Using (70), we can reduce (72) to the equation
$$n\equiv\frac{N}{V}=w_{1}n_{1}+w_{2}n_{2},$$ (73)
connecting the average electron density in the system with the electron densities in each of the phases of which the system is composed and with the corresponding phase probabilities. Note that when the electron densities in both phases coincide, that is, when $n_{\nu}=n$, (73) reduces to (7).

Considering the gap equation (58), we pass to the thermodynamic limit with the standard replacement $\frac{1}{V}\sum_{p}\rightarrow\int\frac{d\vec{p}}{(2\pi)^{3}}$. The main contribution to the resulting integral on the right-hand side of (58) comes from momenta close to the Fermi surface, which is defined by the condition
$$\omega_{1}(\vec{k}_{F})=0.$$ (74)
Keeping this in mind, we can rewrite the gap equation (58) as
$$\Delta_{1}(\vec{k})=\frac{w_{1}}{2}\left[\frac{|\alpha_{1}|^{2}}{\omega_{10}^{2}-\omega_{1}^{2}(\vec{k})}-\bar{v}_{1}(\vec{k})\right]\int\frac{\Delta_{1}(\vec{p})}{\sqrt{\Delta_{1}^{2}(\vec{p})+\omega_{1}^{2}(\vec{p})}}\,\tanh\frac{w_{1}\sqrt{\Delta_{1}^{2}(\vec{p})+\omega_{1}^{2}(\vec{p})}}{2T}\,\frac{d\vec{p}}{(2\pi)^{3}},$$ (75)
where
$$\bar{v}_{1}(k)\equiv\lim_{p\rightarrow k_{F}}\frac{1}{4\pi}\int v_{1}(|\vec{k}-\vec{p}|)\,d\Omega(\vec{p})=\frac{\pi e^{2}}{kk_{F}}\ln\left|\frac{(k+k_{F})^{2}+\kappa_{1}^{2}}{(k-k_{F})^{2}+\kappa_{1}^{2}}\right|$$ (76)
is the screened Coulomb interaction averaged over spherical angles. A necessary condition for (75) to have a nonzero solution for the gap $\Delta_{1}(\vec{p})$ is
$$\frac{|\alpha_{1}|^{2}}{\omega_{10}^{2}-\omega_{1}^{2}(\vec{k})}-\bar{v}_{1}(k)>0,$$ (77)
which occurs in the vicinity of the Fermi surface, where $\omega_{1}^{2}(\vec{k})<\omega_{10}^{2}$. Actually, inequality (77) has almost the same form as the BCS criterion for superconductivity [1]. However, in our case the quantities entering (77) depend on the superconducting phase probability. This dependence is essential and, as we show below, can dramatically change the characteristics of superconductors with phase separation, making superconductivity possible even for values of the parameters that in a pure sample would imply its absence.

Equation (75) can be simplified by considering the value of the gap at the Fermi surface, i.e.
$$\Delta\equiv\lim_{k\rightarrow k_{F}}\Delta_{1}(\vec{k}),$$
and by introducing the level density per spin,
$$N_{1}(\omega)\equiv\frac{1}{(2\pi)^{3}}\int\frac{dS(\omega)}{|\nabla_{k}\omega_{1}(\vec{k})|_{\omega}},$$ (78)
where the integration is over the surface given by the equation $\omega_{1}(\vec{k})=\omega$.
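Both closed forms used above, the momentum-space potential (64) and its angular average (76), are elementary integrals and can be verified numerically. The Python sketch below does so with illustrative values ($e=1$, $\kappa_{1}=0.8$, $k_{F}=1$, $k=0.6$, arbitrary units, not parameters of any real material): a radial midpoint-rule integral for the Fourier transform of (63), and an average over the cosine of the angle between $\vec{k}$ and $\vec{p}$ for (76).

```python
import math

# Illustrative values in arbitrary units (not material parameters)
e2, kappa, kF, k = 1.0, 0.8, 1.0, 0.6

# 1) Fourier transform of the Yukawa form (63):
#    v(k) = (4 pi e^2 / k) Int_0^inf exp(-kappa r) sin(k r) dr = 4 pi e^2/(k^2 + kappa^2)
dr, r_max = 1e-3, 80.0
radial = sum(math.exp(-kappa * (i + 0.5) * dr) * math.sin(k * (i + 0.5) * dr)
             for i in range(int(r_max / dr))) * dr
v_numeric = 4 * math.pi * e2 / k * radial

def v(q2):
    # momentum-space screened Coulomb interaction (64) as a function of q^2
    return 4 * math.pi * e2 / (q2 + kappa * kappa)

# 2) Angular average (76): mean of v(|k - p|) over directions of p with |p| = kF;
#    with t = cos(theta), |k - p|^2 = k^2 + kF^2 - 2 k kF t
n = 100000
h = 2.0 / n
avg = 0.5 * sum(v(k * k + kF * kF - 2 * k * kF * (-1.0 + (i + 0.5) * h))
                for i in range(n)) * h
closed = (math.pi * e2 / (k * kF)) * math.log(
    ((k + kF) ** 2 + kappa ** 2) / ((k - kF) ** 2 + kappa ** 2))
```

Both numerical integrals agree with their closed forms to well below one part in ten thousand for these values.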
Then (75) yields
$$\int_{0}^{\omega_{10}}\frac{\lambda_{eff}}{\sqrt{\Delta^{2}+\omega^{2}}}\,\tanh\frac{w\sqrt{\Delta^{2}+\omega^{2}}}{2T}\,d\omega=1,$$ (79)
where the effective coupling parameter is
$$\lambda_{eff}\equiv wN_{1}(0)\left[\frac{|\alpha_{1}|^{2}}{\omega_{10}^{2}}-\bar{v}_{1}(k_{F})\right],$$ (80)
$N_{1}(0)$ being the level density (78) at the Fermi surface defined by (74). The criterion (77) implies that superconductivity requires the effective coupling parameter (80) to be positive. For metals one has $\kappa_{1}\sim k_{F}$ and $\bar{v}_{1}(k_{F})\sim\pi e^{2}/k_{F}^{2}$. Therefore, in this case we have the estimate
$$\frac{\bar{v}_{1}(k_{F})}{|\alpha_{1}|^{2}/\omega_{10}^{2}}\sim\frac{\omega_{10}^{2}}{\Omega_{p}^{2}}\qquad\left(\Omega_{p}^{2}\equiv\frac{4\pi}{M}\rho_{1}Z^{2}e^{2}\right),$$
in which $\Omega_{p}$ is the ion plasma frequency. The condition $\lambda_{eff}>0$ then means that $\omega_{10}<\Omega_{p}$. The latter inequality makes it possible to understand why structural instabilities and the related softening of the lattice, typical of high-temperature superconductors, favour the appearance of superconductivity. Lattice softening is associated with a decrease in the characteristic phonon frequency $\omega_{10}$; this makes it easier to satisfy the condition $\omega_{10}<\Omega_{p}$ and favours the onset of superconductivity. However, the latter occurs not in the whole volume of a sample, but only in those parts of it that are occupied by the superconducting phase. This is why lattice softening, enhancement of superconductivity and phase separation are phenomena intimately related to one another. This conclusion is supported as well by the consideration of a Hubbard-type model, for which it has been found [42] that superconducting correlations are enhanced in the phase-separation regions.
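At $T\rightarrow 0$ the hyperbolic tangent in (79) tends to unity and the gap equation integrates in closed form, giving $\Delta=\omega_{10}/\sinh(1/\lambda_{eff})$. The sketch below solves (79) at zero temperature numerically for hypothetical values of $\omega_{10}$ and $\lambda_{eff}$ (illustrative only) and compares with this closed form.

```python
import math

# Hypothetical cutoff (phonon) frequency and effective coupling, illustrative units
omega10, lam_eff = 1.0, 0.4

def lhs(delta, n=20000):
    # midpoint rule for lam_eff * Int_0^omega10 d(omega) / sqrt(Delta^2 + omega^2)
    h = omega10 / n
    return lam_eff * sum(h / math.hypot(delta, (i + 0.5) * h) for i in range(n))

# Solve lhs(Delta) = 1 by bisection; lhs decreases as Delta grows
lo, hi = 1e-8, omega10
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) > 1.0 else (lo, mid)
delta_numeric = 0.5 * (lo + hi)

# Closed form from Int_0^omega10 d(omega)/sqrt(Delta^2+omega^2) = asinh(omega10/Delta)
delta_closed = omega10 / math.sinh(1.0 / lam_eff)
```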
VI Critical Temperature

The superconducting phase transition occurs at the critical temperature $T_{c}$, where $\Delta=0$. The equation for $T_{c}$ follows from (79):
$$\lambda_{eff}\int_{0}^{\omega_{10}}\frac{1}{\omega}\tanh\frac{w\omega}{2T_{c}}\,d\omega=1.$$ (81)
Recall that here $w=w(T_{c})$ is the geometric probability of the superconducting phase, that is, the relative part of the volume occupied by this phase. The upper limit of the integral in (81) is the characteristic phonon frequency, which exhibits softening because of structural fluctuations and the related phase separation [18,61]. The simultaneous occurrence of lattice softening and phase separation can be clearly observed by using Mössbauer spectroscopy [62,63]. In high-temperature superconductors these effects lead to anomalous dips of the Mössbauer factor at the critical temperature [12,13,64]. The softening of phonon frequencies at $T_{c}$ can also be observed by other experimental methods such as infrared, Raman and neutron scattering [11]. The softening of the characteristic frequency $\omega_{10}$, as was shown in [61,63], can be expressed by the relation
$$\omega_{10}=w^{\varphi}\omega_{0},$$ (82)
in which $\omega_{0}$ is the characteristic phonon frequency of a pure superconductor, without phase separation, and the parameter $\varphi$ measures the intensity of softening. Weak softening corresponds to $\varphi=1/2$; moderate, to $\varphi=1$; and strong softening, to $\varphi=3/2$. For superconductors with phase separation the dependence of the critical temperature $T_{c}$ on the superconducting-phase probability $w$ is given by an intricate relation through (80)-(82). If one interprets the onset of superconductivity as the condensation of Cooper pairs, then one can call $w$ the concentration of the superconducting condensate.
No matter what it is called, the main point is that this quantity can be varied in high-temperature superconductors by changing their chemical structure, for example by doping. The experimentally measured dependence of $T_{c}$ on $w$ exhibits, for some high-temperature superconductors, quite unusual nonmonotonic behaviour, as reviewed in refs. [65,66]. For this reason it is especially interesting to study the dependence of $T_{c}$ on $w$. Formula (81) allows us to obtain two asymptotic expressions. One limit corresponds to the case
$$T_{c}\ll\frac{\omega_{0}}{\pi}w^{1+\varphi},\qquad\lambda_{eff}\ll 1,$$ (83)
when
$$T_{c}\simeq 1.134\,w^{1+\varphi}\omega_{0}\exp\left(-\frac{1}{\lambda_{eff}}\right).$$ (84)
Note that if we consider $\omega_{eff}\equiv w^{1+\varphi}\omega_{0}$ and $\lambda_{eff}$ as fitting parameters, then expression (84), as is known [67,68], describes the majority of low-temperature superconductors quite well. Another limiting case, opposite to (83), is
$$T_{c}\gg\frac{\omega_{0}}{\pi}w^{1+\varphi},\qquad\lambda_{eff}\gg 1.$$ (85)
Then (81) gives
$$T_{c}\simeq\frac{1}{2}w^{1+\varphi}\lambda_{eff}\,\omega_{0}.$$ (86)
There is a temptation to treat (85) as the strong-coupling limit. In doing this, we should be very cautious and not forget that $\lambda_{eff}$, given by (80), is an effective coupling parameter. It may thus happen that $\lambda_{eff}$ is large even if the bare coupling constant $\lambda$ is not. If $\lambda\gg 1$, then by a standard argument the Eliashberg equations imply that $T_{c}\sim\sqrt{\lambda}$, although the dependence $T_{c}\sim\lambda$, as is stated in [69], is also consistent with the strong-coupling limit of these equations.
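The weak-coupling asymptote (84) is easy to test against a direct numerical solution of (81). The sketch below uses hypothetical values of $w$ and $\lambda_{eff}$ chosen so that condition (83) holds; the factor $w^{\varphi}$ of (82) is absorbed into the cutoff, so the asymptote reads $1.134\,w\,\omega_{10}\exp(-1/\lambda_{eff})$.

```python
import math

# Hypothetical parameters: cutoff omega_10 (set to 1), phase probability w and
# a small effective coupling, so that the weak-coupling condition (83) holds
omega10, w, lam_eff = 1.0, 0.7, 0.25

def lhs(tc, n=20000):
    # midpoint rule for lam_eff * Int_0^omega10 tanh(w x / (2 Tc)) dx / x;
    # the integrand is finite (equal to w / (2 Tc)) at x -> 0
    h = omega10 / n
    return lam_eff * sum(math.tanh(w * (i + 0.5) * h / (2 * tc)) / ((i + 0.5) * h)
                         for i in range(n)) * h

# Solve lhs(Tc) = 1 by bisection; lhs decreases as Tc grows
lo, hi = 1e-4, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) > 1.0 else (lo, mid)
tc_numeric = 0.5 * (lo + hi)

# Asymptotic expression (84), with w^phi * omega_0 absorbed into omega10
tc_asymptotic = 1.134 * w * omega10 * math.exp(-1.0 / lam_eff)
```

For these values the two results agree to better than a percent, as expected when $T_{c}\ll\omega_{10}w/\pi$.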
The involvement of the superconducting-phase concentration in the definition of the effective coupling parameter (80) makes the applicability of such simple asymptotic expressions as (84) and (86) to high-temperature superconductors with phase separation quite limited. We therefore attempt an accurate analysis of the dependence of $T_{c}$ on $w$. First, we have to remember that the level density (78) also depends on $w$ through the effective spectrum (50) renormalized by the mass operator (67). Consider the isotropic case, leaving aside the very interesting, but separate, problem of van Hove singularities [7]. Then, differentiating the effective spectrum (50), we find
$$\lim_{k\rightarrow k_{F}}\left|\nabla_{k}\omega_{1}(\vec{k})\right|=\varepsilon_{F}^{\prime}+wM_{F}^{\prime},$$ (87)
where
$$\varepsilon_{F}^{\prime}\equiv\lim_{k\rightarrow k_{F}}\frac{\partial}{\partial k}\varepsilon_{1}(k),\qquad M_{F}^{\prime}\equiv\lim_{k\rightarrow k_{F}}\frac{\partial}{\partial k}M_{1}(k).$$
The level density (78) at the Fermi surface becomes
$$N_{1}(0)=\frac{N_{1}^{*}(0)}{|1+\gamma w|},$$ (88)
where we use the notation
$$N_{1}^{*}(0)\equiv\frac{k_{F}^{2}}{2\pi^{2}|\varepsilon_{F}^{\prime}|},\qquad\gamma\equiv\frac{M_{F}^{\prime}}{\varepsilon_{F}^{\prime}}.$$ (89)
To estimate the value of $M_{F}^{\prime}$, take into account that $n_{1}\approx k_{F}^{3}/3\pi^{2}$ and $\kappa_{1}\approx k_{F}$; then $M_{F}^{\prime}\approx 4e^{2}/3\pi$. For a parabolic band $\varepsilon_{F}^{\prime}\approx k_{F}/m$, so that for the parameter $\gamma$ in (89) we get $\gamma\sim a_{e}/\pi^{2}a_{B}$, where $a_{e}$ is the average distance between electrons and $a_{B}$ is the Bohr radius. In good conductors $a_{e}\sim a_{B}$, whence $\gamma\sim 0.1$. This means that in good metals the renormalization of the spectrum due to the mass operator is quite weak.
In bad conductors with low electron density one has $a_{e}\gg a_{B}$; therefore $\gamma$ may be of order $1$ or larger. This makes the role of the mass operator in renormalizing the electron spectrum, and consequently in influencing the level density (88), very important. Such a situation is directly relevant to high-temperature superconductors, which are, as is known, bad conductors with a low density of carriers. To distinguish the effects due to phase separation, we introduce the quantities
$$\lambda^{*}\equiv N_{1}^{*}(0)\frac{|\alpha_{1}|^{2}}{\omega_{0}^{2}},\qquad\mu^{*}\equiv N_{1}^{*}(0)\,\bar{v}_{1}(k_{F}),$$ (90)
which do not depend on the phase probability $w$. The first quantity in (90) is the electron-phonon coupling constant of a pure system without phase separation. The second quantity in (90) is the average intensity of the screened Coulomb interaction. With the notation (90), for the effective coupling parameter (80) we obtain
$$\lambda_{eff}=\frac{\lambda^{*}-\mu^{*}w^{2\varphi}}{w^{2\varphi-1}|1+\gamma w|}.$$ (91)
The criterion of superconductivity, $\lambda_{eff}>0$, leads because of (91) to
$$\lambda^{*}-\mu^{*}w^{2\varphi}>0.$$
In the case of a pure superconductor, when $w\equiv 1$, this inequality reduces to the standard condition $\lambda^{*}>\mu^{*}$, usually valid for low-temperature superconductors. When phase separation occurs in a superconductor, the condition $\lambda_{eff}>0$ is easier to satisfy. Indeed, when $0<w<1$, the inequality $\lambda^{*}>\mu^{*}w^{2\varphi}$ can be true even if $\lambda^{*}<\mu^{*}$. In this way, phase separation favours the appearance of superconductivity: a sample with phase separation can become superconducting even if a similar sample without phase separation cannot have superconducting properties.
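The mechanism is transparent in numbers. With hypothetical couplings $\lambda^{*}=0.3<\mu^{*}=0.5$ (and, for definiteness, $\varphi=1$ and $\gamma=\mu^{*}$; all values illustrative, not fitted to any material), formula (91) gives a negative $\lambda_{eff}$ for a pure sample but a positive one for a phase-separated sample:

```python
# Illustrative evaluation of the effective coupling (91) with phi = 1 and
# hypothetical lambda* = 0.3 < mu* = 0.5: a pure sample (w = 1) fails the
# BCS-like criterion, while a phase-separated one (0 < w < 1) can satisfy it.
lam_star, mu_star, gamma, phi = 0.3, 0.5, 0.5, 1

def lambda_eff(w):
    # effective coupling parameter (91)
    return (lam_star - mu_star * w ** (2 * phi)) / (w ** (2 * phi - 1) * abs(1 + gamma * w))

pure = lambda_eff(1.0)    # no phase separation: lambda_eff < 0, no superconductivity
mixed = lambda_eff(0.5)   # half the volume superconducting: lambda_eff > 0
```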
The superconducting critical temperature is given by (81), which becomes
$$\frac{\lambda^{*}-\mu^{*}w^{2\varphi}}{w^{2\varphi-1}|1+\gamma w|}\int_{0}^{w^{\varphi}\omega_{0}}\frac{1}{\omega}\tanh\frac{w\omega}{2T_{c}}\,d\omega=1.$$ (92)
As we see, it is not easy to analyse precisely the influence of phase separation on the critical temperature, that is, to solve explicitly for the dependence of $T_{c}$ on the phase probability $w$. To simplify the situation slightly, we may notice that since $\varepsilon_{F}^{\prime}\approx k_{F}/m$, $N_{1}^{*}(0)\approx mk_{F}/2\pi^{2}$ and $\bar{v}_{1}(k_{F})\approx\pi e^{2}/k_{F}^{2}$, we have $\mu^{*}\approx\gamma$. Therefore, for simplicity, we put $\gamma=\mu^{*}$. Of course, such a slight simplification does not help much, and to proceed further in determining the dependence of $T_{c}$ on $w$ we have to resort to a numerical analysis of (92). To this end, it is convenient to introduce the dimensionless critical temperature
$$t_{c}\equiv\frac{T_{c}}{\omega_{0}}.$$
Now we can rewrite (92) in the form
$$\frac{\lambda^{*}-\mu^{*}w^{2\varphi}}{w^{2\varphi-1}(1+\mu^{*}w)}\int_{0}^{1}\tanh\left(\frac{w^{1+\varphi}}{2t_{c}}x\right)\frac{dx}{x}=1.$$ (93)
From (93) we can immediately conclude that the behaviour of $t_{c}=t_{c}(w)$ can, in general, be nonmonotonic, since there are two points at which $t_{c}(w)$ tends to zero. The first point corresponds to $w\rightarrow 0$.
Then from (93) we obtain
$$t_{c}\simeq\frac{1}{2}w^{2-\varphi}\,\frac{\lambda^{*}-\mu^{*}w^{2\varphi}}{1+\mu^{*}w}.$$ (94)
The second case, when $t_{c}\rightarrow 0$, occurs for $w\rightarrow(\lambda^{*}/\mu^{*})^{1/2\varphi}$; then
$$t_{c}\simeq 1.134\,w^{1+\varphi}\exp\left\{-\frac{w^{2\varphi-1}(1+\mu^{*}w)}{\lambda^{*}-\mu^{*}w^{2\varphi}}\right\}.$$ (95)
Recall that the dependence of $t_{c}$ on $w$ is interesting to analyse because the superconducting-phase concentration $w$ can be measured and varied in experiments by changing the chemical structure of materials, for example by doping [6,11,65,66]. We made a detailed analysis of the function $t_{c}(w)$ by solving equation (93) numerically. Graphs of the resulting functions are presented in figs. 1-12. We did not try to fit any particular experimental situation; rather, we wanted to understand the whole picture of how the behaviour of $t_{c}(w)$ changes qualitatively with the parameters $\varphi$, $\mu^{*}$ and $\lambda^{*}$. We were pleasantly surprised by the wide variety of curves obtained. It is apparent from the accompanying graphs that, by the choice of the corresponding parameters, it is possible to obtain a reasonably good fit to any experimental curve. Figs. 1-4 correspond to the case of weak softening; figs. 5-8, to that of moderate softening; and figs. 9-12, to the case of strong softening. With the increase of the Coulomb parameter the function $t_{c}(w)$ indeed becomes nonmonotonic. The behaviour of $t_{c}(w)$ plotted in figs. 3, 4, 7 and 8 bears a striking similarity to the corresponding experimental curves for high-temperature cuprate superconductors (see [6,11,65,70] and references therein).
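The nonmonotonic shape framed by the asymptotics (94) and (95) already emerges from a crude numerical treatment of (93). The sketch below, a bisection-plus-midpoint-rule solver, uses hypothetical parameters $\varphi=1$, $\lambda^{*}=0.3$, $\mu^{*}=0.5$ (so that $t_{c}$ vanishes at $w\rightarrow 0$ and at $w\rightarrow(\lambda^{*}/\mu^{*})^{1/2}\approx 0.775$), not fitted to any experiment:

```python
import math

# Hypothetical parameters with lambda* < mu*, so a pure sample (w = 1)
# would not superconduct; t_c(w) then vanishes at both ends of the window
lam_star, mu_star, phi = 0.3, 0.5, 1

def integral(a, n=4000):
    # Int_0^1 tanh(a x) / x dx by the midpoint rule (integrand is finite at 0)
    h = 1.0 / n
    return sum(math.tanh(a * (i + 0.5) * h) / ((i + 0.5) * h) for i in range(n)) * h

def t_c(w):
    # solve (93) for the reduced critical temperature t_c at given w
    lam_eff = (lam_star - mu_star * w ** (2 * phi)) / (w ** (2 * phi - 1) * (1 + mu_star * w))
    lo, hi = 1e-6, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if lam_eff * integral(w ** (1 + phi) / (2 * mid)) > 1.0:
            lo = mid    # left-hand side of (93) still exceeds 1: t_c is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

tc_low, tc_mid, tc_high = t_c(0.1), t_c(0.3), t_c(0.6)
```

For these parameters the curve rises from small $w$, peaks near $w\approx 0.3$, and falls again toward the upper edge of the superconductivity window: the nonmonotonic behaviour discussed in the text.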
The coincidence becomes practically complete if we redraw our figures in the relative coordinates $\bar{T}\equiv t_{c}/t_{c}^{max}$, $\sigma\equiv w/w^{max}$, as in [65,66], where $(t_{c}^{max},w^{max})$ denotes the maximum point of the curve considered. It is clear from the preceding analysis, as illustrated in our graphs, that, for fixed $\varphi$, $\mu^{*}$ and $\lambda^{*}$, the value of the reduced critical temperature $T_{c}$ depends crucially on the parameter $w$. The operational meaning of this parameter differs according to whether the system is (1) stable or (2) metastable. Notice that while $w$ occurs explicitly on the left-hand side of equation (12), it also occurs implicitly, in a complicated manner, on the right-hand side as a result of the renormalization of the Hamiltonian (1). Thus, for a stable system satisfying conditions (9) and (13), we can think of $w$ as determined self-consistently by equation (12) once all characteristics of the system, such as chemical composition, particle masses, interaction potentials, temperature, density and pressure, are given. Note that all of these characteristics are necessary for $w$ to be uniquely determined; in particular, the full set of interactions must be taken into account. In contrast to this, following common practice, in our model the gap equation (79), and consequently equation (93) for the critical temperature, take into account only interactions specified on the Fermi surface. Essential to the argument of the present paper is the conviction that it would not be reasonable to treat superconductivity in a heterophase system with a model in which $w$ is determined by parameters defined merely on the Fermi surface. For instance, parameters characterizing the ground state would be indispensable.
The equations for phase probabilities always contain more characteristic constants than the equations for an order parameter [18]. It is therefore permissible to contemplate the possibility of holding some parameters, such as $\;\lambda^{*}\;$ and $\;\mu^{*}\;$, fixed while $\;w\;$ varies as a result of changing other parameters such as the chemical composition. On the other hand, for a metastable system for which (13) does not hold even though thermal and mechanical stability are preserved, the fraction $\;w\;$ is not necessarily determined by equation (12), but might be arbitrary, depending on the preparation of the sample. This possibility should not be overlooked, since many high-$T_{c}$ superconductors are metastable. VII Conclusion We have developed an approach for describing superconductors that takes into account three mutually interrelated factors: (i) repulsive interactions, (ii) lattice softening and (iii) phase separation. We have deliberately limited ourselves to the use of only commonly accepted approximations, in order to emphasize that our results are not artifacts of technical tricks but direct consequences of the physical reasons offered. The main results can be summarized as follows: (i) A necessary condition for phase separation in a superconductor is the presence of repulsive interactions. (ii) Phase separation favours superconductivity, making it possible in a heterophase sample even if it were impossible in a pure sample. (iii) The superconducting critical temperature as a function of the relative concentration of the superconducting phase can display the nonmonotonic behaviour typical of high-temperature cuprate superconductors. It should be a straightforward matter to adapt the basic approach of this paper to other models of superconductors by using alternative approximation methods. Acknowledgement We appreciate financial support from the Natural Sciences and Engineering Research Council of Canada.
References [1] J.Bardeen, L.N.Cooper and J.R.Schrieffer, Phys. Rev. 108 (1957) 1175. [2] N.N.Bogolubov, V.V.Tolmachev and D.V.Shirkov, Fortschr. Phys. 6 (1958) 605. [3] W.L.McMillan, Phys. Rev. 167 (1968) 331. [4] A.J.Leggett, in: Low Temperature Physics, eds. M.J.Hock and R.H.Lemmer (Springer, Berlin, 1991) p.1. [5] W.E.Pickett, Rev. Mod. Phys. 61 (1989) 433. [6] N.Tsuda, K.Nasu, A.Yanase and K.Siratori, Electronic Conduction in Oxides (Springer, Berlin, 1991). [7] R.S.Markiewicz, Physica C 217 (1993) 381. [8] B.T.Matthias, T.H.Geballe and V.B.Compton, Rev. Mod. Phys. 35 (1963) 1. [9] L.R.Testardi, Rev. Mod. Phys. 47 (1975) 637. [10] J.G.Bednorz and K.A.Muller, Z.Phys. B 64 (1986) 189. [11] J.C.Phillips, Physics of High - $T_{c}\;$ Superconductors (Academic, Boston, 1989). [12] V.M.Cherepanov, M.A.Chuev, S.S.Yakimov, V.Y.Goncharov and S.A.Smirnov, Pisma JETP 47 (1988) 354. [13] V.M.Cherepanov, M.A.Chuev, S.S.Yakimov and V.Y.Goncharov, Hyperfine Interact. 55 (1990) 1257. [14] R.Micnas, J.Ranninger and S.Robaszkiewisz, Rev. Mod. Phys. 62 (1990) 113. [15] J.Axe, A.Moudden, D.Hohlwein, D.Cox, K.Mohanty, A.Moodenbaugh and Y.Xu, Phys. Rev. Lett. 62 (1989) 2751. [16] E.Baggio Saitovich, I.Souza Azevedo and R.Scorzelli, Hyperfine Interact. 50 (1989) 521. [17] E.Baggio Saitovich, R.Scorzelli, I.Souza Azevedo and H.Micklitz, Phys. Rev. B 41 (1990) 2103. [18] V.I.Yukalov, Phys. Rep. 208 (1991) 395. [19] U.Buchenau and H.Schober, IFF Bulletin 38 (1991) 4. [20] H.R.Schober, Physica A 201 (1993) 14. [21] Y.L.Khait, Phys. Rep. 99 (1983) 237. [22] Y.L.Khait, Z. Phys. B 71 (1988) 7. [23] V.J.Emery, S.A.Kivelson and H.Q.Linn, Phys. Rev. Lett. 64 (1990) 475. [24] V.I.Yukalov, Int. J. Mod. Phys. B 6 (1992) 91. [25] J.Tholence, B.Souletie, O.Laborde, J.Capponi, C.Chaillout and M.Marezio, Phys. Lett. A 184 (1994) 215. [26] B.G.Levi, Physics Today 47 (1994) 17. [27] A.Lappas, J.Osborne, K.Prassides, A.Amato, R.Feyerhem, F.Gydax and A.Schenk, Physica B 194 (1994) 353. 
[28] V.I.Yukalov, Phys. Rev. B 32 (1985) 436. [29] V.I.Yukalov, Physica A 136 (1986) 575. [30] V.I.Yukalov, Physica A 141 (1987) 352. [31] V.I.Yukalov, Phys. Lett. A 125 (1987) 95. [32] A.J.Coleman and D.O’Shea, Phys. Rev. B 22 (1980) 3428. [33] M.C.Gutzwiller, Phys. Rev. 125 (1962) 1455. [34] A.J.Coleman, Phys. Rev. Lett. 13 (1964) 406. [35] C.N.Yang, Rev. Mod. Phys. 34 (1962) 694. [36] A.J.Coleman, Rev. Mod. Phys. 35 (1963) 668. [37] A.J.Coleman, J. Math. Phys. 6 (1965) 1425. [38] A.J.Coleman, in: Quantum Statistics and Many - Body Problem, ed. S.B.Trickey, W.P.Kirk and J.W.Duffy (Plenum, New York, 1975) p.239. [39] A.J.Coleman, in: Force Concept in Chemistry, ed. B.Deb (Van Nostrand, New York, 1981) p.418. [40] A.J.Coleman and V.I.Yukalov, Mod. Phys. Lett. B 5 (1991) 1679. [41] A.J.Coleman and V.I.Yukalov, Nuovo Cimento B 108 (1993) 1377. [42] W.Barford and E.Gagliano, Physica B 194 (1994) 1455. [43] N.N.Bogolubov, Lectures on Quantum Statistics (Gordon and Breach, New York, 1970). [44] A.L.Goodman, Nucl. Phys. A 352 (1981) 45. [45] K.Tanabe and K.Sugawara - Tanabe, Nucl. Phys. A 390 (1982) 385. [46] A.L.Goodman, Nucl. Phys. A 402 (1983) 189. [47] J.M.Blatt, Prog. Theor. Phys. 24 (1960) 252. [48] M.Baranger, Phys. Rev. 130 (1963) 1244. [49] M.D.Girardeau, Phys. Rev. 140 (1965) 1139. [50] M.A.Ozaki, J. Math. Phys. 26 (1985) 1521. [51] A.L.Goodman, Nucl. Phys. A 352 (1981) 30. [52] K.Tanabe, K.Sugawara - Tanabe and H.Mang, Nucl. Phys. A 357 (1981) 20. [53] A.J.Coleman, Can. J. Phys. 45 (1967) 1271. [54] J.M.Ziman, Principles of the Theory of Solids (Cambridge University, Cambridge, 1972). [55] D.Pines, Elementary Excitations in Solids (Benjamin, New York, 1963). [56] L.Reining and R. Del Sole, Phys. Rev. Lett. 67 (1991) 3816. [57] R.Daling, W. van Haeringen and B.Farid, Phys. Rev. B 44 (1991) 2952. [58] L.Reining and R.Del Sole, Surf. Sci. 242 (1991) 222. [59] F.Bechstedt, R.Del Sole, G.Cappellini and L.Reining, Solid State Commun. 84 (1992) 765. 
[60] G.Cappellini, R.Del Sole, L.Reining and F.Bechstedt, Phys. Rev. B 47 (1993) 9892. [61] V.I.Yukalov, Physica A 155 (1989) 519. [62] V.I.Yukalov, in: Abstracts of International Conference on the Applications of the Mössbauer Effect (Budapest, 1989), p.927. [63] V.I.Yukalov, Solid State Commun. 69 (1989) 393. [64] V.I.Yukalov, in: Abstracts of Latin American Conference on the Applications of the Mössbauer Effect (Havana, 1990) p.32. [65] T.Schneider and H.Keller, Phys. Rev. Lett. 69 (1992) 3374. [66] L.J.Dunne, Physica C 223 (1994) 291. [67] M.Surma, Phys. Status Solidi B 116 (1983) 465. [68] M.Surma, Phys. Status Solidi B 121 (1984) 209. [69] S.V.Vonsovsky and M.S.Svirsky, Sverkhprovod. Fiz. Khim. Tekhn. 5 (1992) 1957. [70] F.Torrance, Y.Tokura, A.Nazzal, T.Huang, S.Parkin, D.Keane, S.LaPlaca, P.Horn and G.Held, Phys. Rev. B 40 (1989) 8872. Figure Captions Fig.1. The superconducting critical temperature as a function of the superconducting phase concentration for the parameters $\;\varphi=0.5,\;\mu^{*}=0.1\;$ and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.2. The same as in fig.1, but for the parameters $\;\varphi=0.5,\;\mu^{*}=1\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.3. The same as in fig.1 for the parameters $\;\varphi=0.5,\;\mu^{*}=5\;$, and $\;\lambda^{*}=1\;$ (hardly visible lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.4. The same as in fig.1 for the parameters $\;\varphi=0.5,\;\mu^{*}=10\;$, and $\;\lambda^{*}=5\;$ (lower curve) and $\;\lambda^{*}=10\;$ (upper curve). The curve corresponding to $\;\lambda^{*}=1\;$ in this case is invisible. Fig.5. The same as in fig.1 for the parameters $\;\varphi=1,\;\mu^{*}=0.1\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.6.
The same as in fig.1 for the parameters $\;\varphi=1,\;\mu^{*}=1\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.7. The same for the parameters $\;\varphi=1,\;\mu^{*}=5\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.8. The same for the parameters $\;\varphi=1,\;\mu^{*}=10\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.9. The same for the parameters $\;\varphi=1.5,\;\mu^{*}=0.1\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.10. The same for the parameters $\;\varphi=1.5,\;\mu^{*}=1\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.11. The same for the parameters $\;\varphi=1.5,\;\mu^{*}=5\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve). Fig.12. The same for the parameters $\;\varphi=1.5,\;\mu^{*}=10\;$, and $\;\lambda^{*}=1\;$ (lower curve), $\;\lambda^{*}=5\;$ (middle curve) and $\;\lambda^{*}=10\;$ (upper curve).
Scaling Navier-Stokes Equation in Nanotubes Mihail Gărăjeu    Henri Gouin Aix-Marseille Université, CNRS, Centrale Marseille, M2P2 UMR 7340, 13451, Marseille, France    Giuseppe Saccomandi Dipartimento di Ingegneria Industriale, Università degli Studi di Perugia, 06125 Perugia, Italy. [email protected]; [email protected]; [email protected] Abstract On one hand, classical Monte Carlo and molecular dynamics (MD) simulations have been very useful in the study of liquids in nanotubes, enabling a wide variety of properties to be calculated in intuitive agreement with experiments. On the other hand, recent studies indicate that continuum theory breaks down only at the nanometer level; consequently, flows through nanotubes can still be investigated with the Navier-Stokes equations if suitable boundary conditions are taken into account. The aim of this paper is to study the statics and dynamics of liquids in nanotubes by using methods of non-linear continuum mechanics. We assume that the nanotube is filled with only a liquid phase; by using a second gradient theory, the static profile of the liquid density in the tube is analytically obtained and compared with the profile issued from molecular dynamics simulation. Inside the tube there are two domains: a thin layer near the solid wall, where the liquid density is non-uniform, and a central core, where the liquid density is uniform. In the dynamic case a closed-form analytic solution no longer seems possible, but a scaling argument shows that two distinct domains connected at their frontiers still exist in the tube. The thin inhomogeneous layer near the solid wall can be interpreted in relation with the Navier length when the liquid slips on the boundary, as expected from experiments and molecular dynamics calculations.
Navier length; nanotube; thin film; scaling Navier-Stokes pacs: 80.50.Rp; 62.25.-g; 68.60.Bs; 47.10.ad Physics of Fluids 25, 082003 (2013); doi: 10.1063/1.4818159 I Introduction Nanofluidics is the study of the behavior of fluids confined to structures of nanometer characteristic dimensions (typically 1-100 nm). The possibility of observing liquids flowing at nano and micro scales, for example in carbon nanotubes Iijima ; Harris ; Tabeling , by using sophisticated experiments and complex molecular simulations based on Lennard-Jones forces, reveals new behaviors that are often surprising and essentially different from those usually observed at the macroscopic scale Ball ; Rafii ; Bonthuis . For example, Majumder et al. Majumder performed some interesting experiments and estimated that, in nanotubes with pores of 7 nm diameter, the flow rates are four to five orders of magnitude faster than conventional fluid flow would predict and that, contrary to predictions based on classical hydrodynamics, the flow rate does not decrease with increasing viscosity. Sinha et al. Mattia1 , in another set of experiments, indicate that in carbon nanotubes ranging from 2 to 7 nm in diameter, fluids flow with velocities up to $10^{5}$ times faster than predicted by classical fluid dynamics calculations. The critical dimension below which confinement in nanotubes affects fluid transport is currently debated. For example, if we consider water molecules between two flat, hydrophobic surfaces, it has been calculated Mattia2 that, at room temperature and atmospheric pressure, this critical dimension is around $100$ nm. Conversely, some experiments seem to show that the continuum approximation breaks down below $10$ nm in the case of water, whereas experiments on capillary filling of molten metals in $0.6-1.2$ nm channels in zeolites show that the threshold for confinement effects is closer to $1$ nm Mattia2 .
These inconsistencies may be explained by the fact that there is actually a severe computational limitation to molecular simulation, that the smooth liquid-gas interface disappears in tubes with diameter less than $8-10$ nm, and that therefore anomalous behavior of water may be observed in experiments with carbon nanotubes. Indeed, at this nano-size, the surface chemistry and structure of nanotubes must be controlled with high precision to control the flow rate and the interaction of fluid components Mattia2 ; Thomas . Moreover, in the framework of molecular dynamics, it is problematic to apply, in a simple and direct way, the proper boundary conditions necessary to generate the fluid flow. This is true especially when we consider pressure-driven flow Nicholls . Various methods exist to investigate fluid transport in molecular dynamics. Examples are the gravitational field method, where an artificial gravitational force (much greater than the earth's gravitational pull) is introduced, or the channel moving model, a method that triggers the flow with the viscous shear forces applied to the fluid by two moving channel walls. Despite this indeterminacy in the literature, a relevant number of experimental studies lead to the conclusion that the classical Navier-Stokes equations are still valid at the nanoscale (see Bocquet and Charlaix Bocquet and included references). The critical threshold for the applicability of continuum hydrodynamics, investigated with molecular simulations and experiments, may be set around $1$ nm. This value can be obtained numerically because, down to the limit of validity of the continuum equations, the viscosity remains quantitatively equal to its bulk value. A typical correlation time for the stress-stress correlation function is the picosecond, $\tau_{\sigma}=10^{-12}$ s, and the kinematic viscosity is $\nu=10^{-6}\text{m}^{2}\text{s}^{-1}$; consequently we obtain for water a viscous length scale $\ell_{c}=\sqrt{\nu\tau_{\sigma}}\approx 1$ nm.
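This closing order-of-magnitude estimate can be checked directly, using exactly the values quoted in the text:

```python
import math

# bulk water values quoted in the text
tau_sigma = 1e-12  # stress-stress correlation time, s (a picosecond)
nu = 1e-6          # kinematic viscosity, m^2/s

ell_c = math.sqrt(nu * tau_sigma)  # viscous length scale
print(f"ell_c = {ell_c:.1e} m")    # 1.0e-09 m, i.e. about 1 nm
```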
This observation seems to indicate, at least for water, that an unexpected nano-metric characteristic length scale naturally emerges as the lower bound for the validity of the notion of viscosity. The important conclusion from the current literature is that, for water under normal physicochemical conditions, the Navier-Stokes equation remains valid in nano-channels down to typically $1-2$ nm, and the discrepancy between molecular dynamics simulations and experiments seems to be induced by the interaction of the fluid with the wall, i.e. by the boundary conditions. The evidence for this conclusion is given by the measurements and the molecular dynamics simulations of the density profile, which clearly fluctuates in the vicinity of a solid wall. Therefore the main problem is not whether the continuum hypothesis has to be abandoned, but where the correct boundary conditions come from. Since van der Waals at the end of the 19th century, fluid inhomogeneities in liquid-vapor interfaces have been represented using continuous models that account for a volume energy depending on the spatial density derivatives Dunn ; Seppecher ; widom ; Kazm ; Onuki . Nevertheless, the corresponding square-gradient functional is unable to model repulsive force contributions and misses the dominant damped oscillatory packing structure of liquid interlayers near a substrate wall Chernov . Furthermore, the decay lengths are correct only close to the liquid-vapor critical point, where the damped oscillatory structure is subdominant Evans . In mean field theory, weighted density-functionals have been used to explicitly demonstrate the dominance of this structural contribution in van der Waals thin films and to take account of long-wavelength capillary-wave fluctuations, as in papers that renormalize the square-gradient functional to include capillary wave fluctuations Fischer .
In contrast, fluctuations strongly damp out oscillatory structure, and it is mainly for this reason that van der Waals' original prediction of a hyperbolic tangent profile is so close to simulations and experiments Ono ; Rowlinson . It is possible to adjust, in a phenomenological way, this state of affairs by considering the approach of Cahn in his celebrated paper studying wetting near a critical point Cahn0 . This approach may be justified via a suitable asymptotic expression based on an approximation of hard-sphere molecules and London potentials for liquid-liquid and solid-liquid interactions Gouin 2 : in this way, we took account of the power-law behavior which is dominant in a thin liquid film in contact with a solid. A similar situation may also be considered for the flow of the fluids and not only for their densities. The amended boundary conditions at a solid surface in the nano-scale framework must introduce a new length, the so-called Navier length or slip length Blake ; Landau ; Navier : a length relating the tangential velocity to the shear rate at the wall. Liquid slip is essential in nano-fluidic systems, as shrinking channel size leads to a dramatic increase of flow resistance and thus to high energy consumption for driving non-slip flow MH ; Ma . The aim of this note is to justify the boundary condition equations of nano-fluid mechanics using a simple mesoscopic approach. Our basic idea has been suggested by some experimental work regarding the measurement of the density of water in narrow pores Ball ; Bear . Such experiments show that at the nanoscale the liquid must be compressible and inhomogeneous in a very narrow layer near the solid wall. In our opinion this layer is connected with the Navier length. To support this idea we consider a nanotube made up of a cylindrical hollow tube whose diameter is of some nanometers.
The nanotube is immersed in a liquid that also fills its interior, and to take account of the compressibility of the liquid, we use a second gradient theory in which the fluid is modeled by a van der Waals fluid for which the surdeformations are taken into account van der Waals ; Korteweg ; Cahn ; Isola . Therefore, we use a continuum theory in which the volume energy of the liquid is a function not only of the density but also of the gradient of density. The associated mathematical model may be obtained via a molecular mean field theory Gouin 1 ; Gouin 3 , via the axiomatic theory of the thermomechanics of continua Forest , or by considering maximization of the entropy production Raja1 ; Raja2 . In the following, the ideas of the van der Waals square-gradient functional are used together with a condition at the wall taking account of the fluid density in its immediate proximity Gouin 2 ; Gouin 3 . By using this continuum approach, we provide a bridge between classical models of fluid mechanics and molecular simulations: a framework for developing simple closed-form analytical results of technical importance. The plan of the paper is the following. Section 2 is devoted to the basic equations of capillary fluids using a second gradient theory. In Section 3 we consider the static problem to obtain the density profile of the liquid in the nanotube and its comparison with molecular dynamics simulation. In Section 4 we consider a dimensional analysis of the Navier-Stokes equations in cylindrical coordinates and we show that the inhomogeneous character of the governing equations introduces in a natural way the Navier length. The last section is devoted to remarks and conclusions. II Capillary fluids II.1 Basic equations Let us consider a fluid in a nanotube.
In the immediate vicinity of the solid wall of the nanotube, the intermolecular forces are dominant and the density profile of the confined fluid is inhomogeneous; in the case of a small variation of density, the intermolecular forces induce a sharp variation of the gradient of density at the wall. In this framework the specific fluid internal energy $\varepsilon$, which is usually a function only of the density $\rho$ and the specific entropy $s$, must also take account of the gradient of density $\mathop{\rm grad}\rho$. The second gradient model Germain is a theory of continua based on constitutive equations depending on the gradient of the density. In this case, restricting our attention first to statics, we start from a specific internal energy density in the form $$\varepsilon=f(\rho,s,\beta)\quad\mathrm{with}\quad\beta=(\mathop{\rm grad}\rho)^{2},$$ and in such a way the stress tensor is Gouin 1 $$\mathbf{\sigma}=-p\,\mbox{\boldmath{$I$}}-\lambda\,(\mathop{\rm grad}\rho)\otimes(\mathop{\rm grad}\rho)\equiv-p\,\mbox{\boldmath{$I$}}-\lambda\,(\mathop{\rm grad}\rho)(\mathop{\rm grad}\rho)^{T},$$ (1) where $\lambda\equiv 2\,\rho\,\varepsilon_{\beta}^{\prime}\ ,\ p\equiv\rho^{2}\varepsilon_{\rho}^{\prime}-\rho\,\mathop{\rm div}(\lambda\,\mathop{\rm grad}\rho)$ is the spherical part of the stress tensor, $I$ is the identity tensor and $\,{}^{T}$ denotes the transposition. The scalar $\lambda$ - called the surdeformation coefficient of the fluid - accounts for surdeformation effects and generally depends on $\rho$, $s$ and $\beta$.
By using kinetic theory, Rowlinson and Widom Rowlinson obtained an analogous result, but with $\lambda$ constant at a given temperature $T$, and the specific energy $\varepsilon$ reads $$\rho\,\varepsilon(\rho,s,\beta)=\rho\,\alpha(\rho,s)+\frac{\lambda}{2}\,\beta,$$ where $\alpha(\rho,s)$ is the specific internal energy of the classical compressible fluid of pressure $P\equiv\rho^{2}\alpha_{\rho}^{\prime}$ and temperature $T\equiv\alpha_{s}^{\prime}$. Consequently, in Eq. (1), $$p=P-\lambda\left(\frac{\beta}{2}+\rho\,\Delta\rho\right)\quad{\rm and}$$ $$\mathbf{\sigma}=-P\,\mbox{\boldmath{$I$}}+\lambda\,\left(\left(\frac{1}{2}\,(\mathop{\rm grad}\rho)^{2}+\rho\,\Delta\rho\right)\,\mbox{\boldmath{$I$}}-(\mathop{\rm grad}\rho)(\mathop{\rm grad}\rho)^{T}\right),$$ where $\Delta$ denotes the Laplacian operator. Because a convex equation of state is not able to connect the different bulks associated with a fluid interface, many authors use the van der Waals equation of state or other similar laws for the thermodynamical pressure $P$. In fact, we only consider the liquid bulk, and the thermodynamical pressure $P$ is expanded near the bulk density. The equation of motion is $$\rho\,\mathbf{a}=\mathop{\rm div}\mathbf{\sigma}-\rho\mathop{\rm grad}\Omega,$$ (2) where $\Omega$ is the extraneous force potential. Let us denote $\omega=\Omega-\lambda\,\Delta\rho$; then the equation of motion yields Gouin 1 $$\rho\,\mathbf{a}+\mathop{\rm grad}P+\rho\mathop{\rm grad}\omega=0.$$ This relation is similar to that of the perfect fluid case, but the term $\omega$ involves all capillarity effects.
By neglecting the extraneous force potential, we obtain $$\rho\,\mathbf{a}+\mathop{\rm grad}P-\lambda\,\rho\,\mathop{\rm grad}\Delta\rho=0.$$ (3) The equation of motion (3) can also be written in the form Gouin 4 $$\mathbf{a}=T\mathop{\rm grad}s-\mathop{\rm grad}H,$$ and if $T$ is constant, $$\mathbf{a}+\mathop{\rm grad}\pi=0\,,$$ (4) with the potentials $$H=\varepsilon+\frac{p}{\rho}\equiv h-\lambda\,\Delta\rho\quad{\rm and}\quad\pi=H-T\,s\equiv\mu-\lambda\,\Delta\rho$$ being the generalized enthalpy and generalized chemical potential of the capillary fluid, respectively, where $$h=\alpha+\frac{P}{\rho}\quad{\rm and}\quad\mu=\alpha+\frac{P}{\rho}-T\,s$$ are the enthalpy and the chemical potential of the classical compressible fluid, respectively Gouin 4 . In the case of viscous fluids, the equation of motion takes account of the viscous stress tensor, which is classically given by $$\mbox{\boldmath{$\mathbf{\sigma}$}}_{v}=\eta(\mathtt{tr}\,\mbox{\boldmath{$D$}})\mbox{\boldmath{$I$}}+2\,\kappa\,\mbox{\boldmath{$D$}},$$ where $\eta$ and $\kappa$ are viscosity coefficients, assumed to be constant, and $D$ is the deformation tensor, the symmetric part of the gradient of the velocity field Sli . It would be coherent to add terms accounting for the influence of higher-order derivatives of the velocity field, but the over-deformation only comes from the density. In fact, as discussed in the introduction, the Navier-Stokes equations correctly take account of the viscous behavior without higher-order derivatives of the velocity field. Equation (2) is modified as $\rho\,\mathbf{a}=\mathop{\rm div}(\mathbf{\sigma}+\mathbf{\sigma}_{v})$ and for viscous liquids, Eq.
(3) becomes $$\rho\,\mathbf{a}+\mathop{\rm grad}P-\lambda\,\rho\mathop{\rm grad}\,\Delta\rho-\mathop{\rm div}\mathbf{\sigma}_{v}=0.$$ (5) II.2 Boundary conditions The forces acting between liquid and solid range over a few nanometers but can be simply described by a special surface energy. For a solid wall, the total surface energy $\varphi$ at the wall is expressed as De Gennes2 : $$\varphi(\rho_{{}_{S}})=-\gamma_{1}\rho_{{}_{S}}+\frac{1}{2}\,\gamma_{2}\,\rho_{{}_{S}}^{2}.$$ (6) Here $\rho_{{}_{S}}$ denotes the limit value of the liquid density at the solid wall; the constants $\gamma_{1}$, $\gamma_{2}$ and $\lambda$ are positive and can be obtained by the mean field approximation in molecular theory Gouin 2 . The boundary condition for the liquid density at the solid wall $(S)$ is associated with the free surface energy (6) and was calculated in Gouin 3 : $$\lambda\left(\frac{d\rho}{dn}\right)_{|_{S}}+\varphi^{\prime}(\rho_{{}_{S}})\ =0,$$ (7) where $\displaystyle\frac{d}{dn}\ $ means the derivative along the direction of the external normal $\textbf{n}\,$ to the fluid. This condition corresponds to an embedding effect for the density of the fluid which is not taken into account in classical hydrodynamics. The aim of the present note is to show that the boundary condition (7) introduces a nano-boundary layer in the tube. The byproduct of this layer is the presence of a slip velocity observed at the micro scale, even when the classical no-slip boundary condition is imposed on the velocity field at the wall. II.3 The chemical potential in the liquid phase Since $\mu$ is defined up to an additive constant, we denote by $\mu_{{}_{0}}(\rho)$ the chemical potential of the fluid for the liquid-vapor plane interface, such that $$\mu_{{}_{0}}(\rho_{l})=0,$$ where $\rho_{l}$ is the liquid density in the liquid bulk corresponding to the plane liquid-vapor interface at a given temperature $T$.
To the liquid bulk of density $\rho_{l_{b}}\neq\rho_{l}$ - the density $\rho_{l_{b}}$ does not correspond to a plane liquid-vapor interface but to a mother liquid bulk associated with a droplet or a bubble, and does not verify the Maxwell rule of equal areas corresponding to plane liquid-vapor interfaces Derjaguin - we associate $\mu_{l_{b}}(\rho)\equiv\mu_{{}_{0}}(\rho)-\mu_{{}_{0}}(\rho_{l_{b}})$, the chemical potential for the mother liquid bulk of density $\rho_{l_{b}}$. The thermodynamical potential $\mu_{l_{b}}$ can be expanded to first order near the liquid bulk of density $\rho_{l_{b}}$: $$\mu_{l_{b}}(\rho)=\frac{c_{l}^{2}}{\rho_{l}}\left(\rho-\rho_{l_{b}}\right),$$ where $c_{l}$ is the isothermal sound velocity in the liquid bulk of density $\rho_{l}$ Gouin 7 . Similarly, the thermodynamical pressure is expanded as $$P=P_{l}+c_{l}^{2}\left(\rho-\rho_{l}\right),$$ (8) where $P_{l}$ is the thermodynamical pressure in the liquid bulk of density $\rho_{l}$. III Liquid density in a nanotube at equilibrium A nanotube is represented by a hollow cylinder of length $L$ and small diameter $d=2R$ ($d/L\ll 1$). In Subsection IIIA, $d$ ranges from 2 to 100 nanometers and $L$ is of the order of some microns. III.1 Profile of density by using the continuum approach We consider solid walls with a large thickness with regard to molecular dimensions, such that the surface energy verifies an expression of the form (6). At equilibrium ($\mathbf{a}=0$), far from the nanotube tips and neglecting the external forces ($\pi=\mu_{{}_{0}}-\lambda\,\Delta\rho$), Eq. (4) implies that the profile of density is a solution of the differential equation $$\lambda\,\Delta\rho=\mu_{{}_{0}}(\rho)-c,$$ where $c=\mu_{{}_{0}}(\rho_{l_{b}})$ is an additive constant associated with the density value $\rho_{l_{b}}$ in the mother bulk outside the nanotube Derjaguin . We consider the case when only the liquid fills up the nanotube.
The profile of density is given by the differential equation $$\lambda\,\left(u_{rr}+\frac{1}{r}\,u_{r}\right)-\frac{c_{l}^{2}}{\rho_{l}}\ u=0,\qquad\mathrm{with}\quad u=\rho-\rho_{l_{b}},$$ (9) where, in cylindrical coordinates, $r$ denotes the radial coordinate. The reference length is $$\delta_{l}=\sqrt{\frac{\lambda\,\rho_{l}}{{c_{l}}^{2}}}\,.$$ We denote by $x$ the dimensionless variable such that $r=\delta_{l}\,x$. Equation (9) reads $$u_{xx}+\frac{1}{x}\,u_{x}-\,u=0.$$ (10) The solutions of Eq. (10) in the classical expansion form $u=\sum_{n=0}^{\infty}a_{n}x^{n}$ yield $$\sum_{n=2}^{\infty}n^{2}\,a_{n}\,x^{n-2}-a_{n-2}\,x^{n-2}=0\quad\Longrightarrow\quad n^{2}\,a_{n}=a_{n-2}\,.$$ Due to the symmetry at $x=0$, the odd terms are null and consequently $$u=a_{{}_{0}}\,\sum_{p=0}^{\infty}\ \frac{1}{4^{p}\,(p\,!)^{2}}\ x^{2p}\,.$$ The series has an infinite radius of convergence. Let us define the functions $$f(x)\equiv\sum_{p=0}^{\infty}\ \frac{1}{4^{p}\,(p\,!)^{2}}\ x^{2p}\qquad\emph{and}\qquad g(x)\equiv f^{\prime}(x)=\sum_{p=1}^{\infty}\ \frac{2p}{4^{p}\,(p\,!)^{2}}\ x^{2p-1}.$$ Consequently, $u=a_{{}_{0}}\,f(r/\delta_{l})$. The boundary condition (7) at $x=R/\delta_{l}$ yields $$\frac{\lambda}{\delta_{l}}\,\frac{du}{dx}=\gamma_{1}-\gamma_{2}\,\rho\qquad{\rm or}\qquad a_{{}_{0}}=\frac{\delta_{l}\,\left(\gamma_{1}-\gamma_{2}\,\rho_{l_{b}}\right)}{\lambda\,g\left(\frac{R}{\delta_{l}}\right)+\gamma_{2}\,\delta_{l}\,f\left(\frac{R}{\delta_{l}}\right)}$$ and the density profile reads $$\rho=\rho_{l_{b}}+\frac{\delta_{l}\,\left(\gamma_{1}-\gamma_{2}\,\rho_{l_{b}}\right)}{\lambda\,g\left(\frac{R}{\delta_{l}}\right)+\gamma_{2}\,\delta_{l}\,f\left(\frac{R}{\delta_{l}}\right)}\ f\left(\frac{r}{\delta_{l}}\right).$$ The densities $\rho_{l_{b}}$ and $\rho_{l}$ differ very slightly and, for the purposes of this work, can be considered as coinciding.
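The series defining $f$ and $g$ are entire functions; in fact, $f$ is the modified Bessel function $I_{0}$ and $g=f^{\prime}=I_{1}$, since Eq. (10) is the modified Bessel equation of order zero. A small sketch of ours, using truncated series, verifies numerically that $u=f(x)$ satisfies Eq. (10):

```python
import math

def f(x, terms=60):
    """f(x) = sum_p x^(2p) / (4^p (p!)^2), i.e. the modified Bessel I_0(x)."""
    return sum(x**(2*p) / (4**p * math.factorial(p)**2) for p in range(terms))

def g(x, terms=60):
    """g(x) = f'(x), i.e. the modified Bessel I_1(x)."""
    return sum(2*p * x**(2*p-1) / (4**p * math.factorial(p)**2)
               for p in range(1, terms))

def fpp(x, terms=60):
    """Second derivative of the truncated series for f."""
    return sum(2*p*(2*p-1) * x**(2*p-2) / (4**p * math.factorial(p)**2)
               for p in range(1, terms))

# residual of u'' + u'/x - u = 0 for u = f at a few sample points;
# x = 8.65 is roughly R/delta_l for a 2 nm tube (see Section III.1)
for x in (0.5, 2.0, 8.65):
    residual = fpp(x) + g(x) / x - f(x)
    print(f"x = {x}: residual = {residual:.2e}")
```

The residual vanishes term by term because the recursion $n^{2}a_{n}=a_{n-2}$ is built into the coefficients; only rounding noise remains.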
Finally, the density profile can be written as $$\frac{\rho}{\rho_{l}}=1+\frac{\gamma_{1}-\gamma_{2}\rho_{l}}{\delta_{l}c_{l}^{2}g\left(\frac{R}{\delta_{l}}\right)+\gamma_{2}\rho_{l}f\left(\frac{R}{\delta_{l}}\right)}f\left(\frac{r}{\delta_{l}}\right).$$ (11) In order to visualize the density profiles (11), we consider the case of water at $20^{\circ}$ Celsius, for which the different physical constants involved in the model are (in cgs units) as follows: $\rho_{l}=0.998$, $c_{l}=1.478\times 10^{5}$ and $\lambda=1.17\times 10^{-5}$; the value of $\gamma_{2}$ depends only on the fluid and in the case of water $\gamma_{2}=54$, whereas the coefficient $\gamma_{1}$ is related to the hydrophobicity or hydrophilicity of the solid wall Gouin 7 . Figure 1 shows the density profiles obtained for four tubes of radius $R=2,5,10$ and $100$ nanometers and for different values of $\gamma_{1}$ ($\gamma_{1}=60,75,90$), corresponding to a hydrophilic solid wall. The density profiles plotted in Figure 1 show that, at equilibrium and independently of the diameter of the tube, the fluid domain can be separated into two cylindrical domains: the core in the center of the tube, where the density is constant, and the boundary layer near the solid wall of the tube, where the gradient of the density is significant. The thickness of the layer wherein the variation of the density takes place is about four times the value of $\delta_{l}=0.231$ nm. The maximal value of the density is reached on the boundary (at the wall-fluid interface). It depends both on the value of the coefficient $\gamma_{1}$ and, to a lesser extent, on the diameter of the tube. The density variation inside the tube is moderate: at most $6.8\,\%$ for a strongly hydrophilic wall ($\gamma_{1}=90$) and for a tube of tiny radius $R=2$ nm.
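These numbers can be reproduced directly from Eq. (11) with the constants listed above; the sketch below is ours (truncated series for $f$ and $g$, all variable names assumed):

```python
import math

def f(x, terms=80):
    # series of Section III.1 (the modified Bessel function I_0)
    return sum(x**(2*p) / (4**p * math.factorial(p)**2) for p in range(terms))

def g(x, terms=80):
    # g = f' (the modified Bessel function I_1)
    return sum(2*p * x**(2*p-1) / (4**p * math.factorial(p)**2)
               for p in range(1, terms))

# water at 20 C in cgs units, as quoted in the text
rho_l, c_l, lam = 0.998, 1.478e5, 1.17e-5
gamma1, gamma2 = 90.0, 54.0                  # strongly hydrophilic wall

delta_l = math.sqrt(lam * rho_l / c_l**2)    # reference length, cm
print(f"delta_l = {delta_l * 1e7:.3f} nm")   # 0.231 nm, as in the text

R = 2e-7                                     # tube radius 2 nm, in cm
xR = R / delta_l
excess = (gamma1 - gamma2 * rho_l) * f(xR) \
         / (delta_l * c_l**2 * g(xR) + gamma2 * rho_l * f(xR))
print(f"wall density excess: {excess * 100:.1f} %")  # 6.8 %, as in the text
```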
III.2 Comparison between continuum approach and molecular dynamics simulation Molecular dynamics (MD) simulations take account of van der Waals forces by using Lennard-Jones interaction potentials between a small number of molecules included inside the nanotube. Near the wall, MD simulations show oscillatory density profiles corresponding to the variations of the indicator function of molecular presence; moreover, the non-penetrability condition of the water molecules leads to empty domains beside the wall Mattia2 ; Sony . These density fluctuations are obviously in contrast with the predictions of continuum studies, which correspond to an averaging over molecular energies. In the layer beside the wall, of about one nanometer, MD simulations involve only a few molecules. As pointed out by Thomas and McGaughey Thomas2 (in Fig. 3 and Fig. 4), the graphs of density near the wall are not associated with continuous functions; the molecular distributions are gathered in cylindrical layers of about $0.2$ nm thickness, and the continuous guidelines are simply added between the density values of the cylindrical layers to highlight the minima and maxima of the layer densities. Consequently, the comparison between MD simulations and the continuum approach, which corresponds to an averaging of the sum of molecular potentials, must be done on the Gibbs adsorption of density Rowlinson at the nanotube wall, involving the domain where the density differs from the bulk density. Our comparison is done by reference to the examples presented in the paper by Thomas and McGaughey. The density profile retained for comparison purposes is plotted in Figure 2.
In the continuum theory of capillarity, the Young angle $\theta$ between the solid-liquid surface and the liquid-vapor interface is given by the relation: $$\sigma_{{}_{SV}}-\sigma_{{}_{SL}}=\sigma_{{}_{LV}}\cos\theta,$$ (12) where $\sigma_{{}_{SV}},\sigma_{{}_{SL}},\sigma_{{}_{LV}}$ are respectively the solid-vapor, solid-liquid and liquid-vapor surface tensions. For water at $20^{\circ}$ Celsius and in cgs units, $\sigma_{{}_{LV}}\simeq 72$ and $\sigma_{{}_{SV}}$ can be neglected. Relation (6) expresses the value of $\sigma_{{}_{SL}}$ by mean-field theory in capillarity ($\sigma_{{}_{SL}}=\varphi(\rho_{{}_{S}})$). Using a mean-field model and London forces Gouin 2 , the value of $\gamma_{2}$ for water is obtained in Gouin 7 and reads $\gamma_{2}\simeq 54$. Consequently, from Eqs. (6) and (12), $\gamma_{1}\simeq 96$ and $\gamma_{1}\simeq 75$ correspond to Young angles of $0$ and $45$ degrees, respectively. Mattia and Gogotsi give a range of values of the Young angle for graphite Mattia2 . A realistic value for a carbon nanotube can be taken as $\gamma_{1}\simeq 90$. As a relevant example for nanotubes, the graphs of density associated with the MD simulations and the continuum model are presented in Fig. 2. The MD simulation profile is rebuilt from Fig. 3 in Thomas2 , where the guidelines added between minima and maxima of densities are replaced by a step function corresponding to the cylindrical layers shown in Fig. 4 in Thomas2 . Both density profiles, corresponding to the two models, differ from the uniform bulk density value only in the nanometer range near the wall. In this domain, we calculate the total mass for the MD simulation as well as for the continuum model; consequently, we are able, in the two cases, to compare the Gibbs adsorption at the wall. To take account of the gap of density near the wall appearing in MD simulations, the cylindrical layer near the wall associated with the MD simulation is taken 10 per cent smaller than the other layers.
For a carbon nanotube with a radius of 10.4 nm, the MD simulation predicts a Gibbs adsorption per unit length at the wall of $11.4\times 10^{-15}$ g cm${}^{-1}$, whereas the continuum model predicts $9.7\times 10^{-15}$ g cm${}^{-1}$. These two values are of the same order. Considering that the water molecule mass is about $3\times 10^{-23}$ g, we obtain a Gibbs adsorption of about 30 molecules per nanometer length of the nanotube. In Table 1 are shown the values of the Gibbs adsorption at the wall predicted by the continuum model for different values of the parameter $\gamma_{1}$. We observe that complete similarity between the two models is obtained for perfect wetting. We can conclude: in the two models we obtain the same thickness for the domain where the density of water differs from the bulk density, and the Gibbs adsorption at the wall is similar for the two models. In the comparison, the continuum mean-field theory uses the London potential, which is an approximation of the Lennard-Jones potential, but the difference in Gibbs adsorption between the two models is, in this example, less important than the disparity between the MD simulation results obtained in different papers in the literature Majumder ; Nicholls ; Rafii ; Sony . IV Motion of liquid in a nanotube Due to the cylindrical symmetry of the problem, it is supposed that the velocity field $\bm{v}$ and the density $\rho$ have a radial symmetry $$\bm{v}=u(r,z)\bm{e}_{r}+w(r,z)\bm{e}_{z},\qquad\rho=\rho(r,z),$$ where $(\bm{e}_{r},\bm{e}_{\theta},\bm{e}_{z})$ is the basis of the cylindrical coordinates $(r,\theta,z)$. The continuity equation is then written as $$\frac{1}{r}(r\rho u)_{r}+(\rho w)_{z}=0.$$ (13) In the following, and only for the sake of algebraic simplicity, Stokes’ hypothesis concerning the viscosity is assumed: $3\,\eta+2\,\kappa=0$.
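The adsorption figures quoted above can be checked with elementary arithmetic (values taken from the text; this is only an illustration, not part of the paper's derivation):

```python
# Elementary check of the quoted Gibbs adsorption figures.
m_H2O = 3e-23               # g, approximate mass of one water molecule
gibbs_md = 11.4e-15         # g/cm, MD simulation value
gibbs_cont = 9.7e-15        # g/cm, continuum-model value

per_nm_md = gibbs_md / m_H2O / 1e7      # molecules per nm (1 cm = 1e7 nm)
per_nm_cont = gibbs_cont / m_H2O / 1e7
print(round(per_nm_md), round(per_nm_cont))   # 38 and 32: "about 30" per nm
```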
This assumption is not essential, but the analytic development is simplified and the calculations are easier to follow. In the steady case $({\partial\bm{v}}/{\partial t}=0)$, the non-vanishing equations of motion (5) are written as $$\rho\left(uu_{r}+wu_{z}\right)=-P_{r}+\kappa\left\{\frac{4}{3}\,\left[\frac{1}{r}\left(ru\right)_{r}\right]_{r}+u_{zz}+\frac{1}{3}\,w_{rz}\right\}+\lambda\rho\left[\frac{1}{r}\left(r\rho_{r}\right)_{r}+\rho_{zz}\right]_{r},$$ (14) $$\rho\left(uw_{r}+ww_{z}\right)=-P_{z}+\kappa\left\{\frac{1}{r}\left(rw_{r}\right)_{r}+\frac{4}{3}\,w_{zz}+\frac{1}{3}\left[\frac{1}{r}\left(ru\right)_{r}\right]_{z}\right\}+\lambda\rho\left[\frac{1}{r}\left(r\rho_{r}\right)_{r}+\rho_{zz}\right]_{z}.$$ (15) The solution of this set of equations cannot be obtained analytically. However, an approximate velocity profile can be obtained by re-scaling Eqs. (13)–(15). The re-scaling procedure, which is the object of the present section, is made with respect to a small geometrical parameter $\epsilon=d/L$ but also with respect to a small physical quantity $\tau=\delta_{l}/d$. To this end, the following set of dimensionless variables – indicated with  $\tilde{}$  – is introduced: $$r=d\tilde{r},\quad z=L\tilde{z},\quad u=\hat{w}\tilde{u},\quad w=\hat{w}\tilde{w},\quad\rho=\rho_{l}\tilde{\rho},$$ where $\hat{w}$ is a reference velocity of the liquid; we choose the mean velocity in the nanotube, estimated by its corresponding value in the case of a Poiseuille flow $$\hat{w}=-\frac{d^{2}\,\mathop{\rm grad}\Delta P}{32\,\kappa},$$ (16) where $\mathop{\rm grad}\Delta P$ denotes the gradient of the pressure difference between the nanotube extremities.
In so doing, the continuity equation becomes $$(\tilde{r}\tilde{\rho}\tilde{u})_{\tilde{r}}+\epsilon\,\tilde{r}(\tilde{\rho}\tilde{w})_{\tilde{z}}=0.$$ (17) If we denote by $Re=\rho_{l}\,\hat{w}\,d/\kappa$ the Reynolds number and by $M=\hat{w}/c_{l}$ the Mach number, and take Eq. (8) into account, the momentum equations become $$Re\,\tilde{\rho}\left(\tilde{u}\tilde{u}_{\tilde{r}}+\epsilon\tilde{w}\tilde{u}_{\tilde{z}}\right)=-\frac{Re}{M^{2}}\,\tilde{\rho}_{\tilde{r}}+\frac{4}{3}\left[\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{u}\right)_{\tilde{r}}\right]_{\tilde{r}}+\epsilon^{2}\tilde{u}_{\tilde{z}\tilde{z}}+\frac{1}{3}\,\epsilon\tilde{w}_{\tilde{r}\tilde{z}}+\frac{Re}{M^{2}}\,\tau^{2}\tilde{\rho}\left[\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{\rho}_{\tilde{r}}\right)_{\tilde{r}}+\epsilon^{2}\tilde{\rho}_{\tilde{z}\tilde{z}}\right]_{\tilde{r}},$$ (18) $$Re\,\tilde{\rho}\left(\tilde{u}\tilde{w}_{\tilde{r}}+\epsilon\tilde{w}\tilde{w}_{\tilde{z}}\right)=-\frac{Re}{M^{2}}\,\epsilon\tilde{\rho}_{\tilde{z}}+\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{w}_{\tilde{r}}\right)_{\tilde{r}}+\frac{4}{3}\,\epsilon^{2}\tilde{w}_{\tilde{z}\tilde{z}}+\frac{1}{3}\,\epsilon\left[\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{u}\right)_{\tilde{r}}\right]_{\tilde{z}}+\frac{Re}{M^{2}}\,\epsilon\tau^{2}\tilde{\rho}\left[\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{\rho}_{\tilde{r}}\right)_{\tilde{r}}+\epsilon^{2}\tilde{\rho}_{\tilde{z}\tilde{z}}\right]_{\tilde{z}}.$$ (19) In order to evaluate the respective sizes of the coefficients of Eqs. (17)–(19), some numerical reference values for the different physical variables should be considered.
These numerical values are expressed in cgs units as follows: $$L=10^{-2},\quad c_{l}=1.478\times 10^{5},\quad\kappa=0.01,\quad\rho_{l}=0.998,\quad\delta_{l}=2.31\times 10^{-8},$$ and nanotubes of four different diameters are considered: $$d\in\left\{4\times 10^{-7},\quad 10^{-6},\quad 2\times 10^{-6},\quad 2\times 10^{-5}\right\}.$$ We will assume $\mathop{\rm grad}\Delta P=-10^{6}$ (corresponding to one atmosphere per centimeter length of the nanotube). Consequently, the numerical values of the coefficients in equations (17)–(19) are summarized in Table 2. It is worth noting that the coefficient ${Re}\,\epsilon/M^{2}$ is independent of the diameter of the nanotube. Moreover, when $\mathop{\rm grad}\Delta P=-1$, corresponding to a very low pressure difference between the tips of the nanotube, the term ${Re}\,\epsilon/M^{2}$ is simply multiplied by $10^{-6}$ and always remains very large with respect to the other quantities. As suggested by the density profiles at equilibrium, the analysis of the liquid flow will be carried out separately in two cylindrical domains: $\quad-$ the core, containing the axis of the tube, where the liquid density at equilibrium is independent of $r$; $\quad-$ the boundary layer, near the solid wall of the tube, where the density gradient is significant. Based on the observations made in Section III, the thickness of the boundary layer is of the order of $4\delta_{l}$. Consequently, the equation of motion is solved in the two different regions by using a small length parameter. Using a matched asymptotic expansion, different analytic solutions are obtained in the two zones; the two solutions must match in the overlap region between the core and the boundary layer. IV.1 Liquid flow in the core Due to $\epsilon\ll 1$, the main term of Eq.
(17) yields $$(\tilde{r}\tilde{\rho}\tilde{u})_{\tilde{r}}=0$$ and consequently $$\tilde{u}=\frac{\psi(\tilde{z})}{\tilde{r}\,\tilde{\rho}},$$ where $\psi$ is a function of $\tilde{z}$ only. Since $\tilde{u}$ must be bounded when $\tilde{r}$ goes to zero, we get $\psi(\tilde{z})=0$ and consequently $\tilde{u}=0$. Considering that $u(r,z)\equiv 0$, Eq. (17) yields $$(\tilde{\rho}\tilde{w})_{\tilde{z}}=0.$$ Then, the momentum equations become $$0=-\frac{Re}{M^{2}}\,\tilde{\rho}_{\tilde{r}}+\frac{1}{3}\,\epsilon\tilde{w}_{\tilde{r}\tilde{z}}+\frac{Re}{M^{2}}\,\tau^{2}\tilde{\rho}\left[\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{\rho}_{\tilde{r}}\right)_{\tilde{r}}+\epsilon^{2}\tilde{\rho}_{\tilde{z}\tilde{z}}\right]_{\tilde{r}},$$ (20) $$\epsilon Re\,\tilde{\rho}\,\tilde{w}\tilde{w}_{\tilde{z}}=-\frac{Re}{M^{2}}\,\epsilon\tilde{\rho}_{\tilde{z}}+\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{w}_{\tilde{r}}\right)_{\tilde{r}}+\frac{4}{3}\,\epsilon^{2}\tilde{w}_{\tilde{z}\tilde{z}}+\frac{Re}{M^{2}}\,\epsilon\tau^{2}\tilde{\rho}\left[\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{\rho}_{\tilde{r}}\right)_{\tilde{r}}+\epsilon^{2}\tilde{\rho}_{\tilde{z}\tilde{z}}\right]_{\tilde{z}}.$$ (21) In agreement with the coefficient values of Table 2, the main parts of the momentum equations are obtained by retaining the dominant terms in Eqs. (20)–(21): $$\tilde{\rho}_{\tilde{r}}=0\qquad{\rm and}\qquad\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{w}_{\tilde{r}}\right)_{\tilde{r}}=\frac{Re}{M^{2}}\,\epsilon\,\tilde{\rho}_{\tilde{z}}.$$ (22) Note that, due to $\tilde{\rho}_{\tilde{r}}=0$, the term $\displaystyle\frac{Re}{M^{2}}\,\epsilon\tau^{2}\tilde{\rho}\,\frac{1}{\tilde{r}}\left(\tilde{r}\tilde{\rho}_{\tilde{r}}\right)_{\tilde{r}\tilde{z}}$, which should appear in the second equation of (22), is null.
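The coefficient values summarized in Table 2, and the $d$-independence of $Re\,\epsilon/M^{2}$ noted above, can be reproduced from Eq. (16) and the definitions of $Re$, $M$ and $\epsilon$. A minimal sketch (our variable names, constants as quoted in the text):

```python
# Dimensionless groups for the four tube diameters (cgs values from the text).
L, c_l, kappa, rho_l = 1e-2, 1.478e5, 0.01, 0.998
grad_dP = -1e6          # one atmosphere per centimeter of tube length

for d in (4e-7, 1e-6, 2e-6, 2e-5):
    w_hat = -d**2 * grad_dP / (32 * kappa)   # Poiseuille estimate, Eq. (16)
    Re = rho_l * w_hat * d / kappa           # Reynolds number
    M = w_hat / c_l                          # Mach number
    eps = d / L
    print(f"d={d:.0e}  w_hat={w_hat:.2e}  Re={Re:.2e}  M={M:.2e}  "
          f"Re*eps/M^2={Re * eps / M**2:.2e}")
```

Because $\hat{w}\propto d^{2}$, the combination $Re\,\epsilon/M^{2}=\rho_{l}c_{l}^{2}d^{2}/(\kappa L\hat{w})$ cancels the $d$ dependence, which is exactly the independence remarked upon in the text; with these constants it comes out near $7\times 10^{7}$.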
Equations (22) can be explicitly integrated and yield $$\tilde{\rho}=\tilde{\rho}(\tilde{z})\qquad{\rm and}\qquad\tilde{w}(\tilde{r},\tilde{z})=-\frac{Re}{4\,M^{2}}\,\epsilon\tilde{\rho}^{\prime}\left(k_{0}-\tilde{r}^{2}\right),$$ (23) where $k_{0}$ is a constant to be determined by the boundary conditions. Introducing this velocity field in the continuity equation, we obtain $$\tilde{\rho}(\tilde{z})=\sqrt{h_{0}\tilde{z}+h_{1}},$$ where the constants $h_{0}$ and $h_{1}$ must be determined from the inlet and outlet bulk densities. For example, if we assume that the inlet bulk density is $\tilde{\rho}(0)=1$, the outlet bulk density $\tilde{\rho}(1)$ derives from Eq. (8) when $\mathop{\rm grad}\Delta P=-10^{6}$: $$\rho(L)-\rho(0)=\frac{\Delta P}{c_{l}^{2}}\simeq-0.46\times 10^{-6}.$$ Consequently, $h_{1}=1$ and $h_{0}=-0.92\times 10^{-6}$. IV.2 Liquid flow in the boundary layer In the boundary layer, $\tilde{r}$ is always different from zero and the reasoning made in Section IV.1 no longer works. From Eq. (17) we get that $\tilde{u}$ is of order $\epsilon\tilde{w}$. Then, introducing $\bar{u}$ such that $$\tilde{u}=\epsilon\bar{u},$$ the continuity equation becomes $$\frac{1}{\tilde{r}}(\tilde{r}\tilde{\rho}\bar{u})_{\tilde{r}}+(\tilde{\rho}\tilde{w})_{\tilde{z}}=0.$$ To have an idea of what happens near the wall of the nanotube, we have to translate and re-scale $\tilde{r}$ such that $\tilde{r}=1/2-\xi\overline{r}$. Hence, on the boundary of the nanotube where $\tilde{r}=1/2$, we get $\overline{r}=0$. The value of $\xi$ is determined by the condition $\overline{r}=1$ on the surface separating the core and the boundary layer, where $\tilde{r}=1/2-4\tau$, and we get $\xi=4\tau$.
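The outlet density drop quoted above, and hence $h_{0}$, follows from one line of arithmetic (a check in our notation, not part of the derivation):

```python
# Outlet density drop: rho(L) - rho(0) = Delta P / c_l^2,
# with Delta P = grad(Delta P) * L; since rho~ = sqrt(h0 z~ + 1),
# rho~(1) - 1 is approximately h0/2, i.e. h0 ~ 2 * drho.
c_l = 1.478e5                # cm/s
grad_dP, L = -1e6, 1e-2      # cgs
drho = grad_dP * L / c_l**2
print(f"{drho:.2e}")         # about -4.6e-07, so h0 ~ -0.92e-6
```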
Therefore, the continuity equation is: $$-\frac{1}{1-2\xi\overline{r}}\,[(1-2\xi\overline{r})\tilde{\rho}\bar{u}]_{\overline{r}}+\xi(\tilde{\rho}\tilde{w})_{\tilde{z}}=0,$$ and the momentum equations are: $$Re\,\epsilon^{2}\tilde{\rho}\left(-\xi\bar{u}\bar{u}_{\overline{r}}+\xi^{2}\tilde{w}\bar{u}_{\tilde{z}}\right)=\frac{Re}{M^{2}}\,\xi\tilde{\rho}_{\overline{r}}+\frac{4}{3}\,\epsilon\left[\frac{1}{1-2\xi\overline{r}}\left((1-2\xi\overline{r})\bar{u}\right)_{\overline{r}}\right]_{\overline{r}}+\epsilon^{3}\xi^{2}\bar{u}_{\tilde{z}\tilde{z}}-\frac{1}{3}\,\epsilon\,\xi\tilde{w}_{\overline{r}\tilde{z}}-\frac{Re}{16\,M^{2}}\,\xi\,\tilde{\rho}\left[\frac{1}{1-2\xi\overline{r}}\left((1-2\xi\overline{r})\tilde{\rho}_{\overline{r}}\right)_{\overline{r}}+\epsilon^{2}\xi^{2}\tilde{\rho}_{\tilde{z}\tilde{z}}\right]_{\overline{r}},$$ (24) $$Re\,\epsilon\,\tilde{\rho}\left(-\xi\bar{u}\tilde{w}_{\overline{r}}+\xi^{2}\tilde{w}\tilde{w}_{\tilde{z}}\right)=-\frac{Re}{M^{2}}\,\epsilon\,\xi^{2}\tilde{\rho}_{\tilde{z}}+\frac{1}{1-2\xi\overline{r}}\left[(1-2\xi\overline{r})\tilde{w}_{\overline{r}}\right]_{\overline{r}}+\frac{4}{3}\,\epsilon^{2}\xi^{2}\tilde{w}_{\tilde{z}\tilde{z}}-\frac{1}{3}\,\epsilon^{2}\xi\left[\frac{1}{1-2\xi\overline{r}}\left((1-2\xi\overline{r})\bar{u}\right)_{\overline{r}}\right]_{\tilde{z}}+\frac{Re}{16\,M^{2}}\,\epsilon\,\xi^{2}\tilde{\rho}\left[\frac{1}{1-2\xi\overline{r}}\left((1-2\xi\overline{r})\tilde{\rho}_{\overline{r}}\right)_{\overline{r}}+\epsilon^{2}\xi^{2}\tilde{\rho}_{\tilde{z}\tilde{z}}\right]_{\tilde{z}}.$$ (25) Then, neglecting the terms whose coefficients are very small, we obtain from Eq.
(24): $$\frac{Re}{M^{2}}\,\xi\,\tilde{\rho}_{\overline{r}}-\frac{Re}{16\,M^{2}}\,\xi\,\tilde{\rho}\left[\frac{1}{1-2\xi\overline{r}}\left((1-2\xi\overline{r})\tilde{\rho}_{\overline{r}}\right)_{\overline{r}}\right]_{\overline{r}}=0.$$ This equation can be partially integrated and gives: $$\log(\tilde{\rho})-\frac{1}{16}\left[\frac{1}{1-2\xi\overline{r}}\left((1-2\xi\overline{r})\tilde{\rho}_{\overline{r}}\right)_{\overline{r}}\right]=k(\tilde{z}),$$ where $k$ is an unknown function of $\tilde{z}$ only. Then: $$\tilde{\rho}_{\tilde{z}}-\frac{1}{16}\,\tilde{\rho}\left[\frac{1}{1-2\xi\overline{r}}\left((1-2\xi\overline{r})\tilde{\rho}_{\overline{r}}\right)_{\overline{r}}\right]_{\tilde{z}}=\tilde{\rho}\,k^{\prime}(\tilde{z}).$$ (26) Taking account of Eq. (26), the dominant terms of Eq. (25) read: $$-\frac{Re}{M^{2}}\,\epsilon\,\xi^{2}\,\tilde{\rho}_{\tilde{z}}+\frac{Re}{16\,M^{2}}\,\epsilon\,\xi^{2}\tilde{\rho}\,\frac{1}{1-2\xi\overline{r}}\left[(1-2\xi\overline{r})\tilde{\rho}_{\overline{r}}\right]_{\overline{r}\tilde{z}}=-\frac{Re}{M^{2}}\,\epsilon\,\xi^{2}\,\tilde{\rho}\,k^{\prime}(\tilde{z}),$$ which should be equal to zero; therefore $k(\tilde{z})$ is constant and Eq. (25) reduces to: $$\frac{1}{1-2\xi\overline{r}}\left((1-2\xi\overline{r})\tilde{w}_{\overline{r}}\right)_{\overline{r}}=0.$$ The solution of this equation with the no-slip boundary condition is $$\tilde{w}=-\frac{m(\tilde{z})}{2\xi}\log(1-2\xi\overline{r})=-\frac{m(\tilde{z})}{2\xi}\log(2\tilde{r}),$$ (27) where $m(\tilde{z})$ is a function to be determined by the continuity condition of the velocity field through the surface separating the core and the boundary layer, i.e. for $\tilde{r}=1/2-\xi$. From Eqs.
(23) and (27) we get: $$-\frac{Re}{4M^{2}}\,\epsilon\tilde{\rho}^{\prime}(\tilde{z})\left[k_{0}-\left(\frac{1}{2}-\xi\right)^{2}\right]=-\frac{m(\tilde{z})}{2\xi}\log(1-2\xi).$$ Therefore $m$ is proportional to $\tilde{\rho}^{\prime}$: $$m(\tilde{z})=\frac{\xi\,\epsilon Re}{8M^{2}}\,\frac{4k_{0}-(1-2\xi)^{2}}{\log(1-2\xi)}\,\tilde{\rho}^{\prime}(\tilde{z}).$$ (28) IV.3 Velocity profile in the nanotube From Eqs. (23), (27) and (28), the expression of the velocity field $\tilde{w}(\tilde{r},\tilde{z})$ in the whole domain is: $$\tilde{w}(\tilde{r},\tilde{z})=\left\{\begin{array}[]{l}\displaystyle-\frac{\epsilon Re}{4\,M^{2}}\,\tilde{\rho}^{\prime}(\tilde{z})\left(k_{0}-\tilde{r}^{2}\right),\qquad 0\leqslant\tilde{r}\leqslant\frac{1}{2}-\xi,\\ \displaystyle-\frac{\epsilon Re}{16M^{2}}\,\frac{4k_{0}-(1-2\xi)^{2}}{\log(1-2\xi)}\,\tilde{\rho}^{\prime}(\tilde{z})\log(2\tilde{r}),\qquad\frac{1}{2}-\xi\leqslant\tilde{r}\leqslant\frac{1}{2}.\end{array}\right.$$ (29) It depends on the constant $k_{0}$, which is determined by the following average condition (expressing the fact that the average of $w$ over the outlet section of the tube is equal to $\hat{w}$): $$\frac{4}{\pi}\int_{0}^{2\pi}\int_{0}^{1/2}\tilde{w}(\tilde{r},1)\,\tilde{r}\,\textrm{d}\tilde{r}\,\textrm{d}\theta=1\quad\Leftrightarrow\quad\int_{0}^{1/2}\tilde{w}(\tilde{r},1)\,\tilde{r}\,\textrm{d}\tilde{r}=\frac{1}{8}.$$ We obtain: $$k_{0}=\left(\frac{1}{2}-\xi\right)^{2}+\frac{8+\alpha(1-2\xi)^{4}}{16\alpha(1-\xi)\xi}\,\log(1-2\xi),$$ where $\alpha=\displaystyle\frac{\epsilon Re}{4\,M^{2}}\,\tilde{\rho}^{\prime}(1)$ has a numerical value independent of the diameter of the nanotube, $\alpha\simeq-8.0$. In Figure 3 are plotted the profiles of the normalized velocity $\tilde{w}$ (29) in the four nanotubes. The motions are rather slow, the maximum of the velocity being about two times $\hat{w}$ (see Table 2).
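Profile (29) and the constant $k_{0}$ lend themselves to a direct numerical sanity check. The sketch below (our own variable names, for the $d=10$ nm tube) verifies continuity at $\tilde{r}=1/2-\xi$ and the normalization $\int_{0}^{1/2}\tilde{w}\,\tilde{r}\,\mathrm{d}\tilde{r}=1/8$ that fixes $k_{0}$:

```python
from math import log

# Check profile (29) for the d = 10 nm tube.
delta_l, d = 2.31e-8, 1e-6        # cm
xi = 4 * delta_l / d              # boundary-layer fraction, xi = 4*tau
alpha = -8.0                      # quoted value of (eps Re / 4 M^2) rho~'(1)

k0 = (0.5 - xi)**2 + (8 + alpha*(1 - 2*xi)**4) * log(1 - 2*xi) \
     / (16 * alpha * (1 - xi) * xi)

def w(r):
    """Normalized axial velocity w~(r~, 1) from Eq. (29)."""
    if r <= 0.5 - xi:             # core: parabolic profile
        return -alpha * (k0 - r**2)
    # boundary layer: logarithmic profile
    return -alpha/4 * (4*k0 - (1 - 2*xi)**2) / log(1 - 2*xi) * log(2*r)

a = 0.5 - xi
assert abs(w(a - 1e-9) - w(a + 1e-9)) < 1e-6    # continuous at the interface

# Riemann sum of int_0^{1/2} w(r) r dr; should equal 1/8 by the choice of k0.
n = 50_000
h = 0.5 / n
integral = sum(w(i*h) * (i*h) for i in range(1, n)) * h
assert abs(integral - 0.125) < 1e-3
```

The core value $w(0)=-\alpha k_{0}\simeq 2$ also reproduces the remark that the maximum velocity is about twice $\hat{w}$.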
As already mentioned, it is assumed that the boundary layer (in grey in the Figure), where the liquid is inhomogeneous, is the same as at equilibrium (see Section III and Fig. 3). Obviously, due to condition (28), the graphs are continuous between the boundary layer and the core (see Fig. 3). As for the classical Poiseuille flow, in the core the velocity profiles are parabolic (see Eq. (23)). In Figure 4 are plotted the profiles of the normalized velocity $\tilde{w}$ near the axis of the tube, in the four nanotubes. For larger nanotubes (5 nm to 100 nm) the normalized velocity is almost the same, and the influence of the boundary wall on the normalized velocity in the core is less important than in the case of a thin tube (2 nm). It is worth noting that the flow near the axis of a thin nanotube is “proportionally” faster than the flow in larger nanotubes. Since the function $\tilde{\rho}^{\prime}(\tilde{z})$ has a weak variation inside the interval $[0,1]$, the value of the velocity (29) at the interface between the core and the boundary layer can be approximated by: $$\left.\tilde{w}\right|_{\tilde{r}=1/2-\xi}=-\frac{\tilde{\rho}^{\prime}(\tilde{z})}{\tilde{\rho}^{\prime}(1)}\,\frac{8+\alpha(1-2\xi)^{4}}{16(1-\xi)\xi}\,\log(1-2\xi)\simeq-\frac{8+\alpha(1-2\xi)^{4}}{16(1-\xi)\xi}\,\log(1-2\xi).$$ Whatever the radius of the nanotube, the density variation takes place in a thin layer with a thickness of about one nanometer. Outside this thin boundary layer, the liquid density is constant. In fact, due to the very thin boundary layer, we may consider the motion as that of an incompressible liquid in the core, for $r\in[0,R-4\delta_{l}]$, and define a boundary slip velocity as the velocity obtained at $r=R-4\delta_{l}$, the frontier of the inhomogeneous liquid layer.
In this case, the Navier length $b$ of De Gennes corresponds to $$\frac{w}{b}=\frac{\partial w}{\partial r}\qquad{\rm when}\qquad r=R-4\delta_{l}.$$ (30) Since $w=0$ when $r=R$, and since the variations of $w$ in the boundary layer are smooth enough, the graph of the velocity in the boundary layer is close to a straight line. Then, for water at $20^{\circ}$ Celsius, the Navier length $b$ corresponds to the boundary-layer thickness, which is about one nanometer, and, due to Eq. (16), the slip velocity is $$w_{g}=w_{{}_{|r=R-4\delta_{l}}}=\tilde{w}_{{}_{|\tilde{r}=1/2-\xi}}\,\hat{w}=\frac{8+\alpha(1-2\xi)^{4}}{16(1-\xi)\xi}\,\log(1-2\xi)\,\frac{d^{2}\,\mathop{\rm grad}\Delta P}{32\,\kappa}.$$ (31) Consequently, Eqs. (30) and (31) yield the boundary conditions for the Hagen–Poiseuille flow in the core. The values of the slip velocity $w_{g}$ are given in Table 3 (in cgs units). The case $R=100$ nm is close to that of a flat thin boundary layer and, in our model, the Navier length is constant whatever the radius of the nanotube. V Conclusion and comments The question of the correct set of boundary conditions at the nanoscale is recurrent in both molecular dynamics simulations and applications of continuum fluid mechanics. Clearly, the classical no-slip boundary condition of macroscopic fluid mechanics does not apply, and in confined nano-flows it is necessary to gain a deep understanding of the interfacial friction phenomena between fluid and wall. Using the classical terminology, we say that the slip velocity is the tangential velocity of the fluid at the solid wall determined by a surface friction coefficient $k$, while the Navier length is the ratio $k/\eta$ (De Gennes , Fig. 1). Here we use a continuum model generalizing the Navier–Stokes equations via an internal energy that is a function of the deformation and the surdeformation of the fluid.
For this reason, the boundary effects predicted by the model are deeply different from those of the classical Navier–Stokes equations. This model accounts for an embedding effect at the solid surfaces, where the liquid is subjected to strong variations of density. The intermolecular forces, mainly through capillary effects, create an inhomogeneous layer at the wall where slippage of the liquid is possible. The thickness of the layer depends on the molecular length $\delta_{l}$ and consequently on the temperature, through the surdeformation coefficient $\lambda$ of the fluid and the isothermal sound speed ${c}_{l}$. The results are compatible with MD simulations: the Gibbs adsorption is of the same order and the inhomogeneous density layer has the same thickness in the two models. The thickness of the inhomogeneous layer is the Navier length; the slip velocity is the fluid velocity evaluated at the internal boundary of the inhomogeneous layer. Finally, the simple proposed model highlights the following points: the continuum mechanics approach is in intuitive agreement with what is expected from experiments and confirms the adequacy of van der Waals’ model in the nanoscale framework, by using a convenient representation of the fluid–solid interaction; and the continuum mechanics approach makes it possible to obtain simple analytical solutions for simple flow geometries. Acknowledgements: GS is partially supported by the PRIN project ’Matematica e meccanica dei sistemi biologici e dei tessuti molli’; GS & HG are also supported by ’Institut Carnot Star’ for the stays during the year 2012 and the collaboration between Aix-Marseille Université and Università degli Studi di Perugia. References (1) S. Iijima, ”Helical microtubules of graphitic carbon,” Nature 354, 56 (1991). (2) P. J. F. Harris, Carbon Nanotubes and Related Structures, New Materials for the Twenty-First Century (Cambridge University Press, Cambridge, 1999). (3) P.
Tabeling, Introduction to microfluidics (Oxford University Press Publication, Oxford, 2006). (4) R. C. Ball, and R. Evans, ”The density profile of a confined fluid,” Mol. Phys. 63, 159 (1988). (5) H. Rafii-Tabar, Computational Physics of Carbon Nanotubes (Cambridge University Press, Cambridge, 2009). (6) D. J. Bonthuis, K. F. Rinne, K. Falk, C. Nadir Kaplan, D. Horinek, A. Nihat Berker, L. Bocquet, and R. R. Netz, ”Theory and simulations of water flow through carbon nanotubes: prospects and pitfalls,” J. Phys.: Condens. Matter 23, 184110 (2011). (7) M. Majumder, N. Chopra, R. Andrews, and B. J. Hinds, ”Enhanced flow in carbon nanotubes,” Nature 438, 44 (2005). (8) S. Sinha, M. Pia Rossi, D. Mattia, Y. Gogotsi, and H. H. Bau, ”Induction and measurement of minute flow rates through nanopipes,” Phys. Fluids 19, 013603 (2007). (9) D. Mattia, and Y. Gogotsi, ”Review: static and dynamic behavior of liquids inside carbon nanotubes,” Microfluid. Nanofluid. 5, 289 (2008). (10) J. A. Thomas, and A. J. H. McGaughey, ”Reassessing fast water transport through carbon nanotubes,” Nano Lett. 8, 2788 (2008). (11) W. D. Nicholls, M. K. Borg, and J. M. Reese, ”Molecular dynamics simulations of liquid flow in and around carbon nanotubes,” in Proceedings of ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting and 8th International Conference on Nanochannels, Microchannels, and Minichannels (FEDSM-ICNMM, Montreal, Canada, 2010) p. 1. (12) L. Bocquet, and E. Charlaix, ”Nanofluidics, from bulk to interfaces, a Critical Review,” Chem. Soc. Rev. 39, 1073 (2010). (13) J. E. Dunn, R. Fosdick and M. Slemrod (Eds.), Shock induced transitions and phase structures, The IMA Volumes in Mathematics and its Applications, vol. 52 (Springer, Berlin, 1993). (14) P. Seppecher, ”Moving contact lines in the Cahn-Hilliard theory,” Int. J. Eng. Sci. 34, 977 (1996). (15) B. Widom, ”What do we know that van der Waals did not know?,” Physica A 263, 500 (1999). (16) B. Kazmierczak, and K.
Piechór, ”Parametric dependence of phase boundary solution to model kinetic equations,” ZAMP 53, 539 (2002). (17) A. Onuki, ”Dynamic van der Waals theory,” Phys. Rev. E 75, 036304 (2007). (18) A. A. Chernov, and L. V. Mikheev, ”Wetting of solid surfaces by a structured simple liquid: effect of fluctuations,” Phys. Rev. Lett. 60, 2488 (1988). (19) R. Evans, ”The nature of liquid–vapour interface and other topics in the statistical mechanics of non-uniform classical fluids,” Adv. Phys. 28, 143 (1979). (20) M. E. Fisher, and A. J. Jin, ”Effective potentials, constraints, and critical wetting theory,” Phys. Rev. B 44, 1430 (1991). (21) S. Ono and S. Kondo, Molecular theory of surface tension in liquid, in: Structure of Liquids, Edited by S. Flügge, Encyclopedia of Physics, X, (Springer, Berlin, 1960). (22) J. S. Rowlinson and B. Widom, Molecular Theory of Capillarity (Clarendon Press, Oxford, 1984). (23) J. W. Cahn, ”Critical point wetting,” J. Chem. Phys. 66, 3667 (1977). (24) H. Gouin, ”Energy of interaction between solid surface and liquids,” J. Phys. Chem. B 102, 1212 (1998) & arXiv:0801.4481. (25) C. L. Navier, ”Mémoire sur les lois du mouvement des fluides,” Mémoires Acad. R. Sci. Inst. France 6, 389 (1823). (26) L. Landau and E. Lifchitz, Fluid Mechanics (Mir Edition, Moscow, 1958). (27) T. D. Blake, ”Slip between a liquid and a solid: D. M. Tolstoi’s (1952) theory reconsidered,” Colloids Surf. 47, 135 (1990). (28) M. T. Matthews, and J. M. Hill, ”On three simple experiments to determine slip lengths,” Microfluid. Nanofluid. 6, 611 (2009). (29) M. D. Ma, L. Shen, J. Sheridan, J. Z. Liu, C. Chen, and Q. Zheng, ”Friction of water slipping in carbon nanotubes,” Phys. Rev. E 83, 036316 (2011). (30) J. Bear, Dynamics of Fluids in Porous Media (Dover Publ., New York, 1988). (31) J. D. van der Waals, Translation by J. S. Rowlinson, ”The thermodynamic theory of capillarity under the hypothesis of a continuous variation of density,” J. Stat. Phys. 20, 197 (1979). (32) D.
J. Korteweg, ”Sur la forme que prennent les équations du mouvement des fluides si l’on tient compte des forces capillaires,” Arch. Néerlandaises, II, VI, 1 (1901). Also presented in C. Truesdell and W. Noll, The non-linear field theories of mechanics, Third Edition, Edited by S.S. Antman, ”Korteweg’s theory of capillarity,” (Springer, Berlin, 2004) p. 513. (33) J. W. Cahn, and J. E. Hilliard, ”Free energy of a nonuniform system. III. Nucleation in a two-component incompressible fluid,” J. Chem. Phys. 31, 688 (1959). (34) F. dell’Isola, H. Gouin, and G. Rotoli, ”Nucleation of spherical shell-like interfaces by second gradient theory: numerical simulations,” Eur. J. Mech., B/Fluids, 15, 545 (1996) & arXiv:0906.1897. (35) H. Gouin, Utilization of the second gradient theory in continuum mechanics to study motions and thermodynamics of liquid-vapor interfaces, Physicochemical Hydrodynamics, Series B, Physics, Vol. 174 (Plenum Publ., New-York, 1986) p. 667 & arXiv:1108.2766. (36) H. Gouin, and W. Kosiński, ”Boundary conditions for a capillary fluid in contact with a wall,” Archives of Mechanics 50, 907 (1998) & arXiv:0802.1995. (37) S. Forest, N. M. Cordero, and E. P. Busso, ”First vs. second gradient of strain theory for capillarity effects in an elastic fluid at small length scales,” Comput. Mater. Sci. 50, 1299 (2011). (38) J. Málek, and K. R. Rajagopal, ”On the modeling of inhomogeneous incompressible fluid-like bodies,” Mech. Mater. 38, 233 (2006). (39) J. Málek, and K. R. Rajagopal, ”Incompressible rate type fluids with pressure and shear-rate dependent material moduli,” Nonlinear Anal.: Real World Appl. 8, 156 (2007). (40) P. Germain, ”The method of virtual power in continuum mechanics. Part 2: microstructure,” SIAM J. Appl. Math. 25, 556 (1973). (41) H. Gouin , ”Thermodynamic form of the equation of motion for perfect fluids of grade n,” C.R. Acad. Sci. Paris, 305, 833 (1987) & arXiv:1006.0802 . (42) H. Schlichting and K. 
Gersten, Boundary-Layer Theory (McGraw Hill, New York, 1979). (43) P. G. de Gennes, ”Wetting: statics and dynamics,” Rev. Mod. Phys. 57, 827 (1985). (44) B. V. Derjaguin, N. V. Churaev and V. M. Muller, Surfaces Forces (Plenum Press, New York, 1987). (45) H. Gouin, ”Liquid-solid interaction at nanoscale and its application in vegetal biology,” Colloids Surf., A 383, 17 (2011) & arXiv:1106.1275. (46) J. A. Thomas, and A. J. H. McGaughey, ”Density, distribution, and orientation of water molecules inside and outside carbon nanotubes,” J. Chem. Phys. 128, 084715 (2008). (47) Sony Joseph, and N.R. Aluru, ”Why are carbon nanotubes fast transporters of water,” Nano Lett. 8, 452 (2008). (48) P. G. de Gennes, ”On fluid/wall slippage,” Langmuir 18, 3413 (2002) & arXiv:cond-mat/0112383.
Investigating Light Curve Modulation via Kernel Smoothing. I. Application to 53 Fundamental Mode and First-Overtone Cepheids in the LMC

Maria Süveges (Observatoire de Genève, Université de Genève, Ch. d'Ecogia, 1290 Versoix, Switzerland; present address: Max Planck Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany; [email protected]) and Richard I. Anderson (Swiss National Science Foundation Fellow; Department of Physics & Astronomy, The Johns Hopkins University, 3400 N Charles St, Baltimore, MD 21218, USA; [email protected])

(Received 7 May 2016; accepted ???)

Key words: methods: statistical – stars: oscillations – stars: variables: Cepheids – Magellanic Cloud

Abstract

Context: Recent studies have revealed a hitherto unknown complexity of Cepheid pulsations by discovering modulated variability using photometry, radial velocities, and interferometry. However, a statistically rigorous search and characterization of such phenomena has so far been missing.

Aims: We have employed a method as yet unused in the time series analysis of variable stars for detecting and characterizing modulated variability in continuous time. Here we test this new method on 53 classical Cepheids from the OGLE-III catalog.

Methods: We implement local kernel regression to search for both period and amplitude modulations simultaneously in continuous time and to investigate their detectability. We determine confidence intervals using parametric and non-parametric bootstrap sampling to estimate significance, and investigate multi-periodicity using a modified pre-whitening approach that relies on time-dependent light curve parameters.

Results: We find a wide variety of period and amplitude modulations and confirm that first-overtone pulsators are less stable than fundamental mode Cepheids. Significant temporal variations in period are more frequently detected than those in amplitude.
We find a range of modulation intensities, suggesting that both amplitude and period modulations are ubiquitous among Cepheids. Over the 12-year baseline offered by OGLE-III, we find that period changes are often non-linear, sometimes cyclic, suggesting physical origins beyond secular evolution. Our method detects modulations (in period and amplitude) more efficiently than conventional methods reliant on pre-whitening with constant light curve parameters, and pre-whitens time series more accurately, effectively removing spurious secondary peaks.

Conclusions: Period and amplitude modulations appear to be ubiquitous among Cepheids. Current detectability is limited by observational cadence and photometric precision: detection of amplitude modulation below 3 mmag requires space-based facilities. Recent and ongoing space missions (K2, BRITE, MOST, CoRoT) as well as upcoming ones (TESS, PLATO) will significantly improve the detectability of fast modulations, such as cycle-to-cycle variations, by providing high-cadence, high-precision photometry. High-quality long-term ground-based photometric time series will remain crucial for studying longer-term modulations and for disentangling random fluctuations from secular evolution.

1 Introduction

Classical Cepheid variable stars (hereafter: Cepheids) have been the focus of a great deal of research since their discovery by Goodricke (1786), who suggested that their study "may probably lead to some better knowledge of the fixed stars". Indeed, Cepheids have been of great historical importance for the understanding of stellar evolution and structure. Although Cepheids are usually considered to be highly regular variable stars, Cepheid pulsations were shown very early on to exhibit time-dependencies (e.g. Eddington, 1919). In particular, the changing periods of Cepheids have received much attention (e.g.
Szabados, 1983; Berdnikov & Ignatova, 2000; Pietrukowicz, 2001, 2002; Turner et al., 2006), since they offer an immense opportunity for studying the secular evolution of stars on human timescales (decades) and provide important tests of stellar evolution models (e.g. Fadeyev, 2013; Anderson et al., 2016b). In addition, changing periods complicate phase-folding of time-series data obtained over long temporal baselines and often have to be accounted for when determining the orbit of long-period binary systems containing Cepheids (e.g. Szabados et al., 2013; Anderson et al., 2015). In addition, cyclic variations of pulsation periods exhibited by some Cepheids have been discussed in terms of the light-time effect due to orbital motion (Szabados, 1989), although only a few cases have been confirmed using radial velocities (Szabados, 1991). Period changes may also be related to the much-discussed linearity of the period-luminosity relation, see García-Varela et al. (2016) and references therein. Recent detailed studies of both large samples of Cepheids in the LMC (Poleski, 2008) and of individual stars in the Galaxy (e.g. Berdnikov et al., 2000; Kervella et al., 2017) have revealed intricate, possibly periodic period change patterns that are not necessarily consistent with the classical picture of secular evolution. One of the most notorious and intricate cases of period and amplitude variations is that of Polaris, the North Star (Arellano Ferro, 1983), whose behavior identifies this Cepheid as crossing the classical instability strip for the first time (Turner et al., 2006). Polaris' amplitude seemed to diminish to the point of disappearing, which had been interpreted as the Cepheid leaving the instability strip. However, more recent observations have shown that the pulsation amplitude has increased again, and Polaris remains a puzzle. Another unique well-known example is that of V473 Lyrae (Burki & Mayor, 1980; Burki et al., 1982).
Combining many years' worth of observations, Molnár et al. (2013) were able to trace this star's amplitude modulation cycles and determined a modulation period of $1204$ d. They discussed these modulations in the context of the Blažko (1907) effect, which is better known in RR Lyrae stars and has seen a boost in research thanks to the Kepler mission (e.g. Kolenberg et al., 2010; Benkő et al., 2011, 2014). Percy & Kim (2014) presented evidence that some long-period Cepheids exhibit amplitude changes of up to a few hundredths of a magnitude over timescales of a few hundred to thousands of days, potentially exhibiting cyclic behavior (e.g. for U Carinae). Such strong modulations are presumably not very common among classical Cepheids, or else they would likely be found more frequently in long-term photometric surveys such as the All Sky Automated Survey (Pojmanski, 2002) or in other long-term Cepheid photometry (e.g. Berdnikov et al., 2014). Soszynski et al. (2008a) mention that about $4\%$ of fundamental-mode (FU) and $28\%$ of first-overtone (FO) Cepheids are “Blažko Cepheids”, identified via secondary period peaks near the primary period (from hereon “twin peaks”), which are found after pre-whitening the light curve using the primary period. Additionally, among the entire set of 3374 Cepheids, 8 (all FO) were labeled as having variable amplitude. Soszyński et al. (2015a) have since provided additional targets of interest in this regard. While this paper was under review, Smolec (2017) further reported light curve modulation in 51 Cepheids located in the Small and Large Magellanic Clouds, none of which overlap with the stars discussed in the present work. Evidence for non-radial modes in Cepheids has been found in a sample of 138 Small Magellanic Cloud (SMC) FO Cepheids that exhibit light curve modulation (Soszynski et al., 2010; Dziembowski, 2015; Smolec & Śniegowska, 2016a).
Periodicity of such light curve modulations, if it can be firmly established, would be strongly indicative of Cepheids pulsating in more than one mode (Moskalik & Kołaczkowski, 2009). Though difficult to detect with ground-based observatories, small-amplitude light curve fluctuations appear to be rather common when photometry is sufficiently precise and densely sampled. Derekas et al. (2012) first showed this for the Kepler (fundamental-mode) Cepheid V1154 Cygni, and Evans et al. (2015) recently used the MOST satellite to demonstrate the different types of irregularities seen in the fundamental-mode Cepheid RT Aurigae and the first-overtone Cepheid SZ Tauri. Such low-amplitude modulations and period “jitter” may be explained by convection and/or granulation (Neilson & Ignace, 2014). Stothers (2009) furthermore proposed a model involving activity cycles to explain period and light amplitude changes in short-period Cepheids. However, light curve modulations remain difficult to detect, even with precise space-based photometry (Poretti et al., 2015). While most amplitude modulations in Cepheids are found using photometric measurements, the extreme precision afforded by state-of-the-art planet-hunting instruments has recently enabled the discovery of small-amplitude spectral modulations in Cepheids (Anderson, 2014, 2016). Furthermore, tentative evidence for modulated angular diameter variability in the long-period Cepheid $\ell$ Carinae based on long-baseline interferometry has recently been presented by Anderson et al. (2016a). In summary, recent advances in instrumentation have enabled the discovery that Cepheid variability is not as regular as often assumed. Whether or not irregularities are detected is determined chiefly by observational precision and time sampling.
Moreover, there is evidence that not all irregularities of Cepheid variability share the same origin; for instance, the time-scales of radial velocity modulation in short- and long-period Cepheids are very different (Anderson, 2014). Given the patchy evidence for irregularities and modulations in Cepheid variability, it is important to characterize how and how often these phenomena occur. To this end, we have implemented a method based on local kernel estimation to detect irregularities in Cepheid pulsations. For the first time, this method makes it possible to search for smooth variations of light curve amplitudes and periods in continuous time, and it enables the quantification of the significance of the detected effects. Here we describe our technique, which we apply to a total of 53 FU and FO Cepheids from the OGLE-III catalog (Soszynski et al., 2008a). In a follow-up paper, we will then apply this technique to the full sample of OGLE-IV classical Cepheids (Soszyński et al., 2015b) to investigate the limits of detectability and the rate of occurrence of period and amplitude modulations in Cepheids, and to characterize them. This will be a crucial step toward a physical understanding of these phenomena. This paper is structured as follows. We describe our method for analyzing light curves in Sec. 2, which is divided into target selection (Sec. 2.1) and a description of the sliding-windows based light curve modeling (Sec. 2.2). We present the results of this modeling in Sec. 3, which we divide into subsections dedicated to changing periods (Sec. 3.1), changing amplitudes (Sec. 3.2), and a discussion of how light curve shapes change with time (Sec. 3.3). We present the implications of the new method on the results of a multiperiodicity analysis and on pre-whitening artefacts (Sec. 3.4), compare the trends and fluctuations discovered among different groups of Cepheids (Sec. 4.1), investigate their relationships with physical parameters of the Cepheids (Sec.
4.2), and compare our results with the literature. We summarize and conclude in Sec. 5. We explain the statistical methodology in detail in Appendix A. Using simulations of modulated periodicity with parameters taken from real Cepheids, we benchmark the detectability and performance of the kernel method in Appendices B and C. Figures illustrating the results for all 53 Cepheids and tables containing the numerical results from the fitting procedures are given in Appendices D and E.

2 Methodology

2.1 Data and target selection

We analyze I-band photometric time-series data (light curves) of classical Cepheids in the LMC published by the second- and third-generation Optical Gravitational Lensing Experiment (Soszynski et al., 2008a; Udalski et al., 1999, OGLE-II and -III). These data were taken with the 1.3 m Warsaw telescope at Las Campanas, Chile, and reduced using difference imaging (Alard & Lupton, 1998; Alard, 2000; Wozniak, 2000; Udalski et al., 2008). OGLE photometry is particularly well-suited for the study of modulations in Cepheid variability due to its high quality (precision of up to $0.02$ mag), long temporal baseline (spanning up to 12 years), large number of observations (up to $\gtrsim 1500$ per target), excellent homogeneity, and the large number of Cepheids available for study. The time sampling of the two surveys is seasonal, with the longest gap (between the end of OGLE-II and the start of OGLE-III) up to 300 d for some stars. The instrumentation changed between the two phases of the survey, around HJD $-2450000\approx 2000$, so I-band photometry in the LMC from the third phase was carefully calibrated to fit seamlessly with the photometric data from the second phase, based on more than 620000 stars in 78 overlapping subfields (Udalski et al., 2008). All data were obtained from the OGLE-III server (http://ogledb.astrouw.edu.pl/~ogle/CVS/; ftp://ftp.astrouw.edu.pl/ogle/ogle3/).
This work has a dual goal: assessing the performance of our methodology and investigating the phenomenology of the modulated variability exhibited by Cepheids. To this end, we selected a sample of 53 Cepheids consisting of both stars likely to exhibit modulations and stars likely to be stable pulsators. Investigations of all Cepheids within the OGLE catalog of variable stars will be presented in the future. Whether or not a given Cepheid is likely to show the effect is difficult to determine a priori. A sign of modulations may be the presence of peaks in the secondary periodogram at a frequency very close to the primary one (a “twin” of the primary peak; $|f_{1}-f_{2}|<0.001$, except for one Cepheid, CEP-1564, for which this difference was 0.0176). The reason is that if the harmonic decomposition of the oscillation or its period varies with time, then pre-whitening with a constant model will not remove the primary pulsation from the light curve perfectly, and the residual signal will appear in the secondary periodogram as a twin peak. We thus selected as likely irregular candidates those Cepheids for which a multiperiodicity analysis reveals a twin peak after pre-whitening. We refer to these targets here as “twin-peak Cepheids” rather than adopting the terminology of Soszynski et al. (2008a), who refer to them as “Blažko Cepheids”, since it is still unclear whether the origin of the irregularities of Cepheid variability is the same as that of the amplitude modulation found in RR Lyrae stars. Moreover, in order to assess the efficacy of the twin-peak phenomenon as a diagnostic for identifying pulsation irregularities and to provide a baseline for comparison, we also selected a likely regular, or “control”, sample consisting of target stars that do not exhibit twin peaks. Since first-overtone (FO) Cepheids are considered to be more irregular (less stable) than fundamental-mode (FU) Cepheids, we treat these two groups separately.
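The twin-peak selection rule above lends itself to a compact sketch. The following Python fragment is an illustration only (the paper's analysis was carried out in R, and the function name and synthetic periodogram here are hypothetical); it flags a star as a twin-peak candidate when the highest peak of its pre-whitened residual periodogram lies within $0.001$ of the primary frequency:

```python
import numpy as np

def classify_twin_peak(freqs, power, f_primary, tol=1e-3):
    """Flag a pre-whitened residual periodogram as 'twin-peak' if its highest
    peak lies within tol of the primary pulsation frequency (units of 1/d)."""
    f_secondary = freqs[np.argmax(power)]
    return abs(f_secondary - f_primary) < tol, f_secondary

# Hypothetical residual periodogram: a peak 0.0005 / d away from the primary
freqs = np.linspace(0.30, 0.40, 1001)
power = np.exp(-0.5 * ((freqs - 0.3505) / 0.001) ** 2)
is_twin, f2 = classify_twin_peak(freqs, power, f_primary=0.3500)
```

A star whose residual-periodogram maximum lies far from the primary (e.g. a genuine second mode) would be classified `False` by the same rule.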
To create a basis for target selection, we first modelled all individual light curves of fundamental-mode (FU) and first-overtone (FO) Cepheids that consisted of more than 700 observations, and inspected their residual periodograms. For the purpose of sample selection, we modelled light curves using a non-periodic fifth-order polynomial trend (to account for a possible temporal evolution of the instrument zero-point as well as other spurious or true changes in mean magnitude) and a Fourier series with 10 harmonics using the period from the OGLE-III catalog. Secondary (residual) periodograms were computed for each star using the method of Zechmeister & Kürster (2009). Including the polynomial trends proved efficient at removing artefacts from the residual periodograms, such as the high peaks near $0,1,2,\ldots$ d$^{-1}$ that otherwise dominated. Although precise light curve modeling might call for more or fewer than 10 harmonics, we found this approach sufficient for sample selection. In all later stages of the analysis, in particular during the sliding window analysis, we used more detailed modeling with different harmonic orders, determined separately for each Cepheid (cf. Section 2.2 and Appendix A). Figure 1 shows examples of the twin-peak and control Cepheid groups. The left-hand side exemplifies the twin-peak case with (OGLE-LMC-)CEP-1998, which has a high secondary peak in the residual periodogram. For this star, the appearance of the twin peak occurs together with a prominent separation of the folded residual light curves according to observation times: the early observations (color-coded in yellow) trace a different line than later ones (red, violet, and blue dots). Such a visual separation is not observed in every twin-peak Cepheid. Instead, various degrees of this pattern can be found, although its prominence does seem to correlate with the height of the peak in the residual periodogram.
The fact that the yellow (earliest) and the light blue (latest) observations are close together, while the reddish dots of mid-survey observations are far from them, indicates a variation that is nonlinear in time, and thus excludes a misestimated period from the possible explanations. For comparison, the right-hand panel shows CEP-1543, for which the residuals are flat and no significant secondary peaks are found in the residual periodogram. Since our method aims to smoothly and continuously trace temporal variations of pulsation periods and amplitudes, it is necessary for all objects studied to have a large number of sufficiently densely and uniformly distributed observations over the survey time span. We thus limited possible targets to those with at least 700 I-band observations, distributed in a way that ensured none of the time intervals used in our sliding-window fits contained fewer than 90 observations. Of all the Cepheids that satisfied these conditions, we selected a sample of 12 FU and 12 FO Cepheids that exhibit twin peaks. We further included five additional FO Cepheids with clearly visible changing amplitudes, which we encountered during light curve inspections. These were included despite a slightly lower limit of 70 observations for each data sub-interval. This was deemed acceptable, since FO Cepheids have nearly sinusoidal light curve shapes that require fewer harmonics for an adequate light curve representation. All of these amplitude-changing Cepheids exhibit twin peaks, and will therefore be treated as part of the twin-peak FO group. Similarly, we selected 12 FU and 12 FO Cepheids for which pre-whitening did not reveal twin peaks as the control samples.

2.2 Detecting period and amplitude modulation using sliding windows

Studying time-dependent variability phenomena has been gaining traction for some time.
A so-termed time-dependent Fourier analysis has been applied to non-linear pulsation models of RR Lyrae stars (Kovacs et al., 1987), RRc stars observed by the Kepler spacecraft (Moskalik et al., 2015), RR Lyrae stars in the Galactic Bulge observed by OGLE (Netzel et al., 2015), and 138 FO Cepheids in the SMC (Smolec & Śniegowska, 2016b). As an alternative, the analytic signal processing method has also been applied to hydrodynamical models (Kolláth et al., 2002) and to investigate the period doubling phenomenon in RR Lyrae stars using Kepler data (Szabó et al., 2010). Given the sensitivity of the method to (seasonal) gaps in the ground-based OGLE data, analytic signal processing is not a suitable choice for the present investigation. Since previous studies of time-dependent Fourier and analytic signal processing techniques adopted fixed oscillation frequencies, any fluctuations in pulsation period were absorbed as phase-shifts in the Fourier phase coefficient (Moskalik et al., 2015; Szabó et al., 2010). Here, we seek to go one step further and develop a method capable of efficiently dealing with data gaps that simultaneously determines Fourier coefficients and changes in pulsation period. Moreover, our method makes no assumptions as to the periodicity or repeatability of any detected modulations. We adopt a highly flexible model to describe the potentially diverse and presumably small types of Cepheid light curve variations. Physical causes for changes in period, for instance, may originate from secular evolution or binarity (light-time effect). However, amplitude modulations are not so easily explained and modeled. In our sample, the separation of the residual light curves shown in the left panel of Figure 1 suggests nonlinearity of the changes. 
Moreover, although the irregular OGLE time sampling and the photometric noise undoubtedly affect the shapes, the diversity of twin-peak structures in the examined secondary periodograms also suggests that modulation patterns may not be strictly repetitive. Figure 2 gives a few examples of this diversity. While the present work was under review, Smolec (2017) presented an investigation of periodic light curve modulation in SMC and LMC Cepheids based on a systematic search for double modulation side peaks in periodograms following a standard pre-whitening technique. Such an approach assumes periodicity of any detected modulations as well as only mild effects of the uneven time sampling and noise. Our visualizations of the residual light curves, with the observing time colour-coded, support the view that many kinds of secondary peak structures can be associated with visible modulations of the period and/or the harmonic parameters, and that these are not necessarily strictly periodic or linear. Our goal is therefore to be open to all possibilities, and to allow for nonlinear, cyclic, and non-cyclic components in the modelling of the modulations. Local kernel modelling (the sliding window technique; e.g., Fan & Gijbels, 1996) is a simple option that provides this flexibility. We define these sliding windows using a grid of times that covers the entire observational baseline. The first grid point $\tau_{1}$ is fixed to the time of the first observation of the star, and every subsequent grid point is defined as $\tau_{i}=\tau_{1}+(i-1)\times 30$ days. For most of our Cepheids, 137 or 128 grid points were thus defined. One target (CEP-2580) was limited to 80, since observations of this target started roughly five years after the others. We wish to obtain a local estimate of the pulsation period and harmonic amplitudes at each gridpoint $\tau_{i}$.
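The 30-day grid of window centres described above can be sketched as follows (a hypothetical Python helper, not the authors' R implementation; the baseline value is an OGLE-like example, not a value from the paper):

```python
import numpy as np

def window_grid(t_first, t_last, spacing=30.0):
    """Window centres tau_i = tau_1 + (i - 1) * spacing, anchored at the first
    observation and covering the whole observational baseline (times in days)."""
    n = int((t_last - t_first) // spacing) + 1
    return t_first + spacing * np.arange(n)

grid = window_grid(2100.0, 2100.0 + 12 * 365.25)  # a 12-year, OGLE-like baseline
```

For a full 12-year baseline this yields on the order of 140 grid points, consistent in magnitude with the 137 or 128 centres quoted above for most stars.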
Therefore, at each $\tau_{i}$, we select the data within a 3-year window centered at that point, and fit a harmonic model with a third-order polynomial trend: $$Y_{i}=\sum_{k=0}^{3}a_{k}t_{i}^{k}+\sum_{m=1}^{M}\left(s_{m}\sin 2\pi mft_{i}+c_{m}\cos 2\pi mft_{i}\right)+\epsilon_{i},\qquad(1)$$ where $\epsilon_{i}\sim\mathcal{N}(0,\sigma_{i})$ are assumed to be independent Gaussian errors. The harmonic order $M$, individually selected for each Cepheid, is based on constant models fitted over the whole time span: $M=M_{0}+2$, where $M_{0}$ is the order of the best constant model for all data according to the Bayes Information Criterion (Schwarz, 1978), and the two extra terms are added to allow for variations in the light curve shape. $M$ was subsequently kept fixed at all gridpoints, though all the model parameters were refitted in each window. Thus, we obtain the best-fit pulsation period, harmonic amplitudes, and polynomial coefficients at each gridpoint. Any changes in mean brightness are absorbed by the polynomial term in eq. 1 and treated as nuisance parameters. The parameter estimation is performed by a weighted nonlinear least-squares procedure that optimizes both the harmonic parameters and the pulsation period separately in each window. The result is a time series of the best-fit model parameters $\theta(\tau_{i})$ with corresponding point-wise confidence bands, where $\theta(\tau_{i})$ can stand for any of the pulsation period, harmonic parameters, or peak-to-peak amplitude. We estimate both approximate theoretical point-wise confidence bands and ones based on a Monte Carlo experiment; the two were in general very close to each other. However, the weighting scheme in this estimation procedure is chosen in a special way, to put emphasis on the process near the central gridpoint.
To decrease bias there, we combine the usual error weighting with the kernel weights of local modelling: inverse variances (given by the square of the photometric uncertainty) are multiplied by a factor derived from a normal density centered at $\tau_{i}$ with a standard deviation of 182.5 days (at the ends of the survey timespan, the definition remains the same; we do not lengthen the series by adding artificial points based on some extrapolation rule from the observed values). Doing so assigns higher influence to observations made closer to $\tau_{i}$ (important to avoid oversmoothing and to trace irregularities better), while also weighting data according to their reliability. The result is a weighting scheme that increases the impact of the most relevant and most reliable observations, while reducing the high correlation observed between estimates in neighboring windows when simple error weighting is used. In addition to its flexibility, the local sliding-window model has a few more advantages over complex global models with constant parameters. Since it is local, abrupt shifts in mean magnitude due to calibration effects between OGLE-II and -III data affect it only in windows including the time of the shift, and thanks to the weighting scheme, the effect is attenuated in windows centred relatively far from this time. Moreover, the third-order polynomial component in model (1) accounts for any calibration residuals (between OGLE-II and III). In the absence of information in the sampling gaps, the estimates may of course be biased or have a higher than average variance. The finite window size has two main effects on the estimates. First, the detectable timescales of period and amplitude modulations of the Cepheids are determined by the 12-year total survey timespan and the window size.
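The weighted local fit of model (1) with this combined weighting scheme can be illustrated as below. This Python sketch holds the pulsation frequency fixed and solves only the linear part of the problem by weighted least squares, whereas the paper also refits the period nonlinearly in each window; all function and variable names are hypothetical, and the synthetic light curve exists only to exercise the fit:

```python
import numpy as np

def local_fit(t, y, sigma, tau, f, M=3, bandwidth=182.5, window=3 * 365.25):
    """Weighted least-squares fit of eq. (1) inside one sliding window centred
    on tau, with the pulsation frequency f held fixed.  Weights combine inverse
    photometric variances with a Gaussian kernel of sd 182.5 d centred on tau."""
    sel = np.abs(t - tau) <= window / 2.0
    t, y, sigma = t[sel], y[sel], sigma[sel]
    s = (t - tau) / window                      # scaled time for the cubic trend
    cols = [s**k for k in range(4)]             # third-order polynomial trend
    for m in range(1, M + 1):                   # M harmonics of frequency f
        cols.append(np.sin(2 * np.pi * m * f * t))
        cols.append(np.cos(2 * np.pi * m * f * t))
    X = np.column_stack(cols)
    w = np.exp(-0.5 * ((t - tau) / bandwidth) ** 2) / sigma**2
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta                                 # a_0..a_3, s_1, c_1, ..., s_M, c_M

# Recover a known sinusoid from a noiseless synthetic light curve
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1095.0, 500))
y = 15.0 + 0.3 * np.sin(2 * np.pi * t / 3.2)    # mean 15 mag, amplitude 0.3 mag
beta = local_fit(t, y, np.full_like(t, 0.02), tau=547.5, f=1 / 3.2, M=2)
```

In the noiseless case the fit recovers the injected mean magnitude and first-harmonic sine coefficient essentially exactly.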
Periods longer than approximately the full timespan cannot be distinguished from trends, so $\sim 12$ years is the long-period limit of the modulation cycles we can identify. In the short-period limit, any fluctuations on timescales shorter than about 2 years are smoothed out due to our use of sliding windows giving high weight to observations within a 2-year time interval. Second, the faster the modulation, the more downward biased (underestimated) the estimated modulation amplitude. This bias is due to the local estimate being a kind of average value within the temporal sliding window. It is proportional to the characteristic frequency of the modulation and can be estimated in a simple way if the latter is known or can be estimated. We discuss this and provide an empirical bias-correction formula in Appendix C. Bias is also present at the start and end of the observation period, if the fitted parameters are not approximately constant there, since we lack information in the subinterval of the window that stretches beyond the data. This bias is strongest at the endpoints, then decreases as the window includes more data, and vanishes at 1.5 years from the ends (the half-width of the window). We investigated the effect of the sampling gaps and the finite observation timespan using simulations, and found this to be a minor effect, although sampling gaps can indeed cause some systematic distortions in the estimates of an underlying trend-like and oscillatory pattern. Moreover, data gaps do not lead to false detections in the absence of modulation; cf. App. B for a detailed discussion. To assess whether a constant model sufficiently explains the observed photometric time series, we repeat our sliding-window analysis on a simulated, perfectly repetitive stable reference model using the best-fit constant parameters from a global light curve modeling.
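The downward bias of fast modulations can be illustrated in an idealized setting: for continuous, noise-free data averaged with a Gaussian kernel of standard deviation $\sigma=182.5$ d, a sinusoidal modulation of frequency $f$ is damped by the kernel's Fourier transform, $\exp(-2\pi^{2}f^{2}\sigma^{2})$. This is only a toy illustration of the effect, not the empirical bias-correction formula derived in Appendix C of the paper:

```python
import numpy as np

SIGMA = 182.5  # kernel standard deviation in days, as in the weighting scheme

def gaussian_attenuation(f_mod, sigma=SIGMA):
    """Damping factor of a sinusoid of frequency f_mod (1/d) under an ideal
    Gaussian kernel average: the Fourier transform of the kernel."""
    return np.exp(-2.0 * (np.pi * f_mod * sigma) ** 2)

# Numerical check: kernel-weighted mean of a cosine modulation at window centre 0
t = np.linspace(-2000.0, 2000.0, 400001)
w = np.exp(-0.5 * (t / SIGMA) ** 2)
f = 1.0 / 1000.0                                 # a 1000-day modulation cycle
measured = np.sum(w * np.cos(2 * np.pi * f * t)) / np.sum(w)
```

For a 1000-day modulation cycle this ideal damping factor is already about 0.52, i.e., roughly half of the true modulation amplitude survives the averaging.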
Confidence bands are added to this reference curve by applying a non-parametric Monte Carlo resampling based on the residuals. (The probability levels of the residuals $r_{i}$ at time $t_{i}$ are computed with respect to a Gaussian $\mathcal{N}(0,\sigma_{i})$, where $\sigma_{i}$ is the error at $t_{i}$. These probability levels are then resampled with replacement, and the corresponding resampled residuals are computed with respect to the error bar at their new location. Due to potential overdispersion, originating possibly in various effects as well as in time-dependent variations, these confidence intervals are very conservative and provide a careful, strict estimation of significance.) The estimated functions $\theta(\tau_{i})$ are then compared to the obtained confidence bands. In order to assess the significance of departures from the constant parameter values, we employ the multiple hypothesis testing procedure of Benjamini & Yekutieli (2001) to avoid spurious detections due to random fluctuations amplified by the strong correlation between neighboring windows. The fitted magnitudes from the sliding-window method enable improved pre-whitening. We approximate the noise-free magnitude of the star at time $t_{i}$ by a weighted average of the fitted values from the models of the two closest windows, with the weights based on the differences between $t_{i}$ and the window centres. We compute the improved residuals by subtracting these fitted values from the observed magnitudes. Then, we perform a secondary period search (Zechmeister & Kürster, 2009) on these improved residuals, and check whether twin peaks are still apparent, or whether other, weak secondary modes appear. In summary, the local kernel modeling method presented here does not impose assumptions on the type of period changes encountered and simultaneously traces temporal variations in pulsation period and light curve shape.
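The blending of the two nearest window models used in the improved pre-whitening reduces, for scalar fitted values, to a linear interpolation between the window centres. A minimal sketch follows (hypothetical names and numbers; in practice each window's full harmonic model is evaluated at the observation time before blending):

```python
def blended_magnitude(t, tau_left, tau_right, fit_left, fit_right):
    """Noise-free magnitude estimate at time t: weighted average of the fitted
    values from the two closest window models, with weights proportional to the
    distance from t to the *other* window centre (a linear blend)."""
    w_left = (tau_right - t) / (tau_right - tau_left)
    return w_left * fit_left + (1.0 - w_left) * fit_right

# Improved residual for pre-whitening: observed magnitude minus the blended value
m_obs = 15.123
m_fit = blended_magnitude(t=110.0, tau_left=90.0, tau_right=120.0,
                          fit_left=15.10, fit_right=15.16)
residual = m_obs - m_fit
```

For scalar fitted values this is equivalent to `numpy.interp` between the two window centres; the blend collapses to the single window model when $t$ coincides with a centre.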
This is an improvement over the $O-C$ technique, which cannot fully disentangle fluctuations due to period changes from those due to Fourier amplitude changes. While the use of sliding windows represents a limitation on the time resolution, it provides tools to attribute statistical significance to the time variation of signals, and to obtain a coherent, continuous-time picture of the simultaneous variations of the period and Fourier composition. A detailed description of the fitted local model, the model selection, the stable reference model, the error analysis, and the assessment of significance, taking into account the correlation caused by the overlap of the windows, is given in Appendix A. We complement this with realistic simulated examples illustrating the detection power of the model and the limitations imposed by the OGLE time sampling and the kernel size in Appendix B, using real OGLE observation times and light curve profiles, trends, and fluctuations similar to those found in our Cepheid sample. Appendix C considers the bias of the method and provides a means of correcting for it. The analysis presented in the paper was performed using the statistical computing environment R (R Core Team, 2015).

3 Results

3.1 Period changes

Figures 19 and 20 in the Appendix give an overview of the variations in pulsation period estimated by the sliding-window technique, as compared to a stable reference Cepheid simulated using the best stable parameter estimates. The sixth column of Tables 1 and 2 gives the maximal span of these changes, that is, $\max P(t)-\min P(t)$, while the seventh column gives the length of the intervals of significant deviation as a percentage of the total observation span. The figures suggest a continuous broad range of possible variations potentially extending to many or even all Cepheids, rather than separate subclasses of steady and non-steady pulsators.
Within the timescale limitations mentioned at the end of Section 2.2 and discussed in Appendix B, the range of the variations may include slow irregular or near-linear trends, stochastic or multi-scale oscillations, quasi-regular changes, and any combination of these. Figure 3 presents an example of each (CEP-2132: near-linear trend; CEP-1621: irregular fluctuations; CEP-1140: quasi-periodic changes; CEP-1833: combination of damped quasi-periodic changes and a near-linear trend). Appendix B suggests that though the OGLE time sampling can systematically distort the shape of the estimate of a real modulation pattern, it cannot cause the observed phenomena: neither the trends of CEP-2132 and CEP-1833, nor the large fluctuations of CEP-1621 can be fully explained in this way. Considering the simulation results in Appendices B and C, the non-significant quasi-periodic modulation in CEP-1140 is much more likely attributable to a fast oscillation around the upper frequency detection limit of our window than to noise or to the effect of sampling gaps (the weak quadratic-like trend may be a consequence of end effects, though these are more likely to cause estimates to level out than to produce a sharp increase or decrease as seen here). The exceptionally large scatter of the estimates of the modulation frequency and amplitude presented in Appendix C in the case of CEP-1833 warrants prudence in accepting the shape of its modulation, but the existence of a trend seems certain and some additional instabilities very likely. The wide variety of combinations of these relatively pure types can be appreciated in Figures 19 and 20 among our Cepheid sample. The intervals of deviation from a stable model are much more frequent and last much longer among twin-peak Cepheids (Figure 19) than in the control group (Figure 20), as emphasized by the extent of the grey-shaded areas in the figures, and as shown by $T_{P}$ in Tables 1 and 2. This is so for both FU and FO Cepheids.
Among the variety of modulation types found (trend-like, stochastic, oscillation-like, or arbitrary combinations), visual inspection suggests a relatively strong trend component for CEP-1521, CEP-1527, CEP-1704, CEP-2217 (all FOs), and CEP-2132 (FU). Similar trends are visually less obvious, but potentially present, for CEP-1405 (FO), CEP-1833 and CEP-2470 (both FUs). For most stars, this trend appears non-linear, or has added fluctuating components (stochastic or oscillation-like); almost purely near-linear patterns are shown only by CEP-2132 and CEP-1521, the latter nevertheless with some increasing oscillations towards the end of the observation period (where end effects may also be present). The strongest trend is observed for CEP-1833: its frequency change within the decade-long OGLE-III timespan is about 0.008 c/d. Given the several different types of period changes seen here, however, the observational baseline may not be sufficient to ascertain that these period changes are truly caused by secular evolution (cf. also Soszynski et al., 2008a): trends observed on such short timescales may actually prove to be a portion of fluctuations with a long characteristic timescale. The frequent presence of relatively strong fluctuations on various timescales further strengthens this impression. For four twin-peak Cepheids (FU CEP-1140, FO CEP-1536, CEP-1564 and CEP-1693), the multiple testing procedure did not find significant period changes ($T_{P}=0$ in Table 1), and thus instability of the pulsation period is not confirmed in these stars. CEP-1418 also has only a very short interval of period deviation. A strong instability on a shorter timescale than the sliding window length could cause remaining periodicity after removal of the (average) primary frequency.
Although the frequency separation between the primary and the secondary periodogram peaks, which is commonly used as an indicator of the modulation's typical timescale, does not suggest a high-frequency periodic modulation, the sliding window estimates in Figure 19 do suggest fast (low-level) variations for these stars. The strength of such fast variations is strongly underestimated by our 3-year window (see Appendix B), so it is possible that these stars do have a high-amplitude, fast period oscillation producing perceivable traces in the secondary periodogram. In addition, CEP-1140 and CEP-1536 have long intervals when their amplitude deviates significantly from the mean value, which offers an alternative explanation. A look at the period changes of CEP-1693 and CEP-1418 in Figure 19 reveals a slow, weak nonlinear trend (together with some comparatively strong oscillations) in their pulsation period. Although this is not significant with the small number of observations used per window and with our very conservative error assessment procedure, it may be sufficient to give rise to a twin peak when trying to pre-whiten with a constant model fitted to all data. For the fifth star, CEP-1564, we find no significant deviations from stable pulsations using our multiple hypothesis testing procedure. Within the control group, our results suggest the same variation types, except for trends discernible by eye, and on average faster quasi-periodic or stochastic changes. The variations appear on average milder than those of the twin-peak group, as shown by the values of $DP$ in Tables 1 and 2. The residual-based significance assessment finds these statistically non-significant, so we cannot exclude a noise origin of these modulations. However, as mentioned above, variations on timescales comparable to or shorter than the window length are generally underestimated by the kernel method.
Fast and strong cyclic variations can cause over-dispersion in the residuals at all phases in the secondary light curve. If these are fast enough with respect to the limitations of the kernel method, they might blur the pattern in the residual light curves seen in the center panels of Figure 1. As a consequence of this strong general over-dispersion, our conservative residual-based significance assessment procedure (cf. Section 2.2 and Appendix A) yields a very broad confidence band around the $P_{\mathrm{cat}}$ value of the constant model, and thus we find the modulation to be non-significant. It follows that the twin-peak phenomenon, though it seems to be a good indicator of changes that are trend-like on the survey timespan, can fail to indicate even strong oscillatory modulations in the pulsation period, and thus miss many potentially scientifically interesting cases. Two such cases merit mention, those of CEP-0727 (FU) and CEP-1638 (FO). Their secondary periodograms do not show a twin peak; however, their $DP$ values in Table 2 are 0.002 $d$ and 0.0017 $d$, respectively, both among the highest in our entire sample.

3.2 Amplitude changes

We find far fewer stars exhibiting significant variations in the amplitude of the leading harmonic term, $A_{1}=\left(s_{1}^{2}+c_{1}^{2}\right)^{1/2}$ (cf. equation (1)), than in period. Figures 21 and 22 present the estimated $A_{1}(t)$ curves for all stars. Tables 1 and 2 summarise the maximum span $\max A_{1}(t)-\min A_{1}(t)$ of the first harmonic amplitude $A_{1}(t)$ from the kernel fits, and the extent of the time intervals of significant deviations from the average $A_{1}$, in their last columns $DA_{1}$ and $T_{A_{1}}$. It appears indeed that variations in the pulsation period are easier to detect, and for this reason they have been discussed in the literature much more extensively.
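Extracting $A_{1}$ from a windowed harmonic fit can be sketched as follows. This is a minimal illustration, not the paper's fitting code; the function name, harmonic order, and test numbers are our assumptions, and the design follows the standard truncated Fourier series whose first-harmonic coefficients $s_{1}$, $c_{1}$ enter $A_{1}$.

```python
import numpy as np

def first_harmonic_amplitude(t, mag, freq, order=3):
    """Least-squares fit of a truncated Fourier series
    mag(t) = m0 + sum_j [ s_j sin(2*pi*j*f*t) + c_j cos(2*pi*j*f*t) ]
    and return A1 = sqrt(s1^2 + c1^2)."""
    cols = [np.ones_like(t)]
    for j in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * j * freq * t))
        cols.append(np.cos(2 * np.pi * j * freq * t))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, mag, rcond=None)
    s1, c1 = coef[1], coef[2]          # first-harmonic coefficients
    return np.hypot(s1, c1)

# synthetic light curve: A1 = 0.2 mag, weak second harmonic, noise
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 1000.0, 500))
f = 1.0 / 3.2                          # pulsation frequency in c/d
mag = (15.0 + 0.2 * np.sin(2 * np.pi * f * t)
       + 0.05 * np.cos(4 * np.pi * f * t)
       + rng.normal(0.0, 0.01, t.size))
A1 = first_harmonic_amplitude(t, mag, f)
```

In the sliding window approach, a fit of this kind would be repeated per window, producing the $A_{1}(t)$ curves discussed above.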
Nevertheless, we find significant, more or less cyclic, amplitude variations for the twin peak FO Cepheids CEP-1527, CEP-1535 and CEP-1536, in addition to those five Cepheids included because of their clearly visible strong amplitude changes. Several other FO Cepheids (CEP-1405, -1561 and -1605) exhibit shorter, erratic excursions from otherwise fairly stable pulsation amplitudes. With the exception of CEP-1536, these stars also show changes in pulsation period. There are two FU stars as well with significant amplitude changes, CEP-1748 and CEP-1140, but contrary to the majority of the FO sample with amplitude modulations, these stars do not undergo significant period changes, as discussed in Section 3.1. Our method therefore successfully recovers all Cepheids identified as having variable amplitudes in the OGLE catalog and even increases this number. Long-term, trend-like changes (within the detectability limits of the 12-year survey) seem to be very rare. Two stars that show a pattern compatible with it are the FO Cepheids CEP-1561 and -1693. However, even for those, slow fluctuations around a mean amplitude which is stable on the long-term cannot be excluded. One FO Cepheid, CEP-2217, has a significantly higher first harmonic amplitude with the sliding window estimation than with the constant reference model. This is due to the fact that its strong trend-like period variation causes strong phase shifts of temporally distant observations, and smears the light curve folded with a constant period. Our results suggest that small amplitude variations may be present for many more Cepheids, although these amplitude fluctuations tend to stay below the $95\%$ confidence level with OGLE-III time cadences and our conservative significance assessment procedure. 
Taking this into account, we interpret our results as an indication that amplitude fluctuations at the millimagnitude level and below are a common, possibly ubiquitous, phenomenon whose detectability is currently limited by the available time resolution and photometric precision.

3.3 Changes in light curve shape

The morphology of light curves is most commonly described using relative amplitudes and phases, $\Phi_{j1}$ and $R_{j1}$, for the leading few harmonics, most importantly for $j=2,3$ (Simon & Lee, 1981). However, the continuous-time changes of these quantities are too weak to be of use here, since the variations in the second and third harmonic amplitudes are below the detection limit. Nevertheless, it is possible to visually compare light curve shapes at different epochs, e.g. at epochs where the peak-to-peak amplitudes of the Cepheids are very different. Figures 23–27 show the light curves of the Cepheids in our sample at two such epochs, selected individually for each star such that the windows do not overlap. We plot the folded light curves so that maximum brightness occurs at phase 0.6 to facilitate visual comparison. Confidence bands are added based on bootstrap repetitions generated with resampled errors superposed on the reconstructed local light curve estimate. The fundamental-mode Cepheids in our sample, both twin-peak and control, exhibit little scatter in their light curves. In quite a few cases, there are discernible but tiny-looking differences in the minimum or maximum brightness or in the pattern of the brightening branch (in a few cases, these can be due to outliers, e.g. CEP-2215 in Figure 26, or to unfortunate phase gaps in the data, such as for CEP-1932 in Figure 23).
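The folding convention used for these plots (maximum brightness at phase 0.6) can be reproduced with a small helper. This is illustrative only; the function name and toy signal are ours, and note that for magnitudes the brightest point is the minimum value.

```python
import numpy as np

def fold_light_curve(t, mag, period, max_at=0.6):
    """Fold a light curve on `period` and shift the phases so that
    maximum brightness (minimum magnitude) lands at phase `max_at`."""
    phase = (t / period) % 1.0
    shift = max_at - phase[np.argmin(mag)]  # brightest point -> max_at
    phase = (phase + shift) % 1.0
    order = np.argsort(phase)
    return phase[order], mag[order]

# toy magnitude-like signal with a 3.2-day period
t = np.linspace(0.0, 30.0, 300)
mag = 15.0 + 0.3 * np.cos(2 * np.pi * t / 3.2)
ph, m = fold_light_curve(t, mag, 3.2)
```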
There are a few unusual cases, like that of CEP-1833, which has many downward-scattered observations in its more recent window, but none in its earlier window; or those of CEP-1418 (Figure 23), CEP-1543, CEP-1711, and CEP-2774 (Figure 26), which seem to have stronger overdispersion than the other fundamental-mode Cepheids. These stars have smaller amplitudes ($<0.3$ mag) than those with less noisy light curves (0.4 mag or above), so this can also be due to the relatively smaller magnitude span of the plots, or can indicate possible further variations not adequately modeled by the 3-year sliding window estimate. The variations exhibit a larger range for the overtone stars. Besides a few cases that show light curve variations comparable to the fundamental-mode sample (often those with relatively high average amplitude), and CEP-1536, for which our procedure apparently selected an unnecessarily high harmonic order resulting in a wiggly fit (cf. Section 2.2), there are many that show high dispersion of observed magnitudes together with strong light curve shape variations. The dispersion of the observations affects the quality of the estimated light curve shapes, as is shown by the broad confidence bands around the estimates. In many cases, the observations with high residuals are far from the centre of the window in real observation time (light-shaded points are more distant from the window centre in real observing time than dark-shaded points). This suggests that relatively strong changes might occur on a shorter timescale than the kernel length.

3.4 Residual periodograms

Our study presented so far led to the conclusion that, for the Cepheids in our sample, the presence of a “twin peak” in the secondary periodogram after pre-whitening is related to period and/or light curve shape modulations. Pre-whitening a light curve that intrinsically contains such modulations with a model of constant period and harmonic amplitudes will obviously be only approximate.
Using instead the local estimates of period and Fourier amplitudes yields a more precise magnitude estimate at every observation time, and hence helps to remove the remnants of the main oscillation (the twin peak) from the secondary periodogram. Moreover, a secondary period search on the residuals of a local pre-whitening can be expected to detect weak secondary modes more efficiently and more precisely than one based on a constant model. We thus constructed residual periodograms for all Cepheids in the sample by subtracting the time-dependent best-fit models, and compared them to residual periodograms computed from the stable reference fit with constant parameters over the full OGLE-III timespan. As expected, the more complex pre-whitening method yields more stable residual periodograms than pre-whitening with constant period and harmonic amplitudes. In all but two of the 29 Cepheids belonging to the twin-peak group (12 FU, 12 FO, and 5 additional FOs), the sliding window fit removed the twin peak from the residual periodogram. Moreover, no new, spurious secondary periods appeared in the residual periodograms of the 24 Cepheids belonging to the control groups (12 FU, 12 FO), so the method does not introduce artefacts into the results. We present four typical examples of Cepheids in the twin-peak samples in Fig. 4. Residual periodograms obtained by pre-whitening with time-dependent light curve parameters are generally uniform and flat for twin-peak FU, control FU, and control FO Cepheids.
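A secondary period search on residuals can be sketched with a simple least-squares power spectrum. This is a simplified numpy-only stand-in for the generalised Lomb-Scargle periodogram of Zechmeister & Kürster (2009), not the paper's implementation; all names and numbers are illustrative.

```python
import numpy as np

def ls_power(t, y, freqs):
    """Least-squares sine/cosine power spectrum: at each trial
    frequency, fit y ~ a*cos + b*sin + const and record the
    fraction of variance explained (a GLS-periodogram analogue)."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        X = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        power[k] = 1.0 - np.sum((y - X @ coef) ** 2) / np.sum(y ** 2)
    return power

# synthetic residuals: weak secondary mode at 0.31 c/d, uneven sampling
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1200.0, 400))      # days, OGLE-like span
y = 0.01 * np.sin(2 * np.pi * 0.31 * t) + rng.normal(0.0, 0.005, t.size)
freqs = np.linspace(0.05, 0.6, 2000)
p = ls_power(t, y, freqs)
best_freq = freqs[np.argmax(p)]                  # peak near 0.31 c/d
```

Running this on residuals from a constant-parameter fit versus a time-dependent fit would show how the twin peak is suppressed by the latter.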
The top left panel of Figure 4 shows the secondary periodograms after constant pre-whitening (black) and after pre-whitening with local estimates (red) in such a case, for the FU twin-peak Cepheid CEP-2191. In some cases, a slight, albeit most likely non-significant, maximum appears at approximately $0,1,2,\ldots$ c/d. It cannot be decided whether these indicate a parasitic frequency of terrestrial origin or a remaining mean trend which is detected by the frequency analysis as a low-frequency signal. Twin-peak FO Cepheids exhibit more complex patterns in their residual periodograms. Only five of 17 show flat residual periodograms following pre-whitening with our time-dependent best-fit models. Four of these belong to the group of five FO Cepheids that we selected for their obviously changing amplitudes (CEP-0916, CEP-1275, CEP-1955, and CEP-2820), while CEP-1536 was an FO twin-peak group member. CEP-1119, the fifth Cepheid selected for obvious amplitude changes, still exhibits a twin peak after pre-whitening, cf. the top right panel in Fig. 4. This might be caused by imperfect fits in some windows: the sliding window period estimates in the bottom row of Figure 19 suggest very sharp period changes that were not precisely fitted. Three other overtone Cepheids exhibit mostly flat residuals similar to those of FU Cepheids, with (likely spurious) peaks near $0,1,2,\ldots$ c/d. For two of them, this peak is weak; for the third (CEP-1535, one of the newly discovered amplitude-changing Cepheids), it is strong. CEP-1564, shown in the bottom right panel of Figure 4, represents an exception. This is the only FO twin-peak Cepheid in our analysis for which we find no significant variability of pulsation period ($P_{1}\sim 2.063\,$d) and $A_{1}$ ($A_{1}\sim 0.074\,$mag), despite a very prominent secondary peak at $P_{\mathrm{twin-peak}}=2.141\,$d after pre-whitening with a constant model (see also the bottom right panel of Figure 2).
Pre-whitening with a time-dependent model yields a secondary periodogram and a secondary frequency almost identical to the former ones. Folding the residuals with the secondary period yields a scattered, diffuse sinusoidal light curve with a peak-to-peak amplitude of about 7.5 millimagnitudes, shown in the bottom panel of Figure 5. Since the period corresponding to the twin peak is relatively far from the primary pulsation period (the relative difference is larger by a factor of ten than in any other twin-peak star), extremely large period variations could be expected for this star. However, the residuals folded with the primary pulsation frequency lack any trace of the pattern typical of other twin-peak Cepheids (compare the upper panel of Figure 5 to the middle panel of Figure 1). Moreover, the kernel fits using a broad search interval for the local pulsation frequency failed to find any strong modulation, either trend-like or fluctuating. The fits shown in Fig. 19 remained stable for a wide range of search intervals (including also the frequency of the twin peak). Possible explanations for the persistence of the secondary peak are that it results from fast fluctuations around the detectability limit of our kernel; that both frequencies are physical, i.e., the star is blended or is a physical binary with another variable star; or that this star is indeed pulsating in two close-by modes. The residual periodograms of the seven remaining FO twin-peak Cepheids exhibit weak secondary peaks similar to those presented in the bottom left panel of Figure 4. Their strength ranges from probably non-significant to probably significant. All but one of them are at lower frequencies (longer periods) than the primary pulsation frequency, and the ratio of the lower to the higher frequency ranges from $\sim$0.6 to $\sim$0.9. A few cases of such secondary modes were found after pre-whitening using a stable model by Soszynski et al. (2008a). Soszyński et al.
(2015a) reported 206 of a total of 3530 FO Cepheids ($5.8\%$) in both Magellanic Clouds (82 in the LMC) as exhibiting secondary frequencies with ratios around 0.6. Our study finds seven of our 29 FO Cepheids ($24\%$) to show some indication of secondary peaks, spanning a broader interval of frequency ratios, which suggests that these frequencies, though weak, may be even more common than previously thought in overtone pulsators.

4 Discussion

4.1 Separation of trends and oscillatory terms

4.1.1 Modeling time-dependent light curve parameters

The complex variations seen in the figures of Appendix D do not suggest a simple statistical model. Aiming in this paper only at a rough separation and quantification of trend-like and fluctuation-like components, we modeled the time dependence of the pulsation period, the amplitude of the first harmonic term, and the peak-to-peak amplitude with a linear model consisting of a trend and an oscillatory component. This is purely heuristic and intentionally avoids interpreting period changes as evidence for secular evolution (which would apply only to linear trends) or the light-time effect (cyclic changes). This is also warranted since changes in amplitude have no clear theoretical explanation. The benefit of this model is its simplicity and its ability to roughly capture the characteristic size of long-term trends and short-term fluctuations. A physical interpretation of the observed trends can later be based on the variations described heuristically.
We write our model as
$$\theta(t_{i})=\alpha_{\theta}+\beta_{\theta}t_{i}+\gamma_{\theta}\cos 2\pi f_{\theta}t_{i}+\delta_{\theta}\sin 2\pi f_{\theta}t_{i}+\eta_{\theta,i},\qquad(2)$$
$$\eta_{\theta,i}\sim\mathcal{N}(0,\sigma^{2}_{\theta,i}),\quad\mathrm{Corr}(\eta_{\theta,i},\eta_{\theta,i+1})=\rho,$$
where $\theta(t_{i})$ can represent time-dependent pulsation periods $P$, first harmonic amplitudes $A_{1}$, or total amplitudes $A$ at time $t_{i}$. $f_{\theta}$ is the frequency of the oscillatory term, and the error $\eta_{\theta,i}$ is assumed to follow a normal distribution with the locally estimated error $\sigma_{\theta,i}$ on the parameter estimate $\theta(t_{i})$. The strong correlations introduced by the overlapping windows make it necessary to include an autoregressive structure between the consecutive estimates, represented by the correlation coefficient $\rho$. The model is fitted for a given frequency $f_{\theta}$ by generalised least squares. Searching for the best approximation, we fit this model at a series of test frequencies $f_{\theta}$ in the range between the minimal and maximal reasonable frequencies (as constrained by the width of the sliding windows and the full timespan of the observations). Figure 6 shows an example of these fits. The left panel shows log-likelihoods of the fitted models for pulsation period and peak-to-peak amplitude as a function of frequency $f_{\theta}$ for the FU twin-peak CEP-1748. The best fits corresponding to the highest peak of the log-likelihood profiles are shown in the center and the right panels. The obtained fits appear to capture the trend and, in most cases, the dominant quasi-regular oscillatory variations, although the model is clearly only a rough approximation.
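A stripped-down version of this frequency scan can be written as follows. For brevity the sketch uses ordinary least squares at each trial frequency, whereas the paper's fits use generalised least squares with the AR(1) error structure of equation (2); the variable names and test numbers below are our own.

```python
import numpy as np

def fit_trend_plus_oscillation(t, theta, freqs):
    """Fit theta(t) = alpha + beta*t + gamma*cos(2*pi*f*t)
    + delta*sin(2*pi*f*t) at each trial frequency f and keep the
    best fit (lowest residual sum of squares)."""
    best = None
    for f in freqs:
        X = np.column_stack([np.ones_like(t), t,
                             np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(X, theta, rcond=None)
        rss = np.sum((theta - X @ coef) ** 2)
        if best is None or rss < best['rss']:
            alpha, beta, gamma, delta = coef
            best = {'f': f, 'rss': rss, 'beta': beta,
                    'amp': np.hypot(gamma, delta)}  # sqrt(g^2 + d^2)
    return best

# synthetic period curve: linear trend plus a slow oscillation
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4000.0, 200)               # days
theta = (3.0 + 2e-7 * t + 5e-4 * np.cos(2 * np.pi * t / 900.0)
         + rng.normal(0.0, 5e-5, t.size))
res = fit_trend_plus_oscillation(
    t, theta, np.linspace(1 / 4000.0, 1 / 300.0, 800))
```

The returned `beta` and `amp` correspond to the trend slope and the oscillation amplitude $(\gamma^{2}+\delta^{2})^{1/2}$ extracted from the model in Section 4.1.2.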
In order to quantitatively characterize the period and amplitude modulations in Cepheids, we extract the key parameters of the model (2): (a) the slopes of the trends, $\beta_{P}$ and $\beta_{A_{1}}$, cf. Eq. 2; (b) the amplitudes of the best-fit oscillatory components, $\Delta P$ and $\Delta A_{1}$, defined as $(\gamma_{P}^{2}+\delta_{P}^{2})^{1/2}$ and $(\gamma_{A_{1}}^{2}+\delta_{A_{1}}^{2})^{1/2}$, respectively; and (c) the frequencies $f_{P}$ and $f_{A_{1}}$ of the dominant oscillatory components. The estimated parameters for $A_{1}$ and $P$ are given (with estimated standard errors) in Appendix E. However, there are several important facts to keep in mind. As explained in Appendix C, the estimated frequencies can be “aliased”, that is, frequencies above the kernel method's upper detection limit can be perceived as lower frequencies. Additionally, the amplitudes $\Delta P$ and $\Delta A_{1}$ of fast modulations are biased downwards. This bias depends on the modulation frequency, and thus can be corrected through an empirically estimated relationship between the two, provided the modulation frequency is well known (see Appendix C). However, since it cannot be decided from our data whether the estimated frequency is correct, the correction may be flawed, particularly for the period modulations in our control sample, for which Figure 20 suggests the possibility of relatively fast modulations. We explore the distribution of the estimated fluctuation parameters among the different Cepheid groups and with respect to the average period in the next two subsections (with the above caveats kept in mind). For the amplitudes of the modulations, we give both bias-corrected and uncorrected versions; however, our conclusions do not differ in the two cases.

4.1.2 Trends and fluctuations of pulsation periods

Figure 7 shows the parameters of the trend + oscillation model fitted to the variations of the pulsation period in the four Cepheid groups.
As the colour-coding of twin-peak frequency separations shows, there does not appear to be a clear relation between the modulation frequency and the twin-peak frequency separation, despite this being commonly assumed. Any tentative indications of such a relation are strongest for the T.FO group. The approximate trend component (leftmost panel) is on average higher in the twin-peak groups than in the control groups, both for fundamental-mode and first-overtone Cepheids. Inspecting the two twin-peak overtone stars with the lowest trend values (CEP-1536 and CEP-1561), the time series of periods from the sliding window fits confirms the absence of an overall trend, and the appearance of twin peaks in their secondary periodograms may be due to the comparatively strong fluctuating component in their pulsation period, and, for CEP-1536, to some very significant changes in $A_{1}$. Their primary and secondary peaks had a relatively large separation in the stable model analysis (though not an exceptional one, as their colour in Figure 7 indicates), which supports the assumption of the presence of high-frequency modulations. The relative amplitude $\Delta P/P_{\rm{cat}}$ of the oscillatory component in the right panel of Figure 7 is higher in the overtone groups than in the fundamental-mode groups. The shift between the different modes is larger than the difference between the control and twin-peak groups within both the fundamental-mode and the overtone Cepheid samples. This supports the conclusion that twin peaks are more likely to be the result of long-term, trend-like period instability than of fluctuations. The fact that twin-peak groups have, on average, lower characteristic frequencies of period fluctuations than control groups, regardless of pulsation mode, agrees with this: slower semi-regular variations are more likely than fast oscillating ones to induce twin peaks in residual periodograms when pre-whitening is carried out with constant light curve parameters.
4.1.3 Trends and fluctuations in brightness amplitude

Figure 8 shows the temporal variations of the first harmonic amplitude $A_{1}$ in the four Cepheid groups. The trends in $A_{1}$ (leftmost panel) show a pattern similar to the trends in the pulsation period: they tend to be on average higher in the twin-peak groups than in the control groups, though the difference is less prominent than for the pulsation period. The behaviour of the frequencies $f_{A_{1}}$ of the oscillatory term in the first harmonic amplitude shows no difference across the groups. The relative amplitudes of the oscillatory term in $A_{1}$ are on average higher among first-overtone Cepheids than among fundamental-mode objects, especially when we consider the bias-corrected amplitudes. In summary, our results suggest that the twin-peak phenomenon is indicative of trends as well as of slow, relatively high-amplitude fluctuations, primarily of the pulsation period. Particularly strong changes in the amplitude may also be identified by twin peaks, although this ability is limited to only the strongest or most trend-like cases in OGLE data. Soszynski et al. (2008a) found 28% of the FO and 4% of the FU sample to show the twin-peak phenomenon. However, our study also suggests that potentially interesting cases, where the modulations have an oscillatory or stochastic character on significantly shorter timescales than the window length, can be missed, as was found in the case of CEP-0727 and CEP-1638 (see Sec. 3.1). Using the twin-peak phenomenon alone to identify cases of modulated pulsation might therefore miss a scientifically interesting subclass, and can give biased estimates of the occurrence and typology of the modulations.

4.2 Light curve variations versus physical parameters

In Figure 9 we plot the base-10 logarithm of the absolute values of the period trends and of the bias-corrected amplitude of the period fluctuations, $\log{\Delta P}$, against the average logarithmic pulsation period $\log{P}$.
The left panel shows the well-known monotonically increasing relationship between $\log{P}$ and $\log{|\beta_{P}|}$, with large scatter. This is in agreement with the trends observed for period changes by means of O-C diagrams (e.g. Szabados, 1983; Pietrukowicz, 2001; Turner et al., 2006) and is a consequence of evolutionary timescales, which are shorter for the higher-mass, long-period Cepheids (e.g. Bono et al., 2000; Fadeyev, 2013; Anderson et al., 2016b). The large scatter can be in part due to the 12-year timespan of OGLE: this may be too short to ascertain whether a slow, trend-like change is indeed a portion of an evolutionary long-term period change, or only a fluctuation on timescales longer than the OGLE timespan. Our selection procedure, namely a random choice from all Cepheids with twin peaks or with flat secondary periodograms based only on the presence or absence of the twin peak, implied that our control FO sample has on average shorter periods than the stars in the twin-peak FO group, as is discernible from Figure 9. A similar, though smaller and less clean, separation is also apparent for FU Cepheids. The right panel of Figure 9 suggests that period fluctuations become stronger with increasing average period. Such a trend is not consistent with an interpretation in terms of the light-time effect. We further find no candidates for binarity based on the light-time effect within this sample: the determined values of $\Delta P\propto a_{1}\sin{i}$ are too low. Oscillatory period fluctuations have previously been reported for individual long-period Cepheids (e.g. RS Puppis, see Berdnikov et al., 2009). However, to the best of our knowledge, no such relationship has yet been firmly established. The application of our method to the full sample of OGLE Cepheids will soon enable a more detailed investigation of this relation.
As a consequence of the period-luminosity relation of Cepheids, similar near-linear relationships exist between these parameters and mean $I$ or $V$ magnitude. Beyond these, we find no relation between the model parameters and colour, position on the colour-magnitude diagram (CMD), or mean magnitude. Figure 10 shows that Cepheids found to be variable in period or amplitude occupy all regions of the $\log{P}-\log{A}$ parameter space covered by our sample. This further corroborates the ubiquity of light curve modulation among Cepheids. We are currently extending this study using the full sample of OGLE-III and -IV Cepheids in the Magellanic System (Soszyński et al., 2015c). Despite the different observational cadences over the LMC and SMC through OGLE-III and -IV, this will provide a more comprehensive picture of the distribution of modulation parameters using a much larger sample, in part with a longer timespan, and is aimed at enabling a more in-depth physical interpretation of these phenomena. Smolec (2017) investigated all Cepheids in the OGLE Magellanic Cloud collection (Soszyński et al., 2008b, 2010, 2015c). However, in that study, detection was based on the presence of a particular symmetric doublet structure in the residual periodogram after a standard constant-model pre-whitening, which does not capture all the possible manifestations of a not necessarily strictly repetitive modulation. As a consequence, none of the 53 Cepheids discussed in the present work is mentioned in Smolec (2017). Our ongoing analysis of the full OGLE Cepheid collection using the kernel method will enable further insights and a more complete comparison. Within its range of detectable modulation frequencies, the sliding window method is much more sensitive in picking out a variety of complex modulation patterns than classical pre-whitening analyses that implicitly assume periodicity of the modulations.
5 Conclusions

We have applied local kernel modeling, a well-known statistical method, to investigate periodic and non-periodic temporal variations of Cepheid light curve parameters. We apply this method to 53 classical Cepheids from the OGLE-III catalog of variable stars (Soszynski et al., 2008a), selected according to the presence or absence of a secondary oscillation frequency close to the primary (referred to as a “twin peak”), or because of very strong, visually obvious amplitude changes. We compare, on the one hand, the behaviour of fundamental-mode with first-overtone Cepheids, and on the other, the behaviour of Cepheids that exhibit twin peaks in secondary periodograms with those that do not. Our method yields estimates of the parameters of the modulations in the pulsation period and light curve parameters as smooth functions of time; we estimate the significance of the detected deviations from the best-fit constant parameters by bootstrapping residuals (Monte Carlo methods) and by multiple hypothesis testing procedures with respect to the best-fit stable model. We find period modulations to be probably a very frequent, possibly ubiquitous, phenomenon among Cepheids. Changes in light curve amplitudes also seem to occur frequently, although they are harder to identify reliably. Our results suggest that twin peaks are related to instabilities on longer timescales in the period or in the light curve parameters. The characteristic size of these instabilities for Cepheids is such that, for a given time sampling pattern and photometric precision, period changes are easier to detect than variations in the harmonic content of the light curve. Over the sample of stars considered, we find a wide range of degrees to which amplitudes and periods can change, and timescales of the variations ranging from the shortest to the longest detectable.
This suggests that the range of both amplitude and period changes occurring in Cepheids extends beyond our detection limits, which are imposed by the time sampling and precision of the photometry used. Applying our method to more densely sampled and more precise photometry from space (K2, CoRoT) should allow the detection of smaller amplitude changes on complementary timescales. We detect several different types of behaviour (near-linear, oscillatory, and stochastic) among period and amplitude variations. Specifically, we find that the majority of Cepheids exhibit period changes beyond or different from those expected from secular evolution (see also Poleski, 2008). These may instead be related to other time-dependent phenomena such as convection, granulation, rotation (spots), (episodic?) mass-loss, or other forms of modulation such as the Blažko effect seen in RR Lyrae stars. For Cepheids exhibiting “twin peaks”, we find that pre-whitening with our time-dependent best-fit light curve estimates generally removes the “twin peak” from the secondary periodogram. After a time-dependent pre-whitening, several overtone twin-peak stars exhibited weak secondary peaks with frequency ratios of 0.65 to 0.87 relative to the primary frequency, extending across the range of period ratios occupied by beat Cepheids in the LMC (Alcock et al., 1995; Marquette et al., 2009). No such secondary periods were detected in fundamental-mode stars. As a next step, we will apply this method to a much larger sample of stars, such as the OGLE-IV catalog of Cepheid variable stars, and to datasets with higher photometric precision. In doing so, we will explore the overall characteristics and occurrence rates of period and amplitude variations as well as their dependence on the stellar properties.
Revealing the time-dependence of Cepheid light curves will thus yield new constraints on the structure of the outer envelopes of Cepheids and serve to understand the interaction between the pulsations and the medium in which they propagate. Upcoming space missions such as TESS and PLATO will provide high-quality photometry of many Cepheids, allowing us to explore a larger range of amplitude variations and improve population statistics.
Acknowledgements. We thank the anonymous referee for valuable comments that have helped to improve the quality of this manuscript. RIA acknowledges financial support from the Swiss National Science Foundation. The research made use of the public OGLE-III database (http://ogledb.astrouw.edu.pl/ogle/CVS/) and NASA's ADS bibliographic services.
References
Alard, C. 2000, A&AS, 144, 363
Alard, C. & Lupton, R. H. 1998, ApJ, 503, 325
Alcock, C., Allsman, R. A., Axelrod, T. S., et al. 1995, AJ, 109, 1654
Anderson, R. I. 2014, A&A, 566, L10
Anderson, R. I. 2016, MNRAS, 463, 1707
Anderson, R. I., Mérand, A., Kervella, P., et al. 2016a, MNRAS, 455, 4231
Anderson, R. I., Sahlmann, J., Holl, B., et al. 2015, ApJ, 804, 144
Anderson, R. I., Saio, H., Ekström, S., Georgy, C., & Meynet, G. 2016b, A&A, 591, A8
Arellano Ferro, A. 1983, ApJ, 274, 755
Benjamini, Y. & Hochberg, Y. 1995, Journal of the Royal Statistical Society, Series B, 57, 289
Benjamini, Y. & Yekutieli, D. 2001, The Annals of Statistics, 29, 1165
Benkő, J. M., Plachy, E., Szabó, R., Molnár, L., & Kolláth, Z. 2014, ApJS, 213, 31
Benkő, J. M., Szabó, R., & Paparó, M. 2011, MNRAS, 417, 974
Berdnikov, L. N., Henden, A. A., Turner, D. G., & Pastukhova, E. N. 2009, Astronomy Letters, 35, 406
Berdnikov, L. N. & Ignatova, V. V. 2000, in Astronomical Society of the Pacific Conference Series, Vol. 203, IAU Colloq. 176: The Impact of Large-Scale Surveys on Pulsating Star Research, ed. L. Szabados & D. Kurtz, 244–245
Berdnikov, L. N., Ignatova, V. V., Caldwell, J. A. R., & Koen, C. 2000, New A, 4, 625
Berdnikov, L. N., Kniazev, A. Y., Sefako, R., Kravtsov, V. V., & Zhujko, S. V. 2014, Astronomy Letters, 40, 125
Blažko, S. 1907, Astronomische Nachrichten, 175, 325
Bono, G., Caputo, F., Cassisi, S., et al. 2000, ApJ, 543, 955
Burki, G. & Mayor, M. 1980, A&A, 91, 115
Burki, G., Mayor, M., & Benz, W. 1982, A&A, 109, 258
Derekas, A., Szabó, G. M., Berdnikov, L., et al. 2012, MNRAS, 425, 1312
Dziembowski, W. A. 2015, arXiv:1512.03708
Eddington, A. S. 1919, MNRAS, 79, 177
Evans, N. R., Szabó, R., Derekas, A., et al. 2015, MNRAS, 446, 4008
Fadeyev, Y. A. 2013, Astronomy Letters, 39, 746
Fan, J. & Gijbels, I. 1996, Local Polynomial Modelling and Its Applications (Chapman and Hall, London)
García-Varela, A., Muñoz, J. R., Sabogal, B. E., Vargas Domínguez, S., & Martínez, J. 2016, arXiv:1604.04814
Goodricke, J. 1786, Royal Society of London Philosophical Transactions Series I, 76, 48
Hochberg, Y. 1988, Biometrika, 75, 800
Hommel, G. 1988, Biometrika, 75, 383
Kervella, P., Trahin, B., Bond, H. E., et al. 2017, arXiv:1701.05192
Kolenberg, K., Szabó, R., Kurtz, D. W., et al. 2010, ApJ, 713, L198
Kolláth, Z., Buchler, J. R., Szabó, R., & Csubry, Z. 2002, A&A, 385, 932
Kovacs, K., Buchler, J. R., & Davis, C. G. 1987, ApJ, 319, 247
Marquette, J. B., Beaulieu, J. P., Buchler, J. R., et al. 2009, A&A, 495, 249
Meskaldji, D. E., Thiran, J.-P., & Morgenthaler, S. 2013, arXiv:1112.4519
Molnár, L., Szabados, L., Dukes, et al. 2013, Astronomische Nachrichten, 334, 980
Moskalik, P. & Kołaczkowski, Z. 2009, MNRAS, 394, 1649
Moskalik, P., Smolec, R., Kolenberg, K., et al. 2015, MNRAS, 447, 2348
Neilson, H. R. & Ignace, R. 2014, A&A, 563, L4
Netzel, H., Smolec, R., & Moskalik, P. 2015, MNRAS, 447, 1173
Percy, J. R. & Kim, R. Y. H. 2014, Journal of the American Association of Variable Star Observers (JAAVSO), 42, 267
Pietrukowicz, P. 2001, Acta Astron., 51, 247
Pietrukowicz, P. 2002, Acta Astron., 52, 177
Pojmanski, G. 2002, Acta Astron., 52, 397
Poleski, R. 2008, Acta Astron., 58, 313
Poretti, E., Le Borgne, J. F., Rainer, M., et al. 2015, MNRAS, 454, 849
R Core Team. 2015, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria
Schwarz, G. 1978, Annals of Statistics, 6, 461
Simon, N. R. & Lee, A. S. 1981, ApJ, 248, 291
Smolec, R. 2017, arXiv:1703.05358
Smolec, R. & Śniegowska, M. 2016a, MNRAS, arXiv:1603.01042
Smolec, R. & Śniegowska, M. 2016b, MNRAS, 458, 3561
Soszyński, I., Poleski, R., Udalski, A., et al. 2010, Acta Astron., 60, 17
Soszyński, I., Poleski, R., Udalski, A., et al. 2008a, Acta Astron., 58, 163
Soszyński, I., Poleski, R., Udalski, A., et al. 2008b, Acta Astron., 58, 163
Soszyński, I., Udalski, A., Szymański, M. K., et al. 2015a, Acta Astron., 65, 329
Soszyński, I., Udalski, A., Szymański, M. K., et al. 2015b, Acta Astron., 65, 297
Soszyński, I., Udalski, A., Szymański, M. K., et al. 2015c, Acta Astron., 65, 297
Stothers, R. B. 2009, ApJ, 696, L37
Süveges, M. 2014, MNRAS, 440, 2099
Szabados, L. 1983, Ap&SS, 96, 185
Szabados, L. 1989, Communications of the Konkoly Observatory Hungary, 94, 1
Szabados, L. 1991, Communications of the Konkoly Observatory Hungary, 96, 123
Szabados, L., Anderson, R. I., Derekas, A., et al. 2013, MNRAS, 434, 870
Szabó, R., Kolláth, Z., Molnár, L., et al. 2010, MNRAS, 409, 1244
Turner, D. G., Abdel-Sabour Abdel-Latif, M., & Berdnikov, L. N. 2006, PASP, 118, 410
Udalski, A., Soszynski, I., Szymanski, M., et al. 1999, Acta Astron., 49, 223
Udalski, A., Szymanski, M. K., Soszynski, I., & Poleski, R. 2008, Acta Astron., 58, 69
Wozniak, P. R. 2000, Acta Astron., 50, 421
Zechmeister, M. & Kürster, M. 2009, A&A, 496, 577
Appendix A Detailed statistical methodology A.1 Local estimation Windows. Corresponding to the aim of the study, we perform nonlinear harmonic series fitting, optimising also over the period, in 3-year windows of the time series. We fix the centres of the windows on a time grid with 30-day separation, which allows us to follow the period, amplitude, and shape variations with a reasonable temporal resolution, thus fitting about 130 windows for each of our targets. Weights.
In order to obtain a smooth picture of the local changes in the light curve shape, with most emphasis on the observations close to the centre of the window, we used a particular weighting scheme, which combined kernel weighting with the usual inverse squared error weights (Fan & Gijbels 1996). Suppose that we have $N$ observations $Y_{i}$ with errors $\sigma_{i}$ at times $t_{1},\ldots,t_{N}$. For a window centred at time $\tau_{k}$, we defined the first component of the weight of an observation at $t_{i}$ as $$w_{i}^{(1)}=\left\{\begin{array}{lr}hK\left\{(t_{i}-\tau_{k})/h\right\}&\mathrm{if\;}|t_{i}-\tau_{k}|\leq 3h\\ 0&\mathrm{if\;}|t_{i}-\tau_{k}|>3h\end{array}\right.$$ with a Gaussian kernel $K(z)=\frac{1}{\sqrt{2\pi}}\exp\left(-z^{2}/2\right)$, and with bandwidth $h=182.5$ days. This component ensured that the observations close to the window centre contribute more to the fit than observations farther away, and removed the effect of observations outside a 3-year interval. As an example, Figure 11 shows a window centred at JD $=2850$ as a grey-shaded area over part of the time series of OGLE-LMC-CEP-1621 (top panel). The Gaussian kernel, computed at the times of the observations, is shown in the middle panel in black. We used the inverse squared errors $$w_{i}^{(2)}=1/\sigma_{i}^{2}$$ as the other component of the weights, presented in the middle panel of Figure 11 as red spikes. The final weights, shown in the bottom panel, were determined by $$w_{i}=\frac{1}{W}w_{i}^{(1)}w_{i}^{(2)},\quad W=\sum_{i=1}^{N}w_{i}^{(1)}w_{i}^{(2)}.$$ This scheme ensured both that we obtain an estimate based on the most relevant observations, and that data with comparatively large errors have a weaker influence on the estimate than data with smaller errors. Model formula.
Within each window, and using the above-described weighting procedure, we fitted a harmonic + third-order polynomial model of the form $$Y_{i}=\sum_{k=0}^{3}a_{k}t_{i}^{k}+\sum_{m=1}^{M}\left(s_{m}\sin 2\pi mft_{i}+c_{m}\cos 2\pi mft_{i}\right)+\epsilon_{i},$$ (3) where $\epsilon_{i}\sim\mathcal{N}(0,\sigma_{i})$ are assumed to be independent Gaussian errors. Polynomial order. Since a neglected nonlinear trend can bias the frequency estimate, we included a third-order polynomial trend in the local model. Inspection of the fits for the Cepheids suggests that the mean magnitude can occasionally vary rapidly, and during such periods this effect visibly influences the frequency estimate, as can be seen in the top panel of Figure 12. Mean magnitude variations within the window are thus accounted for by the polynomial term in the local model. Harmonic order $M$. For a sliding window fit, the choice of $M$ is crucial. It must be fixed and the same for all windows; otherwise, the appearance of new significant terms or the dropout of formerly significant ones may result in unreasonable jumps in the model parameter estimates. As the required $M$ can be very high for some stars, such as bump Cepheids, and low for others with sine-like light curves, we have to choose $M$ individually. The choice should allow for some coefficients becoming temporarily significant and falling out again. It is also desirable to allow for a somewhat higher order than that of a stable model for the complete observation span, since if temporal variations are in fact present, a stable model sees a sort of average, and we may end up with an overly simple model. However, too high an $M$ adds many non-significant parameters to the model, which in turn can change the values estimated for the significant terms.
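As a concrete illustration, the weighting scheme of Appendix A.1 together with a weighted least-squares solve for the linear parameters of model (3) at a fixed trial frequency can be sketched as below. All function names are ours, and this is a simplified sketch: in the actual analysis the frequency $f$ is additionally optimised over an interval around the catalog value, with the linear solve nested inside that one-dimensional optimisation.

```python
import numpy as np

def kernel_weights(t, tau_k, sigma, h=182.5):
    """Combined kernel x inverse-variance weights, normalised to sum to 1."""
    z = (t - tau_k) / h
    w1 = np.where(np.abs(z) <= 3.0,
                  h * np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi), 0.0)
    w = w1 / sigma**2
    return w / w.sum()

def design_matrix(tc, f, M):
    """Cubic polynomial trend + M harmonics of frequency f, cf. eq. (3)."""
    cols = [tc**k for k in range(4)]
    for m in range(1, M + 1):
        arg = 2.0 * np.pi * m * f * tc
        cols += [np.sin(arg), np.cos(arg)]
    return np.column_stack(cols)

def local_fit(t, y, sigma, tau_k, f, M, h=182.5):
    """Weighted least squares for a0..a3, s1, c1, ..., sM, cM at fixed f.
    Times are centred on the window here purely for numerical conditioning."""
    w = kernel_weights(t, tau_k, sigma, h)
    sw = np.sqrt(w)
    X = design_matrix(t - tau_k, f, M)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta
```

A full sliding-window analysis would call `local_fit` on a grid of window centres $\tau_{k}$ spaced 30 days apart, wrapping the call in an optimiser over $f$.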
Thus, the individual values were determined by fitting a stable model with $M_{\mathrm{init}}=15$ and performing a statistical model selection procedure based on the Bayesian Information Criterion (Schwarz 1978). After finding a maximal order $M_{\mathrm{final}}$, we finally fixed the harmonic order at $M=M_{\mathrm{final}}+1$. Model fitting. In line with the aim of investigating the temporal behaviour of the frequency and the light curve parameters, we treated the problem as a nonlinear model, optimising over the linear parameters $a_{0},\ldots,a_{3},s_{1},c_{1},\ldots,s_{M},c_{M}$ as well as over the frequency $f$. Optimisation was initialised with the stable model parameters, and was restricted to an adapted frequency interval around the catalog frequency. A.2 Error analysis We conducted a careful error analysis to assess the significance of the presumably small temporal variations in the variability parameters. We therefore employed several different methods of determining confidence intervals for the estimated time-varying parameters, and of comparing the time-varying model to the temporally stable alternative reference model for each Cepheid. Uncertainty of the estimates. For local linear regression models with normally distributed errors, the uncertainty of the estimated parameters can be obtained from closed formulae (see equation 3.6 in Fan & Gijbels 1996), if the errors are known. In that case, the error on the estimates follows a multivariate normal distribution. However, we treated the pulsation period of the Cepheid as a parameter to be fitted. Hence, our model becomes a nonlinear regression model, so this formula and the multivariate normality of the estimates are only approximate (and asymptotic). We therefore used a twofold procedure to obtain the uncertainties of the parameters: 1. The above-cited formula from Fan & Gijbels (1996), extended to the nonlinear model. 2.
Parametric bootstrap, using the given observational error bars as the standard deviation of the generating normal distribution at each epoch, assuming independent errors, and using the fitted non-stable models interpolated to each observing time as the error-free light curve. The sliding window estimation was then performed on all repetitions. Taking the quantiles 0.025 and 0.975 of the obtained estimates yielded the 95% bootstrap confidence intervals (CIs), plotted around the estimates in Figures 3, 6 and 19-22. In all windows of all Cepheids, these two estimates were very close together, so in our nonlinear model, nonlinearity does not cause large deviations of the statistical distribution of the parameter estimates from the theory-based approximations. Stable reference model. We would like to know whether a stable model, with parameters constant over time, can yield a plausible explanation for the estimates from the local sliding window model. For this, we simulated the best-fit stable model with added noise and repeated the sliding window estimation 250 times for each Cepheid. However, in the case of the stable model, the added noise cannot be simulated from the error bars using a Gaussian model, since the residuals from the stable model are strongly over-dispersed with respect to the given error bars. We must allow for an underestimation of the errors if we want a plausible null hypothesis. Therefore, we applied a nonparametric bootstrap of the standardized residuals. We scaled all the residuals $r_{1},r_{2},\ldots$ at times $t_{1},t_{2},\ldots$ with the error bars $\sigma_{1},\sigma_{2},\ldots$ at those times (the mean of the residuals is zero by construction, so we do not need to subtract it).
If the error bars are underestimated by a common factor to a good approximation, and the error distribution is a location-scale model (not necessarily Gaussian, including also heavy-tailed distributions such as the Lorentzian/Cauchy), this procedure yields a homoscedastic error sample $s_{1}=r_{1}/\sigma_{1},s_{2}=r_{2}/\sigma_{2},\ldots$. We resampled these with repetition, obtaining a (scaled) sample $s^{*}_{1},s^{*}_{2},\ldots$ (each of the starred residuals is equal to an arbitrary one of the original scaled residuals). At each time $t_{1},t_{2},\ldots$, we computed a simulated noise value by scaling back the scaled residual with the local standard error of the observation: $r^{*}_{1}=s^{*}_{1}\sigma_{1},r^{*}_{2}=s^{*}_{2}\sigma_{2},\ldots$. This simulated noise value was added to the fitted magnitudes of the stable reference model, yielding simulated “observed” magnitude values $y^{*}_{1},y^{*}_{2},\ldots$ of a noisy Cepheid with stable parameters and errors consistent with the assumption of stability. For the detection of possibly very fine effects such as short-term period and amplitude changes in Cepheids, we also need to account for the effect of the precision of the times, although this may be a minor effect. The OGLE-III database gives the HJD dates in days with a precision of 5 digits. In order to assess the effect of rounding, we replaced the times $t_{1},t_{2},\ldots$ by values $t^{*}_{1},t^{*}_{2},\ldots$ jittered in their 6th digit. Finally, the time series $\{(t^{*}_{1},y^{*}_{1}),(t^{*}_{2},y^{*}_{2}),\ldots\}$ was fitted with the local kernel model, obtaining an estimate of every parameter value at every window centre $\tau_{i}$ ($P^{*}(\tau_{i}),a_{0}^{*}(\tau_{i}),\ldots,a_{3}^{*}(\tau_{i}),s_{1}^{*}(\tau_{i}),\ldots,c_{M}^{*}(\tau_{i})$, denoted generally by $\theta^{*}(\tau_{i})$).
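A minimal sketch of one such bootstrap repetition, with hypothetical function and variable names of our own, could look like:

```python
import numpy as np

def bootstrap_repetition(t, y_fit, resid, sigma, rng):
    """One simulated light curve of the stable reference model:
    resample standardised residuals with repetition, scale back with the
    local error bars, and jitter the (5-digit) HJD times in their 6th digit."""
    s = resid / sigma                                   # standardised residuals
    s_star = rng.choice(s, size=s.size, replace=True)   # resample with repetition
    y_star = y_fit + s_star * sigma                     # simulated "observations"
    t_star = t + rng.uniform(-5e-6, 5e-6, size=t.size)  # rounding jitter
    return t_star, y_star
```

In the analysis described above, this step is repeated $R=250$ times, each repetition is refit with the local kernel model, and the 0.025/0.975 quantiles of the resulting parameter estimates give the stable-model confidence bands.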
This procedure was repeated $R=250$ times, and at each window centre, the quantiles 0.025 and 0.975 of the obtained estimates $\{\theta^{*}_{1}(\tau_{i}),\ldots,\theta^{*}_{R}(\tau_{i})\}$ were taken to obtain the 95% pointwise bootstrap confidence intervals for our hypothetical stable Cepheid. We also performed the bootstrap procedure with unchanged times. We found that the jittering has a negligible effect on the resulting confidence bands, and only rarely, in short time intervals, gives visibly larger CIs. Thus, our results are robust against the finite precision of the times. We indicate the CIs obtained with jittered times as an orange band around the estimated stable bands in Figures 3, 6 and 19-22. The procedure is approximate for several reasons (the fitted Cepheid is itself an estimate, the residuals follow a correlated joint distribution with variances somewhat different from $\sigma_{i}^{2}$, the procedure is based on the assumption of homogeneous underestimation of the errors, and although the observational error distribution is not restricted to be Gaussian, it must nevertheless be a location-scale distribution), but it accounts for the two dominant effects, namely, the overdispersion of the residuals and the implications of the irregular, sparse time sampling for the sliding window estimates. A.3 Attribution of significance: multiple hypothesis testing The null hypothesis to be tested for each model parameter (pulsation period, amplitudes) at each window centre $\tau_{i}$ is that the true local value of $\theta$ is in fact equal to the stable value $\bar{\theta}$.
We compute the probability of seeing a discrepancy between the local and the stable value equal to or larger than the one observed under this null hypothesis, and then compare this probability to a pre-defined confidence level $\alpha$ ($0.05$ in our study): if the probability of the found discrepancy is higher than this level, we cannot reject the null hypothesis of the local estimate being equal to the stable one. To compute the probability, we suppose that the discrepancy $\theta(\tau_{i})-\bar{\theta}$ follows a Gaussian distribution with mean 0 and variance computed from the empirical distribution of the repetitions obtained with the bootstrap procedure described above. This distribution represents the null distribution, that is, the case when the pulsation parameters of the Cepheid are in fact stable. We visually confirmed that the repetitions $\theta^{*}_{1}(\tau_{i})-\bar{\theta},\ldots,\theta^{*}_{R}(\tau_{i})-\bar{\theta}$ do follow a Gaussian distribution using quantile-quantile plots (for a short description, see Süveges 2014). Using this null distribution, we can then compute a $p$-value, namely, the probability that such a discrepancy arises from the kernel method at window centre $\tau_{i}$ when the Cepheid is a steadily pulsating one (with the best-fit stable reference parameters). The above $p$-values are pointwise and have been computed for each window centre (about 130 over the total time span of OGLE-II and III). Their number itself raises a problem. Generally, if we assess the significance of an alternative at level $\alpha$ separately $K$ times using completely independent data, then even in the case of a true null hypothesis we can expect approximately $K\alpha$ significant values (that is, false positives), simply by randomness. Thus, finding a few significant pointwise $p$-values should not necessarily be considered as having found a significant global discrepancy from stability.
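The pointwise test, together with a dependence-robust multiple-testing correction of the kind discussed next, can be sketched as follows (our own implementation and naming; the Benjamini & Yekutieli 2001 step-up rule is written out explicitly rather than taken from a library):

```python
import numpy as np
from math import erfc, sqrt

def pointwise_pvalues(theta_hat, theta_bar, theta_boot):
    """Two-sided Gaussian p-values for theta(tau_i) - theta_bar, with the
    null variance estimated from the bootstrap repetitions (rows of theta_boot)."""
    sd = theta_boot.std(axis=0, ddof=1)     # null std at each window centre
    z = np.abs(theta_hat - theta_bar) / sd
    return np.array([erfc(zi / sqrt(2.0)) for zi in z])

def benjamini_yekutieli(p, alpha=0.05):
    """Step-up FDR procedure of Benjamini & Yekutieli (2001), valid under
    arbitrary dependence; returns a mask of significant window centres."""
    K = p.size
    c_K = np.sum(1.0 / np.arange(1, K + 1))  # harmonic-number dependence penalty
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, K + 1) / (K * c_K)
    reject = np.zeros(K, dtype=bool)
    if below.any():
        kmax = np.nonzero(below)[0].max()
        reject[order[:kmax + 1]] = True
    return reject
```

The boolean mask returned by `benjamini_yekutieli` corresponds to the grey-highlighted intervals in the figures discussed below.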
Multiple hypothesis testing methods have been developed to deal with this situation. Moreover, $p$-values computed for neighbouring windows are strongly correlated, due to the overlap in the data used for each local estimate. If, by sheer randomness, there was a $p$-value smaller than our confidence limit $\alpha$ at $\tau_{i}$, there is an increased probability that we will have a similarly small $p$-value at the next few window centres, creating the false impression of a longer period during which the pulsation parameters were significantly different from the stable ones. Thus, particular versions of the multiple hypothesis testing procedure must be used that, in addition, take into account the dependence between neighbouring $p$-values. Such procedures have been developed (among others) by Hommel (1988); Hochberg (1988); Benjamini & Hochberg (1995); Benjamini & Yekutieli (2001) and Meskaldji et al. (2013). Here, we applied the procedure by Benjamini & Yekutieli (2001), which imposes only a few restrictions on the dependence structure and is one of the most powerful among the alternatives. The time intervals during which this method indicates significance are highlighted in Figures 19, 20, 21 and 22 via a grey background. Appendix B Simulated instability In the following, we investigate how the detectability of the minute effects reported in this work is affected by the data structure. Specifically, we seek to clarify the ability of the kernel smoothing method to trace modulations of the pulsations using relatively sparse and unevenly sampled data, and to quantify detection limits. To this end, we have simulated trend-like, as well as combined trend- and oscillation-like, variations of pulsation periods and amplitudes for the range of parameters representative of our sample. Noise-free light curves were generated as follows.
First, we determine the instantaneous phase via (numerical) integration of the time-varying frequency: $$\phi(t)=\phi_{0}+\int_{0}^{t}f(s)\,ds,$$ (4) where we set $\phi_{0}=0$ without loss of generality. We then substituted for $f(s)$ combinations of a linear trend and periodic fluctuations in the pulsation period: $$f(s)=\left(\alpha_{P}+\beta_{P}s+\Delta_{P}\cos 2\pi f_{P}s\right)^{-1}.$$ (5) Varying first harmonic amplitudes were produced by $$s_{1}(t)=\alpha_{s_{1}}\left(1+\frac{\beta_{A_{1}}}{\alpha_{A_{1}}}t+\frac{\Delta_{A_{1}}}{\alpha_{A_{1}}}\sin 2\pi f_{A_{1}}t\right),$$ (6) $$c_{1}(t)=\alpha_{c_{1}}\left(1+\frac{\beta_{A_{1}}}{\alpha_{A_{1}}}t+\frac{\Delta_{A_{1}}}{\alpha_{A_{1}}}\sin 2\pi f_{A_{1}}t\right),$$ (7) where the notation agrees with that of eq. (2), and $\alpha_{s_{1}}$ and $\alpha_{c_{1}}$ were taken from similar linear+oscillatory model fits to the time-varying harmonic coefficients, such that $\alpha_{s_{1}}^{2}+\alpha_{c_{1}}^{2}=\alpha_{A_{1}}^{2}$. Next, $\phi(t)$, $s_{1}(t)$ and $c_{1}(t)$ were inserted into the model formula (3) to obtain the pure noise-free light curves at times $t_{i}$: $$Y(t_{i})=a_{0}+s_{1}(t_{i})\sin\phi(t_{i})+c_{1}(t_{i})\cos\phi(t_{i})+\sum_{m=2}^{M}\left[s_{m}\sin m\phi(t_{i})+c_{m}\cos m\phi(t_{i})\right],$$ where $a_{0}$ and the $m\geq 2$ harmonic coefficients were kept constant. We did not add long-term polynomial components $\sum_{k=1}^{3}a_{k}t_{i}^{k}$ in these simulations.
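The phase integral of eqs. (4) and (5) can be evaluated numerically; a sketch with our own naming is given below. We include a factor $2\pi$ so that the returned phase can be passed directly to $\sin\phi$ and $\cos\phi$ (an assumption on our part, since eq. (4) leaves the cycles-versus-radians convention implicit):

```python
import numpy as np

def instantaneous_phase(t_grid, alpha_P, beta_P, delta_P, f_P):
    """Phase phi(t) = 2*pi * integral_0^t f(s) ds (eq. 4), with the modulated
    frequency of eq. (5); trapezoidal rule on a fine grid, phi(0) = 0."""
    f = 1.0 / (alpha_P + beta_P * t_grid
               + delta_P * np.cos(2.0 * np.pi * f_P * t_grid))
    dphi = 0.5 * (f[1:] + f[:-1]) * np.diff(t_grid)   # trapezoid increments
    return 2.0 * np.pi * np.concatenate(([0.0], np.cumsum(dphi)))
```

For a constant period ($\beta_{P}=\Delta_{P}=0$) this reduces to the usual linear phase $2\pi t/P$, which provides a simple sanity check on the integration.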
To create realistic simulations, we adopted real time samplings and our estimates of the fitted trend+oscillation model parameters for three real Cepheids: CEP-1405 (FO twin peak featuring oscillations in $A_{1}$, as well as a trend and oscillation in $P$), CEP-1748 (FU twin peak with significant variations only in $A_{1}$) and CEP-2470 (FU twin peak with significant variations only in $P$). We tested the performance of the kernel method on several different modulation types, from simple to complex: (1) using period trends of increasing slope up to the trend found in OGLE-LMC-CEP-1405, (2) superposing oscillations of varying frequency and amplitude on the trend of OGLE-LMC-CEP-2470, and (3) simulating the full estimated models of OGLE-LMC-CEP-1748 and OGLE-LMC-CEP-1405. To these noise-free light curves, we added independent Gaussian error terms using the OGLE magnitude error estimates as the standard deviation of the Gaussian, generating 250 independent repetitions of noisy light curves for each investigated pure light curve. Finally, we repeated the sliding window estimation on each of the repetitions, using the same tuning parameters as in the investigation of the observed Cepheids. B.1 Trends in the pulsation period To investigate the method’s power to detect trends in the pulsation period, we simulated a stable light curve with parameters identical to those of CEP-1405, adding linear trends in the pulsation period as $\beta_{P}=a\beta_{P,1405}$, where $\beta_{P,1405}=-1.8\times 10^{-7}\,d/d$ (see Table 3), and $a\in\{0,0.1,0.25,0.5,1,2\}$. Figure 13 shows the median estimates and the 0.025- and 0.975-quantiles for $a=0,0.25\mathrm{\;and\;}1$. The local kernel estimation is unbiased apart from end effects at the beginning and end of the observation span, both for the period trend and for the constant amplitude. The characteristic OGLE time sampling does not leave a strong systematic imprint on the estimates.
The simulation with $a=0$ (i.e., a perfectly stable light curve) reveals no tendency to incorrectly suggest a non-existing trend or fluctuation. Inserting non-zero trends into simulations of CEP-1405, we find that our method would have detected trends as small as a quarter of the observed trend, cf. the green line in Fig. 13. Both the trend in the period and the stability of the amplitude are well estimated, which suggests that the model is able to correctly disentangle effects in the pulsation period and amplitude. B.2 Combined oscillation and trend in the pulsation period Trends are smooth long-timescale variations and are usually well estimated by local models. However, modulations on shorter timescales can also occur in Cepheids, and the length of the temporal (sliding) window crucially affects what timescales can be detected. To simulate the case of short-timescale modulations, we adopted the parameters of CEP-2470, whose period and amplitude fluctuations are near the detectability limit of our 3-year-wide sliding window. We superposed five different oscillations of the pulsation period on a period trend of $\beta_{P}=\beta_{P,2470}=-5.983\times 10^{-8}\,d/d$. Four of these had the same oscillation amplitude $\Delta_{P}=\Delta_{P,2470}$ as the best-fit estimate of CEP-2470, with frequencies $f_{P}=a\,f_{P,2470}$ and $a\in\{0.25,0.5,0.75,1\}$. The fifth simulation used $\Delta_{P}=4\Delta_{P,2470}$ and $f_{P}=f_{P,2470}$. In all cases, we kept all other parameters at the values of the stable reference model of CEP-2470 (no variations in the amplitude were added). Figure 14 illustrates the results of this investigation. As expected, slow period fluctuations that are on the order of or longer than the sliding window duration are well estimated by the local kernel method (two top panels).
The size of faster period fluctuations ($f_{P}\geq 0.75\,f_{P,2470}$) is increasingly underestimated (middle panel), and the kernel procedure eventually fails to detect fluctuations on timescales that approach the time interval within which our weights are non-negligible. The kernel procedure does, however, allow us to recover the correct fluctuation frequency if the intensity of such fast fluctuations is increased ($\Delta_{P}=4\Delta_{P,2470}$, bottom panel), even though the fluctuation intensity is then underestimated by about a factor of four. Thus, we conclude that long-timescale fluctuations (relative to the sliding window size) are well estimated, whereas the detectability of shorter-timescale fluctuations decreases with the fluctuation timescale and depends on the intensity of the fluctuations. The characteristic size of the fluctuations is systematically underestimated; this bias is discussed in detail in Appendix C. The harmonic amplitudes $A_{1}$, which were kept constant in these simulations, were correctly reproduced in all five simulations and are not shown here for brevity. B.3 Adding amplitude variations We have verified the ability of the local kernel method to correctly detect amplitude modulations in the case of stable pulsation periods. To this end, we simulated variations in $A_{1}$ using CEP-1748’s estimated $A_{1}$ fluctuations, setting both $\beta_{P,1748}=0$ and $\Delta_{P,1748}=0$. As Figure 15 shows, amplitude modulations on timescales longer than the window length are well estimated (bottom panel). However, based on the above discussion of the bias in detecting fluctuations in the pulsation period, we expect a similar detection bias regarding fluctuations of $A_{1}$, cf. App. C. The assumed constant pulsation period is recovered as such (top panel). Combining amplitude and period fluctuations, we have simulated a case in analogy to CEP-1405, cf. Figure 16.
We here adopted a fluctuation timescale that would be readily detectable using a 3-year sliding window (cf. App. B.1). As Fig. 16 shows, the kernel method provides unbiased estimates of both the variations in $A_{1}$ and $P$, and reveals no issues related to the separation of the two phenomena. Appendix C Reliability and bias of the modulation parameter estimates Appendices B.2 and B.3 used simulated amplitude and period fluctuations to show that both types of fluctuations (separate or combined) are accurately recovered if the timescale of the fluctuations is at least half of the sliding window’s timespan. However, they also showed that the sliding window introduces a bias in the estimated intensity of period and/or amplitude fluctuations, which implies that estimates obtained via the heuristic model (2) are also potentially biased or unreliable. Here, we investigate how the limitations of the kernel method influence our results, specifically regarding the frequency and amplitude of any detected fluctuations in $A_{1}$ and $P$. We selected five Cepheids spanning a range of different catalog pulsation periods (CEP-1527, $P_{\mathrm{cat}}=1.49d$, CEP-2217, $P_{\mathrm{cat}}=2.31d$, CEP-2191, $P_{\mathrm{cat}}=4.21d$, CEP-1140, $P_{\mathrm{cat}}=8.19d$, and CEP-1833, $P_{\mathrm{cat}}=19.16d$) and different time samplings. We then simulated ten different periodic period modulations for each Cepheid, covering the frequency range detectable by our sliding window, using modulation frequencies $f_{i}=i/3652.5,\;i=1,2,\ldots,10$. The basic model in each case used the catalog pulsation period and the harmonic coefficients determined from the stable reference model. We here assumed high modulation amplitudes ($0.002c/d$) in order to clearly illustrate the detection bias as a function of the fluctuation frequency (the expected ratio between the true and estimated modulation amplitudes does not depend on the signal-to-noise ratio).
We estimated each of the simulated light curves (with 200 different added Gaussian white noise sequences for each) with our sliding window method, and fitted model (2) to each of the estimated time series of periods. Figure 17 shows the results of this bias estimation. Its top panel shows the distribution of fitted modulation frequencies as a function of the true modulation frequency. For modulation frequencies $\lesssim 0.0015c/d$, i.e., a 2-year modulation period, our method achieves essentially unbiased frequency estimates: in most cases the estimates for all stars are very close to the true modulation frequency. Specifically, we do not find a dependence of the fluctuation estimates on the catalog pulsation period. The simulations involving the longest-period Cepheid (CEP-1833), however, exhibit significantly higher variance, in particular at the lowest frequencies. This can be attributed to its long period: fewer full light curve cycles are observed in one window, and thus the change in period cannot be estimated as precisely as for a shorter-period Cepheid. Above the limiting modulation frequency of $0.0015c/d$, the estimated frequencies for all five simulated Cepheids are much more scattered, and interestingly, their median often falls not on the true value, but on its yearly alias. To assess the bias of the modulation amplitude estimates from model (2), we computed the ratios between the estimated and the true amplitude, as shown in the bottom panel of Fig. 17. As hinted at by Fig. 14, the kernel method’s estimation of the modulation amplitude is biased. Fig. 17 suggests that this bias is nearly linear in the true modulation frequency, at least within the frequency range that can be reliably detected ($f_{P}<0.0015c/d$). Fitting this relationship, we obtain the empirical bias-correction formula $$\rho\approx 1.012-465.372f_{P},$$ (8) which we use to estimate the true underlying modulation in Table 3 and in Figures 7, 8 and 9.
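Inverting relation (8) gives a one-line correction for amplitudes fitted below the reliability limit; a sketch, where the function name and the validity guard are our own additions:

```python
def correct_modulation_amplitude(a_est, f_p):
    """Undo the window-induced amplitude bias of Eq. (8):
    rho = (estimated amplitude)/(true amplitude) ~ 1.012 - 465.372*f_P,
    with the modulation frequency f_p in cycles/day."""
    if not (0.0 <= f_p < 0.0015):
        # Eq. (8) is calibrated only in the reliably detectable range
        raise ValueError("correction calibrated only for f_P < 0.0015 c/d")
    rho = 1.012 - 465.372 * f_p
    return a_est / rho
```

At $f_{P}=0.001\,c/d$, for example, $\rho\approx 0.547$, so a fitted amplitude must be scaled up by almost a factor of two.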
Furthermore, the above simulations can be used to form an idea of how much the particular time sampling locally influences the shape of the estimated pattern. Figure 18 presents the estimated modulations of the period in a case where the frequency of the modulation was fixed around the upper detection limit of our 3-year window (corresponding to a period of 2 years). The simulation in the uppermost panel uses the time sampling, catalog period and stable harmonic parameters of CEP-2217 (FO, harmonic order $H=3$, $P_{\mathrm{cat}}=2.31d$); the middle one, those of CEP-1140 (FU, harmonic order $H=12$, $P_{\mathrm{cat}}=8.19d$); and the bottom panel, those of CEP-1833 (FU, harmonic order $H=12$, $P_{\mathrm{cat}}=19.16d$). The systematic distortions caused by the different time samplings are noticeable, as is the increasing statistical uncertainty due partly to the need to estimate more parameters from a similar amount of data. The shape of the estimated signal for the characteristics of CEP-2217 and CEP-1140 is remarkably stable: even at this relatively high frequency the signal is doubtless present, albeit under-estimated, and the deformations depend predominantly on the time sampling and only little on the noise. In our other two simulations, using CEP-1527 and CEP-2191, the estimated signal pattern is even less variable than for CEP-2217. The larger variance of the estimates in the case of CEP-1833 implies a higher uncertainty in the estimates of the frequency and the amplitude. Appendix D Figures of the temporal behaviour of a few variability parameters D.1 Pulsation period The figures show the kernel-estimated pulsation period (in days) versus Julian Date (in days). This function is plotted as a solid thick black line, together with its bootstrapped pointwise CI (thin black lines; see Section 2.2). The heavy orange line denotes the catalog period, which was used as a known (non-optimized) value in the fitted stable reference model for the Cepheid.
The orange band indicates the nonparametric bootstrap CI around this period, obtained from the procedure described in Section 2.2. The dotted black vertical lines indicate years after HJD$-2450000$. The grey background highlights time intervals where the deviation from the stable reference model was found significant by the multiple testing procedure of Benjamini & Yekutieli (2001). The times of the observations are indicated with a rugplot at the bottom of the panels. D.2 Amplitude of first harmonic term $A_{1}$ The figures show the kernel-estimated amplitude of the leading harmonic term (in magnitudes; for the definition, see Section 3.2) versus Julian Date (in days). This function is plotted as a solid thick black line, together with its bootstrapped pointwise CI (thin black lines; see Section 2.2). The heavy orange line denotes the best-fit amplitude from the stable reference model. The orange band indicates the nonparametric bootstrap CI obtained from the procedure described in Section 2.2. The dotted black horizontal and vertical lines are aids to the eye to estimate the extent and time interval of the changes. The grey background highlights time intervals where the deviation from the stable reference model was found significant by the multiple testing procedure of Benjamini & Yekutieli (2001). The times of the observations are indicated with a rugplot at the bottom of the panels. D.3 Light curve shape The figures show the kernel-estimated light curve shapes in two windows with little or no overlap, in which the peak-to-peak amplitude or the amplitude of the leading harmonic term was very different. The two curves are indicated by different colours, and the observations in the different windows, by different colours and shapes of the symbols. The times of the window centres are given in the legend.
The observations close to the window centre, and therefore more influential in the fit, have darker red or blue colours, whereas those in the wings are shown in lighter shades. Approximate confidence intervals for the curves are also given as light red and light blue stripes around the estimated lines. They were computed based on the bootstrap repetitions of the time-varying window estimates, as described in Section 2.2 and Appendix A. In both windows, the kernel-estimated coefficients from the bootstrap repetitions in the window were used to reconstruct 250 bootstrap light curves. The stripes indicate the total span of these light curves. Appendix E Table of trends and fluctuations in the pulsation period and the first harmonic amplitude
A Magnetar Flare in the BATSE Catalog? A. Crider Abstract To identify extragalactic magnetar flares, we have searched for their periodic tails by generating Lomb periodograms of the emission following short bursts detected by the Burst and Transient Source Experiment (BATSE). Out of 358 short bursts examined, one has a significant tail periodicity ($T=13.8$ s, $P=4\times 10^{-5}$). The most probable host galaxy for this burst is “The Fireworks Galaxy” NGC 6946 ($d=5.9$ Mpc). At this distance, the energy of the spike, $(2.7\pm 0.3)\times 10^{44}$ ergs, is akin to those of the galactic magnetar giant flares, as are its duration ($\sim 0.4$ s) and temperature ($250\pm 60$ keV). For the tail emission, however, our estimated temperature of $60\pm 5$ keV is harder and the energy release of $(4.3\pm 0.8)\times 10^{45}$ ergs is larger than those of the galactic magnetar flares. Regardless of the host, such a large ratio of tail-to-spike energy would imply that magnetar flare tails might be detectable out to greater distances than previously thought. Keywords: gamma-ray bursts, magnetars. PACS: 98.70.Rz, 97.60.Jd 1 Introduction Three times within the past thirty years, short (0.2-0.35 s), intense gamma-ray flares followed by softer ($kT\sim 25$ keV), periodic ($T=5-8$ s) tails have erupted from the sources of the much fainter soft-gamma repeaters (SGR). The magnetar model proposed by Duncan and Thompson (Duncan and Thompson, 1992) has had considerable success in explaining SGR and the occasional giant flares. Assuming that our galaxy is not unique, magnetar flares should also occur in nearby galaxies. These would likely be labeled as short GRB, since the characteristic pulsating tail would be near or below the background and the spectrum of the initial spike is similar to that of classical GRB (Fenimore et al., 1996).
To identify them, one can exploit three distinguishing attributes: (a) their locations relative to nearby galaxies, (b) their spectral temperatures, and (c) their faint oscillating tails. In this study, we searched for faint oscillations in the emission following short GRB. 2 Procedures The current BATSE catalog contains 2702 gamma-ray bursts; 2041 have calculated $T_{90}$ durations. For the 358 short bursts ($T_{90}<1.0$ s), we extracted the 30-50 keV, 64-ms lightcurve (as the galactic magnetar flare tails were soft) and refit polynomial backgrounds using data from 100 s before to 200 s after the trigger. We then generated a Lomb periodogram (Press et al., 1992) for the 100-s intervals that immediately followed each short GRB. Of the 358 bursts analyzed, only one had a significance $P<1/358$. The periodogram plotted in Figure 1 for GRB 970110 (BATSE #5770) reveals a highly significant (Lomb power=17.8, $P=3\times 10^{-5}$) peak periodicity of 13.8 s. We found the signal separately in both of the two BATSE Large Area Detectors (LADs) facing the event. Examining the 50-100 keV channel independently reveals the same strong periodicity (Lomb power=16.2, $P=1\times 10^{-4}$). No significant signal was found in the upper two channels ($>$100 keV), as might be expected for a soft magnetar flare tail. Figure 2 shows the countrate rebinned from 64-ms to 2-s bins and overlaid with a 13.8-s sinusoid to illustrate the periodicity found originally by the Lomb periodogram. To test if this periodic signal was from a source unrelated to the burst, we examined periodograms of the pre-burst (-100-to-0 s) and subsequent (100-to-200 s) regions and found no significant periodicity, with maximum significances of $P=0.55$ and 0.45, respectively. While the 1.024-s binning of the pre-burst emission limits the sensitivity of the Lomb periodogram to some extent, the interval of interest (0-to-100 s) retains a marginally significant spike ($P=0.01$) when resampled to this resolution.
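The search procedure can be sketched with `scipy.signal.lombscargle`. The normalization by the lightcurve variance and the false-alarm estimate $1-(1-e^{-z})^{M}$ follow the Lomb periodogram prescription of Press et al. (1992); the sampling, signal amplitude, and number of independent frequencies $M$ below are illustrative assumptions, not the BATSE values:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)
# irregularly sampled 100-s post-burst interval with a 13.8-s periodic tail
t = np.sort(rng.uniform(0.0, 100.0, 1500))
y = 0.5 * np.sin(2.0 * np.pi * t / 13.8) + rng.normal(0.0, 1.0, t.size)

periods = np.linspace(2.0, 50.0, 2000)
omega = 2.0 * np.pi / periods                 # angular trial frequencies
yc = y - y.mean()
z = lombscargle(t, yc, omega) / yc.var()      # variance-normalized Lomb power
best_period = periods[np.argmax(z)]

M = t.size                                    # crude count of independent frequencies
fap = 1.0 - (1.0 - np.exp(-z.max()))**M       # false-alarm probability
```

With this normalization, a Lomb power of 17.8 searched over of order a thousand independent frequencies corresponds to a false-alarm probability of order $10^{-5}$, consistent with the quoted significance.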
Curiously, there may be pulses in phase with the tail during the 50 s before the trigger, as can be seen in Figure 2. However, the lack of a sustained periodic signal before or 100 s after the trigger suggests that the pulsations are indeed a transient phenomenon temporally coincident with the spike. Additionally, we found no hint of a 13.8-s periodicity in the DISCLA data of the other six LADs, suggesting the source is indeed in the direction of the burst. The Australia Telescope National Facility (ATNF) Pulsar Catalog (Manchester et al., 2005), which includes the known gamma-ray pulsars and the anomalous X-ray pulsars, contains no sources with a periodicity $>2$ s inside the 99.7% confidence location of GRB 970110. While Cyg X-1, in its “soft/low” state on 1997 January 10, is just outside of the 99.7% BATSE confidence circle, there are no reports of it having a periodicity close to 13.8 s. The duration of the spike ($\sim 0.4$ s) is very similar to those of the magnetar flares. Using NASA-MSFC’s rmfit analysis software and BATSE DISCSC data, we also found a similar spike spectrum. An optically-thin thermal bremsstrahlung (OTTB) spectrum gives an acceptable fit ($\chi^{2}=1.32,\nu=2,P=0.52$) with a temperature ${kT}_{\rm{OTTB}}=250\pm 60$ keV. A blackbody spectrum fits more poorly ($\chi^{2}=8.2,\nu=2,P=0.017$) with ${kT}_{\rm{BB}}=30$ keV. Our result is consistent with the peak of the 1979 March 5 event (${kT}_{\rm{OTTB}}=246$ keV; Fenimore et al., 1996), but is softer than the 2004 December 27 flare (${kT}_{\rm{BB}}=175\pm 25$ keV; Hurley et al., 2005). Using rmfit, we found the fluence of the spike to be $(6.5\pm 0.5)\times 10^{-8}~\rm{erg~{cm}^{-2}}$ in BATSE’s 25-2000 keV window. Determining the tail’s spectrum and fluence required calibrating the measured Lomb power and the total detector counts in each channel.
We created several synthetic magnetar tails using the Swift BAT light curve for the 2004 December 27 magnetar flare ($t=205$ to 505 s). These were scaled to represent what would be seen in each BATSE channel, with total counts ranging from 200 to 400,000. A periodogram for each allowed us to estimate the functional relationship (roughly quadratic in the region of interest) between the integrated detector counts in a channel and the Lomb power. We found our Lomb powers of 17.7 in channel 0 and 16.2 in channel 1 corresponded to 6000 counts and 5100 counts, respectively. We next convolved an OTTB spectrum ($kT_{OTTB}$ spanning 5 to 70 keV) with the detector response matrix for BATSE trigger #5770 to determine the number of counts expected in the four channels. From this, we estimated that the temperature of the tail is ${kT}_{\rm{OTTB}}=60\pm 5$ keV, notably harder than in other magnetar flares. With this spectrum, and the Lomb power for channel 0, we estimate a tail energy fluence of $(7.5\pm 1.5)\times 10^{-7}~\rm{erg~cm}^{-2}$. The fraction of the total energy in the tail emission (94%) is much larger than for the recent 2004 December 27 event (0.3%), but is comparable to that of the 1979 March 5 event (75%). 3 Discussion The BATSE localization of GRB 970110 includes few nearby galaxies. The dwarf spheroidal galaxy Draco ($d$ = 0.08 Mpc) and the blue compact dwarf galaxy NGC 6789 ($d$ = 3.6 Mpc) both fall just inside of the 95.4% confidence circle, but have low a priori probabilities of producing a magnetar flare based on their low star formation rates. Instead, we find a Bayesian probability of 87% that, of the galaxies within 10 Mpc, the “Fireworks Galaxy” NGC 6946 ($d$ = 5.9 Mpc) is the host. While just outside of the 95.4% confidence circle, its very high star formation rate of 3.12 $M_{\odot}~{\rm{yr}}^{-1}$ (Karachentsev et al., 2005) makes it a likely source.
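The conversion from fluence to isotropic-equivalent energy used below is $E=4\pi d^{2}F$; a quick numerical check at the distance of NGC 6946, with a function name of our own choosing:

```python
import math

CM_PER_MPC = 3.0857e24  # centimetres per megaparsec

def isotropic_energy(fluence_erg_cm2, distance_mpc):
    """Isotropic-equivalent energy E = 4*pi*d^2 * fluence (erg)."""
    d_cm = distance_mpc * CM_PER_MPC
    return 4.0 * math.pi * d_cm**2 * fluence_erg_cm2

# spike fluence of GRB 970110 at d = 5.9 Mpc
e_spike = isotropic_energy(6.5e-8, 5.9)   # ~2.7e44 erg, as quoted in the text
```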
In fact, we estimate a 38% a priori probability that a magnetar flare from NGC 6946 exists in the BATSE catalog. Assuming isotropic emission, the energy fluence in the spike corresponds to $(2.7\pm 0.3)\times 10^{44}$ erg, comparable to the 1979 March 5 flare, which had a spike energy of $1.2\times 10^{44}$ erg (Mazets et al., 1979). Its tail energy of $(4.3\pm 0.8)\times 10^{45}$ ergs is larger than those of the three galactic magnetar flares, but only $\sim 10\times$ more than that of the 1979 March 5 event. The requisite dipole magnetic field strength of the magnetar that would confine such a fireball would be $$B_{\star}>1.4\times 10^{15}\left(\frac{E_{\rm tail}}{4.3\times 10^{45}~{\rm erg}}\right)^{1/2}\left(\frac{\Delta R}{10~{\rm km}}\right)^{-3/2}\left(\frac{1}{2}+\frac{\Delta R}{2R_{\star}}\right)^{3}~{\rm G},$$ where $R_{\star}$ is the stellar radius and $\Delta R$ is the outer radius of the magnetic loop confining the plasma (Thompson and Duncan, 1995). If instead GRB 970110 is from the Draco dwarf galaxy, then its energetics are more similar to the event that occurred two days after the 1998 August 27 giant flare (Ibrahim et al., 2001). In either case, the relatively large fraction of energy in the tail of this event suggests that the range to which Swift might detect similar periodicities should be extended. Hurley et al. (2005) calculated that magnetar periods might be measured by Swift out to a distance of $\sim 2-8.5$ Mpc based on the fluence observed for the 2004 December 27 event. Approximately 15% of the tail energy in the 1997 January 10 event ($6.5\times 10^{44}$ ergs) was released in the Swift XRT band (0.3-10 keV), suggesting that if this event is from NGC 6946, the Swift detection range for tails can be extended to $\sim 5-20$ Mpc. The author thanks Michael Briggs, Dana Hurley Crider, and his PHY 251 students for helpful comments. This work was supported by a grant from Elon University. References Duncan and Thompson (1992) R. C. Duncan and C.
Thompson, Astrophys. J. Lett. 392, L9–L13 (1992). Fenimore et al. (1996) E. E. Fenimore, R. W. Klebesadel, and J. G. Laros, Astrophys. J. 460, 964 (1996). Press et al. (1992) W. H. Press et al., Numerical Recipes: The Art of Scientific Computing, 2nd edn., Cambridge University Press, Cambridge (UK) and New York, 1992, ISBN 0-521-43064-X. Manchester et al. (2005) R. N. Manchester, G. B. Hobbs, A. Teoh, and M. Hobbs, Astron. J. 129, 1993–2006 (2005). Hurley et al. (2005) K. Hurley et al., Nature 434, 1098–1103 (2005). Karachentsev et al. (2005) I. D. Karachentsev, S. S. Kajsin, Z. Tsvetanov, and H. Ford, Astron. Astrophys. 434, 935–938 (2005). Mazets et al. (1979) E. P. Mazets et al., Nature 282, 587–589 (1979). Thompson and Duncan (1995) C. Thompson and R. C. Duncan, Mon. Not. R. Astron. Soc. 275, 255–300 (1995). Ibrahim et al. (2001) A. I. Ibrahim et al., Astrophys. J. 558, 237–252 (2001).
Institute of Physics, Academia Sinica, Taipei 11529, Taiwan, Republic of China Quark-hadron duality for heavy meson mixings in the ’t Hooft model Hiroyuki Umeeda [email protected] Abstract We study local quark-hadron duality and its violation for the $D^{0}-\bar{D}^{0}$, $B^{0}_{d}-\bar{B}^{0}_{d}$ and $B^{0}_{s}-\bar{B}^{0}_{s}$ mixings in the ’t Hooft model, which offers a laboratory to test QCD in two-dimensional spacetime together with the large-$N_{c}$ limit. With the ’t Hooft equation solved numerically, the width difference is calculated as an exclusive sum over two-body decays. The obtained rate is compared to the inclusive one that arises from four-quark operators to check the validity of the heavy quark expansion (HQE). In view of the observation in four dimensions that the HQE prediction for the width difference in the $D^{0}-\bar{D}^{0}$ mixing is four orders of magnitude smaller than the experimental data, in this work we investigate duality violation in the presence of the GIM mechanism. We show that the order of magnitude of the observable in the $D^{0}-\bar{D}^{0}$ mixing is enhanced in the exclusive analysis relative to the inclusive counterpart, when the 4D-like phase space function is used for the inclusive analysis. By contrast, it is shown that for the $B^{0}_{d}-\bar{B}^{0}_{d}$ and $B^{0}_{s}-\bar{B}^{0}_{s}$ mixings, small yet non-negligible corrections to the inclusive result emerge, which are still consistent with what is currently indicated in four dimensions. Keywords: Heavy Quark Physics, 1/N Expansion, Nonperturbative Effects 1 Introduction The theory of heavy quark physics, established since the 1980s, has already reached its mature stage. While its early development is characterized particularly by the heavy quark symmetry, nowadays it has turned into a systematic framework to handle non-perturbative aspects of quantum chromodynamics (QCD).
Equipped with Wilson’s operator product expansion (OPE) Wilson:1969zs ; Wilson:Proc ; Wilson:1973jj (the ideas were adopted to QCD in Refs. Shifman:1978bx ; Shifman:1978by ; Novikov:1984rf ), certain processes in the deep Euclidean domain are factorized into short- and long-distance objects. The former is calculated via perturbation theory, while the latter is evaluated by non-perturbative methods such as lattice QCD. The OPE formula is then converted into one in the Minkowskian domain, on which physical processes of interest lie, via analytic continuation. As a result, the observables are expanded in the inverse of the heavy quark mass, $1/m_{Q}$. This methodology, referred to as the heavy quark expansion (HQE) Bigi:1992su ; Bigi:1992ne ; Blok:1992hw ; Blok:1992he (see, e.g., Refs. Bigi:1997fj ; Lenz:2014jha for reviews), is quite successful in describing inclusive processes for the $b$ quark. The current results for the lifetime ratios of $b$-hadrons Lenz:2014jha ; Kirk:2017juj ; Cheng:2018rkz and the width difference in the $B^{0}_{s}-\bar{B}^{0}_{s}$ mixing Lenz:2019lvd show an excellent agreement with the Heavy Flavor Averaging Group (HFLAV) data Amhis:2019ckw . In contrast to the successful aspects of the HQE for the $b$ quark, there exists a two-fold complexity in treating the $c$ quark: (1) charm might be too light for applying the HQE, and (2) due to the Glashow-Iliopoulos-Maiani (GIM) mechanism Glashow:1970gm , observables undergo severe cancellation, unlike the milder one for the $b$ quark. Due to the latter, specifically relevant for flavor-changing neutral current (FCNC) processes, observables are subject to the suppressions of SU(3) breaking Kingsley:1975fe and/or the tiny product of Cabibbo-Kobayashi-Maskawa (CKM) matrix Cabibbo:1963yz ; Kobayashi:1973fv elements, $V_{cb}^{*}V_{ub}$.
One such notoriously difficult FCNC process of the $c$ quark is the $D^{0}-\bar{D^{0}}$ mixing111For the experimental side, the first evidence was found by the Belle Staric:2007dt and BABAR Aubert:2007wf collaborations in 2007. Subsequent confirmation was made by the CDF Aaltonen:2007ac and LHCb Aaij:2013wda experiments. Currently, the average over large datasets Amhis:2019ckw shows that zero values of the mixing parameters are excluded by more than $11.5\sigma$ Amhis:2019ckw , so that the occurrence of the $D^{0}-\bar{D^{0}}$ mixing has been firmly verified. See Refs. Amhis:2019ckw ; Lenz:2020awd for the details of the experimental status and references therein., which proceeds via the $\Delta C=2$ transition (see Refs. Burdman:2003rs ; Lenz:2020awd for reviews). Two possible methods to calculate the $D^{0}-\bar{D^{0}}$ mixing exist in the literature: exclusive and inclusive approaches, where the latter is based on the HQE. In the exclusive approach Falk:2001hx ; Wolfenstein:1985ft ; Donoghue:1985hh ; Colangelo:1990hj ; Buccella:1994nf ; Kaeding:1995zx ; Falk:2004wg ; Cheng:2010rv ; Gronau:2012kq ; Jiang:2017zwr , the experimental data of hadronic decays are utilized so that the relevant long-distance effect can be properly extracted. The modern analyses Cheng:2010rv ; Jiang:2017zwr showed that two-body decays of the $D^{0}$ meson account for roughly half of the width difference, although there lies a difficulty in handling other multi-body modes. Hence, while the order of magnitude of the width difference was reproduced, quantitative agreement has not yet been realized in the exclusive approach. On the other hand, the situation of the inclusive approach to the $D^{0}-\bar{D^{0}}$ mixing is somewhat different from that of the exclusive one. Owing to the severe GIM cancellation, the inclusive values of the mass and width differences are considerably suppressed, as can be seen from formulae obtained by the box diagrams in Refs.
Hagelin:1981zk ; Cheng:1982hq ; Buras:1984pq ; Datta:1984jx and also by the heavy quark effective field theory in Refs. Georgi:1992as ; Ohl:1992sr . The later update including next-to-leading order (NLO) corrections, obtainable from proper replacement in the $B^{0}-\bar{B^{0}}$ mixing Beneke:1996gn ; Beneke:1998sy ; Dighe:2001gc ; Ciuchini:2003ww (see also Petrov:1997ch ), gives a width difference about four orders of magnitude smaller Golowich:2005pt ; Bobrowski:2010xg than the HFLAV data Amhis:2019ckw . This huge discrepancy is to be contrasted with the exclusive approach, in which the order of magnitude is accommodated. Another point to be mentioned is that the HQE prediction for $\tau(D^{+})/\tau(D^{0})$ in Ref. Kirk:2017juj is in agreement with the HFLAV data Amhis:2019ckw , albeit with a huge uncertainty on the theoretical side, indicating that the HQE for the $c$ quark is more or less meaningful in processes without GIM cancellation. In order to interpret the aforementioned disagreement, several possibilities are discussed in the literature222See the status summarized in Ref. Jubb:2016mvq .: the first one is attributed to the contributions of higher-dimensional operators, potentially leading to an enhancement, as discussed in Georgi:1992as ; Bobrowski:2010xg ; Bigi:2000wn ; Falk:2001hx . For further clarifying this possibility, one should calculate a number of non-perturbative matrix elements for $D=9,12$ operators. A new physics contribution is also considered a candidate for explaining the gap. See, e.g., Refs. Golowich:2006gq ; Golowich:2007ka ; Golowich:2009ii ; Gedalia:2009kh for studies in the context of new physics. A subtle point discussed in the recent work Lenz:2020efu is that if one adopts $\mu_{1}$, a scale at which the bi-local process induced by the $\Delta C=1$ operators is calculated, differently for individual internal quark contributions, a sufficient enhancement is realized after taking the sum over flavors.
In this respect, a natural question might be how large the next-to-next-to-leading order (NNLO) QCD corrections Asatrian:2017qaz ; Asatrian:2020zxa will be after their completion. Furthermore, another recent study Li:2020xrz , where the dispersion relation is regarded as a constraining equation to determine the width difference at low energies, indicated that the inclusive approach potentially leads to an enhancement. An alternative possibility to interpret the discrepancy is violation of quark-hadron duality.333In the past, duality violation was considered crucial to explain the lifetime ratio $\tau(\Lambda_{b})/\tau(B_{d})$, although this explanation was falsified by updated experimental data. The notion of duality originates from the investigations of Bloom-Gilman Bloom:1970xb ; Bloom:1971ye and Poggio-Quinn-Weinberg Poggio:1975af stating that inclusive hadronic cross sections at high energies are described by the quark-gluon picture. The case with observables smeared over energies is referred to as “global duality,” while the one without smearing is called “local duality.” The difficulty in handling duality violation is traced back to the truncations of the perturbative series in $\alpha_{s}$ and the OPE. Specifically, the proliferation of Feynman diagrams gives rise to factorial divergence, which is not included in the practical version of the OPE. In addition, it is known that renormalons Beneke:1998ui , referring to contributions from particular diagrams, also lead to factorial divergence. Furthermore, the series from the OPE is divergent Shifman:1994yf ; Shifman:1995mt as well. Due to those corrections, the higher-order perturbative series should be truncated at an optimal order, leaving an uncertainty in the perturbative prediction. Thus, the accuracy of the resultant HQE, intrinsically relying on the truncated series with the analytic continuation, is limited by those non-perturbative effects. See, e.g., Refs.
Shifman:2000jv ; Bigi:2001ys for further details regarding duality violation. While a first-principles method in the Minkowskian domain is obviously preferable, duality violation is hard to quantify as long as one depends on the truncated perturbative series (for the wording of “duality violation,” we follow the clear-cut definition due to Shifman Shifman:2000jv , referring to the error beyond the natural uncertainties of the truncated series from $\alpha_{s}$ and the OPE). In the literature, certain dynamical mechanisms are considered as models of duality violation. These approaches are: (a) the instanton-based model in Refs. Chay:1994si ; Chay:1994dk ; Falk:1995yc ; Chibisov:1996wf and (b) the resonance-based model in Refs. Shifman:1994yf ; Shifman:1995mt ; Zhitnitsky:1995qa ; Blok:1997hs ; Colangelo:1997ni ; Grinstein:1997xk ; Bigi:1998kc ; Grinstein:1998gc ; Bigi:1999fi ; Bigi:1999qe ; Burkardt:2000uu ; Burkardt:2000ez ; Lebed:2000gm ; Beane:2001uj ; Grinstein:2001zq ; Grinstein:2001nu ; Mondejar:2006ct ; Mondejar:2008pi ; Mondejar:2009td and also in Ref. Golowich:1998pz .444Another purely phenomenological approach based on a simple model Jubb:2016mvq showed that a $20\%$ violation of duality can account for the width difference of the $D^{0}-\bar{D^{0}}$ mixing. See also Refs. Gambino:2020crt ; Fukaya:2020wpp for recent works in lattice QCD to calculate inclusive processes. For (a), the usual perturbative analysis is replaced by one in the medium of a (fixed-size) instanton, a classical solution to the Yang-Mills equations in Euclidean space Belavin:1975fg . This procedure leads to the contribution of a finite-distance singularity from the quark Green function, in addition to the practical OPE as the short-distance expansion, and gives a possible duality-violating term that has an exponential-like functional form. By performing analytic continuation to the Minkowskian domain, an oscillatory correction to the practical OPE arises when the quark mass is not heavy enough.
As for (b), duality violation is studied on the basis of the tower of hadronic excited states that follow the linear Regge trajectory and the large-$N_{c}$ limit (the finite correction from $1/N_{c}$ can be also included). This was considered for the hadronic vacuum polarization in Ref. Blok:1997hs . By summing over each hadronic propagator, one finds that the vacuum polarization is recast into Euler’s $\psi$ function, whose asymptotic expansion leads to the OPE series. By comparing the hadronic result and the OPE series, where the latter is truncated in practice, one can investigate duality although for the vacuum polarization, either smearing or $1/N_{c}$ correction should be taken into account to gain a reasonable result, since local duality is maximally violated even for large energies for this case. Resonance-based investigation of duality is greatly facilitated with the help of the ’t Hooft model tHooft:1973 , $1+1$ dimensional SU($N_{c}$) gauge theory in the large-$N_{c}$ limit tHooft:1973alw ; Coleman:1985 ; Manohar:1998xv ; tHooft:2002ufq , in which case only the planar diagrams give non-vanishing contributions. The Bethe-Salpeter equation Nambu:1997vt ; Salpeter:1951sz in the light-cone gauge leads to a relation constraining wave functions and masses of mesons, the so-called ’t Hooft equation. Being solvable, the equation unambiguously determines the properties of mesons in this formalism, thereby offering a useful laboratory to examine the non-perturbative dynamics of strong interaction. The (asymptotic) linear Regge trajectory, a key ingredient in (b), can be demonstrated in the model. Supported by such tractable features, discreteness of the mass spectra is shown mathematically Federbush:1976eh , as is required by confinement. 
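The bound-state equation referred to above takes, in the standard dimensionless convention ($\mu^{2}$ in units of $g^{2}N_{c}/\pi$, $\alpha_{i}=\pi m_{i}^{2}/g^{2}N_{c}-1$), the form $\mu^{2}\phi(x)=\big(\alpha_{1}/x+\alpha_{2}/(1-x)\big)\phi(x)-\mathrm{P}\!\int_{0}^{1}dy\,\phi(y)/(x-y)^{2}$. A minimal numerical sketch, using our own direct discretization on a midpoint grid rather than the basis-expansion methods of the cited literature (accuracy near the endpoints is limited):

```python
import numpy as np

def thooft_spectrum(alpha1, alpha2, n=400):
    """Eigenvalues mu^2 (units of g^2 N_c / pi) of the discretized
    't Hooft equation with alpha_i = pi*m_i^2/(g^2 N_c) - 1.
    Using the identity  P int_0^1 dy/(x-y)^2 = -(1/x + 1/(1-x)),
    the principal value is rewritten as a regular, subtracted integral:
        mu^2 phi(x) = ((alpha1+1)/x + (alpha2+1)/(1-x)) phi(x)
                      - int_0^1 dy (phi(y)-phi(x))/(x-y)^2 ,
    so a symmetric midpoint grid can simply skip y = x."""
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)                 # dummy value, overwritten below
    inv2 = 1.0 / d**2
    np.fill_diagonal(inv2, 0.0)
    H = -h * inv2                            # off-diagonal: -h/(x_i - x_j)^2
    diag = (alpha1 + 1.0) / x + (alpha2 + 1.0) / (1.0 - x) + h * inv2.sum(axis=1)
    H[np.diag_indices(n)] = diag
    return np.sort(np.linalg.eigvalsh(H))
```

For massless quarks ($\alpha_{i}=-1$) the constant wave function is an exact zero mode, which this discretization reproduces exactly, and the excited states line up on the asymptotically linear trajectory mentioned above.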
Posterior to the original work tHooft:1973 , the scattering amplitude, discussion in the axial gauge, chiral symmetry breaking, simulation on the lattice (with finite $N_{c}$), generalized parton distribution functions, weak decays of heavy quark, etc., are investigated in Refs. Callan:1975ps ; Einhorn:1976uz ; Pak:1976dk ; Hanson:1976ey ; Bars:1977ud ; Brower:1978wm ; Zhitnitsky:1985um ; Li:1986gf ; Li:1987hx ; Huang:1988br ; Burkardt:1992qm ; Burkardt:1991ea ; Jaffe:1991ib ; Grinstein:1992ub ; Grinstein:1994nx ; Barbon:1994au ; Aoki:1995dh ; Krauth:1996dg ; Abdalla:1998sg ; Abdalla:1999av ; Armoni:2000uw ; Berruto:2002gn ; Grinstein:2006pz ; Mondejar:2008dt ; Grinstein:2008wm ; Glozman:2012ev ; Jia:2017uul ; Jia:2018qee . Particularly noteworthy is that the intermediate meson pole contribution to the heavy-to-light form factor is demonstrated for any heavy quark mass, and the correction to the approximation is also determined, so that QCD dynamics in heavy quark decays can be clarified in $1+1$ dimensions Grinstein:1994nx . Numerical tHooft:1973 ; Hanson:1976ey ; Brower:1978wm ; Huang:1988br ; Jaffe:1991ib ; Krauth:1996dg ; Armoni:2000uw ; Fonseca:2006au , semi-analytical Harada:1997kq and analytical Lewy ; Hildebrandt1 ; Hildebrandt2 ; Hildebrandt3 ; Bruning ; Fateev:2009jf ; Ziyatdinov:2010vg ; Zubov:2015ura methods to obtain solutions to the ’t Hooft equation are investigated in the vast literature. The mentioned tractable features of the ’t Hooft model enable us to test quark-hadron duality. In the previous studies, this test is applied for hadronic spectral density functions Zhitnitsky:1995qa ; Blok:1997hs ; Bigi:1998kc ; Lebed:2000gm related to $e^{+}e^{-}$ annihilation and $\tau$ decays, deep inelastic scattering Mondejar:2008pi ; Mondejar:2009td and heavy meson decays Zhitnitsky:1995qa ; Grinstein:1997xk ; Bigi:1998kc ; Grinstein:1998gc ; Bigi:1999fi ; Bigi:1999qe ; Burkardt:2000ez ; Lebed:2000gm ; Grinstein:2001zq ; Mondejar:2006ct . 
Some of those references analytically derived the oscillating behavior of process rates as the energy or the heavy quark mass is lowered, a behavior that is not captured by the practical OPE. Thus, the ’t Hooft model is broadly considered to offer a reliable methodology for analyzing duality violation, although how the results are altered quantitatively in $3+1$ dimensions remains unclear. In this work, we study quark-hadron duality and its violation for heavy meson mixings in the ’t Hooft model.555Duality violation in the $D^{0}-\bar{D^{0}}$ mixing was addressed in Ref. Bigi:2000wn , which mainly discussed the matrix element of the higher-dimensional operator that depends linearly on the strange quark mass and thereby avoids the strong GIM cancellation. We first calculate the meson mixings based on the box diagrams in two dimensions, corresponding to the contributions of the four-quark operators in the heavy quark expansion (HQE). Also calculated is the same observable based on the exclusive sum over final states, where the two-body decays are dominant in the large-$N_{c}$ limit since the $n$-meson coupling is suppressed by $N_{c}^{1-n/2}$. To perform the exclusive analysis, following the formalism in Refs. Grinstein:1997xk ; Grinstein:1998gc , we represent the topological amplitudes Chau:1982da ; Chau:1986jb ; Chau:1987tk ; Chau:1989tk in terms of overlap integrals of meson wave functions, which can be determined as numerical solutions to the ’t Hooft equation. Then, the two calculated quantities are compared in order to check the validity of the HQE. A non-trivial point in this comparison is that the GIM mechanism potentially affects the order of magnitude of the observables. The investigation of the $D^{0}-\bar{D^{0}}$ mixing, subject to the strong GIM cancellation, is therefore distinguished from those of the $B^{0}_{q}-\bar{B}^{0}_{q}~{}(q=d,s)$ mixings.
We show that a large correction to the box diagram is realized for $D^{0}-\bar{D^{0}}$ when the phase space function is given solely by the 4D-like one, for certain choices of the strange quark mass. As for the $B_{q}^{0}-\bar{B}_{q}^{0}$ mixing, the correction is much smaller than that for the $D^{0}-\bar{D}^{0}$ mixing, consistent with the realistic observations in four dimensions. Furthermore, this work deals with heavy meson decays into light mesons, such as $D^{0}\to\pi^{+}\pi^{-}\to\bar{D^{0}}$, in addition to decays into heavy mesons. Little is known about duality in the former case, while for the latter, especially $\bar{B}^{0}_{s}\to D_{s}^{(*)}D_{s}^{(*)}\to B^{0}_{s}$, an agreement between the partonic rate and the exclusive rate was shown Aleksan:1993qp (see also the later study Chua:2011er ) in the small-velocity limit Shifman:1987rj together with the heavy quark and large-$N_{c}$ limits. This paper is organized as follows: In Sec. 2, the formalism of the meson mixings, including the formulae for the width differences, is exhibited. In Sec. 3, we first recapitulate the ’t Hooft model to establish the notation. Subsequently, the absorptive parts of the partonic transitions, $c\bar{u}\to u\bar{c}$ and $b\bar{q}\to q\bar{b}$ ($q=d,s$), are calculated. Then, by taking the matrix elements, we obtain the formula of the HQE from the four-quark operators. The counterpart in the exclusive approach is also obtained in the large-$N_{c}$ limit. We show the numerical results regarding the violation of local duality in Sec. 4, by first analyzing the width differences from the individual flavors and then showing the results in the presence of the GIM mechanism. Finally, we conclude in Sec. 5.
2 Formalism in the CP conserving limit 2.1 $D^{0}-\bar{D^{0}}$, $B^{0}_{d}-\bar{B^{0}_{d}}$ and $B^{0}_{s}-\bar{B^{0}_{s}}$ mixings For the $D^{0}-\bar{D^{0}}$ mixing, we introduce mass eigenstates denoted by $\ket{D_{1,2}}$ that diagonalize the Schrödinger equations Zyla:2020zbs in the CP-conserving limit, where $\ket{D_{1}}(\ket{D_{2}})$ coincides with a CP-even (odd) state. The off-diagonal element of the mixing matrix is given by, $$\displaystyle M_{21}^{(D^{0})}-\frac{i}{2}\Gamma_{21}^{(D^{0})}=\frac{\bra{\bar{D^{0}}}\mathcal{H}_{W}^{(D^{0})}\ket{D^{0}}}{2M_{D^{0}}},\quad\mathcal{H}_{W}^{(D^{0})}=\mathcal{H}_{W}^{(D^{0},\>\mathrm{dis})}-\frac{i}{2}\mathcal{H}_{W}^{(D^{0},\>\mathrm{abs})}.$$ (1) $M_{21}^{(D^{0})}$ and $\Gamma_{21}^{(D^{0})}$ are associated with the contributions of off-shell and on-shell intermediate states, respectively. The width difference between the two CP states, defined by $\Delta\Gamma_{D}=\Gamma_{1}^{(D^{0})}-\Gamma_{2}^{(D^{0})}$, can be expressed in terms of the off-diagonal element of the mixing matrix, $$\displaystyle\Delta\Gamma_{D}=2\Gamma_{21}^{(D^{0})},$$ (2) in the CP-conserving limit. The sign of the above observable is to be determined experimentally in this convention. As for the $B^{0}_{q}-\bar{B^{0}_{q}}$ mixing ($q=d,s$), a commonly adopted convention is based on $\ket{B_{H}}$ and $\ket{B_{L}}$, the heavier and lighter eigenstates. In the CP conserving limit, one finds that the sign of $\Delta\Gamma=\Gamma_{H}-\Gamma_{L}$ depends on that of $M_{12}$, unlike in Eq. (2), as can be seen in Eq. (2.16) of Ref. Buras:1984pq . In order to compare the results of the $D^{0}-\bar{D^{0}}$ and the $B^{0}_{q}-\bar{B^{0}_{q}}$ mixings on an equal footing, a convention similar to that of the $D^{0}-\bar{D^{0}}$ mixing is adopted for the $B^{0}_{q}-\bar{B^{0}_{q}}$ mixing.
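As a side illustration (not part of the original derivation), the relation $\Delta\Gamma_{D}=2\Gamma_{21}^{(D^{0})}$ in Eq. (2) can be checked by diagonalizing a generic $2\times 2$ effective Hamiltonian with real off-diagonal entries, as appropriate for the CP-conserving limit. All numerical values below are arbitrary placeholders.

```python
# Check of Delta Gamma = 2 * Gamma_12 (Eq. (2)) for a CP-conserving
# 2x2 effective Hamiltonian H_eff = M_matrix - (i/2) * Gamma_matrix.
# The numbers are arbitrary illustrative values, not taken from the paper.
M, Gamma = 1.0, 0.1          # diagonal entries
M12, Gamma12 = 0.02, 0.005   # real off-diagonal entries (CP limit)

a = M - 0.5j * Gamma         # diagonal element of H_eff
b = M12 - 0.5j * Gamma12     # off-diagonal element of H_eff

# For [[a, b], [b, a]] the eigenvalues are a +/- b, with eigenvectors
# (1, +/-1)/sqrt(2), i.e. the CP-even and CP-odd combinations.
lam_even, lam_odd = a + b, a - b

Gamma_even = -2.0 * lam_even.imag   # width of the CP-even eigenstate
Gamma_odd = -2.0 * lam_odd.imag     # width of the CP-odd eigenstate

print(Gamma_even - Gamma_odd)       # -> 2 * Gamma12 = 0.01 up to rounding
```

Which eigenvector is CP-even depends on the CP phase convention; only the magnitude of the splitting is convention-independent.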
That is, we introduce mass eigenstates of $\ket{B_{1,2}}$, where $\ket{B_{1}}(\ket{B_{2}})$ is a CP-even (odd) state, and define $\Delta\Gamma_{B_{q}}=\Gamma_{1}^{(B_{q})}-\Gamma_{2}^{(B_{q})}$. For this case, a notation similar to that for the $D^{0}-\bar{D^{0}}$ mixing is introduced, $$\displaystyle M_{12}^{(\bar{B^{0}_{q}})}-\displaystyle\frac{i}{2}\Gamma_{12}^{(\bar{B^{0}_{q}})}=\displaystyle\frac{\bra{B^{0}_{q}}\mathcal{H}_{W}^{(\bar{B^{0}_{q}})}\ket{\bar{B^{0}_{q}}}}{2M_{B^{0}_{q}}},\quad\mathcal{H}_{W}^{(\bar{B^{0}_{q}})}=\mathcal{H}_{W}^{(\bar{B^{0}_{q}},\>\mathrm{dis})}-\frac{i}{2}\mathcal{H}_{W}^{(\bar{B^{0}_{q}},\>\mathrm{abs})},$$ (3) $$\displaystyle\Delta\Gamma_{B_{q}}=2\Gamma_{12}^{(\bar{B^{0}_{q}})}.$$ (4) Hereafter we exploit $\Gamma_{21}^{(D^{0})}=\Gamma_{12}^{(D^{0})}$, valid in the CP conserving limit, and do not utilize the notation $\Gamma_{21}^{(D^{0})}$ for brevity: we calculate the $D^{0}\to\bar{D^{0}}$ transition for the $D^{0}-\bar{D^{0}}$ mixing, while $\bar{B^{0}_{q}}\to B^{0}_{q}$ is computed for the $B^{0}_{q}-\bar{B^{0}_{q}}$ mixing, in the common notation of $\Gamma_{12}$. 2.2 Width differences For the $D^{0}-\bar{D^{0}}$ and $B^{0}_{q}-\bar{B^{0}_{q}}$ mixings, $\Gamma_{12}$ in Eqs.
(1, 3) are given by the following expressions $(\alpha=\mathrm{inc},\mathrm{exc})$, $$\displaystyle\Gamma_{12}^{(D^{0},\>\mathrm{\alpha})}$$ $$\displaystyle=$$ $$\displaystyle\lambda_{d}^{2}\Gamma^{(D^{0},\>\mathrm{\alpha})}_{dd}+2\lambda_{s}\lambda_{d}\Gamma^{(D^{0},\>\mathrm{\alpha})}_{sd}+\lambda_{s}^{2}\Gamma^{(D^{0},\>\mathrm{\alpha})}_{ss},$$ (5) $$\displaystyle\Gamma_{12}^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}$$ $$\displaystyle=$$ $$\displaystyle\lambda_{u(q)}^{2}\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{uu}+2\lambda_{c(q)}\lambda_{u(q)}\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{cu}+\lambda_{c(q)}^{2}\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{cc}.\quad$$ (6) The products of the CKM matrix elements are defined by, $$\displaystyle\lambda_{i}$$ $$\displaystyle=$$ $$\displaystyle V_{ci}^{*}V_{ui},\quad(i=d,s,b)$$ (7) $$\displaystyle\lambda_{j(q)}$$ $$\displaystyle=$$ $$\displaystyle V_{jb}V_{jq}^{*},\quad(j=u,c,t~{}\mathrm{and}~{}q=d,s)$$ (8) where in the CP conserving limit, $\lambda_{i}$ and $\lambda_{j(q)}$ are both real-valued. We shall adopt the Wolfenstein parameters of the Particle Data Group (PDG) Zyla:2020zbs to calculate Eqs. (7, 8) for the numerical results presented in Sec. 4.2. $\Gamma_{12}^{(H,\>\mathrm{inc})}(H=D^{0},\bar{B^{0}_{d}},\bar{B^{0}_{s}})$ is evaluated through the quark-level analysis of the HQE, while $\Gamma_{12}^{(H,\>\mathrm{exc})}$ is computed on the basis of the solutions to the ’t Hooft equation by taking the sum over exclusive hadronic final states. The three pieces, $\Gamma_{dd}^{(D^{0},\>\mathrm{inc})},\Gamma^{(D^{0},\>\mathrm{inc})}_{sd}$ and $\Gamma^{(D^{0},\>\mathrm{inc})}_{ss}$ (and similar objects for $\bar{B^{0}_{q}}$), represent the individual quark contributions in the loop, while in their exclusive counterparts ($\mathrm{inc}\to\mathrm{exc}$) the intermediate particles are given by the associated bound states.
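The CKM products of Eqs. (7, 8), their unitarity relation and the hierarchy $|\lambda_{s}|\gg|\lambda_{b}|$ can be made explicit in a small numerical sketch. The Wolfenstein values below are rough, PDG-like central values chosen purely for illustration (with the CP-violating parameter set to zero, matching the CP-conserving limit); they are not inputs of this paper.

```python
# Wolfenstein parametrization to O(lambda^3), CP-conserving limit (eta = 0).
# Numerical values are illustrative, roughly PDG-like central values.
lam, A, rho = 0.225, 0.82, 0.14

V_ud, V_us, V_ub = 1 - lam**2/2, lam,           A*lam**3*rho
V_cd, V_cs, V_cb = -lam,         1 - lam**2/2,  A*lam**2

# lambda_i = V_ci * V_ui (Eq. (7); real in the CP-conserving limit)
lam_d = V_cd * V_ud
lam_s = V_cs * V_us
lam_b = V_cb * V_ub

# Unitarity: lambda_d + lambda_s + lambda_b = 0 holds up to O(lambda^5) here
print(lam_d + lam_s + lam_b)        # tiny residual, order 1e-4 or below

# The hierarchy exploited for Eq. (15): |lambda_s| >> |lambda_b|
print(abs(lam_s) / abs(lam_b))      # ratio of order 10^3
```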
Exploiting the unitarity relation, $\lambda_{d}+\lambda_{s}+\lambda_{b}=0~{}(\lambda_{u(q)}+\lambda_{c(q)}+\lambda_{t(q)}=0)$, one can eliminate $\lambda_{d}~{}(\lambda_{u(q)})$ in Eq. (5) (Eq. (6)) and write, $$\displaystyle\Gamma_{12}^{(D^{0},\>\mathrm{\alpha})}$$ $$\displaystyle=$$ $$\displaystyle\lambda_{s}^{2}\Gamma_{(\mathrm{GIM},~{}1)}^{(D^{0},~{}\alpha)}+2\lambda_{s}\lambda_{b}\Gamma_{(\mathrm{GIM},~{}2)}^{(D^{0},~{}\alpha)}+\lambda_{b}^{2}\Gamma^{(D^{0},\>\mathrm{\alpha})}_{dd},\quad\qquad$$ (9) $$\displaystyle\Gamma_{12}^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}$$ $$\displaystyle=$$ $$\displaystyle\lambda_{c(q)}^{2}\Gamma_{(\mathrm{GIM},~{}1)}^{(\bar{B^{0}_{q}},~{}\alpha)}+2\lambda_{c(q)}\lambda_{t(q)}\Gamma_{(\mathrm{GIM},~{}2)}^{(\bar{B^{0}_{q}},~{}\alpha)}+\lambda_{t(q)}^{2}\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{uu},\quad\qquad$$ (10) where the combinations for individual contributions of flavors are given by, $$\displaystyle\Gamma_{(\mathrm{GIM},~{}1)}^{(D^{0},~{}\alpha)}$$ $$\displaystyle=$$ $$\displaystyle\Gamma^{(D^{0},\>\mathrm{\alpha})}_{dd}+\Gamma^{(D^{0},\>\mathrm{\alpha})}_{ss}-2\Gamma^{(D^{0},\>\mathrm{\alpha})}_{sd}$$ (11) $$\displaystyle\Gamma_{(\mathrm{GIM},~{}2)}^{(D^{0},~{}\alpha)}$$ $$\displaystyle=$$ $$\displaystyle\Gamma^{(D^{0},\>\mathrm{\alpha})}_{dd}-\Gamma^{(D^{0},\>\mathrm{\alpha})}_{sd}$$ (12) $$\displaystyle\Gamma_{(\mathrm{GIM},~{}1)}^{(\bar{B^{0}_{q}},~{}\alpha)}$$ $$\displaystyle=$$ $$\displaystyle\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{uu}+\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{cc}-2\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{cu}$$ (13) $$\displaystyle\Gamma_{(\mathrm{GIM},~{}2)}^{(\bar{B^{0}_{q}},~{}\alpha)}$$ $$\displaystyle=$$ $$\displaystyle\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{uu}-\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{\alpha})}_{cu}.$$ (14) One finds that Eqs. (11, 12) and Eqs. (13, 14) vanish for $s=d$ and $c=u$, respectively, so that the first two terms in Eq. (9) and Eq. 
(10) are sensitive to flavor symmetry breaking. The characteristic differences between the $D^{0}-\bar{D^{0}}$, $B_{d}^{0}-\bar{B_{d}^{0}}$ and $B_{s}^{0}-\bar{B_{s}^{0}}$ mixings can be found in Eqs. (5-14). To see this, we exploit the hierarchy of the CKM matrix elements, $|\lambda_{s}|\gg|\lambda_{b}|$ for the $D^{0}-\bar{D^{0}}$ mixing and $|\lambda_{u(s)}|\ll|\lambda_{c(s)}|$ for the $B_{s}^{0}-\bar{B_{s}^{0}}$ mixing. If the SU(3) breaking in Eq. (11) is larger than the suppression from $\lambda_{b}$ for the $D^{0}-\bar{D^{0}}$ mixing, we find that the $D^{0}-\bar{D^{0}}$ and $B^{0}_{s}-\bar{B^{0}_{s}}$ mixings are approximated by one term, $$\displaystyle\Gamma_{12}^{(D^{0})}$$ $$\displaystyle\simeq$$ $$\displaystyle\lambda_{s}^{2}\Gamma_{(\mathrm{GIM},~{}1)}^{(D^{0})},$$ (15) $$\displaystyle\Gamma_{12}^{(\bar{B^{0}_{s}})}$$ $$\displaystyle\simeq$$ $$\displaystyle\lambda_{c(s)}^{2}\Gamma_{cc}^{(\bar{B^{0}_{s}})},$$ (16) where Eq. (9) is used for Eq. (15) while Eq. (6) is considered for Eq. (16). As for the $B_{d}^{0}-\bar{B_{d}^{0}}$ mixing, $|\lambda_{u(d)}|,|\lambda_{c(d)}|$ and $|\lambda_{t(d)}|$ are comparable, so that the formula corresponding to Eqs. (15, 16) is not simplified, yet the strong sensitivity to (GIM, 1) is absent. It should be stressed that the order of magnitude of $\Gamma_{12}$ is characterized by flavor symmetry breaking specifically in the case of the $D^{0}-\bar{D^{0}}$ mixing, to be contrasted with the case of the $B_{q}^{0}-\bar{B}_{q}^{0}$ mixing. This aspect, arising from the different CKM structures in the $D,B_{d},B_{s}$ systems, affects the order of magnitude of the final results, as we shall see in Sec. 4.2. In the later numerical analysis of the violation of local duality, we use the exact formulas in Eqs. (9-14) instead of Eqs. (15, 16).
In the CP conserving limit, $\Gamma_{12}^{(H,\>\mathrm{exc})}$ is expressed as a sum over final states Falk:2001hx ; Cheng:2010rv for $(H,\bar{H})=(D^{0},\bar{D^{0}}),(\bar{B^{0}_{d}},B^{0}_{d})$ and $(\bar{B^{0}_{s}},B^{0}_{s})$, $$\displaystyle\Gamma_{12}^{(H,\>\mathrm{exc})}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}\displaystyle\sum_{n}\rho_{n}\left(\bra{\bar{H}}\mathcal{H}_{W}^{|\Delta F|=1}\ket{n}\bra{n}\mathcal{H}_{W}^{|\Delta F|=1}\ket{H}\right.$$ (17) $$\displaystyle\left.+\bra{H}\mathcal{H}_{W}^{|\Delta F|=1}\ket{n}\bra{n}\mathcal{H}_{W}^{|\Delta F|=1}\ket{\bar{H}}\right),$$ with $\rho_{n}$ being the phase space factor. By using a CP transformation, one rewrites the above formula as, $$\displaystyle\Gamma_{12}^{(H,\>\mathrm{exc})}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}\displaystyle\sum_{n}\eta(n)\rho_{n}\left(\bra{H}\mathcal{H}_{W}^{|\Delta F|=1}\ket{\bar{n}}\bra{n}\mathcal{H}_{W}^{|\Delta F|=1}\ket{H}\right.$$ (18) $$\displaystyle\left.+\bra{H}\mathcal{H}_{W}^{|\Delta F|=1}\ket{n}\bra{\bar{n}}\mathcal{H}_{W}^{|\Delta F|=1}\ket{H}\right),$$ where $\eta(n)$ is a phase that depends on each intermediate state. 3 Inclusive and exclusive analyses in $1+1$ dimensions 3.1 The ’t Hooft model The QCD Lagrangian in $1+1$ dimensions has a form apparently similar to that in $3+1$ dimensions, $$\displaystyle\mathcal{L}$$ $$\displaystyle=$$ $$\displaystyle-\frac{1}{4}G_{\mu\nu}^{a}G^{\mu\nu}_{a}+\displaystyle\sum_{f}\bar{\psi}_{f}(i\not{D}-m_{f})\psi_{f},$$ (19) with the covariant derivative defined by $iD_{\mu}=i\partial_{\mu}+gA_{\mu}$. For the second term, the sum runs over flavors. $m_{f}$ and $g$ are a bare mass and a bare coupling, respectively, both of which have unit mass dimension in $1+1$ spacetime.
We introduce the following notation for the QCD coupling, $$\displaystyle\beta^{2}=\frac{g^{2}N_{c}}{2\pi}.$$ (20) $\beta$ is required to be a constant in the large-$N_{c}$ limit for a sensible counting in $N_{c}$, and it sets the unit for all dimensionful quantities in the model. We adopt the light-cone gauge satisfying $A_{-}=0$, in which case the theory becomes ghost-free while the field strength is simplified to be effectively Abelian. With the notations introduced above, the ’t Hooft equation is given by, $$\displaystyle M_{n}^{2}\phi_{n}^{(q_{1}\bar{q}_{2})}(x)=\left(\frac{m_{1}^{2}-\beta^{2}}{x}+\frac{m_{2}^{2}-\beta^{2}}{1-x}\right)\phi_{n}^{(q_{1}\bar{q}_{2})}(x)-\beta^{2}\>\mathrm{Pr}\int_{0}^{1}\mathrm{d}y\frac{\phi_{n}^{(q_{1}\bar{q}_{2})}(y)}{(x-y)^{2}},$$ (21) where $x$ and $1-x$ represent the light-cone momentum fractions that are carried by $q_{1}$ and $\bar{q}_{2}$, respectively. $M_{n}$ denotes the meson mass while $m_{1}$ and $m_{2}$ are the bare masses of $q_{1}$ and $\bar{q}_{2}$, respectively. $\phi_{n}$ is the meson wave function of the $n$-th $(n=0,1,\cdots)$ radial state that satisfies the boundary conditions $\phi_{n}(0)=\phi_{n}(1)=0$. States labeled by $n=$ even are pseudoscalar mesons, with $n=0$ being the ground state, the lightest hadron. The other states with $n=$ odd are scalar mesons. As was shown by ’t Hooft, Eq. (21) is independent of the infrared cut-off. The renormalizations of the fermion masses were already taken into account by shifting the bare masses, $m_{1}^{2}\to m_{1}^{2}-\beta^{2}$ and $m_{2}^{2}\to m_{2}^{2}-\beta^{2}$, in Eq. (21).
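As a quick consistency check of Eq. (21) (a side remark, not part of the text's derivation), one can verify the well-known massless ground state of the chiral limit $m_{1}=m_{2}=0$: the constant wave function solves the equation with $M_{0}=0$, using the finite-part value $\mathrm{Pr}\int_{0}^{1}\mathrm{d}y/(x-y)^{2}=-1/x-1/(1-x)$ implied by 't Hooft's principal-value prescription (the endpoint behavior degenerates in this limit, so the boundary conditions quoted above are relaxed):

```latex
% Chiral limit m_1 = m_2 = 0 with the trial solution phi_0(x) = 1:
M_0^2\,\phi_0(x)
  = -\beta^2\left(\frac{1}{x}+\frac{1}{1-x}\right)
    - \beta^2\,\mathrm{Pr}\!\int_0^1 \mathrm{d}y\,\frac{1}{(x-y)^2}
  = -\beta^2\left(\frac{1}{x}+\frac{1}{1-x}\right)
    + \beta^2\left(\frac{1}{x}+\frac{1}{1-x}\right)
  = 0 .
```

Hence $\phi_{0}(x)=1$ with $M_{0}^{2}=0$ indeed satisfies Eq. (21) in the chiral limit.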
Furthermore, by introducing a meson decay constant for the $n$-th radial state consisting of $q_{1}$ and $\bar{q_{2}}$, $$\displaystyle f_{n}^{(q_{1}\bar{q_{2}})}$$ $$\displaystyle=$$ $$\displaystyle\sqrt{\frac{N_{c}}{\pi}}c_{n}^{(q_{1}\bar{q_{2}})},$$ (22) $$\displaystyle c_{n}^{(q_{1}\bar{q_{2}})}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}\mathrm{d}x\phi_{n}^{(q_{1}\bar{q}_{2})}(x),$$ (23) one writes a matrix element of the axial current, $$\displaystyle\bra{0}\bar{q_{2}}\gamma_{\mu}\gamma_{5}q_{1}\ket{H(p)}$$ $$\displaystyle=$$ $$\displaystyle f_{H}p_{\mu}.$$ (24) Above we used the mesonic notation, $H$, for the ground state consisting of $q_{1}\bar{q_{2}}$, corresponding to $n=0$ in Eqs. (22, 23). The matrix element of the pseudoscalar bilinear, similar to Eq. (24), can be derived by using the equation of motion, while that of the scalar bilinear vanishes. As for the matrix element of the vector current, it can be rewritten as that of the axial-vector current in Eq. (24) by using the gamma-matrix relation in two dimensions, as is done in the Appendix. 3.2 HQE from leading operators We consider the weak vertex that has a generalized Lorentz structure parameterized as, $$\displaystyle\frac{-ig_{2}}{\sqrt{2}}V_{\mathrm{CKM}}\gamma^{\mu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5}),$$ (25) with $V_{\mathrm{CKM}}$ being the CKM matrix element associated with a given process. $c_{\mathrm{V}}=-c_{\mathrm{A}}=1/2$ corresponds to the case where the weak interaction proceeds via the standard model-like V$-$A current. The $W$ boson propagator, taken to be as in $3+1$ dimensions, is, $$\displaystyle\frac{-i}{q^{2}-M_{W}^{2}+i\epsilon}\left(g_{\mu\nu}-\xi\frac{q_{\mu}q_{\nu}}{M_{W}^{2}}\right),$$ (26) where fixing $\xi=1$ leads to the unitary gauge, in which case the contributions of the charged Goldstone bosons are absent. We keep the contribution that is dominant in the limit $M_{W}\to\infty$, corresponding to the $g_{\mu\nu}$ part in Eq. (26).
Below, by using these Feynman rules, we give the effective Hamiltonian leading to the absorptive parts of the $Q\bar{q}\to q\bar{Q}$ transition, with $(Q,\bar{q})$ being $(c,\bar{u})$, $(b,\bar{d})$ or $(b,\bar{s})$, shown in Fig. 1. The details of the calculation are given in the Appendix. As a result, the absorptive parts of the effective Hamiltonian that contribute to the heavy meson mixing in the considered approximations are given by, $$\displaystyle\mathcal{H}_{W}^{(H,\>\mathrm{abs})}$$ $$\displaystyle=$$ $$\displaystyle\displaystyle\sum_{i,j}\lambda_{i}\lambda_{j}(C^{\rm A}_{ij}\mathcal{O}^{\rm A}+C^{\rm P}_{ij}\mathcal{O}^{\rm P}).$$ (27) The coefficients and the four-quark operators are given by, $$\displaystyle C^{\rm A}_{ij}$$ $$\displaystyle=$$ $$\displaystyle+4G_{F}^{2}(c_{\rm V}^{2}-c_{\rm A}^{2})\left[(c_{\rm V}^{2}-c_{\rm A}^{2})\left(F^{\rm(th)}_{ij}+2G^{\rm(th)}_{ij}\right)-(c_{\rm V}^{2}+c_{\rm A}^{2})\left(I^{\rm(th)}_{ij}+I^{\rm(th)}_{ji}\right)\right],\qquad$$ (28) $$\displaystyle C^{\rm P}_{ij}$$ $$\displaystyle=$$ $$\displaystyle-4G_{F}^{2}(c_{\rm V}^{2}-c_{\rm A}^{2})\left[(c_{\rm V}^{2}-c_{\rm A}^{2})\left(G^{\rm(th)}_{ij}+2H^{\rm(th)}_{ij}\right)+(c_{\rm V}^{2}+c_{\rm A}^{2})\left(I^{\rm(th)}_{ij}+I^{\rm(th)}_{ji}\right)\right],\qquad$$ (29) $$\displaystyle\mathcal{O}^{\rm A}$$ $$\displaystyle=$$ $$\displaystyle(\bar{q}^{\alpha}\gamma^{\mu}\gamma_{5}Q^{\alpha})(\bar{q}^{\beta}\gamma_{\mu}\gamma_{5}Q^{\beta}),$$ (30) $$\displaystyle\mathcal{O}^{\rm P}$$ $$\displaystyle=$$ $$\displaystyle(\bar{q}^{\alpha}i\gamma_{5}Q^{\alpha})(\bar{q}^{\beta}i\gamma_{5}Q^{\beta}).$$ (31) Here $F^{\rm(th)}_{ij},G^{\rm(th)}_{ij},H^{\rm(th)}_{ij}$ and $I^{\rm(th)}_{ij}$ represent phase space functions that have non-zero values in the physical region, $$\displaystyle F^{\rm(th)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\sqrt{1-2(z_{i}+z_{j})+(z_{i}-z_{j})^{2}},$$ (32) $$\displaystyle G^{\rm(th)}_{ij}$$ $$\displaystyle=$$
$$\displaystyle\frac{z_{i}+z_{j}-(z_{i}-z_{j})^{2}}{\sqrt{1-2(z_{i}+z_{j})+(z_{i}-z_{j})^{2}}},$$ (33) $$\displaystyle H^{\rm(th)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\frac{\sqrt{z_{i}z_{j}}}{\sqrt{1-2(z_{i}+z_{j})+(z_{i}-z_{j})^{2}}},$$ (34) $$\displaystyle I^{\rm(th)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\frac{\sqrt{z_{i}}(1+z_{i}-z_{j})}{\sqrt{1-2(z_{i}+z_{j})+(z_{i}-z_{j})^{2}}},$$ (35) with $z_{\beta}=m_{\beta}^{2}/m_{Q}^{2}~{}(\beta=i,j)$. The leading contribution for large $m_{Q}$ comes solely from the term proportional to $F^{\rm(th)}_{ij}$. One finds that the coefficients given in Eqs. (28, 29) are proportional to $(c_{\rm V}^{2}-c_{\rm A}^{2})$. Hence, the observables in meson mixings vanish for $c_{\rm V}=\pm c_{\rm A}$, corresponding to the $V\pm A$ current, which is not the case in four dimensions. This is partially attributed to the fact that, in two dimensions, the vector and axial-vector currents are not independent and one can be written in terms of the other. The derivation of Eqs. (28, 29) by means of the Fierz rearrangements in two dimensions is given in the Appendix. Furthermore, the non-vanishing result in the limit $m_{i},m_{j}\to 0$ for the $V\times V$ current ($c_{\mathrm{V}}\neq 0,c_{\mathrm{A}}=0$), observed via Eqs. (28, 29), is to be contrasted with Ref. Bigi:1998kc , where the contribution of the four-fermion operator in the annihilation topology, calculated as an absorptive part, is shown to vanish at zeroth order in the strong interaction. The matrix elements in Eq. (27) can be taken on the basis of the factorization in the large-$N_{c}$ limit with Eq. (24), $$\displaystyle\frac{\bra{\bar{H}}\mathcal{O}_{\rm A}\ket{H}}{2M_{H}}$$ $$\displaystyle=$$ $$\displaystyle f_{H}^{2}M_{H},$$ (36) $$\displaystyle\frac{\bra{\bar{H}}\mathcal{O}_{\rm P}\ket{H}}{2M_{H}}$$ $$\displaystyle=$$ $$\displaystyle f_{H}^{2}M_{H}R,$$ (37) with $R=[M_{H}/(m_{Q}+m_{q})]^{2}$. On the r.h.s. of Eqs.
(36, 37), the factor of two, arising from the two possible ways of contracting the currents with the vacuum insertion, is taken into account and cancels the $1/2$ on the l.h.s. If we go beyond the large-$N_{c}$ limit, an evaluation of the non-perturbative matrix elements in Eqs. (36, 37) would be required, which is beyond our current scope. As far as the four-quark operators are concerned, however, the matrix elements do not give sources of flavor symmetry breaking in Eqs. (11-14). As the main result of this subsection, one finally obtains the HQE expression from the four-quark operators, $$\displaystyle\Gamma^{(H,\>\mathrm{inc})}_{ij}$$ $$\displaystyle=$$ $$\displaystyle(C^{\rm A}_{ij}+C^{\rm P}_{ij}R)f_{H}^{2}M_{H},$$ (38) where again $H$ is either $D^{0},\bar{B^{0}_{d}}$ or $\bar{B^{0}_{s}}$, and $(i,j)$ runs over $(d,d),(s,d),(s,s)$ in the first case and over $(u,u),(c,u),(c,c)$ in the latter two cases. In the limit $m_{Q}\to\infty$, it is well-known that $c_{0}^{(Q\bar{q})}\to 1/\sqrt{m_{Q}}$ and $M_{H}\sim m_{Q}+\mathcal{O}(m_{Q}^{0})$ follow, so that $\Gamma_{ij}^{(H,\>\mathrm{inc})}$ behaves like $\Gamma_{ij}^{(H,\>\mathrm{inc})}\propto\mathrm{const}.$, to be contrasted with the case in $3+1$ dimensions, $\Gamma_{ij}^{(H,\>\mathrm{inc})}\propto m_{Q}^{2}$, as can be seen from Refs. Hagelin:1981zk ; Cheng:1982hq ; Buras:1984pq . This difference results from the fact that both the Fermi constant and the decay constant are dimensionless in $1+1$ spacetime. If we take the massless limit of the internal quarks, Eq. (38) is recast into, $$\displaystyle\Gamma^{(H,\>\mathrm{inc})}_{ij}$$ $$\displaystyle\to$$ $$\displaystyle 4G_{F}^{2}(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})^{2}f_{H}^{2}M_{H}.$$ (39) As we shall see later, Eq. (39) agrees with the exclusive result in the same limit. The $1/m_{Q}$ expansion of the contributions of the four-quark operators in Eq. (38) can be readily studied in the static limit, $m_{Q}=m_{1}\to\infty$, in Eq. (21), as was first discussed in Refs.
Burkardt:1991ea ; Burkardt:1992qm with $t=(1-x)m_{Q}$ and $\psi_{n}(t)=\phi_{n}(1-t/m_{Q})/\sqrt{m_{Q}}$. Below, we give the final results for the ground state from Ref. Lebed:2000gm , $$\displaystyle c_{0}^{(Q\bar{q})}\sqrt{m_{Q}}$$ $$\displaystyle=$$ $$\displaystyle\left[1-\frac{2}{3}\frac{2\bar{\Lambda}-m_{q}}{m_{Q}}\right]F^{(0)}+\mathcal{O}\left(\frac{1}{m_{Q}^{2}}\right),$$ (40) $$\displaystyle M_{H}-m_{Q}$$ $$\displaystyle=$$ $$\displaystyle\bar{\Lambda}+\frac{\braket{\bar{Q}(i\vec{D})^{2}Q}-\beta^{2}}{2m_{Q}}+\mathcal{O}\left(\frac{1}{m_{Q}^{2}}\right),$$ (41) where $F^{(n)}$ is a finite object in the static limit, $$\displaystyle F^{(n)}=\int_{0}^{\infty}\mathrm{d}t~{}\psi_{n}(t)=\lim_{m_{Q}\to\infty}c_{n}^{(Q\bar{q})}\sqrt{m_{Q}}.$$ (42) Moreover, it might be useful to introduce $\delta\equiv R-1$, a quantity suppressed by a power of $1/m_{Q}$, where the expansion of $\delta$ is obtained from Eq. (41), $$\displaystyle\delta=2\frac{\bar{\Lambda}-m_{q}}{m_{Q}}+\mathcal{O}\left(\frac{1}{m_{Q}^{2}}\right).$$ (43) One also finds that the phase space functions in Eqs. (32-35) give corrections to the $1/m_{Q}$ expansion, as seen from the expansion formulae, $$\displaystyle F^{\rm(th)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle 1-(z_{i}+z_{j})-2z_{i}z_{j}+\mathcal{O}(z^{3}),$$ (44) $$\displaystyle G^{\rm(th)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle(z_{i}+z_{j})+4z_{i}z_{j}+\mathcal{O}(z^{3}),$$ (45) $$\displaystyle H^{\rm(th)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\sqrt{z_{i}z_{j}}[1+(z_{i}+z_{j})]+\mathcal{O}(z^{3}),$$ (46) $$\displaystyle I^{\rm(th)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\sqrt{z_{i}}[1+2(1+2z_{j})z_{i}+2z_{i}^{2}+\mathcal{O}(z^{3})].$$ (47) Only $F^{\rm(th)}_{ij}$ is non-vanishing in the static limit ($z_{i},z_{j}\to 0$), while $G^{\rm(th)}_{ij},H^{\rm(th)}_{ij}$ and $I^{\rm(th)}_{ij}$ are sub-leading functions. Combining Eqs. (40-47), one finds that the expansion for the width difference in Eq.
(38) starts from $1/m_{Q}$, $$\displaystyle\Gamma^{(H,\>\mathrm{inc})}_{ij}$$ $$\displaystyle=$$ $$\displaystyle 4G_{F}^{2}(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})\frac{N_{c}}{\pi}\left[F^{(0)}\right]^{2}\left[(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})\left(1-\frac{5\bar{\Lambda}-4m_{q}}{3m_{Q}}\right)\right.$$ (48) $$\displaystyle\left.-2(c_{\mathrm{V}}^{2}+c_{\mathrm{A}}^{2})\frac{m_{i}+m_{j}}{m_{Q}}+\mathcal{O}\left(\frac{1}{m_{Q}^{2}}\right)\right].$$ It is possible to numerically obtain the explicit coefficients of each $1/m_{Q}$ term with a given mass of the spectator quark, as in Ref. Lebed:2000gm . However, since the $1/m_{Q}$ expansion is not necessary for our current purpose, the numerical results presented in Sec. 4 are based on Eq. (39) instead of Eq. (48). By using Eq. (38), one can write analytical expressions for the GIM combinations in the massless limit of the $d$ quark, i.e., $z_{d}=0$. First we give the formula of Eqs. (11, 12) for the case where only $F_{ij}^{(\rm th)}$, corresponding to the four-dimension-like phase space function, is considered, with the other phase space functions, $G_{ij}^{(\rm th)},H_{ij}^{(\rm th)}$ and $I_{ij}^{(\rm th)}$, neglected, $$\displaystyle\left.\Gamma^{(D^{0},\>\mathrm{inc})}_{(\mathrm{GIM},1)}\right|_{4D-\mathrm{like}}$$ $$\displaystyle=$$ $$\displaystyle\Gamma^{(D^{0},\>\mathrm{inc})}_{dd}\times[-2z_{s}^{2}+\mathcal{O}(z_{s}^{3})],\qquad\qquad$$ (49) $$\displaystyle\left.\Gamma^{(D^{0},\>\mathrm{inc})}_{(\mathrm{GIM},2)}\right|_{4D-\mathrm{like}}$$ $$\displaystyle=$$ $$\displaystyle\Gamma^{(D^{0},\>\mathrm{inc})}_{dd}\times z_{s},$$ (50) where $\Gamma^{(D^{0},\>\mathrm{inc})}_{dd}$ defined here is given by the r.h.s. of Eq. (39). Although $F_{ij}^{(\rm th)}$ is the leading term in the limit $m_{Q}\to\infty$, some care must be taken since the inclusion of the 2D-specific phase space functions affects the resultant counting in $z_{s}$.
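The suppression patterns in Eqs. (49, 50) can be verified directly from the phase space function of Eq. (32). The sketch below keeps only $F^{\rm(th)}_{ij}$ (the 4D-like case) with $z_{d}=0$; the value of $z_{s}$ is an illustrative placeholder, not an input of the paper.

```python
import math

def F_th(z_i, z_j):
    """4D-like phase space function of Eq. (32)."""
    return math.sqrt(1 - 2*(z_i + z_j) + (z_i - z_j)**2)

# Keep only F^(th) and normalize Gamma_dd to 1, so that
# Gamma_ij / Gamma_dd = F_th(z_i, z_j) with z_d = 0.
z_s = 0.05                      # illustrative value of (m_s / m_c)^2

gamma_dd = F_th(0.0, 0.0)       # = 1
gamma_sd = F_th(z_s, 0.0)       # = 1 - z_s exactly for z_s < 1
gamma_ss = F_th(z_s, z_s)       # = sqrt(1 - 4 z_s)

gim1 = gamma_dd + gamma_ss - 2*gamma_sd   # combination of Eq. (11)
gim2 = gamma_dd - gamma_sd                # combination of Eq. (12)

print(gim2)                     # = z_s exactly, cf. Eq. (50)
print(gim1 / (-2*z_s**2))       # -> 1 + O(z_s), cf. Eq. (49)
```

The exact cancellation of the $\mathcal{O}(z_{s})$ term in `gim1` is the GIM suppression discussed in the text.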
We also give alternative expressions that take into account $F_{ij}^{(\mathrm{th})},G_{ij}^{(\mathrm{th})}$ and $H_{ij}^{(\mathrm{th})}$ without $I_{ij}^{(\mathrm{th})}$, $$\displaystyle\left.\Gamma^{(D^{0},\>\mathrm{inc})}_{(\mathrm{GIM},1)}\right|_{4D+2D}$$ $$\displaystyle=$$ $$\displaystyle\Gamma^{(D^{0},\>\mathrm{inc})}_{dd}\times\left[-2z_{s}(1+\delta)+\mathcal{O}(z_{s}^{2})\right],\qquad\qquad$$ (51) $$\displaystyle\left.\Gamma^{(D^{0},\>\mathrm{inc})}_{(\mathrm{GIM},2)}\right|_{4D+2D}$$ $$\displaystyle=$$ $$\displaystyle\Gamma^{(D^{0},\>\mathrm{inc})}_{dd}\times\delta z_{s},$$ (52) to be contrasted with Eqs. (49, 50). Equations (49-52) clearly show that the width difference is suppressed by SU(3) breaking. It is also possible to consider the GIM combinations in the presence of all of $F_{ij}^{(\mathrm{th})},G_{ij}^{(\mathrm{th})},H_{ij}^{(\mathrm{th})}$ and $I_{ij}^{(\mathrm{th})}$. 3.3 Topological amplitude in the large-$N_{c}$ limit In the remaining part of this section, we aim to obtain $\Gamma_{12}^{(\mathrm{exc})}$ as an exclusive sum by using the wave functions and masses of mesons from the ’t Hooft equation. Below, the on-shell intermediate contributions of $H\to f_{1}f_{2}\to\bar{H}$ to the meson mixings are considered, with $f_{1}$ and $f_{2}$ being either pseudoscalar or scalar. The two-body decay contributions are complete in this case, since hadronic states with non-zero angular momentum, e.g., vector and axial-vector mesons, are absent in two dimensions. For neutral mesons, the two-body decay amplitudes are characterized by color-allowed tree ($T$), color-suppressed tree ($C$), exchange ($E$), penguin ($P$), penguin annihilation ($PA$) and penguin exchange ($PE$) diagrams. For the explicit decomposition via the topological amplitudes, see Ref. Cheng:2012xb . In the naive counting, $T\propto N_{c}^{1/2}$, $C,E,P,PA\propto N_{c}^{-1/2}$ and $PE\propto N_{c}^{-3/2}$ follow.
Even if we take into account resonant contributions to some of the topological amplitudes, which strengthen their $N_{c}$ dependence Grinstein:1998gc , the contribution of $T$ to the width remains dominant compared with the others in the large-$N_{c}$ limit. The leading decay amplitudes from two-body pseudoscalar modes in the large-$N_{c}$ limit are given by, $$\displaystyle A[D^{0}\to\pi^{+}\pi^{-}]=V_{ud}V_{cd}^{*}T_{(c\bar{u})(d,d)}^{(0,0)},\quad A[D^{0}\to\pi^{+}K^{-}]=V_{ud}V_{cs}^{*}T_{(c\bar{u})(d,s)}^{(0,0)},$$ $$\displaystyle A[D^{0}\to K^{+}\pi^{-}]=V_{us}V_{cd}^{*}T_{(c\bar{u})(s,d)}^{(0,0)},\quad A[D^{0}\to K^{+}K^{-}]=V_{us}V_{cs}^{*}T_{(c\bar{u})(s,s)}^{(0,0)},$$ (53) where the superscript, $(0,0)$, represents the ground states of the final particles, while the subscripts, $(c\bar{u})$ and $(i,j)=(d,d),(d,s),(s,d),(s,s)$, stand for the flavors in the initial and final states, respectively. Similarly, one introduces the decay amplitudes of $\bar{B^{0}_{d}}$ and $\bar{B^{0}_{s}}$ from the color-allowed tree diagrams as follows, $$\displaystyle A[\bar{B^{0}_{d}}\to\pi^{-}\pi^{+}]=V_{ud}^{*}V_{ub}T_{(b\bar{d})(u,u)}^{(0,0)},\quad A[\bar{B^{0}_{d}}\to\pi^{-}D^{+}]=V_{ud}^{*}V_{cb}T_{(b\bar{d})(u,c)}^{(0,0)},$$ $$\displaystyle A[\bar{B^{0}_{d}}\to D^{-}\pi^{+}]=V_{cd}^{*}V_{ub}T_{(b\bar{d})(c,u)}^{(0,0)},\quad A[\bar{B^{0}_{d}}\to D^{-}D^{+}]=V_{cd}^{*}V_{cb}T_{(b\bar{d})(c,c)}^{(0,0)},$$ $$\displaystyle A[\bar{B^{0}_{s}}\to K^{-}K^{+}]=V_{us}^{*}V_{ub}T_{(b\bar{s})(u,u)}^{(0,0)},\quad A[\bar{B^{0}_{s}}\to K^{-}D^{+}_{s}]=V_{us}^{*}V_{cb}T_{(b\bar{s})(u,c)}^{(0,0)},$$ $$\displaystyle A[\bar{B^{0}_{s}}\to D^{-}_{s}K^{+}]=V_{cs}^{*}V_{ub}T_{(b\bar{s})(c,u)}^{(0,0)},\quad A[\bar{B^{0}_{s}}\to D^{-}_{s}D^{+}_{s}]=V_{cs}^{*}V_{cb}T_{(b\bar{s})(c,c)}^{(0,0)}.$$ (54) We omitted processes that are given by color-allowed tree diagrams but do not contribute to $B^{0}_{q}-\bar{B^{0}_{q}}$ through the most color-favored topology, e.g., $\bar{B_{d}^{0}}\to K^{-}\pi^{+}$ and
$\bar{B_{d}^{0}}\to D^{-}_{s}\pi^{+}$. To additionally include the contributions of scalar and other excited particles in the final states, one generalizes the notations in (53) and (54) to, $$\displaystyle A[(c\bar{u})^{(0)}\to(u\bar{i})^{(k)}(j\bar{u})^{(m)}]$$ $$\displaystyle=$$ $$\displaystyle V_{ui}V_{cj}^{*}T_{(c\bar{u})(i,j)}^{(k,m)},$$ (55) $$\displaystyle A[(b\bar{q})^{(0)}\to(q\bar{i})^{(k)}(j\bar{q})^{(m)}]$$ $$\displaystyle=$$ $$\displaystyle V_{iq}^{*}V_{jb}T_{(b\bar{q})(i,j)}^{(k,m)}.$$ (56) Here, mesonic states are denoted by $(i\bar{j})^{(k)}$, with $i$ and $j$ being the flavors forming the bound state and $k$ being a label of the radially excited states, and with $q=d,s$ in Eq. (56). The initial states are assigned the ground states, $D^{0}$ and $\bar{B}^{0}_{q}$, to describe the processes relevant for the meson mixings. If one takes $k=m=0$, the definitions in Eq. (55) and Eq. (56) reduce to those in (53) and (54), respectively. By performing the phase space integral in $1+1$ dimensions, one writes the partial decay width for the $H\to f_{1}f_{2}$ decay, $$\displaystyle\Gamma$$ $$\displaystyle=$$ $$\displaystyle\frac{|A[H\to f_{1}f_{2}]|^{2}}{4M_{H}^{2}|p_{12}|},$$ (57) $$\displaystyle|p_{12}|$$ $$\displaystyle=$$ $$\displaystyle\frac{M_{H}}{2}\sqrt{1-2\frac{M_{1}^{2}+M_{2}^{2}}{M_{H}^{2}}+\frac{(M_{1}^{2}-M_{2}^{2})^{2}}{M_{H}^{4}}},$$ (58) where $p_{12}$ denotes the momentum of either daughter meson in the rest frame of $H$. A peculiarity of the two-dimensional phase space is present in Eq. (57): the width diverges when $M_{H}=M_{1}+M_{2}$, or equivalently $p_{12}=0$. This is clearly distinct from the $3+1$ dimensional case, where $\Gamma\propto|p_{12}|$ due to the phase space, so that the width vanishes for $p_{12}=0$. In the analytical study Bigi:1998kc , it was shown that this singularity is cancelled by an amplitude in the semi-leptonic decay, in which massless particles are involved.
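The $1/|p_{12}|$ threshold behavior of Eqs. (57, 58) can be made explicit numerically. In the sketch below, the masses and the amplitude normalization are placeholders, and the function names (`p12`, `width_2d`) are ours, not the paper's.

```python
import math

def p12(M_H, M1, M2):
    """Daughter-meson momentum in the H rest frame, Eq. (58)."""
    return 0.5 * M_H * math.sqrt(
        1 - 2*(M1**2 + M2**2)/M_H**2 + (M1**2 - M2**2)**2/M_H**4)

def width_2d(amp2, M_H, M1, M2):
    """Two-body width in 1+1 dimensions, Eq. (57): Gamma ~ 1/|p12|."""
    return amp2 / (4 * M_H**2 * p12(M_H, M1, M2))

# Fix |A|^2 = 1 (illustrative) and let M1 + M2 approach M_H = 2:
M_H, M2 = 2.0, 1.0
for M1 in (0.5, 0.9, 0.99, 0.999):
    print(M1, width_2d(1.0, M_H, M1, M2))   # grows without bound
```

In $3+1$ dimensions the factor $1/|p_{12}|$ would instead be $\propto|p_{12}|$, so the same limit would drive the width to zero.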
As mentioned, we consider the rigorous large-$N_{c}$ limit, where the resonant widths associated with strong decays vanish, in which case the topological amplitudes do not develop imaginary parts. One can write the individual internal quark contributions on the r.h.s. of Eqs. (9, 10) by allocating $n$ in Eq. (18) on the basis of the relevant quantum numbers. The diagrams for exclusive processes, given by the hadronic degrees of freedom in the most color-allowed topology, are shown in Fig. 2 for the heavy meson mixings. Armed with Eq. (18) and the vanishing of the strong phases, we evaluate Fig. 2, $$\displaystyle\Gamma^{(D^{0},\>\mathrm{exc})}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\displaystyle\sum_{k,m}(-1)^{k+m}\frac{T_{(c\bar{u})(i,j)}^{(k,m)}T_{(c\bar{u})(j,i)}^{(m,k)\>*}}{4M_{D^{0}}^{2}|p_{km}|}\quad\mathrm{for}~{}(i,j)=(d,d),(s,d),(s,s),$$ (59) $$\displaystyle\Gamma^{(\bar{B^{0}_{q}},\>\mathrm{exc})}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\displaystyle\sum_{k,m}(-1)^{k+m}\frac{T_{(b\bar{q})(i,j)}^{(k,m)}T_{(b\bar{q})(j,i)}^{(m,k)\>*}}{4M_{B^{0}_{q}}^{2}|p_{km}|}\quad\mathrm{for}~{}(i,j)=(u,u),(c,u),(c,c).$$ (60) The momentum denoted by $p_{km}$ is understood as that in Eq. (58) with the relevant final state. For each individual $(i,j)$ contribution, the sum is taken over $(k,m)$, representing the tower of kinematically allowed excited particles (in addition to the ground states) in the final states. The prefactor $(-1)^{k+m}$ comes from $\eta(n)$ in Eq. (18) and accounts for the parity-odd property of the topological amplitude in the case of $k+m=\mathrm{even}$, due to the proportionality to the spatial component of a momentum with overall negative signs. It should also be noted that there is no orbital angular momentum from the relative motion of the final-state particles in two dimensions. The explicit formula for the color-allowed tree amplitude was obtained in Ref. Grinstein:1997xk .
Below we denote the momenta of mesons labeled by $\mathbf{k},\mathbf{m}$ as $q$ and $p$, respectively. The kinematical variable defined by $\omega=q_{-}/p_{-}$ is determined by, $$\displaystyle\omega=\frac{1}{2}\left[1+\left(\frac{q^{2}-M_{m}^{2}}{M_{0}^{2}}\right)-\sqrt{1-2\left(\frac{q^{2}+M_{m}^{2}}{M_{0}^{2}}\right)+\left(\frac{q^{2}-M_{m}^{2}}{M_{0}^{2}}\right)^{2}}\right].$$ (61) With the generalized Lorentz structure in Eq. (25), we can use the formulas Grinstein:1997xk ; Bigi:1999fi , valid for $(Q,\bar{q})$ equal to $(c,\bar{u}),(b,\bar{d})$ and $(b,\bar{s})$ in the limit of $M_{W}\to\infty$, $$\displaystyle T^{(k,m)}_{(Q\bar{q})(i,j)}$$ $$\displaystyle=$$ $$\displaystyle 2\sqrt{2}G_{F}(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})\sqrt{\frac{N_{c}}{\pi}}c_{k}^{(q\bar{i})}\left[\displaystyle\sum_{n=0}\frac{[(-1)^{k}q^{2}+(-1)^{n}M_{n}^{2}]c_{n}^{(Q\bar{j})}}{q^{2}-M_{n}^{2}}F_{nm}\right.$$ (62) $$\displaystyle\left.+(-1)^{k+1}q^{2}\mathcal{C}_{m}+m_{Q}m_{j}\mathcal{D}_{m}\right],$$ For an on-shell process, $q^{2}$ is set to $M_{k}^{2}$ in Eqs. (61, 62). 
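One can check that Eq. (61) is the smaller root of the quadratic $M_{0}^{2}\omega^{2}-(M_{0}^{2}+q^{2}-M_{m}^{2})\omega+q^{2}=0$, i.e., that it satisfies $q^{2}/\omega+M_{m}^{2}/(1-\omega)=M_{0}^{2}$ away from $\omega=0,1$. A quick numerical sketch (function and variable names are ours) verifies this, along with the $\omega\to 0$ limit used below:

```python
import math

def omega(q2, Mm, M0):
    """Kinematical variable ω of Eq. (61)."""
    a = (q2 - Mm**2) / M0**2
    rad = 1.0 - 2.0 * (q2 + Mm**2) / M0**2 + a**2
    return 0.5 * (1.0 + a - math.sqrt(rad))

# ω is the smaller root of M0² ω² − (M0² + q² − Mm²) ω + q² = 0,
# equivalently q²/ω + Mm²/(1−ω) = M0² for ω away from 0 and 1:
q2, Mm, M0 = 0.04, 0.3, 1.0
w = omega(q2, Mm, M0)
assert abs(q2 / w + Mm**2 / (1.0 - w) - M0**2) < 1e-10
assert 0.0 < w < 1.0

# In the massless limit q² → 0 together with Mm → 0, one has ω → 0:
assert omega(0.0, 0.0, 1.0) == 0.0
```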
$F_{nm}$ denotes the triple overlap integral while $\mathcal{C}_{m}$ and $\mathcal{D}_{m}$ are the quark-model type contact terms Bigi:1999fi , $$\displaystyle F_{nm}$$ $$\displaystyle=$$ $$\displaystyle\omega(1-\omega)\int_{0}^{1}\mathrm{d}x\int_{0}^{1}\mathrm{d}y\frac{\phi_{n}^{(Q\bar{j})}(x)\phi_{m}^{(j\bar{q})}(y)}{[\omega(1-x)+(1-\omega)y]^{2}}$$ (63) $$\displaystyle\times\{\phi_{0}^{(Q\bar{q})}(\omega x)-\phi_{0}^{(Q\bar{q})}[1-(1-\omega)(1-y)]\},$$ $$\displaystyle\mathcal{C}_{m}$$ $$\displaystyle=$$ $$\displaystyle-\frac{1-\omega}{\omega}\int_{0}^{1}\mathrm{d}x\phi_{0}^{(Q\bar{q})}[1-(1-\omega)(1-x)]\phi_{m}^{(j\bar{q})}(x),$$ (64) $$\displaystyle\mathcal{D}_{m}$$ $$\displaystyle=$$ $$\displaystyle-\omega\int_{0}^{1}\mathrm{d}x\frac{\phi_{0}^{(Q\bar{q})}[1-(1-\omega)(1-x)]}{1-(1-\omega)(1-x)}\displaystyle\frac{\phi_{m}^{(j\bar{q})}(x)}{x}.$$ (65) One can analytically simplify the $m_{Q}$ dependence of the width difference in Eqs. (59, 60) in the massless limit of all quarks except the heavy quarks in the initial and final states. In this case, the only non-vanishing contribution in Fig. 2 is $k=m=0$, due to the vanishing of the decay constants for excited states, $c_{n}=0~{}(n\neq 0)$, for massless constituents. Since $M_{k}$ and $M_{m}$ with $k=m=0$ vanish in this limit, $q^{2}\to 0$ together with $\omega\to 0$ follows for both of the two interfering amplitudes in Eqs. (59, 60), in which case all terms except for $\mathcal{C}$ in Eq. (62) vanish.
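For simple trial wave functions the contact terms of Eqs. (64, 65) have closed forms against which a quadrature routine can be checked. Below is a hedged sketch with $\phi_{0}\equiv 1$ and $\phi_{m}(x)=x$ as toy inputs (these are not actual 't Hooft solutions, and all names are ours):

```python
import math

def midpoint(f, n=20000):
    """Midpoint-rule quadrature on [0, 1]."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

def C_contact(phi0, phim, w):
    """Contact term C_m of Eq. (64)."""
    return -(1.0 - w) / w * midpoint(
        lambda x: phi0(1.0 - (1.0 - w) * (1.0 - x)) * phim(x))

def D_contact(phi0, phim, w):
    """Contact term D_m of Eq. (65)."""
    u = lambda x: 1.0 - (1.0 - w) * (1.0 - x)
    return -w * midpoint(lambda x: phi0(u(x)) / u(x) * phim(x) / x)

# Toy inputs: phi0 ≡ 1, phim(x) = x (NOT actual 't Hooft wave functions).
w = 0.3
phi0 = lambda x: 1.0
phim = lambda x: x
# Closed forms for these toys: C = -(1-w)/(2w), D = w ln(w)/(1-w).
assert abs(C_contact(phi0, phim, w) - (-(1.0 - w) / (2.0 * w))) < 1e-6
assert abs(D_contact(phi0, phim, w) - w * math.log(w) / (1.0 - w)) < 1e-6
```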
Then, the interference of the amplitudes is simplified as, $$\displaystyle T^{(0,0)}_{(Q\bar{q})(i,j)}T^{(0,0)\>*}_{(Q\bar{q})(j,i)}$$ $$\displaystyle=$$ $$\displaystyle 8G_{F}^{2}(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})^{2}M_{H}^{4}\frac{N_{c}}{\pi}c_{0}^{(j\bar{q})}c_{0}^{(q\bar{i})}$$ (66) $$\displaystyle\times\int_{0}^{1}\mathrm{d}x\phi_{0}^{(Q\bar{q})}(x)\phi_{0}^{(j\bar{q})}(x)\int_{0}^{1}\mathrm{d}y\phi_{0}^{(Q\bar{q})}(y)\phi_{0}^{(q\bar{i})}(y).$$ By using $c_{0}=1$ and $\phi_{0}(x)=1$ (except for the end points) for massless constituents, we find that the width difference in Eqs. (59, 60) reduces to, $$\displaystyle\Gamma^{(H,\>\mathrm{exc})}_{ij}$$ $$\displaystyle=$$ $$\displaystyle 4G_{F}^{2}(c_{\rm V}^{2}-c_{\rm A}^{2})^{2}f_{H}^{2}M_{H}.$$ (67) It should be noted that Eq. (67) agrees with the HQE result in Eq. (39). Therefore, local duality is unambiguously seen in the massless limit of all quarks except the heavy decaying one, which is indeed an analog of the Pauli interference Bigi:1999fi ; Bigi:1999qe . Moreover, local duality in the heavy meson mixings is understood as an example of the “exclusive” duality Shifman:2000jv , where one exclusive mode approximates the inclusive result. The heavy quark limit is unnecessary to derive duality in this case. Another point to mention is that the twisted sum over exclusive states in Eqs. (59, 60) asymptotically gives $\Gamma_{ij}^{(H,\mathrm{exc})}\to\mathrm{const.}$, while the non-twisted sum, corresponding to non-leptonic decay, scales like $\Gamma_{ij}^{(H,\mathrm{nl})}\propto m_{Q}$ Grinstein:1997xk , so that whether the topology is twisted affects the asymptotic $m_{Q}$ dependence of the observables. 4 Local duality for massive flavors In reality, $s$ and $c$ quarks cannot be regarded as massless particles. Including these masses is crucial in the presence of the GIM mechanism, since otherwise the net observables vanish in the limit where a particular CKM product is neglected.
To this end, in this section, we investigate local duality and its violation for those massive quarks by numerically solving the ’t Hooft equation. In Sec. 4.1, duality in the contributions from individual flavors is discussed. Subsequently, the result for the GIM combination that appears in the observable is presented in Sec. 4.2. In numerically solving the ’t Hooft equation, a standard method adopted in the literature is the Multhopp technique (see Ref. Multhopp and also the appendices of Refs. Jaffe:1991ib ; Grinstein:1997xk for details), in which the wave function is expanded in trigonometric basis functions. The integral equation is then treated as an eigenvalue problem, yielding the asymptotically linear Regge trajectory of the meson mass spectra. The normalization of the eigenvectors obtained is rescaled so as to satisfy $\int_{0}^{1}\mathrm{d}x[\phi(x)]^{2}=1$. It is often pointed out for the Multhopp technique, however, that the behaviors at the end points, $\phi_{n}(x)\sim x^{\beta_{1}}$ for $x\sim 0$ and $\phi_{n}(x)\sim(1-x)^{\beta_{2}}$ for $x\sim 1$ with $\beta_{1,2}$ determined by $\pi\beta_{1,2}\cot(\pi\beta_{1,2})=\beta^{2}-m_{1,2}^{2}$, are not straightforward to obtain. The BSW-improved Multhopp method Brower:1978wm was therefore developed, rendering the behavior at the end points better controlled. In this work, we adopt the method of Ref. Lebed:2000gm , introducing the following expansion, $$\displaystyle\phi_{n}(x)=\displaystyle\sum_{k=1}^{N}a_{k}^{(n)}\sin(k\theta),\quad\theta=\mathrm{arccos}(2x-1).$$ (68) We then convert the ’t Hooft equation into an eigenvalue problem with the recursive formula of Ref. Lebed:2000gm , where the accuracy near the end points is improved by taking large $N$, and obtain $a_{k}^{(n)}$ and $M_{n}^{2}$. Nonetheless, the endpoint behaviors for $x\to 0,1$ are still given by square roots, so that great care must be taken with the accuracy.
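As a minimal, self-contained illustration of how the ’t Hooft integral equation becomes a matrix eigenvalue problem, here is a crude uniform-grid discretization (a sketch in our own notation, not the sine-basis solver of Eq. (68) used in the text). We write the equation in its standard dimensionless form, $\mu^{2}\phi(x)=(\alpha_{1}/x+\alpha_{2}/(1-x))\phi(x)-\mathrm{P}\!\int_{0}^{1}\mathrm{d}y\,\phi(y)/(y-x)^{2}$ with $\alpha_{i}=\pi m_{i}^{2}/(g^{2}N_{c})-1$; the principal value is handled by a standard subtraction, and for massless quarks the exact ground state $\phi_{0}(x)=1$ with $M_{0}^{2}=0$ is reproduced:

```python
import numpy as np

def thooft_matrix(n, alpha1, alpha2):
    """Uniform-grid discretization of the 't Hooft equation
       mu^2 phi(x) = (a1/x + a2/(1-x)) phi(x) - P∫ dy phi(y)/(y-x)^2.
       The principal value is regularized by the subtraction
       P∫ phi(y)/(y-x)^2 dy = ∫ [phi(y)-phi(x)]/(y-x)^2 dy
                              - phi(x) (1/x + 1/(1-x))."""
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h          # midpoint grid on (0, 1)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                K[i, j] = h / (x[j] - x[i]) ** 2
        # subtraction term: makes K act on [phi(y) - phi(x)] plus finite part
        K[i, i] = -K[i].sum() - 1.0 / x[i] - 1.0 / (1.0 - x[i])
    D = np.diag(alpha1 / x + alpha2 / (1.0 - x))
    return D - K   # symmetric matrix; eigenvalues approximate M_n^2

# Massless constituents: alpha_i = -1, and phi = const is an exact
# eigenvector of this discretization with eigenvalue 0.
H = thooft_matrix(200, -1.0, -1.0)
evals, evecs = np.linalg.eigh(H)
assert abs(evals[0]) < 1e-6        # massless ground state, M_0^2 = 0
assert evals[1] > 0.0              # excited tower has positive M_n^2
assert np.allclose(evecs[:, 0], evecs[0, 0])   # flat phi_0(x) = const
```

For massless quarks the matrix reduces to a weighted graph Laplacian (zero row sums, negative off-diagonal entries), which is why the constant mode is its exact kernel; massive cases simply shift the diagonal through $\alpha_{1,2}$.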
As the quark $Q$ forming the bound state $Q\bar{q}$ gets heavier, the meson wave function in the vicinity of $x=1$ becomes increasingly singular. Excited states formed by a light quark and anti-quark with large $n$, whose wave functions oscillate rapidly, also cause errors in the presence of the limited precision around the endpoints. In this work, we take $N$ in Eq. (68) as 500, solve the ’t Hooft equation, and then truncate the heaviest $(500-N_{\mathrm{eff}})$ excited states, which do not follow the linear Regge trajectory, together with their eigenvectors. $N_{\rm eff}$ is varied to test the stability of the numerical results. Moreover, the numerical analysis requires the evaluation of the overlap integrals for the convolution of meson wave functions in Eqs. (63-65), which is more involved than the simpler integral for semi-leptonic decays of heavy mesons in Ref. Lebed:2000gm . In order to guarantee the numerical stability of the results presented below, we neglect the triple overlap integral in Eq. (63), which gives a contribution to the decay amplitudes suppressed by at least $1/m_{Q}^{2}$ Bigi:1999fi relative to the leading terms in Eqs. (64, 65). In this case, the stability under the variation of $N_{\rm eff}$ is verified. Hereafter we fix $N_{\rm eff}=300$. Further improvement of the numerical results entails technical tasks, including the accurate calculation of the endpoint behaviors as well as the precise evaluation of the convolution integrals, which are beyond our current scope; the exclusive results presented below nevertheless capture the leading behaviors in the $1/m_{Q}$ expansion. As was obtained in Sec. 3.2, the HQE result includes a term proportional to $(c_{\rm V}^{4}-c_{\rm A}^{4})$ in addition to the one multiplied by $(c_{\rm V}^{2}-c_{\rm A}^{2})^{2}$, where the former is not included in the exclusive results in Eqs. (59, 60) with Eq. (62).
To compare the inclusive and summed exclusive results in a consistent manner, we take only the terms proportional to $(c_{\rm V}^{2}-c_{\rm A}^{2})^{2}$ in Eqs. (28, 29) in what follows. Before proceeding to the results, a few further remarks are in order: • For the heavy quark decays, spikes in the rate emerge Grinstein:1997xk ; Grinstein:1998gc when the heavy quark mass crosses the threshold values for $M_{H}=M_{k}+M_{m}$, due to the hadronic phase space, unlike the case in $3+1$ dimensions. In order to quantify the violation of local duality, the middle point between the $i$-th and $(i+1)$-th thresholds should be discussed Bigi:1998kc . For the width difference in the heavy meson mixings, analogous spikes appear for massive final states, as well as for decays. The numerical results presented below are based on discrete points for the heavy quark mass that are not (exactly) at the thresholds, to avoid the obvious singularities in Eqs. (59, 60). • In principle, the bare masses and the bare coupling for $d=2$ have no intrinsic relation to those for $d=4$. For illustration, we take reference values of the bare masses for $d=2$ as the central values from the PDG Zyla:2020zbs : $\overline{m}_{s}(2~{}\mathrm{GeV})=93~{}\mathrm{MeV}$, $\overline{m}_{c}(\overline{m}_{c})=1.280~{}\mathrm{GeV}$, $m_{c}^{\mathrm{pole}}=1.67~{}\mathrm{GeV}$, $\overline{m}_{b}(\overline{m}_{b})=4.18~{}\mathrm{GeV}$, $m_{b}^{1S}=4.65~{}\mathrm{GeV}$, $m_{b}^{\mathrm{pole}}=4.78~{}\mathrm{GeV}$. In the calculation of the $B^{0}_{s}-\bar{B}^{0}_{s}$ mixing, the bare strange quark mass is fixed to the $\overline{\rm MS}$ strange quark mass at the scale of the bottom quark mass, evaluated by the renormalization group evolution Chetyrkin:2000yt for $d=4$. The bare masses for $u$ and $d$ quarks are fixed to zero in what follows.
As for the bare coupling, we adopt an ansatz, $\beta=340~{}\mathrm{MeV}$, obtained in such a way that the string tension of QCD${}_{4}$ is fitted Burkardt:2000uu ; Jia:2017uul by $(\pi/2)\beta^{2}=0.18~{}\mathrm{GeV}^{2}$. 4.1 Numerical result for individual flavors Both the inclusive and summed exclusive width differences for the $D^{0}-\bar{D}^{0}$, $B^{0}_{d}-\bar{B}^{0}_{d}$ and $B^{0}_{s}-\bar{B}^{0}_{s}$ mixings are exhibited in Figs. 3-5. The value of $\beta$ affects only the normalization of the vertical axes of the plots and the locations of the vertical lines showing the quark masses in four dimensions. Figure 3a (4a) is based on $m_{s}/\beta=0.32$ ($m_{c}/\beta=2.9$), corresponding to the $\overline{\rm MS}$ mass at the scale of the charm (bottom) quark, while Fig. 3b (4b) shows the result for $m_{s}/\beta=0.40$ ($m_{c}/\beta=4.9$, corresponding to the pole mass). In each panel, two types of the width difference including one or two massive flavors, i.e., $sd$ and $ss$ intermediate states for the $D^{0}-\bar{D}^{0}$ mixing and $cu$ and $cc$ intermediate states for the $B^{0}_{q}-\bar{B}^{0}_{q}$ $(q=d,s)$ mixing, are shown. Results similar to Fig. 4, except that the $B_{d}^{0}-\bar{B}_{d}^{0}$ mixing is replaced by the $B_{s}^{0}-\bar{B}_{s}^{0}$ mixing, are exhibited in Fig. 5. In addition to the results plotted in Figs. 3-5, there are also $\Gamma_{dd}^{(D^{0},\alpha)},\Gamma_{uu}^{(\bar{B}^{0}_{d},\alpha)}$ and $\Gamma_{uu}^{(\bar{B}^{0}_{s},\alpha)}$ $(\alpha=\mathrm{exc},\mathrm{inc})$, which are not presented in the figures. Since those cases include massless intermediate quarks, the numerical results should be consistent with the analytical results in Sec. 3.3.
Indeed, reasonable agreement between the inclusive and summed exclusive width differences is numerically confirmed for all three cases, including $\Gamma_{uu}^{(B^{0}_{s},\mathrm{exc})}$ based on massive intermediate kaons, to which the analytical discussion in the massless limit does not apply. One can see that for $\Gamma_{ss}^{(D^{0},\mathrm{exc})},\Gamma_{cc}^{(\bar{B}^{0}_{d},\mathrm{exc})}$ and $\Gamma_{cc}^{(\bar{B}^{0}_{s},\mathrm{exc})}$ in Figs. 3-5, the spikes in the width differences are clearly visible when the heavy quark mass exceeds the threshold values. These are to be contrasted with the results for $\Gamma_{sd}^{(D^{0},\mathrm{exc})},\Gamma_{cu}^{(\bar{B}^{0}_{d},\mathrm{exc})}$ and $\Gamma_{cu}^{(\bar{B}^{0}_{s},\mathrm{exc})}$. The absence of obvious threshold singularities for the latter three cases can be understood analytically as follows: we take $\Gamma_{sd}^{(D^{0},\mathrm{exc})}$ as an example, while a similar discussion applies to $\Gamma_{cu}^{(\bar{B}^{0}_{d},\mathrm{exc})}$. Due to the vanishing of the decay constants for the excited states of pions, the sum over pion states in Eq. (59) reduces to the ground state, as was discussed in Sec. 3.3, so that, $$\displaystyle\Gamma^{(D^{0},\>\mathrm{exc})}_{ds}$$ $$\displaystyle=$$ $$\displaystyle\displaystyle\sum_{m}(-1)^{m}\frac{T_{(c\bar{u})(d,s)}^{(0,m)}T_{(c\bar{u})(s,d)}^{(m,0)\>*}}{2M_{D}(M_{D}^{2}-M_{m}^{2})}.$$ (69) By recalling that in the massless limit of the $u$ and $d$ quarks, the only surviving contribution in $T_{(c\bar{u})(d,s)}^{(0,m)}$ arises from the contact interaction term in Eq. (64), one finds $T_{(c\bar{u})(d,s)}^{(0,m)}\propto q^{2}(1-\omega)/\omega=(M_{D}^{2}-M_{m}^{2})$ in the limit of $q^{2}\to 0$ together with $\omega\to 0$. Substituting this relation into Eq. (69), we find that the phase space singularities at each threshold of $m$ cancel against the decay amplitude of $D^{0}\to\pi^{+(0)}K^{-(m)}$.
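The cancellation can be displayed in one line: writing $T_{(c\bar{u})(d,s)}^{(0,m)}=\kappa_{m}\,(M_{D}^{2}-M_{m}^{2})$ with a threshold-regular coefficient $\kappa_{m}$ (a notation we introduce here for illustration), the substitution into Eq. (69) gives

```latex
\Gamma^{(D^{0},\>\mathrm{exc})}_{ds}
  = \sum_{m}(-1)^{m}\,
    \frac{\kappa_{m}\,(M_{D}^{2}-M_{m}^{2})\,T_{(c\bar{u})(s,d)}^{(m,0)\>*}}
         {2M_{D}\,(M_{D}^{2}-M_{m}^{2})}
  = \sum_{m}(-1)^{m}\,
    \frac{\kappa_{m}\,T_{(c\bar{u})(s,d)}^{(m,0)\>*}}{2M_{D}},
```

which stays finite as $M_{D}^{2}\to M_{m}^{2}$: the pole of the two-body phase space is removed by the vanishing of the decay amplitude at each threshold.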
Moreover, for $\Gamma_{sd}^{(D^{0},\mathrm{exc})},\Gamma_{cu}^{(\bar{B}^{0}_{d},\mathrm{exc})}$ and $\Gamma_{cu}^{(\bar{B}^{0}_{s},\mathrm{exc})}$, when the heavy quark mass is large, the agreement between the inclusive and summed exclusive width differences is better than for $\Gamma_{ss}^{(D^{0},\mathrm{exc})},\Gamma_{cc}^{(\bar{B}^{0}_{d},\mathrm{exc})}$ and $\Gamma_{cc}^{(\bar{B}^{0}_{s},\mathrm{exc})}$, although an analytical understanding of this remains unclear (the agreement improves slowly for $\Gamma_{cu}^{(\bar{B}^{0}_{d},\alpha)}$ and $\Gamma_{cu}^{(\bar{B}^{0}_{s},\alpha)}$ in the plotted domains of Figs. 4-5). Consequently, it is expected that the patterns of flavor symmetry breaking in (GIM 1), given in Eqs. (11, 13), are rather different from those in (GIM 2), given in Eqs. (12, 14), in the case considered here. 4.2 Numerical result in the presence of the GIM mechanism We would like to remind the reader that the inclusive width difference, discussed in Sec. 3.2, has quite different functional forms depending on whether (1) only the 4D-like phase space term in Eq. (32) is considered or (2) the 2D-specific terms in Eqs. (33, 34) are additionally included. For the former, the (GIM 1) combination for the $D^{0}-\bar{D}^{0}$ mixing defined in Eq. (11) behaves like $(m_{s}/m_{c})^{4}$, while it is $(m_{s}/m_{c})^{2}$ for the latter in the large $m_{c}$ limit, due to Eq. (49) and Eq. (51), respectively, meaning that the former is more suppressed. A similar discussion applies to the $B_{q}^{0}-\bar{B}_{q}^{0}$ mixing with the replacements $m_{c}\to m_{b}$ and $m_{s}\to m_{c}$. Thus, the order of magnitude of $|\Gamma^{\rm(exc)}/\Gamma^{\rm(inc)}|$ strongly depends on whether (1) or (2) is adopted on the inclusive side. Below, we present the results based on both (1) and (2). In Figs. 6-8, the absolute values of the ratio of the exclusive (GIM 1) combination to the inclusive one, both defined in Eqs. (11, 13), are given for the three meson mixings.
The two panels in each figure correspond to different choices of the bare masses for the external quarks. The $\overline{\rm MS}$ masses shown as reference values are evaluated at the scale of the external heavy quark mass for $d=4$. For the $D^{0}-\bar{D}^{0}$ mixing, the enhancement of the exclusive result is larger than $10^{3}$ for $m_{s}/\beta<0.25$ when the inclusive rate includes only the 4D-like phase space term, $F_{ij}^{\rm(th)}$ in Eq. (32). As for the $B^{0}_{q}-\bar{B}^{0}_{q}$ $(q=d,s)$ mixing, a similar enhancement is observed when only the 4D-like phase space term is included, although the enhancement for the $B_{q}^{0}-\bar{B}_{q}^{0}$ mixing is not as strong as for the $D^{0}-\bar{D}^{0}$ mixing. The pattern for the $B^{0}_{d}-\bar{B}^{0}_{d}$ mixing in Fig. 7 is similar to that of the $B^{0}_{s}-\bar{B}^{0}_{s}$ mixing in Fig. 8. Except that the plotted ratios undergo some jumps when the external quark mass crosses the hadronic thresholds, the results are given by regular curves in all of Figs. 6-8. The damping behavior of the results in Figs. 6-8 based on only the 4D-like phase space term, as the external quark mass is increased, indicates that the summed exclusive width difference scales as $\Gamma_{(\rm GIM,1)}^{(D^{0},\mathrm{exc})}\propto m^{n}_{s}$ and $\Gamma_{(\rm GIM,1)}^{(\bar{B}^{0}_{q},\mathrm{exc})}\propto m^{n}_{c}$ with $n<4$, since the 4D-like inclusive width difference behaves with $n=4$ as shown in Eq. (49). It should be noted that for the $D^{0}-\bar{D}^{0}$ mixing the quantity plotted in Fig. 6 is of direct relevance in phenomenology, while this is not the case for the $B^{0}_{q}-\bar{B}^{0}_{q}$ $(q=d,s)$ mixing, as was discussed in Sec. 2.2. The numerical stability under the variation of $N_{\rm eff}$ is confirmed for the quantities plotted in Figs. 6-8, especially for $m_{s}/\beta>0.14$ in Fig. 6. The ratio of the inclusive observable to the sum of the exclusive ones, defined in Eqs. (2, 4), is shown in Figs. 9-12 for the three meson mixings.
In obtaining the figures, we included all three terms in Eqs. (9, 10). The numerical results are stable, as the second terms give quite small contributions. One finds that for the $D^{0}-\bar{D}^{0}$ mixing, the patterns in Fig. 9 are essentially identical to those in Fig. 6, which can be regarded as the $\lambda_{b}\to 0$ limit of Fig. 9. Hence, the net observable for the $D^{0}-\bar{D}^{0}$ mixing is enhanced when the phase space is given by the four-dimensional one, as in Fig. 6. Meanwhile, the patterns in Figs. 10-11 for the $B_{q}^{0}-\bar{B}_{q}^{0}$ mixing are distinguished from those in Figs. 7-8: the order-of-magnitude enhancement does not occur in Figs. 10-11, yet a visible difference between the inclusive and exclusive results exists. This gross pattern is consistent with the realistic observations in the $D^{0}-\bar{D}^{0}$ and $B^{0}_{q}-\bar{B}^{0}_{q}$ $(q=d,s)$ mixings. That the huge enhancement occurs solely for the $D^{0}-\bar{D}^{0}$ mixing is interpreted as its strong sensitivity to (GIM 1), unlike the $B^{0}_{q}-\bar{B}^{0}_{q}$ mixing, as seen in the approximate relations in Eqs. (15, 16). For the $B^{0}_{q}-\bar{B}^{0}_{q}$ mixing, a further comparison between the four-dimensional observations and the two-dimensional results is in order. For $q=d$, the HFLAV result for $\Delta\Gamma_{B_{d}}$ Amhis:2019ckw is consistent with zero within errors, while the four-dimensional HQE result is given by $\Delta\Gamma_{B_{d}}=(2.6\pm 0.4)\times 10^{-3}~{}\mathrm{ps}^{-1}$ Lenz:2019lvd . Given this situation in four dimensions, a visible correction to the HQE prediction in the $B_{d}^{0}-\bar{B}_{d}^{0}$ mixing is possible, still consistent with the two-dimensional result in Fig. 10.
As for $q=s$, by combining the results of the HFLAV Amhis:2019ckw and the HQE Lenz:2019lvd , one obtains the ratio $\Delta\Gamma_{B_{s}}^{\rm(ex)}/\Delta\Gamma_{B_{s}}^{\rm(th)}=0.99\pm 0.15$ in four dimensions (the error largely comes from the theoretical side). For the two-dimensional result, the correction to $|\Delta\Gamma_{B_{s}}^{\rm(exc)}/\Delta\Gamma_{B_{s}}^{\rm(inc)}|$ from unity is less than $20\%~{}(18\%)$ for $m_{b}/\beta=13.7~{}(14.1)$ at the plotted points with $m_{c}<m_{c}^{\rm pole,4D}$ in Fig. 11. For this region of the charm quark mass, the result in two dimensions is consistent with what is currently indicated in four dimensions. In order to check the region of larger bottom quark mass, the width differences with $m_{b}/\beta=15.5$ and $17.0$ are shown in Fig. 12 for the $B_{s}^{0}-\bar{B}^{0}_{s}$ mixing. One finds that the correction to $|\Delta\Gamma_{B_{s}}^{\rm(exc)}/\Delta\Gamma_{B_{s}}^{\rm(inc)}|$ from unity is less than $11\%$ $(8\%)$ for $m_{b}/\beta=15.5$ $(17.0)$ in the region of $m_{c}<m_{c}^{\rm pole,4D}$, consistent with the observation in four dimensions within $1\sigma$. 5 Conclusion We have studied local quark-hadron duality and its violation in the heavy meson mixings on the basis of a definite dynamical mechanism. For the inclusive analysis, we have obtained the leading HQE expression that arises from the four-quark operators by evaluating the box diagrams in two dimensions. The resulting width difference scales like a constant for large $m_{Q}$, with the correction to this starting at $1/m_{Q}$, as was clarified in the static limit. Care must be taken with the fact that, in the presence of the GIM mechanism, the order of magnitude of the inclusive observables strongly depends on whether the 4D-like phase space is solely considered or the 2D-specific terms are also included.
We have analytically shown, by comparing the inclusive and exclusive width differences, that local duality is unambiguously seen in the massless limit of the $u$ and $d$ quarks, which might be relevant for $D^{0}\to\pi^{+}\pi^{-}\to\bar{D}^{0}$ and $\bar{B}^{0}_{d}\to\pi^{-}\pi^{+}\to B^{0}_{d}$. This is interpreted as an example of the “exclusive” duality. For the massive case, duality violation is numerically investigated for the three meson mixings by solving the ’t Hooft equation. For two massive intermediate contributions, e.g., $c\bar{u}\to s\bar{s}\to u\bar{c}$, spikes in the exclusive width differences appear when the heavy quark mass exceeds each kinematical threshold. As stressed in the Introduction, the realistic observation in four dimensions indicates that the discrepancy between theory and experiment is of four orders of magnitude for the observable in the $D^{0}-\bar{D}^{0}$ mixing when the HQE result is given by the four-quark operators. In an attempt to interpret this observation, we have investigated how the exclusive observable is enhanced, relative to the one obtained by the inclusive analysis, in the presence of the GIM mechanism. For the $D^{0}-\bar{D}^{0}$ mixing, the enhancement of the exclusive result is confirmed to be larger than $10^{3}$ for $0.14<m_{s}/\beta<0.25$ when the phase space function is given by only the 4D-like term, although a huge enhancement is absent when the contributions of the 2D-specific phase space terms are added. As for the $B^{0}_{q}-\bar{B}^{0}_{q}$ $(q=d,s)$ mixing, no huge enhancement of the exclusive observable is realized, yet a visible correction to $|\Delta\Gamma_{B_{q}}^{\rm(exc)}/\Delta\Gamma_{B_{q}}^{\rm(inc)}|$ from unity is seen, arising particularly from $b\bar{q}\to c\bar{c}\to q\bar{b}$. Further improvement in the precision of the exclusive analysis remains a technical task.
If the domain of $m_{c}<m_{c}^{\rm pole,4D}$ is considered, the correction to the ratio for the $B_{s}^{0}-\bar{B}_{s}^{0}$ mixing is typically less than $(20\%,18\%,11\%,8\%)$ for $m_{b}/\beta=(13.7,14.1,15.5,17.0)$, still consistent with what is currently indicated in four dimensions. Those non-negligible corrections to the HQE based on the most color-allowed topology motivate the future measurement of the width difference in the $B_{d}^{0}-\bar{B}_{d}^{0}$ mixing, and suggest that the HQE prediction for the $B_{s}^{0}-\bar{B}_{s}^{0}$ mixing should be made more precise, in order to check whether non-negligible duality violation is seen. Acknowledgements. The author would like to thank Hai-Yang Cheng, Hsiang-nan Li and Takuya Morozumi for reading the manuscript and for useful comments. Part of the numerical computation in this project was performed using the computational resources at the Academia Sinica Grid Computing Centre (ASGC). This work was supported in part by MOST of R.O.C. under Grant No. MOST-107-2119-M-001-035-MY3. Appendix A Box diagram in $1+1$ dimensions For $Q(p_{1})\bar{q}(p_{2})\to q(p_{3})\bar{Q}(p_{4})$, one finds that Fig.
1a with internal quarks being labeled as $(i,j)=(d,d),(s,d),(s,s)$ for $c\bar{u}\to u\bar{c}$ and $(i,j)=(u,u),(c,u),(c,c)$ for $b\bar{q}\to q\bar{b}$ $(q=d,s)$ is calculated in $d$ dimensions, $$\displaystyle\left.\mathcal{A}_{ij}\right|_{(\mathrm{a})}$$ $$\displaystyle=$$ $$\displaystyle-\lambda_{i}\lambda_{j}\left(\frac{-ig_{2}}{\sqrt{2}}\right)^{4}\int\frac{\mathrm{d}^{d}q}{(2\pi)^{d}i}\bar{q}(p_{3})\gamma^{\mu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})\frac{1}{\not{q}-m_{i}+i\epsilon}\gamma^{\nu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})Q(p_{1})$$ (70) $$\displaystyle\times$$ $$\displaystyle\bar{q}(p_{2})\gamma_{\nu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})\frac{1}{\not{q}-\not{p}_{1}-\not{p}_{2}-m_{j}+i\epsilon}\gamma_{\mu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})Q(p_{4})$$ $$\displaystyle\times$$ $$\displaystyle\frac{1}{(q-p_{1})^{2}-M_{W}^{2}+i\epsilon}\frac{1}{(q-p_{3})^{2}-M_{W}^{2}+i\epsilon}.\qquad\qquad$$ The above expression is readily evaluated in the approximation where the momenta of the heavy quark ($Q$) are much larger than those of the spectator quark ($q$), i.e., $p_{1}\gg p_{2},p_{4}\gg p_{3}$. This is done by decomposing the product of the propagators into partial fractions Cheng:1982hq with $x_{\beta}=m_{\beta}^{2}/M_{W}^{2}~{}(\beta=Q,i,j)$, $$\displaystyle\frac{1}{q^{2}-m_{i}^{2}}\frac{1}{(q-p_{1})^{2}-m_{j}^{2}}\frac{1}{(q-p_{1})^{2}-M_{W}^{2}}\frac{1}{q^{2}-M_{W}^{2}}$$ $$\displaystyle=\frac{1}{M_{W}^{4}(1-x_{i})(1-x_{j})}\left[\frac{1}{(q^{2}-m_{i}^{2})[(q-p_{1})^{2}-m_{j}^{2}]}+\frac{1}{(q^{2}-M_{W}^{2})[(q-p_{1})^{2}-M_{W}^{2}]}\right.$$ $$\displaystyle\left.\qquad\qquad\qquad\qquad\qquad-\frac{1}{(q^{2}-m_{i}^{2})[(q-p_{1})^{2}-M_{W}^{2}]}-\frac{1}{(q^{2}-M_{W}^{2})[(q-p_{1})^{2}-m_{j}^{2}]}\right].\qquad\qquad$$ (71) Hereafter we suppress $1/[(1-x_{i})(1-x_{j})]$ in Eq. (71), which approaches unity as $M_{W}\to\infty$.
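The decomposition in Eq. (71) is an algebraic identity in $q^{2}$ and $(q-p_{1})^{2}$, with the overall prefactor $1/[M_{W}^{4}(1-x_{i})(1-x_{j})]$ (the same factor suppressed below Eq. (71)), so it can be spot-checked numerically at arbitrary non-pole values (a sketch; the variable names are ours):

```python
# Numerical spot-check of the partial-fraction identity in Eq. (71),
# treating q² and (q−p₁)² as independent scalars s and t away from the poles.
s, t = 0.37, 0.81                      # stand-ins for q² and (q−p₁)²
mi2, mj2, MW2 = 0.11, 0.23, 5.0        # toy values of m_i², m_j², M_W²
xi, xj = mi2 / MW2, mj2 / MW2

lhs = 1.0 / ((s - mi2) * (t - mj2) * (t - MW2) * (s - MW2))
rhs = (1.0 / (MW2**2 * (1.0 - xi) * (1.0 - xj))) * (
    1.0 / ((s - mi2) * (t - mj2))
    + 1.0 / ((s - MW2) * (t - MW2))
    - 1.0 / ((s - mi2) * (t - MW2))
    - 1.0 / ((s - MW2) * (t - mj2))
)
assert abs(lhs - rhs) < 1e-12 * abs(lhs)
```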
By defining an object analogous to the Fermi constant, $G_{F}/\sqrt{2}=g_{2}^{2}/8M_{W}^{2}$, one gets, $$\displaystyle\left.\mathcal{A}_{ij}\right|_{(\mathrm{a})}$$ $$\displaystyle=$$ $$\displaystyle-8\lambda_{i}\lambda_{j}G_{F}^{2}\left(\displaystyle\sum_{k=1}^{2}-\displaystyle\sum_{k=3}^{4}\right)\left\{\left[g_{\rho\sigma}F^{(k)}_{ij}-p_{1\rho}p_{1\sigma}G^{(k)}_{ij}\right]\right.$$ (72) $$\displaystyle\times$$ $$\displaystyle[\bar{q}(p_{3})\gamma_{\mu}\gamma^{\rho}\gamma_{\nu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}Q(p_{1})][\bar{q}(p_{2})\gamma^{\nu}\gamma^{\sigma}\gamma^{\mu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}Q(p_{4})]$$ $$\displaystyle+(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})^{2}m_{i}m_{j}H^{(k)}_{ij}[\bar{q}(p_{3})\gamma_{\mu}\gamma_{\nu}Q(p_{1})][\bar{q}(p_{2})\gamma^{\nu}\gamma^{\mu}Q(p_{4})]$$ $$\displaystyle-(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})m_{i}p_{1\rho}I^{(k)}_{ij}[\bar{q}(p_{3})\gamma_{\mu}\gamma_{\nu}Q(p_{1})][\bar{q}(p_{2})\gamma^{\nu}\gamma^{\rho}\gamma^{\mu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}Q(p_{4})]$$ $$\displaystyle\left.+(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})m_{j}p_{1}^{\rho}I^{(k)}_{ji}[\bar{q}(p_{3})\gamma_{\mu}\gamma_{\rho}\gamma_{\nu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}Q(p_{1})][\bar{q}(p_{2})\gamma^{\nu}\gamma^{\mu}Q(p_{4})]\right\},\qquad\quad$$ where $F^{(k)}_{ij},G^{(k)}_{ij},H^{(k)}_{ij}$ and $I^{(k)}_{ij}$ are loop integrals given by, $$\displaystyle F^{(k)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}\mathrm{d}\alpha\int\frac{\mathrm{d}^{d}q}{(2\pi)^{d}i}\frac{q^{2}/d}{[q^{2}-M_{W}^{2}\Lambda^{(k)}_{ij}(\alpha)]^{2}}=-\frac{1}{2}\frac{\Gamma(1-\frac{d}{2})}{(4\pi)^{d/2}}\int_{0}^{1}\mathrm{d}\alpha\left(\frac{1}{M_{W}^{2}\Lambda^{(k)}_{ij}(\alpha)}\right)^{1-d/2},\qquad\quad$$ (73) $$\displaystyle G^{(k)}_{ij}$$ $$\displaystyle=$$ 
$$\displaystyle\int_{0}^{1}\mathrm{d}\alpha\int\frac{\mathrm{d}^{d}q}{(2\pi)^{d}i}\frac{\alpha(1-\alpha)}{[q^{2}-M_{W}^{2}\Lambda_{ij}^{(k)}(\alpha)]^{2}}=\frac{\Gamma(2-\frac{d}{2})}{(4\pi)^{d/2}}\int_{0}^{1}\mathrm{d}\alpha\>\alpha(1-\alpha)\left(\frac{1}{M_{W}^{2}\Lambda^{(k)}_{ij}(\alpha)}\right)^{2-d/2},\quad\qquad$$ (74) $$\displaystyle H^{(k)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}\mathrm{d}\alpha\int\frac{\mathrm{d}^{d}q}{(2\pi)^{d}i}\frac{1}{[q^{2}-M_{W}^{2}\Lambda^{(k)}_{ij}(\alpha)]^{2}}=\frac{\Gamma(2-\frac{d}{2})}{(4\pi)^{d/2}}\int_{0}^{1}\mathrm{d}\alpha\left(\frac{1}{M_{W}^{2}\Lambda^{(k)}_{ij}(\alpha)}\right)^{2-d/2},\quad\qquad$$ (75) $$\displaystyle I^{(k)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{1}\mathrm{d}\alpha\int\frac{\mathrm{d}^{d}q}{(2\pi)^{d}i}\frac{\alpha}{[q^{2}-M_{W}^{2}\Lambda^{(k)}_{ij}(\alpha)]^{2}}=\frac{\Gamma(2-\frac{d}{2})}{(4\pi)^{d/2}}\int_{0}^{1}\mathrm{d}\alpha~{}\alpha\left(\frac{1}{M_{W}^{2}\Lambda^{(k)}_{ij}(\alpha)}\right)^{2-d/2},\quad\qquad$$ (76) where some objects analogous to those in $3+1$ dimensions Cheng:1982hq ; Buras:1984pq are introduced, $$\displaystyle\Lambda^{(1)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle(1-\alpha)x_{i}+\alpha x_{j}-\alpha(1-\alpha)x_{Q}-i\epsilon,$$ (77) $$\displaystyle\Lambda^{(2)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle 1-\alpha(1-\alpha)x_{Q}-i\epsilon,$$ (78) $$\displaystyle\Lambda^{(3)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle(1-\alpha)x_{i}+\alpha-\alpha(1-\alpha)x_{Q}-i\epsilon,$$ (79) $$\displaystyle\Lambda^{(4)}_{ij}$$ $$\displaystyle=$$ $$\displaystyle(1-\alpha)+\alpha x_{j}-\alpha(1-\alpha)x_{Q}-i\epsilon.$$ (80) There are a few points to be mentioned. First, due to the asymmetric sum of $k$, terms independent of $k$ vanish in Eq. (72). Second, a threshold relevant for two internal quarks is associated with $\Lambda_{ij}^{(1)}$ in Eq. 
(72) while $\Lambda^{(3)}_{ij}$ and $\Lambda^{(4)}_{ij}$ ($\Lambda^{(2)}_{ij}$) correspond to the threshold of a single (double) $W$ boson(s). Thus, only $k=1$ in Eq. (72) is of our current interest for calculating the absorptive part. Third, for $d=2$, all of the functions in Eqs. (73-76) give rise to discontinuities, contributing to the width difference. As we will see later, the discontinuities of $G_{ij}^{(k)},H_{ij}^{(k)}$ and $I_{ij}^{(k)}$ have functional forms distinct from those for $d=4$. Assembling the above-mentioned points, and fixing $d=2$, we take the finite contributions in Eq. (72), $$\displaystyle\left.\mathcal{A}_{ij}\right|_{(\mathrm{a})}$$ $$\displaystyle=$$ $$\displaystyle-\lambda_{i}\lambda_{j}\frac{G_{F}^{2}}{\pi}\left\{\left[g_{\rho\sigma}\bar{F}_{ij}-2\frac{p_{1\rho}p_{1\sigma}}{m_{Q}^{2}}\bar{G}_{ij}\right]\right.$$ (81) $$\displaystyle\times[\bar{q}(p_{3})\gamma_{\mu}\gamma^{\rho}\gamma_{\nu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}Q(p_{1})][\bar{q}(p_{2})\gamma^{\nu}\gamma^{\sigma}\gamma^{\mu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}Q(p_{4})]$$ $$\displaystyle+2(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})^{2}\bar{H}_{ij}[\bar{q}(p_{3})\gamma_{\mu}\gamma_{\nu}Q(p_{1})][\bar{q}(p_{2})\gamma^{\nu}\gamma^{\mu}Q(p_{4})]$$ $$\displaystyle-2(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})\frac{p_{1\rho}}{m_{Q}}\bar{I}_{ij}[\bar{q}(p_{3})\gamma_{\mu}\gamma_{\nu}Q(p_{1})][\bar{q}(p_{2})\gamma^{\nu}\gamma^{\rho}\gamma^{\mu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}Q(p_{4})]$$ $$\displaystyle\left.+2(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})\frac{p_{1}^{\rho}}{m_{Q}}\bar{I}_{ji}[\bar{q}(p_{3})\gamma_{\mu}\gamma_{\rho}\gamma_{\nu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}Q(p_{1})][\bar{q}(p_{2})\gamma^{\nu}\gamma^{\mu}Q(p_{4})]\right\},\qquad\quad$$ where the functions that have a branch cut are introduced by,
$$\displaystyle\bar{F}_{ij}=\displaystyle\int_{0}^{1}\ln(M_{W}^{2}\Lambda^{(1)}_{ij})\mathrm{d}\alpha,\quad\bar{G}_{ij}=m_{Q}^{2}\int_{0}^{1}\frac{\alpha(1-\alpha)\mathrm{d}\alpha}{M_{W}^{2}\Lambda^{(1)}_{ij}},$$ $$\displaystyle\bar{H}_{ij}=m_{i}m_{j}\displaystyle\int_{0}^{1}\frac{\mathrm{d}\alpha}{M_{W}^{2}\Lambda^{(1)}_{ij}},\quad\bar{I}_{ij}=m_{i}m_{Q}\displaystyle\int_{0}^{1}\frac{\alpha\mathrm{d}\alpha}{M_{W}^{2}\Lambda^{(1)}_{ij}}.\quad$$ (82) The discontinuities of Eq. (82) in a physical region are (the signs correspond to values above the branch cut), $$\displaystyle\mathrm{Disc}\>\bar{F}_{ij}=-2\pi iF^{\rm(th)}_{ij},\quad\mathrm{Disc}\>\bar{G}_{ij}=+2\pi iG^{\rm(th)}_{ij},$$ $$\displaystyle\mathrm{Disc}\>\bar{H}_{ij}=+4\pi iH^{\rm(th)}_{ij},\quad\mathrm{Disc}\>\bar{I}_{ij}=+2\pi iI^{\rm(th)}_{ij},\quad$$ (83) where $F^{\rm(th)}_{ij},G^{\rm(th)}_{ij},H^{\rm(th)}_{ij}$ and $I^{\rm(th)}_{ij}$ are defined in Eqs. (32-35). The terms proportional to $\bar{F}_{ij}$ and $\bar{H}_{ij}$ in Eq. (81) are simplified by the Fierz rearrangement in two dimensions, $$\displaystyle[\overline{\psi}_{1}\gamma^{\mu}\gamma^{\rho}\gamma^{\nu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}\psi_{2}][\overline{\psi}_{3}\gamma_{\nu}\gamma_{\rho}\gamma_{\mu}(c_{\mathrm{V}}+c_{\mathrm{A}}\gamma_{5})^{2}\psi_{4}]$$ $$\displaystyle\stackrel{{\scriptstyle 2D}}{{=}}$$ $$\displaystyle-4(c_{\mathrm{V}}^{2}-c_{\mathrm{A}}^{2})^{2}(\overline{\psi}_{1}\gamma^{\mu}\gamma_{5}\psi_{2})(\overline{\psi}_{3}\gamma_{\mu}\gamma_{5}\psi_{4}),\quad\qquad\;\;$$ (84) $$\displaystyle[\overline{\psi}_{1}\gamma^{\mu}\gamma^{\nu}\psi_{2}][\overline{\psi}_{3}\gamma_{\nu}\gamma_{\mu}\psi_{4}]$$ $$\displaystyle\stackrel{{\scriptstyle 2D}}{{=}}$$ $$\displaystyle 2[(\overline{\psi}_{1}\psi_{2})(\overline{\psi}_{3}\psi_{4})-(\overline{\psi}_{1}i\gamma_{5}\psi_{2})(\overline{\psi}_{3}i\gamma_{5}\psi_{4})],$$ (85) where in Eq.
(84) we used $\gamma_{\mu}=\epsilon_{\mu\nu}\gamma^{\nu}\gamma_{5}~{}(\epsilon_{01}=+1)$, valid in two dimensions, which yields $V^{\mu}\times V_{\mu}=-A^{\mu}\times A_{\mu}$. As for the terms proportional to $\bar{G}_{ij}$ and $\bar{I}_{ij}$, the relevant Fierz rearrangements are also obtained straightforwardly, with the equation of motion for the heavy quark implemented. Below, we omit the bilinears that do not contribute to heavy meson mixings for the ground state in the large-$N_{c}$ limit. By substituting Eqs. (83-85) into Eq. (81), we obtain the absorptive part of Fig. 1a, $$\displaystyle\left.\mathrm{Disc}\>\mathcal{A}_{ij}\right|_{(\mathrm{a})}$$ $$\displaystyle\to$$ $$\displaystyle-8i\lambda_{i}\lambda_{j}G_{F}^{2}(c_{\rm V}^{2}-c_{\rm A}^{2})\left\{\left[(c_{\rm V}^{2}-c_{\rm A}^{2})\left(F^{\rm(th)}_{ij}+2G^{\rm(th)}_{ij}\right)-(c_{\rm V}^{2}+c_{\rm A}^{2})\left(I^{\rm(th)}_{ij}+I^{\rm(th)}_{ji}\right)\right]\right.$$ (86) $$\displaystyle\times\left.[\bar{q}(p_{3})\gamma^{\mu}\gamma_{5}Q(p_{1})][\bar{q}(p_{2})\gamma_{\mu}\gamma_{5}Q(p_{4})]\right.$$ $$\displaystyle-\left[(c_{\rm V}^{2}-c_{\rm A}^{2})\left(G^{\rm(th)}_{ij}+2H^{\rm(th)}_{ij}\right)+(c_{\rm V}^{2}+c_{\rm A}^{2})\left(I^{\rm(th)}_{ij}+I^{\rm(th)}_{ji}\right)\right]$$ $$\displaystyle\left.\times[\bar{q}(p_{3})i\gamma_{5}Q(p_{1})][\bar{q}(p_{2})i\gamma_{5}Q(p_{4})]\right\}.\qquad\qquad$$ Thus, the contribution of the $V\pm A$ current, corresponding to $c_{\mathrm{V}}=\pm c_{\mathrm{A}}$, vanishes for the $g_{\mu\nu}$ part of the $W$ propagator.
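As a quick numerical cross-check of the two-dimensional gamma-matrix identities invoked above, the following short numpy sketch verifies $\gamma_{\mu}=\epsilon_{\mu\nu}\gamma^{\nu}\gamma_{5}$ (with $\epsilon_{01}=+1$) and the vanishing of $\gamma_{\mu}\gamma_{\alpha}\gamma^{\mu}=(2-d)\gamma_{\alpha}$ at $d=2$. The representation $\gamma^{0}=\sigma_{1}$, $\gamma^{1}=-i\sigma_{2}$, $\gamma_{5}=\gamma^{0}\gamma^{1}=\sigma_{3}$ with metric $g=\mathrm{diag}(+1,-1)$ is one common choice, adopted here purely for illustration.

```python
import numpy as np

# One common 2D Dirac representation, metric g = diag(+1, -1):
g0 = np.array([[0, 1], [1, 0]], dtype=complex)   # gamma^0 = sigma_1
g1 = np.array([[0, -1], [1, 0]], dtype=complex)  # gamma^1 = -i*sigma_2
g5 = g0 @ g1                                     # gamma_5  = sigma_3
gamma_up = [g0, g1]                              # gamma^mu
metric = np.diag([1.0, -1.0])
gamma_dn = [metric[m, m] * gamma_up[m] for m in range(2)]  # gamma_mu

eps = np.array([[0, 1], [-1, 0]])                # epsilon_{01} = +1

# Identity 1: gamma_mu = eps_{mu nu} gamma^nu gamma_5
for mu in range(2):
    rhs = sum(eps[mu, nu] * gamma_up[nu] @ g5 for nu in range(2))
    assert np.allclose(gamma_dn[mu], rhs)

# Identity 2: gamma_mu gamma_alpha gamma^mu = (2 - d) gamma_alpha = 0 at d = 2
for alpha in range(2):
    contraction = sum(gamma_dn[mu] @ gamma_up[alpha] @ gamma_up[mu]
                      for mu in range(2))
    assert np.allclose(contraction, np.zeros((2, 2)))

print("2D gamma-matrix identities verified")
```

The same script confirms $(\gamma^{0})^{2}=+\mathbb{1}$ and $(\gamma^{1})^{2}=-\mathbb{1}$, as required by $\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}$.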
This point is distinct from the familiar case in four dimensions, where the Fierz rearrangement gives, $$\displaystyle[\overline{\psi}_{1}\gamma^{\mu}\gamma^{\rho}\gamma^{\nu}(1\pm\gamma_{5})\psi_{2}][\overline{\psi}_{3}\gamma_{\nu}\gamma_{\rho}\gamma_{\mu}(1\pm\gamma_{5})\psi_{4}]\stackrel{{\scriptstyle 4D}}{{=}}4[\overline{\psi}_{1}\gamma^{\mu}(1\pm\gamma_{5})\psi_{2}][\overline{\psi}_{3}\gamma_{\mu}(1\pm\gamma_{5})\psi_{4}],\qquad$$ (87) so that (part of) the final result is proportional to the $(V\pm A)\times(V\pm A)$ operator in four dimensions. This difference is due to the vanishing of $\gamma_{\mu}\gamma_{\alpha}\gamma^{\mu}=(2-d)\gamma_{\alpha}$ at $d=2$, and also to the higher redundancy among products of gamma matrices for $d=2$ than for $d=4$. Likewise, one can also calculate the absorptive part of Fig. 1b, which gives an amplitude similar to Eq. (86) except that the momentum arrangement for the spinors is different. By combining these results, we finally obtain the effective Hamiltonian in Eq. (27) with Eqs. (1, 3). References (1) K. G. Wilson, “Nonlagrangian models of current algebra,” Phys. Rev. 179, 1499-1512 (1969). (2) K. Wilson, in Proceedings of the 1971 International Symposium on Electron and Photon Interactions at High Energies, edited by N. Mistry (Laboratory of Nuclear Studies, Cornell University, Ithaca, NY, 1972), p. 115. (3) K. G. Wilson and J. B. Kogut, “The Renormalization group and the epsilon expansion,” Phys. Rept. 12, 75-199 (1974). (4) M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, “QCD and Resonance Physics. Theoretical Foundations,” Nucl. Phys. B 147, 385-447 (1979). (5) M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, “QCD and Resonance Physics: Applications,” Nucl. Phys. B 147, 448-518 (1979). (6) V. A. Novikov, M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, “Wilson’s Operator Expansion: Can It Fail?,” Yad. Fiz. 41, 1063-1079 (1985). (7) I. I. Y. Bigi, N. G. Uraltsev and A. I.
Vainshtein, “Nonperturbative corrections to inclusive beauty and charm decays: QCD versus phenomenological models,” Phys. Lett. B 293, 430-436 (1992) [erratum: Phys. Lett. B 297, 477-477 (1992)] [arXiv:hep-ph/9207214 [hep-ph]]. (8) I. I. Y. Bigi, B. Blok, M. A. Shifman, N. G. Uraltsev and A. I. Vainshtein, “A QCD ’manifesto’ on inclusive decays of beauty and charm,” [arXiv:hep-ph/9212227 [hep-ph]]. (9) B. Blok and M. A. Shifman, “The Rule of discarding $1/N_{c}$ in inclusive weak decays. 1.,” Nucl. Phys. B 399, 441-458 (1993) [arXiv:hep-ph/9207236 [hep-ph]]. (10) B. Blok and M. A. Shifman, “The Rule of discarding $1/N_{c}$ in inclusive weak decays. 2.,” Nucl. Phys. B 399, 459-476 (1993) [arXiv:hep-ph/9209289 [hep-ph]]. (11) I. I. Y. Bigi, M. A. Shifman and N. Uraltsev, “Aspects of heavy quark theory,” Ann. Rev. Nucl. Part. Sci. 47, 591-661 (1997) [arXiv:hep-ph/9703290 [hep-ph]]. (12) A. Lenz, “Lifetimes and heavy quark expansion,” Int. J. Mod. Phys. A 30, no.10, 1543005 (2015) [arXiv:1405.3601 [hep-ph]]. (13) M. Kirk, A. Lenz and T. Rauh, “Dimension-six matrix elements for meson mixing and lifetimes from sum rules,” JHEP 12, 068 (2017) [erratum: JHEP 06, 162 (2020)] [arXiv:1711.02100 [hep-ph]]. (14) H. Y. Cheng, “Phenomenological Study of Heavy Hadron Lifetimes,” JHEP 11, 014 (2018) [arXiv:1807.00916 [hep-ph]]. (15) A. Lenz and G. Tetlalmatzi-Xolocotzi, “Model-independent bounds on new physics effects in non-leptonic tree-level decays of $B$-mesons,” JHEP 07, 177 (2020) [arXiv:1912.07621 [hep-ph]]. (16) Y. S. Amhis et al. [HFLAV], “Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of 2018,” [arXiv:1909.12524 [hep-ex]]. (17) S. L. Glashow, J. Iliopoulos and L. Maiani, “Weak Interactions with Lepton-Hadron Symmetry,” Phys. Rev. D 2, 1285-1292 (1970). (18) R. Kingsley, S. Treiman, F. Wilczek and A. Zee, “Weak Decays of Charmed Hadrons,” Phys. Rev. D 11, 1919 (1975). (19) N. Cabibbo, “Unitary Symmetry and Leptonic Decays,” Phys. Rev. Lett. 
10, 531-533 (1963). (20) M. Kobayashi and T. Maskawa, “CP Violation in the Renormalizable Theory of Weak Interaction,” Prog. Theor. Phys. 49, 652-657 (1973). (21) M. Staric et al. [BELLE], “Evidence for $D^{0}$ - $\bar{D}^{0}$ Mixing,” Phys. Rev. Lett. 98, 211803 (2007) [arXiv:hep-ex/0703036 [hep-ex]]. (22) B. Aubert et al. [BaBar], “Evidence for $D^{\,0}-\overline{D}^{\,0}$ Mixing,” Phys. Rev. Lett. 98, 211802 (2007) [arXiv:hep-ex/0703020 [hep-ex]]. (23) T. Aaltonen et al. [CDF], “Evidence for $D^{0}-\bar{D}^{0}$ mixing using the CDF II Detector,” Phys. Rev. Lett. 100, 121802 (2008) [arXiv:0712.1567 [hep-ex]]. (24) R. Aaij et al. [LHCb], “Measurement of $D^{0}–\bar{D}^{0}$ Mixing Parameters and Search for $CP$ Violation Using $D^{0}\to K^{+}\pi^{-}$ Decays,” Phys. Rev. Lett. 111, no.25, 251801 (2013) [arXiv:1309.6534 [hep-ex]]. (25) A. Lenz and G. Wilkinson, “Mixing and $CP$ violation in the charm system,” [arXiv:2011.04443 [hep-ph]]. (26) G. Burdman and I. Shipsey, “$D^{0}$ - $\bar{D}^{0}$ mixing and rare charm decays,” Ann. Rev. Nucl. Part. Sci. 53, 431-499 (2003) [arXiv:hep-ph/0310076 [hep-ph]]. (27) L. Wolfenstein, “$D^{0}-\bar{D^{0}}$ Mixing,” Phys. Lett. B 164, 170-172 (1985). (28) J. F. Donoghue, E. Golowich, B. R. Holstein and J. Trampetic, “Dispersive Effects in $D^{0}-\bar{D^{0}}$ Mixing,” Phys. Rev. D 33, 179 (1986). (29) P. Colangelo, G. Nardulli and N. Paver, “On $D^{0}-\bar{D^{0}}$ Mixing in the Standard Model,” Phys. Lett. B 242, 71-76 (1990). (30) F. Buccella, M. Lusignoli, G. Miele, A. Pugliese and P. Santorelli, “Nonleptonic weak decays of charmed mesons,” Phys. Rev. D 51, 3478-3486 (1995) [arXiv:hep-ph/9411286 [hep-ph]]. (31) T. A. Kaeding, “$D$ meson mixing in broken SU(3),” Phys. Lett. B 357, 151-155 (1995) [arXiv:hep-ph/9505393 [hep-ph]]. (32) A. F. Falk, Y. Grossman, Z. Ligeti and A. A. Petrov, “SU(3) breaking and $D^{0}-\bar{D^{0}}$ mixing,” Phys. Rev. D 65, 054034 (2002) [arXiv:hep-ph/0110317 [hep-ph]]. (33) A. F. Falk, Y. Grossman, Z. 
Ligeti, Y. Nir and A. A. Petrov, “The $D^{0}-\bar{D^{0}}$ mass difference from a dispersion relation,” Phys. Rev. D 69, 114021 (2004) [hep-ph/0402204]. (34) H. Y. Cheng and C. W. Chiang, “Long-Distance Contributions to $D^{0}-\bar{D}^{0}$ Mixing Parameters,” Phys. Rev. D 81, 114020 (2010) [arXiv:1005.1106 [hep-ph]]. (35) M. Gronau and J. L. Rosner, “Revisiting $D^{0}-\bar{D^{0}}$ mixing using U-spin,” Phys. Rev. D 86, 114029 (2012) [arXiv:1209.1348 [hep-ph]]. (36) H. Y. Jiang, F. S. Yu, Q. Qin, H. n. Li and C. D. Lü, “$D^{0}-\bar{D^{0}}$ mixing parameter $y$ in the factorization-assisted topological-amplitude approach,” Chin. Phys. C 42, 063101 (2018) [arXiv:1705.07335 [hep-ph]]. (37) J. S. Hagelin, “Mass Mixing and CP Violation in the $B^{0}-\bar{B}^{0}$ system,” Nucl. Phys. B 193, 123 (1981). (38) H. Y. Cheng, “CP Violating Effects in Heavy Meson Systems,” Phys. Rev. D 26, 143 (1982). (39) A. J. Buras, W. Slominski and H. Steger, “$B^{0}-\bar{B^{0}}$ Mixing, CP Violation and the B Meson Decay,” Nucl. Phys. B 245, 369-398 (1984). (40) A. Datta and D. Kumbhakar, “$D^{0}-\bar{D^{0}}$ Mixing: A Possible Test of Physics Beyond the Standard Model,” Z. Phys. C 27, 515 (1985). (41) H. Georgi, “$D-\bar{D}$ mixing in heavy quark effective field theory,” Phys. Lett. B 297, 353-357 (1992) [arXiv:hep-ph/9209291 [hep-ph]]. (42) T. Ohl, G. Ricciardi and E. H. Simmons, “$D-\bar{D}$ mixing in heavy quark effective field theory: The Sequel,” Nucl. Phys. B 403, 605-632 (1993) [arXiv:hep-ph/9301212 [hep-ph]]. (43) M. Beneke, G. Buchalla and I. Dunietz, “Width Difference in the $B_{s}-\bar{B_{s}}$ System,” Phys. Rev. D 54, 4419-4431 (1996) [erratum: Phys. Rev. D 83, 119902 (2011)] [arXiv:hep-ph/9605259 [hep-ph]]. (44) M. Beneke, G. Buchalla, C. Greub, A. Lenz and U. Nierste, “Next-to-leading order QCD corrections to the lifetime difference of $B_{s}$ mesons,” Phys. Lett. B 459, 631-640 (1999) [arXiv:hep-ph/9808385 [hep-ph]]. (45) A. S. Dighe, T. Hurth, C. S. Kim and T. 
Yoshikawa, “Measurement of the lifetime difference of $B_{d}$ mesons: Possible and worthwhile?,” Nucl. Phys. B 624, 377-404 (2002) [arXiv:hep-ph/0109088 [hep-ph]]. (46) M. Ciuchini, E. Franco, V. Lubicz, F. Mescia and C. Tarantino, “Lifetime differences and CP violation parameters of neutral B mesons at the next-to-leading order in QCD,” JHEP 08, 031 (2003) [arXiv:hep-ph/0308029 [hep-ph]]. (47) A. A. Petrov, “On dipenguin contribution to $D^{0}-\bar{D^{0}}$ mixing,” Phys. Rev. D 56, 1685-1687 (1997) [arXiv:hep-ph/9703335 [hep-ph]]. (48) E. Golowich and A. A. Petrov, “Short distance analysis of $D^{0}-\bar{D^{0}}$ mixing,” Phys. Lett. B 625, 53 (2005) [arXiv:hep-ph/0506185 [hep-ph]]. (49) M. Bobrowski, A. Lenz, J. Riedl and J. Rohrwild, “How Large Can the SM Contribution to CP Violation in $D^{0}-\bar{D}^{0}$ Mixing Be?,” JHEP 03, 009 (2010) [arXiv:1002.4794 [hep-ph]]. (50) T. Jubb, M. Kirk, A. Lenz and G. Tetlalmatzi-Xolocotzi, “On the ultimate precision of meson mixing observables,” Nucl. Phys. B 915, 431-453 (2017) [arXiv:1603.07770 [hep-ph]]. (51) I. I. Bigi and N. G. Uraltsev, “$D^{0}-\bar{D^{0}}$ oscillations as a probe of quark hadron duality,” Nucl. Phys. B 592, 92-106 (2001) [arXiv:hep-ph/0005089 [hep-ph]]. (52) E. Golowich, S. Pakvasa and A. A. Petrov, “New Physics contributions to the lifetime difference in $D^{0}-\bar{D^{0}}$ mixing,” Phys. Rev. Lett. 98, 181801 (2007) [arXiv:hep-ph/0610039 [hep-ph]]. (53) E. Golowich, J. Hewett, S. Pakvasa and A. A. Petrov, “Implications of $D^{0}$ - $\bar{D}^{0}$ Mixing for New Physics,” Phys. Rev. D 76, 095009 (2007) [arXiv:0705.3650 [hep-ph]]. (54) E. Golowich, J. Hewett, S. Pakvasa and A. A. Petrov, “Relating $D^{0}-\bar{D^{0}}$ Mixing and $D^{0}\to l^{+}l^{-}$ with New Physics,” Phys. Rev. D 79, 114030 (2009) [arXiv:0903.2830 [hep-ph]]. (55) O. Gedalia, Y. Grossman, Y. Nir and G. Perez, “Lessons from Recent Measurements of $D^{0}-\bar{D^{0}}$ Mixing,” Phys. Rev. D 80, 055024 (2009) [arXiv:0906.1879 [hep-ph]]. (56) A.
Lenz, M. L. Piscopo and C. Vlahos, “Renormalization scale setting for D-meson mixing,” Phys. Rev. D 102, no.9, 093002 (2020) [arXiv:2007.03022 [hep-ph]]. (57) H. M. Asatrian, A. Hovhannisyan, U. Nierste and A. Yeghiazaryan, “Towards next-to-next-to-leading-log accuracy for the width difference in the $B_{s}-\bar{B}_{s}$ system: fermionic contributions to order $(m_{c}/m_{b})^{0}$ and $(m_{c}/m_{b})^{1}$,” JHEP 10, 191 (2017) [arXiv:1709.02160 [hep-ph]]. (58) H. M. Asatrian, H. H. Asatryan, A. Hovhannisyan, U. Nierste, S. Tumasyan and A. Yeghiazaryan, “Penguin contribution to the width difference and $CP$ asymmetry in $B_{q}$-$\bar{B}_{q}$ mixing at order $\alpha_{s}^{2}N_{f}$,” Phys. Rev. D 102, no.3, 033007 (2020) [arXiv:2006.13227 [hep-ph]]. (59) H. N. Li, H. Umeeda, F. Xu and F. S. Yu, “$D$ meson mixing as an inverse problem,” Phys. Lett. B 810, 135802 (2020) [arXiv:2001.04079 [hep-ph]]. (60) E. D. Bloom and F. J. Gilman, “Scaling, Duality, and the Behavior of Resonances in Inelastic electron-Proton Scattering,” Phys. Rev. Lett. 25, 1140 (1970). (61) E. D. Bloom and F. J. Gilman, “Scaling and the Behavior of Nucleon Resonances in Inelastic electron-Nucleon Scattering,” Phys. Rev. D 4, 2901 (1971). (62) E. C. Poggio, H. R. Quinn and S. Weinberg, “Smearing the Quark Model,” Phys. Rev. D 13, 1958 (1976). (63) M. Beneke, “Renormalons,” Phys. Rept. 317, 1-142 (1999) [arXiv:hep-ph/9807443 [hep-ph]]. (64) M. A. Shifman, “Theory of preasymptotic effects in weak inclusive decays,” [arXiv:hep-ph/9405246 [hep-ph]]. (65) M. A. Shifman, “Recent progress in the heavy quark theory,” [arXiv:hep-ph/9505289 [hep-ph]]. (66) M. A. Shifman, “Quark hadron duality,” [arXiv:hep-ph/0009131 [hep-ph]]. (67) I. I. Y. Bigi and N. Uraltsev, “A Vademecum on quark hadron duality,” Int. J. Mod. Phys. A 16, 5201-5248 (2001) [arXiv:hep-ph/0106346 [hep-ph]]. (68) J. Chay and S. J. Rey, “Instanton contribution to $B\to X_{u}e\bar{\nu}$ decay,” Z. Phys. 
C 68, 431-438 (1995) [arXiv:hep-ph/9404214 [hep-ph]]. (69) J. Chay and S. J. Rey, “Instanton contribution to $B\to X_{s}\gamma$ decay,” Z. Phys. C 68, 425-430 (1995) [arXiv:hep-ph/9406279 [hep-ph]]. (70) A. F. Falk and A. Kyatkin, “Instantons and the endpoint of the lepton energy spectrum in charmless semileptonic $B$ decays,” Phys. Rev. D 52, 5049-5055 (1995) [arXiv:hep-ph/9502248 [hep-ph]]. (71) B. Chibisov, R. D. Dikeman, M. A. Shifman and N. Uraltsev, “Operator product expansion, heavy quarks, QCD duality and its violations,” Int. J. Mod. Phys. A 12, 2075-2133 (1997) [arXiv:hep-ph/9605465 [hep-ph]]. (72) A. R. Zhitnitsky, “Lessons from QCD in two-dimensions ($N\to\infty$): Vacuum structure, asymptotic series, instantons and all that,” Phys. Rev. D 53, 5821-5833 (1996) [arXiv:hep-ph/9510366 [hep-ph]]. (73) P. Colangelo, C. A. Dominguez and G. Nardulli, “Violations of local duality in the heavy quark sector,” Phys. Lett. B 409, 417-424 (1997) [arXiv:hep-ph/9705390 [hep-ph]]. (74) B. Blok, M. A. Shifman and D. X. Zhang, “An Illustrative example of how quark hadron duality might work,” Phys. Rev. D 57, 2691-2700 (1998) [erratum: Phys. Rev. D 59, 019901 (1999)] [arXiv:hep-ph/9709333 [hep-ph]]. (75) B. Grinstein and R. F. Lebed, “Explicit quark-hadron duality in heavy-light meson weak decays in the ’t Hooft model,” Phys. Rev. D 57, 1366-1378 (1998) [arXiv:hep-ph/9708396 [hep-ph]]. (76) I. I. Y. Bigi, M. A. Shifman, N. Uraltsev and A. I. Vainshtein, “Heavy flavor decays, OPE and duality in two-dimensional ’t Hooft model,” Phys. Rev. D 59, 054011 (1999) [arXiv:hep-ph/9805241 [hep-ph]]. (77) B. Grinstein and R. F. Lebed, “Quark hadron duality in the ’t Hooft model for meson weak decays: Different quark diagram topologies,” Phys. Rev. D 59, 054022 (1999) [arXiv:hep-ph/9805404 [hep-ph]]. (78) I. I. Y. Bigi and N. Uraltsev, “Heavy quark expansion and preasymptotic corrections to decay widths in the ’t Hooft model,” Phys. Rev. 
D 60, 114034 (1999) [arXiv:hep-ph/9902315 [hep-ph]]. (79) I. I. Y. Bigi and N. Uraltsev, “Pauli interference in the ’t Hooft model: Heavy quark expansion and quark hadron duality,” Phys. Lett. B 457, 163-169 (1999) [arXiv:hep-ph/9903258 [hep-ph]]. (80) M. Burkardt, “Off forward parton distributions in (1+1)-dimensional QCD,” Phys. Rev. D 62, 094003 (2000) [arXiv:hep-ph/0005209 [hep-ph]]. (81) M. Burkardt and N. Uraltsev, “Analytical heavy quark expansion in the ’t Hooft model,” Phys. Rev. D 63, 014004 (2001) [arXiv:hep-ph/0005278 [hep-ph]]. (82) R. F. Lebed and N. G. Uraltsev, “Precision studies of duality in the ’t Hooft model,” Phys. Rev. D 62, 094011 (2000) [arXiv:hep-ph/0006346 [hep-ph]]. (83) S. R. Beane, “Constraining quark hadron duality at large $N_{c}$,” Phys. Rev. D 64, 116010 (2001) [arXiv:hep-ph/0106022 [hep-ph]]. (84) B. Grinstein, “Global duality in heavy flavor decays in the ’t Hooft model,” Phys. Rev. D 64, 094004 (2001) [arXiv:hep-ph/0106205 [hep-ph]]. (85) B. Grinstein, “Global duality in heavy flavor hadronic decays,” Phys. Lett. B 529, 99-104 (2002) [arXiv:hep-ph/0112323 [hep-ph]]. (86) J. Mondejar, A. Pineda and J. Rojo, “Heavy meson semileptonic differential decay rate in two dimensions in the large $N_{c}$,” JHEP 09, 060 (2006) [arXiv:hep-ph/0605248 [hep-ph]]. (87) J. Mondejar and A. Pineda, “Breakdown of the operator product expansion in the ’t Hooft model,” Phys. Rev. Lett. 101, 152002 (2008) [arXiv:0807.0011 [hep-ph]]. (88) J. Mondejar and A. Pineda, “Deep inelastic scattering and factorization in the ’t Hooft Model,” Phys. Rev. D 79, 085011 (2009) [arXiv:0901.3113 [hep-ph]]. (89) E. Golowich and A. A. Petrov, “Can nearby resonances enhance $D^{0}-\bar{D^{0}}$ mixing?,” Phys. Lett. B 427, 172-178 (1998) [arXiv:hep-ph/9802291 [hep-ph]]. (90) P. Gambino and S. Hashimoto, “Inclusive Semileptonic Decays from Lattice QCD,” Phys. Rev. Lett. 125, no.3, 032001 (2020) [arXiv:2005.13730 [hep-lat]]. (91) H. Fukaya, S. Hashimoto, T. Kaneko and H. 
Ohki, “Towards fully nonperturbative computations of inelastic $\ell N$ scattering cross sections from lattice QCD,” Phys. Rev. D 102, no.11, 114516 (2020) [arXiv:2010.01253 [hep-lat]]. (92) A. A. Belavin, A. M. Polyakov, A. S. Schwartz and Y. S. Tyupkin, “Pseudoparticle Solutions of the Yang-Mills Equations,” Phys. Lett. B 59, 85-87 (1975). (93) G. ’t Hooft, “A Two-Dimensional Model for Mesons,” Nucl. Phys. B 75, 461-470 (1974). (94) G. ’t Hooft, “A Planar Diagram Theory for Strong Interactions,” Nucl. Phys. B 72, 461 (1974). (95) S. R. Coleman, “Aspects of Symmetry”, Cambridge University Press (1985). (96) A. V. Manohar, “Large $N$ QCD,” [arXiv:hep-ph/9802419 [hep-ph]]. (97) G. ’t Hooft, “Large $N$,” [arXiv:hep-th/0204069 [hep-th]]. (98) Y. Nambu, “Force potentials in quantum field theory,” Prog. Theor. Phys. 5, 614-633 (1950). (99) E. E. Salpeter and H. A. Bethe, “A Relativistic equation for bound state problems,” Phys. Rev. 84, 1232-1242 (1951). (100) P. Federbush and A. Tromba, “A Note on ’t Hooft’s Hamiltonian in Two-Dimensional QCD,” Phys. Rev. D 15, 2913 (1977). (101) C. G. Callan, Jr., N. Coote and D. J. Gross, “Two-Dimensional Yang-Mills Theory: A Model of Quark Confinement,” Phys. Rev. D 13, 1649 (1976). (102) M. B. Einhorn, “Form-Factors and Deep Inelastic Scattering in Two-Dimensional Quantum Chromodynamics,” Phys. Rev. D 14, 3451 (1976). (103) N. K. Pak and H. C. Tze, “On ’t Hooft Bound State Equation: A View from Two Gauges,” Phys. Rev. D 14, 3472 (1976). (104) A. J. Hanson, R. D. Peccei and M. K. Prasad, “Two-Dimensional SU(N) Gauge Theory, Strings and Wings: Comparative Analysis of Meson Spectra and Covariance,” Nucl. Phys. B 121, 477-504 (1977). (105) I. Bars and M. B. Green, “Poincare and Gauge Invariant Two-Dimensional QCD,” Phys. Rev. D 17, 537 (1978). (106) R. C. Brower, W. L. Spence and J. H. Weis, “Bound states and asymptotic limits for quantum chromodynamics in two dimensions,” Phys. Rev. D 19, 3024 (1979). (107) A. R. 
Zhitnitsky, “On Chiral Symmetry Breaking in QCD${}_{2}$ ($N_{c}\to$ Infinity),” Sov. J. Nucl. Phys. 43, 999 (1986). (108) M. Li, “Large-$N$ Two-dimensional QCD and Chiral Symmetry,” Phys. Rev. D 34, 3888-3893 (1986). (109) M. Li, L. Wilets and M. C. Birse, “QCD${}_{2}$ in the axial gauge,” J. Phys. G 13, 915-923 (1987). (110) S. Huang, J. W. Negele and J. Polonyi, “Meson Structure in QCD${}_{2}$,” Nucl. Phys. B 307, 669-704 (1988). (111) M. Burkardt, “The Momentum distribution of heavy quarks,” Phys. Rev. D 46, 2751-2755 (1992). (112) M. Burkardt and E. S. Swanson, “Isgur-Wise symmetry in two-dimensions,” Phys. Rev. D 46, 5083-5091 (1992). (113) R. L. Jaffe and P. F. Mende, “When is field theory effective?,” Nucl. Phys. B 369, 189-218 (1992). (114) B. Grinstein and P. F. Mende, “Heavy Mesons in Two-Dimensions,” Phys. Rev. Lett. 69, 1018-1021 (1992) [arXiv:hep-ph/9204206 [hep-ph]]. (115) B. Grinstein and P. F. Mende, “Form-factors in the heavy quark and chiral limit: Pole dominance in $\bar{B}\to\pi e\bar{\nu}_{e}$,” Nucl. Phys. B 425, 451-470 (1994) [arXiv:hep-ph/9401303 [hep-ph]]. (116) J. L. F. Barbon and K. Demeterfi, “Effective Hamiltonians for 1/$N$ expansion in two-dimensional QCD,” Nucl. Phys. B 434, 109-138 (1995) [arXiv:hep-th/9406046 [hep-th]]. (117) K. Aoki and T. Ichihara, “(1+1)-dimensional QCD with fundamental bosons and fermions,” Phys. Rev. D 52, 6435-6444 (1995) [arXiv:hep-th/9506058 [hep-th]]. (118) W. Krauth and M. Staudacher, “Nonintegrability of two-dimensional QCD,” Phys. Lett. B 388, 808-812 (1996) [arXiv:hep-th/9608122 [hep-th]]. (119) E. Abdalla and R. Mohayaee, “Decay amplitudes in two-dimensional QCD,” Phys. Rev. D 57, 3777-3785 (1998). (120) E. Abdalla and N. A. Alves, “Bound state structure of two-dimensional QCD: Formalism and numerical results,” Annals Phys. 277, 74-93 (1999). (121) A. Armoni, Y. Frishman and J. Sonnenschein, “Massless QCD${}_{2}$ from current constituents,” Nucl. Phys. 
B 596, 459-470 (2001) [arXiv:hep-th/0011043 [hep-th]]. (122) F. Berruto, L. Giusti, C. Hoelbling and C. Rebbi, “A Study of the ’t Hooft model with the overlap Dirac operator,” Phys. Rev. D 65, 094516 (2002) [arXiv:hep-lat/0201010 [hep-lat]]. (123) B. Grinstein, “Shape and soft functions of HQET and SCET in the ’t Hooft Model,” Nucl. Phys. B 755, 199-220 (2006) [arXiv:hep-ph/0607159 [hep-ph]]. (124) J. Mondejar and A. Pineda, “$1/N_{c}$ and $1/n$ preasymptotic corrections to Current-Current correlators,” JHEP 06, 039 (2008) [arXiv:0803.3625 [hep-ph]]. (125) B. Grinstein, R. Jora and A. D. Polosa, “A Note on large $N$ scalar QCD${}_{2}$,” Phys. Lett. B 671, 440-444 (2009) [arXiv:0812.0637 [hep-ph]]. (126) L. Y. Glozman, V. K. Sazonov, M. Shifman and R. F. Wagenbrunn, “How Chiral Symmetry Breaking Affects the Spectrum of the Light-Heavy Mesons in the ’t Hooft Model,” Phys. Rev. D 85, 094030 (2012) [arXiv:1201.5814 [hep-th]]. (127) Y. Jia, S. Liang, L. Li and X. Xiong, “Solving the Bars-Green equation for moving mesons in two-dimensional QCD,” JHEP 11, 151 (2017) [arXiv:1708.09379 [hep-ph]]. (128) Y. Jia, S. Liang, X. Xiong and R. Yu, “Partonic quasidistributions in two-dimensional QCD,” Phys. Rev. D 98, no.5, 054011 (2018) [arXiv:1804.04644 [hep-th]]. (129) P. Fonseca and A. Zamolodchikov, “Ising spectroscopy. I. Mesons at $T<T_{c}$,” [arXiv:hep-th/0612304 [hep-th]]. (130) K. Harada, T. Heinzl and C. Stern, “Variational mass perturbation theory for light front bound state equations,” Phys. Rev. D 57, 2460-2474 (1998) [arXiv:hep-th/9705159 [hep-th]]. (131) H. Lewy, “Expansion of solutions of ’t Hooft’s equation. A study in the confluence of analytic boundary conditions,” Manuscr. Math. 26, 411-421 (1979). (132) S. Hildebrandt, “Mathematical aspects of ’t Hooft’s eigenvalue problem in two-dimensional quantum chromodynamics Part I. A variational approach, and nodal properties of the eigenfunctions,” Manuscr. Math. 24, 45-79 (1978). (133) S.
Hildebrandt, “Mathematical aspects of ’t Hooft’s eigenvalue problem in two-dimensional quantum chromodynamics Part II. Behavior of the eigenfunctions of BEP and HEP at the singular boundary points,” Ark. Mat. 17, 29-38 (1979). (134) S. Hildebrandt and V. Visnjić-Triantafillou, “Mathematical aspects of ’t Hooft’s eigenvalue problem in two-dimensional quantum chromodynamics. Part III,” Math. Z. 168, 223-240 (1979). (135) J. Brüning, “On the eigenvalue problem of ’t Hooft,” Manuscr. Math. 39, 125-146. (136) V. A. Fateev, S. L. Lukyanov and A. B. Zamolodchikov, “On mass spectrum in ’t Hooft’s 2D model of mesons,” J. Phys. A 42, 304012 (2009) [arXiv:0905.2280 [hep-th]]. (137) I. Ziyatdinov, “Asymptotic properties of mass spectrum in ’t Hooft’s model of mesons,” Int. J. Mod. Phys. A 25, 3899-3910 (2010) [arXiv:1003.4304 [hep-th]]. (138) R. A. Zubov, S. A. Paston and E. V. Prokhvatilov, “Exact solution of the ’t Hooft equation in the limit of heavy quarks with unequal masses,” Theor. Math. Phys. 184, no.3, 1281-1286 (2015). (139) L. L. Chau, “Quark Mixing in Weak Interactions,” Phys. Rept. 95, 1-94 (1983). (140) L. L. Chau and H. Y. Cheng, “Quark Diagram Analysis of Two-body Charm Decays,” Phys. Rev. Lett. 56, 1655-1658 (1986). (141) L. L. Chau and H. Y. Cheng, “Analysis of Exclusive Two-Body Decays of Charm Mesons Using the Quark Diagram Scheme,” Phys. Rev. D 36, 137 (1987). (142) L. L. Chau and H. Y. Cheng, “Analysis of the Recent Data of Exclusive Two-body Charm Decays,” Phys. Lett. B 222, 285-292 (1989). (143) R. Aleksan, A. Le Yaouanc, L. Oliver, O. Pene and J. C. Raynal, “Estimation of $\Delta\Gamma$ for the $B_{s}-\bar{B}_{s}$ system: Exclusive decays and the parton model,” Phys. Lett. B 316, 567-577 (1993). (144) C. K. Chua, W. S. Hou and C. H. Shen, “Long-Distance Contribution to $\Delta\Gamma_{s}$ of the $B_{s}-\bar{B}_{s}$ System,” Phys. Rev. D 84, 074037 (2011) [arXiv:1107.4325 [hep-ph]]. (145) M. A. Shifman and M. B.
Voloshin, “On Production of D and D* Mesons in B-Meson Decays,” Sov. J. Nucl. Phys. 47, 511 (1988) ITEP-87-64. (146) P. A. Zyla et al. [Particle Data Group], “Review of Particle Physics,” PTEP 2020, no.8, 083C01 (2020). (147) H. Y. Cheng and C. W. Chiang, “SU(3) symmetry breaking and CP violation in $D\to PP$ decays,” Phys. Rev. D 86, 014014 (2012) [arXiv:1205.0580 [hep-ph]]. (148) K. Karamcheti, “Principles of Ideal-Fluid Aerodynamics”, Wiley, New York (1966). (149) K. G. Chetyrkin, J. H. Kuhn and M. Steinhauser, “RunDec: A Mathematica package for running and decoupling of the strong coupling and quark masses,” Comput. Phys. Commun. 133, 43-65 (2000) [arXiv:hep-ph/0004189 [hep-ph]].
Dynamics and rheology of semidilute solutions of ring-linear polymer blends in planar extensional flow Charles D. Young Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, 61801    Yuecheng Zhou Current address: Department of Chemistry, Stanford University, Stanford, California 94305, USA Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, 61801    Charles M. Schroeder Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, 61801    Charles E. Sing [email protected] Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, 61801 (December 8, 2020) Abstract We use Brownian dynamics (BD) simulations and single molecule experiments to investigate the influence of topological constraints and hydrodynamic interactions on the dynamics and rheology of solutions of ring-linear polymer blends at the overlap concentration. We find agreement between simulation and experiment in that rings in solution blends exhibit large conformational fluctuations, including extension overshoots on the startup of flow and tumbling and tank-treading at steady state. 
Ring polymer fluctuations increase with the blend fraction of linear polymers and are peaked at a ring Weissenberg number $\textrm{Wi}_{R}\approx 1.5$. In contrast, linear and ring polymers in pure solutions show a peak in fluctuations at the critical coil-stretch Weissenberg number $\textrm{Wi}=0.5$. BD simulations show that extension overshoots on startup of flow are due to flow-induced intermolecular ring-linear polymer hooks, whereas fluctuations at steady state are driven by intermolecular hydrodynamic interactions (HI). The streamlines determined from BD simulations are qualitatively similar to those of elastic instabilities in cross-slot flow of polymer solutions, and we speculate that the dynamics of solution blends may be understood in terms of deviations of the flow from planar extension. This is supported by simulations of bidisperse linear polymer solution blends, which show similar trends in conformational dynamics between rings and linear polymers with a matched contour length. Compared to BD simulations, single molecule experiments show quantitatively larger fluctuations, which could arise because experiments are performed on higher molecular weight polymers with stronger entanglement effects. Overall, these results advance the understanding of the effects of topological interactions and intermolecular HI on the dynamics of semidilute ring-linear polymer blend solutions. I Introduction The dynamics of ring polymers are of broad interest to fundamental polymer physics, Rubinstein (1986); McLeish (2002) technological applications, Kaitz, Diesendruck, and Moore (2013); Lloyd et al. (2018) and biological systems. Taanman (1999) In the context of polymer physics, ring polymers are a model for understanding the effect of chain architecture. The dynamics and rheology of linear polymer solutions at varying concentration have been widely studied.
Doi and Edwards (1988); Rubinstein and Colby (2003) In the dilute limit, dynamics are governed by intramolecular excluded volume (EV) and hydrodynamic interactions (HI). Polymer solutions enter the semidilute regime above the overlap concentration $c^{*}=M/(N_{A}R_{g}^{3})$, where $M$ is the polymer molecular weight, $N_{A}$ is Avogadro’s number, and $R_{g}$ is the dilute polymer radius of gyration. In semidilute solutions, polymer dynamics are determined by both intra- and intermolecular EV and HI. Finally, in the melt, EV and HI are fully screened, and polymer dynamics are well described by either the Rouse model for unentangled polymers or the tube-reptation model for entangled polymers. The introduction of an end-free constraint in ring polymers, however, changes the conformational dynamics and rheology of polymers in all three concentration regimes qualitatively and quantitatively. In the melt, polymer rheology can largely be understood on the basis of topological constraints. Linear polymer melts relax stress by reptation of free ends along the polymer backbone, leading to a rubbery plateau modulus. Doi and Edwards (1988); Rubinstein and Colby (2003) However, in ring polymer melts the end-free constraint significantly suppresses entanglements and drives rings toward globular conformations, resulting in a significantly lower zero-shear viscosity as compared to linear polymer melts and the absence of a rubbery plateau modulus. Kapnistos et al. (2008); Halverson et al. (2012); Ge, Panyukov, and Rubinstein (2016) Thus, ring polymers are an appealing option for applications requiring high polymer mass transport but low viscosity. These results are only valid for pure ring melts, which remain challenging to synthesize. Doi et al. (2015); Pasquino et al. (2013) The introduction of even trace linear contaminants results in significantly higher zero-shear viscosity and the return of a rubbery plateau modulus. Kapnistos et al. (2008); Halverson et al.
(2012) These rheological features are intrinsically connected to the polymer dynamics. Molecular dynamics (MD) simulations show long-lived relaxation modes in ring-linear polymer melt blends associated with the threading of linear polymers through rings. Tsalikis and Mavrantzas (2014) MD simulations also show a maximum in zero-shear viscosity as a function of blend ratio at $\phi_{linear}=0.5$ (where $\phi_{linear}$ is the linear polymer weight fraction) due to ring-linear threading. Halverson et al. (2012) Polymer processing generally involves strong flows, where the coupling of architecture and deformation complicates the idea of suppressed entanglements in pure ring melts. Filament stretching rheometry has shown that ring polymer melts in uniaxial extensional flow thicken at unexpectedly low strain rates, Huang et al. (2019) a phenomenon which has been shown by MD simulations to be caused by topological linking. O’Connor et al. (2020) Ring linking has also been observed in melts under shear, although the effects on shear viscosity are less pronounced than in extension. Halverson et al. (2012); Tsalikis, Mavrantzas, and Vlassopoulos (2016); Tsamopoulos et al. (2019) The effect of blend ratio is also relevant in flow. MD simulations show the shear viscosity is independent of blend ratio when melts are exposed to high shear rates, Halverson et al. (2012) but there may be stronger effects in extension. In contrast to melts, entanglement effects are weaker in solutions; Doi and Edwards (1988); Rubinstein and Colby (2003) the coupling of HI and EV to polymer architecture becomes important, particularly in strong flows. Mai et al. (2018); Schroeder (2018) In the dilute limit, equilibrium ring polymer dynamics and conformations have been widely studied by single molecule experiments, Robertson, Laib, and Smith (2006) theory, Jagodzinski, Eisenriegler, and Kremer (1992) and simulation. Hegde et al.
(2011) In planar extensional flow, ring polymers exhibit a delayed coil-stretch transition due to intramolecular hydrodynamics which drive the ring to stretch in the flow-neutral direction into an open loop conformation. Li et al. (2015); Hsiao, Schroeder, and Sing (2016) In shear flow, rings undergo ‘end-over-end’ tumbling as observed in linear polymers, Schroeder et al. (2005) as well as tank-treading dynamics as seen in vesicles. Chen, Chen, and An (2013); Liebetreu, Ripoll, and Likos (2018) In the more general case of mixed flows, rings exhibit both tumbling and stretching behavior. Young et al. (2019) Overall, rings in dilute solution demonstrate unique conformational dynamics and quantitatively different rheological responses as compared to linear polymers. Semidilute polymer solutions incorporate the physics of both the melt and dilute regimes, namely entanglement or topological constraints and intra- and intermolecular interactions due to excluded volume and solvent-mediated hydrodynamics. Doi and Edwards (1988); Rubinstein and Colby (2003) At equilibrium, the influence of chain architecture in pure solution is primarily quantitative, as ring and linear polymers demonstrate similar scaling of dynamic and static properties with concentration and molecular weight. Tsalikis et al. (2020) However, blends of ring and linear polymers in semidilute solution can display qualitatively different dynamics, particularly above the entanglement concentration $c_{e}$ as traditionally defined for linear polymers. Rubinstein and Colby (2003) Single molecule studies on the diffusion of trace rings in a background of semidilute linear polymers have shown a scaling with concentration lower than that of the pure linear solution prediction, suggesting ring-linear threading inhibits ring diffusion in a manner not described by reptation theories.
Robertson and Smith (2007) Further studies found that as the blend fraction of ring polymers decreased from a primarily ring to primarily linear polymer solution, the diffusion of ring polymers decreased markedly. Chapman et al. (2012) Monte Carlo simulations found good qualitative agreement with this trend, attributing the slower dynamics to ring-linear threading. Chapman et al. (2012) This is supported by MD melt simulations, which show that ring-ring threadings are significantly less probable and have a weaker effect on ring dynamics as compared to ring-linear threads. Tsalikis, Mavrantzas, and Vlassopoulos (2016) Notably, slow ring dynamics are most apparent above the entanglement concentration $c_{e}$, which for the 45 kbp DNA used is related to the overlap concentration as $c_{e}\approx 3\ c^{*}$. Zhou and Schroeder (2018); Pan et al. (2014) At lower concentrations, the dependence on blend fraction is less pronounced. Chapman et al. (2012) However, the flow dynamics of semidilute solutions of pure rings or ring-linear polymer blends are still not well understood. In particular, it is not clear if the application of flow introduces topological constraints or strong solvent-mediated HI which are absent at equilibrium. Recently, Zhou et al. performed single molecule experiments on ring polymers trapped at the stagnation point of planar extensional flow in background solutions of linear polymers at 0.025 $c^{*}$, 0.1 $c^{*}$, and 1 $c^{*}$. Zhou et al. (2019) At 0.025 $c^{*}$, ring polymer conformational fluctuations were peaked at $\textrm{Wi}_{R}\approx 0.5$, where $\textrm{Wi}_{R}$ is the dimensionless flow strength on trace ring polymers defined as $\textrm{Wi}_{R}=\dot{\epsilon}\tau_{R}$, $\dot{\epsilon}$ is the strain rate, and $\tau_{R}$ is the longest relaxation time of the trace ring polymer.
This is in agreement with the expectation that conformational fluctuations are largest at the critical coil-stretch transition flow rate, which is supported by previous experiments and simulations on semidilute linear polymer solutions Hsiao et al. (2017); Sasmal et al. (2017); Stoltz, de Pablo, and Graham (2006); Young and Sing (2019), and dilute linear and ring polymer solutions. Li et al. (2015); Hsiao, Schroeder, and Sing (2016); Young et al. (2019) At 0.1 $c^{*}$, however, the maximum conformational fluctuation was shifted to $\textrm{Wi}_{R}\approx 0.9$, with only a weak decrease thereafter. Approaching the semidilute regime at 1 $c^{*}$, fluctuations increased up to $\textrm{Wi}_{R}\approx 1.5$ and then plateaued. These results raise questions regarding the nature of intermolecular interactions in non-dilute ring-linear polymer blends. Single molecule experiments and simulations of linear semidilute polymer solutions have already suggested plausible explanations. At 1 $c^{*}$, slow-stretching and fast-stretching end-coiled sub-populations emerge upon the startup of planar extensional flow, leading to broader conformational distributions as compared to dilute solutions. Hsiao et al. (2017); Sasmal et al. (2017) The authors suggested this could be due to flow-induced intermolecular hooks which form between folded and end-coiled conformations. In the current study, we performed Brownian dynamics (BD) simulations at similar conditions and observed both broader conformational distributions at 1 $c^{*}$ and transient intermolecular hooks, although the hooks were rarely found at 1 $c^{*}$. Young and Sing (2019) Simulations also revealed a population of linear polymers which interconverted between coiled and stretched states at moderate flow rates $\textrm{Wi}_{L}\approx 1.5$, whereas linear polymers in dilute solution are already stretched at such flow rates.
Schroeder, Shaqfeh, and Chu (2004); Young and Sing (2019) The slow stretching dynamics and retraction were assigned to intermolecular hydrodynamic interactions (HI), but a more detailed study was deferred. Past molecular simulation studies of semidilute polymer solutions in flow have focused on molecular scale physics. Stoltz, de Pablo, and Graham (2006); Huang et al. (2010, 2011); Sasmal et al. (2017); Young and Sing (2019) However, the modification of applied flow in viscoelastic liquids due to the elastic stress may be relevant to polymer dynamics. The nonlinear stress term in polymer solutions can cause striking changes in flow behavior. Groisman and Steinberg (2000) With respect to the current study at negligible Reynolds number, purely elastic instabilities arise from normal stress differences along curved streamlines. Shaqfeh (1996); Pakdel and McKinley (1996) Particle tracking velocimetry experiments and continuum simulations of unentangled polymer solution flow in cross-slot microfluidic devices have shown elastic instabilities associated with the extra polymer stress. Arratia et al. (2006); Poole, Alves, and Oliveira (2007) As the flow strength increases above $\textrm{Wi}\approx 0.5$, the polymer disturbance to the flow causes stagnation point displacement, steady asymmetric flow across the axis of extension, and unsteady mixed flows. Haward, McKinley, and Shen (2016); Cruz and Alves (2018) BD simulations of dilute polymer solutions in mixed flows show broad conformational distributions as polymers can tumble as in shear and stretch as in extension. Woo and Shaqfeh (2003); Young et al. (2019) Thus, elastic instabilities may be relevant to microscopic polymer dynamics, although the connection is not clear for semidilute polymer solutions under straining flows. In this work, we combine BD simulations and single molecule experiments to study the influence of planar extensional flow and blend composition in semidilute solutions of ring-linear polymer blends. 
We consider solutions at the overlap concentration $c^{*}$ while increasing flow rate through the coil-stretch transition with $\textrm{Wi}_{R}=0.4-3.2$ and varying the blend composition from a pure ring polymer solution to a trace ring in linear polymer background solution. The article is organized as follows: In Sec. II we describe the simulation method and experimental procedure. In Sec. III we quantify the conformational dynamics and solution rheology. Ring polymers are found to exhibit large conformational fluctuations which are sensitive to the blend ratio. We further investigate the origin of ring conformational fluctuations in Sec. IV. We find that intermolecular ring-linear polymer hooks lead to overshoots in ring extension on startup of flow, and intermolecular HI lead to large fluctuations in extension at steady state. Finally, we summarize our results and highlight topics for future studies in Sec. V.

II Methods

II.1 Simulation Governing Equations

We perform BD simulations of semidilute ring-linear polymer blend solutions in planar extensional flow. The simulations consist of $n_{R}$ ring polymers and $n_{L}$ linear polymers, each with the same number of coarse-grained beads per chain, $N_{R}=N_{L}=150$. The total number of beads is then $N=n_{R}N_{R}+n_{L}N_{L}$. The position $\bm{r}_{i}$ of a bead $i$ is updated according to the Langevin equation $$\frac{d\tilde{\bm{r}}_{i}}{d\tilde{t}}=\tilde{\bm{\kappa}}\cdot\tilde{\bm{r}}_{i}-\sum_{j}\tilde{\textbf{D}}_{ij}\nabla_{\tilde{\bm{r}}_{j}}(\tilde{U})+\tilde{\bm{\xi}}_{i}$$ (1) Tildes denote dimensionless quantities.
Positions are normalized by the bead radius ($\tilde{\bm{r}}=\bm{r}/a$), energies are normalized by the thermal energy $k_{B}T$ ($\tilde{U}=U/(k_{B}T)$), times are normalized by the single-bead diffusion time ($\tilde{t}=t/\tau_{0}$, where $\tau_{0}=6\pi\eta_{s}a^{3}/(k_{B}T)$ and $\eta_{s}$ is the solvent viscosity), and the diffusion tensor is normalized by the drag coefficient of the spherical polymer beads ($\tilde{\textbf{D}}_{ij}=\textbf{D}_{ij}(6\pi\eta_{s}a/(k_{B}T))$). Polymer beads experience flow via the $3N\times 3N$ block diagonal tensor $\tilde{\bm{\kappa}}$, which has $3\times 3$ diagonal blocks given by the solvent velocity gradient tensor $(\nabla\tilde{\textbf{v}})^{T}$. For planar extensional flow, $$\nabla\tilde{\textbf{v}}=\begin{pmatrix}\tilde{\dot{\epsilon}}&0&0\\ 0&-\tilde{\dot{\epsilon}}&0\\ 0&0&0\end{pmatrix}$$ (2) where $\tilde{\dot{\epsilon}}=\dot{\epsilon}\tau_{0}$ is the dimensionless strain rate. Beads interact via a potential $\tilde{U}=\tilde{U}^{B}+\tilde{U}^{EV}$ consisting of bonded and excluded volume contributions. We use a finitely extensible non-linear elastic (FENE) spring force for connectivity $$\tilde{U}^{B}=-0.5\tilde{k}_{s}\tilde{r}_{max}^{2}\ln\left[1-\left(\frac{\tilde{r}_{ij}}{\tilde{r}_{max}}\right)^{2}\right]$$ (3) where $\tilde{k}_{s}=30\tilde{u}/\tilde{\sigma}^{2}$ is the spring constant, $\tilde{u}=1.0$ gives the strength of EV interactions, and $\tilde{\sigma}=2$ is the diameter of a bead. The maximum extension of a spring is $\tilde{r}_{max}=1.5\tilde{\sigma}$, and $\tilde{r}_{ij}$ is the distance between two connected beads. Excluded volume interactions are modeled by a shifted, purely repulsive Lennard-Jones potential $$\tilde{U}^{EV}=4\tilde{u}\left[\left(\frac{\tilde{\sigma}}{\tilde{r}_{ij}}\right)^{12}-\left(\frac{\tilde{\sigma}}{\tilde{r}_{ij}}\right)^{6}+\frac{1}{4}\right]\Theta(2^{1/6}\tilde{\sigma}-\tilde{r}_{ij})$$ (4) which yields chain statistics representative of a good solvent.
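To make Eqs. 1-4 concrete, a minimal freely draining update (taking $\tilde{\textbf{D}}_{ij}$ as the identity, i.e., neglecting HI) can be sketched as follows. The function names and the strain-rate value are illustrative assumptions, not the authors' production code:

```python
import numpy as np

# Dimensionless model parameters from the text
u, sigma = 1.0, 2.0            # EV strength and bead diameter
ks = 30.0 * u / sigma**2       # FENE spring constant
r_max = 1.5 * sigma            # maximum spring extension
eps_dot = 0.01                 # strain rate (illustrative value)
dt = 5e-4                      # time step

def fene_force(r_vec):
    """FENE force (Eq. 3) on bead i from bond vector r_vec = r_j - r_i."""
    r = np.linalg.norm(r_vec)
    # F_i = -dU/dr_i pulls bead i toward its bonded neighbor j
    return ks * r_vec / (1.0 - (r / r_max) ** 2)

def wca_force(r_vec):
    """Shifted repulsive LJ (WCA) force (Eq. 4) on bead i from bead j."""
    r = np.linalg.norm(r_vec)
    if r >= 2.0 ** (1.0 / 6.0) * sigma:
        return np.zeros(3)     # potential is cut off at its minimum
    sr6 = (sigma / r) ** 6
    # points from j away through i (purely repulsive)
    return -24.0 * u * (2.0 * sr6**2 - sr6) * r_vec / r**2

def euler_step(pos, forces):
    """Explicit Euler update of Eq. 1 with D = I (freely draining)."""
    # planar extensional flow, Eq. 2: vx = +e*x, vy = -e*y, vz = 0
    flow = np.column_stack([eps_dot * pos[:, 0],
                            -eps_dot * pos[:, 1],
                            np.zeros(len(pos))])
    noise = np.sqrt(2.0 * dt) * np.random.randn(*pos.shape)
    return pos + dt * (flow + forces) + noise
```

With HI, the deterministic term picks up the full diffusion tensor and the noise becomes correlated, as described next in the text.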
We find $\nu\approx 0.59$ from the scaling relation $\tau_{Z}\sim N^{3\nu}$ and relaxation time data from equilibrium single chain simulations, in agreement with the result for a polymer in good solvent. Doi and Edwards (1988); Rubinstein and Colby (2003) This model has been widely utilized to study polymer dynamics in solution and melt and has been shown to prevent chain crossings in simulations of entangled melts in extensional flow. Kremer and Grest (1990) Solvent-mediated HI and Stokes drag are included via the diffusion tensor, given here by the Rotne-Prager-Yamakawa (RPY) tensor, Rotne and Prager (1969); Yamakawa (1970) $$\tilde{\textbf{D}}_{ij}=\begin{cases}\bm{I},&i=j\\ \frac{3}{4\tilde{r}_{ij}}\left[\left(1+\frac{2}{3\tilde{r}_{ij}^{2}}\right)\bm{I}+\left(1-\frac{2}{\tilde{r}_{ij}^{2}}\right)\hat{\bm{r}}_{ij}\hat{\bm{r}}_{ij}\right],&i\neq j,\ \tilde{r}_{ij}\geq 2\\ \left(1-\frac{9\tilde{r}_{ij}}{32}\right)\bm{I}+\frac{3\tilde{r}_{ij}}{32}\hat{\bm{r}}_{ij}\hat{\bm{r}}_{ij},&i\neq j,\ \tilde{r}_{ij}<2\end{cases}$$ (5) where $\hat{\bm{r}}_{ij}=\tilde{\bm{r}}_{ij}/\tilde{r}_{ij}$ is a unit vector in the direction of $\tilde{\bm{r}}_{ij}=\tilde{\bm{r}}_{j}-\tilde{\bm{r}}_{i}$ and $\bm{I}$ is the identity matrix. The mean and covariance of the Brownian noise $\tilde{\bm{\xi}}_{i}$ are given by the fluctuation-dissipation theorem as $\langle\tilde{\bm{\xi}}_{i}(t)\rangle=0$ and $\langle\tilde{\bm{\xi}}_{i}(t)\tilde{\bm{\xi}}_{j}(t^{\prime})\rangle=2\tilde{\textbf{D}}_{ij}\delta(t-t^{\prime})$, respectively. Simulation implementation requires the decomposition of the diffusion tensor as $\tilde{\textbf{D}}=\textbf{BB}^{T}$ so that the Brownian noise can be computed via $\tilde{\bm{\xi}}_{i}=\sqrt{2}\textbf{B}_{ij}\bm{f}_{j}$, where $\bm{f}_{j}$ is a Gaussian random variable with mean 0 and variance $dt$.
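A dense free-space construction of Eq. 5 together with the Cholesky route $\tilde{\textbf{D}}=\textbf{BB}^{T}$ can be sketched as below. This is a simplification for illustration: the production method uses an Ewald sum for periodic systems, and the function names are ours:

```python
import numpy as np

def rpy_block(r_vec):
    """3x3 RPY mobility block (Eq. 5); distances in units of the bead radius a."""
    r = np.linalg.norm(r_vec)
    I = np.eye(3)
    if r == 0.0:
        return I                       # self term, i = j
    rr = np.outer(r_vec, r_vec) / r**2
    if r >= 2.0:
        return (3.0 / (4.0 * r)) * ((1.0 + 2.0 / (3.0 * r**2)) * I
                                     + (1.0 - 2.0 / r**2) * rr)
    # regularized overlapping branch, r < 2
    return (1.0 - 9.0 * r / 32.0) * I + (3.0 * r / 32.0) * rr

def brownian_noise(positions, dt):
    """Correlated noise xi = sqrt(2) B f via dense Cholesky, D = B B^T."""
    n = len(positions)
    D = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            D[3*i:3*i+3, 3*j:3*j+3] = rpy_block(positions[j] - positions[i])
    B = np.linalg.cholesky(D)          # RPY is positive definite here
    f = np.random.randn(3 * n) * np.sqrt(dt)
    return (np.sqrt(2.0) * B @ f).reshape(n, 3)
```

The $O(N^{3})$ cost of the Cholesky factorization in this naive sketch is exactly the bottleneck that motivates the CA method discussed next.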
In traditional BD simulations, evaluation of the Brownian noise is a computational bottleneck, with the cost scaling as $O(N^{2})-O(N^{3})$ depending on the algorithm. Ermak and McCammon (1978); Fixman (1986); Ando et al. (2012) To bypass this expense, we use the iterative conformational averaging (CA) method. A brief description of the method and extension to the case of ring-linear polymer blends is given in Section II.2, and a more detailed derivation and verification can be found in the authors’ previous work. Miao, Young, and Sing (2017); Young, Marvin, and Sing (2018); Young and Sing (2019) Polymers are simulated in an initially rectangular simulation cell of volume $\tilde{V}=\tilde{l}_{x}\tilde{l}_{y}\tilde{l}_{z}$. The initial cell dimensions in the extension and compression directions are $\tilde{l}_{x}$ and $\tilde{l}_{y}$ respectively which must be equal due to the use of Kraynik-Reinelt boundary conditions for deformation of the box with the flow. Kraynik and Reinelt (1992); Todd and Daivis (1998) We specify the cell size in the neutral direction $\tilde{l}_{z}$ to be smaller than $\tilde{l}_{x}$ and $\tilde{l}_{y}$ so that the cell dimension in the extension direction is larger. This reduces finite size effects arising from polymers interacting with their own periodic images. A similar approach has been used in simulations of polymer melts in planar extensional flow, which found that results from the rectangular cell simulation were in quantitative agreement with results from a cubic box simulation. Sefiddashti, Edwards, and Khomami (2018) The cell volume is determined by $\tilde{V}=N/\tilde{c}$, where $\tilde{c}$ is the polymer concentration. We set the concentration via the normalized value $\tilde{c}/\tilde{c}^{*}_{L}$, where $\tilde{c}^{*}_{L}=N_{L}/(4/3\pi\langle\tilde{R}_{g0,L}\rangle^{3})$ is the overlap concentration. This defines the overlap with respect to the dilute linear polymer radius of gyration $\langle\tilde{R}_{g0,L}\rangle$. 
We have adopted this definition for consistency with the single molecule experiments. Zhou et al. (2019) As a result, the effective normalized concentration decreases with increasing ring polymer blend fraction due to the smaller radius of gyration of the ring. The difference between the equilibrium sizes of ring and linear polymers is considerable ($\langle\tilde{R}_{g0,L}\rangle=19.5$ vs $\langle\tilde{R}_{g0,R}\rangle=14.5$), suggesting a change in the effective concentration may be important. However, we have performed simulations of pure ring polymer solutions at $\tilde{c}^{*}_{R}$ based on the ring polymer radius of gyration and found the results to be nearly quantitatively consistent with those presented here for $f_{R}=1$, which use $\tilde{c}/\tilde{c}^{*}_{L}=1.0,\tilde{c}/\tilde{c}^{*}_{R}=0.4$. All following references to the overlap concentration $c^{*}$ indicate the value for the pure linear solution $\tilde{c}^{*}_{L}$, and tildes are dropped because only the normalized concentration is used. We consider solution blends at the overlap concentration $c^{*}$ for a range of ring polymer fractions $f_{R}=0.02-1$ and flow rates $\textrm{Wi}_{R}=0.4-3.2$. The ring polymer fraction controls the blend ratio and is defined as $f_{R}=n_{R}/(n_{R}+n_{L})$. We define the ring polymer Weissenberg number $\textrm{Wi}_{R}=\dot{\epsilon}\tau_{R}$, where $\tau_{R}$ is the longest ring polymer relaxation time. We consider the single exponential ring relaxation time at the relevant blend fraction, which is determined as described in Section III.2. The number of polymers in the simulation box and the resulting box dimensions are given in Table 1. Note that simulations at $f_{R}=0.02$ use a smaller box size for computational efficiency, as only one ring polymer trajectory is gathered per simulation run. We have also performed $n_{run}=3$ simulation runs at $f_{R}=0.01$ using the same larger box dimensions as for $f_{R}=0.17-0.83$.
We find the linear polymer dynamics agree quantitatively between the larger and smaller boxes, so to study ring polymer dynamics we use the latter. A smaller simulation box size is also used for $f_{R}=1$ because the contour length of ring polymers is half that of linear polymers. Polymer conformations are initialized following a procedure inspired by simulation of ring-linear polymer blend melts. Halverson et al. (2012) Rings are introduced as randomly oriented circles on a square lattice with spacing greater than the diameter of the rings. This ensures that rings are initially non-concatenated. The number of beads is generally greater than intended because the box is cubic and the lattice is filled. Rings are randomly removed until the target number of beads $N$ is reached. Rings are then relaxed from their circular conformations in a freely draining (FD) simulation for a duration $\tau_{R}^{FD}$ corresponding to the FD ring relaxation time determined from dilute solution FD simulations. At this point, the concentration is lower than intended due to the large initial lattice spacing, so we decrease the box size to the target dimensions. The ring polymers are further relaxed for $10\tau_{R}^{FD}$ before $n_{L}$ rings are removed and replaced with random-walk non-overlapping linear polymers to reach the target blend ratio. Finally, the system is allowed to relax for another $10\tau_{L}^{FD}$, corresponding to the dilute solution FD linear polymer relaxation time. By this procedure, we aim to reach an accurate equilibrium conformation for a ring-linear polymer blend solution. As discussed in Section IV.1, the initial probability of a linear polymer threading a ring is of significant importance to the transient dynamics on startup of flow. Unfortunately, simulation studies on the equilibrium conformations, dynamics, and threading probability of ring-linear polymer blends in solution are limited, and existing results focus on concentrations $c>c_{e}$.
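The lattice initialization of non-concatenated rings described above can be sketched as follows; the helper names and the random-rotation construction are our own illustrative choices:

```python
import numpy as np

def ring_on_circle(n_beads, bond_length, center, rng):
    """Place one ring as a randomly oriented circle with equally spaced beads."""
    # chord between adjacent beads equals bond_length: 2 R sin(pi/n) = b
    radius = bond_length / (2.0 * np.sin(np.pi / n_beads))
    theta = 2.0 * np.pi * np.arange(n_beads) / n_beads
    ring = np.column_stack([radius * np.cos(theta),
                            radius * np.sin(theta),
                            np.zeros(n_beads)])
    # random orientation: QR of a Gaussian matrix yields a random orthogonal matrix
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return ring @ q.T + center

def lattice_of_rings(n_per_side, spacing, n_beads, bond_length, seed=0):
    """Square lattice of rings; spacing > ring diameter keeps them non-concatenated."""
    rng = np.random.default_rng(seed)
    rings = []
    for ix in range(n_per_side):
        for iy in range(n_per_side):
            center = np.array([ix * spacing, iy * spacing, 0.0])
            rings.append(ring_on_circle(n_beads, bond_length, center, rng))
    return rings
```

In the actual workflow this state would then be pruned to the target bead count, relaxed in FD simulations, and partially replaced by random-walk linear chains, as described in the text.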
The problem of dynamics is particularly challenging because of the need to accurately resolve solvent-mediated HI. The polymer density at $c^{*}$ is relatively low ($\rho=N/V=0.04/\sigma^{3}$), so we assume that our procedure provides accurate equilibrium conformations and continue to out-of-equilibrium dynamics and rheology, which are the focus of this work. Initialization is followed by a production run including flow and HI. Kraynik-Reinelt boundary conditions (KRBCs) Kraynik and Reinelt (1992) are implemented such that the simulation cell deforms consistently with the applied flow. We follow the algorithm of Todd and Daivis, Todd and Daivis (1998) which allows for unrestricted strain accumulation. Hydrodynamics are accounted for using an Ewald sum, Beenakker (1986); Jain et al. (2012) which overcomes the slow convergence of the RPY tensor by splitting the sum into exponentially decaying real space and reciprocal space parts. Excluded volume interactions are accelerated using a cell list generalized for homogeneous linear 3D flow. Dobson, Fox, and Saracino (2016) As the simulation progresses, the accumulated Hencky strain is given by the applied flow rate, $\epsilon_{H}=\dot{\epsilon}t$. We simulate until a total strain $\epsilon_{tot}=15-20$, after which flow is halted and the box remains in the conformation at the cessation flow time $t_{cess}=\epsilon_{tot}/\dot{\epsilon}$. We then simulate relaxation dynamics for $\sim 10\tau_{L}^{FD}$. The simulation is advanced by explicit Euler integration of the Langevin equation using a time step of $dt=5\times 10^{-4}\tau_{0}$.

II.2 Iterative conformational averaging method

We follow the approach of Geyer and Winter, who introduced the truncated expansion ansatz (TEA) approximation to the correlated Brownian noise.
Geyer and Winter (2009) The CA method introduces two further assumptions: (i) the decomposition coefficients of the TEA are conformationally averaged, and (ii) for interparticle distances above a cutoff $\tilde{r}_{c}$, the RPY tensor is discretely evaluated on a grid. The Langevin equation then becomes $$\frac{d\tilde{\bm{r}}_{i}^{(w)}}{d\tilde{t}}=\tilde{\bm{\kappa}}\cdot\tilde{\bm{r}}_{i}^{(w)}-\sum_{j}\tilde{\textbf{D}}_{ij}^{eff}\nabla_{\tilde{\bm{r}}_{j}}(\tilde{U})+\tilde{\bm{\xi}}_{i}^{(w)}(\epsilon_{o})$$ (6) where the superscript $(w)$ denotes the iteration number. The diffusion tensor is approximated by an exact Ewald sum within a cutoff radius $\tilde{r}_{c}=12a$ and a discrete approximation to the RPY tensor outside the cutoff $$\tilde{\textbf{D}}_{ij}^{eff}=\tilde{\textbf{D}}_{ij}^{RPY}\Theta(\tilde{r}_{c}-\tilde{r}_{ij})+\tilde{\textbf{D}}_{ij}^{G}(t)\Theta(\tilde{r}_{ij}-\tilde{r}_{c})$$ (7) The $RPY$ superscript indicates the full Ewald sum and the $G$ the discrete grid space approximation, given by $\tilde{\textbf{D}}_{ij}^{G}=\tilde{\textbf{D}}^{RPY}(\Delta\tilde{\bm{r}}_{ij})$, where $\Delta\tilde{\bm{r}}_{ij}=(\Delta\tilde{x}_{ij},\Delta\tilde{y}_{ij},\Delta\tilde{z}_{ij})$ is the pair displacement rounded to the nearest grid point. We use a grid spacing $d_{g}=1.0a$ for all simulations. Note that the diffusion tensor does not include an iteration superscript $(w)$ because in this formulation the HI depend only on the inter-bead displacements and no preaveraging is used. The diffusion tensor is updated every $\lambda^{RPY}=\Delta t\,n^{RPY}=0.05\tau_{0}$, where $n^{RPY}=100$ is the number of time steps between updates.
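The cutoff construction in Eq. 7 can be sketched as below, with the far-field branch of the free-space RPY tensor standing in for the full Ewald sum (an assumption made for illustration only; `rpy_far` and `d_eff_block` are hypothetical names):

```python
import numpy as np

def rpy_far(r_vec):
    """Far-field (r >= 2) branch of the RPY block, Eq. 5, in units of a.
    Used here as a stand-in for the full Ewald-summed tensor."""
    r = np.linalg.norm(r_vec)
    rr = np.outer(r_vec, r_vec) / r**2
    return (3.0 / (4.0 * r)) * ((1.0 + 2.0 / (3.0 * r**2)) * np.eye(3)
                                + (1.0 - 2.0 / r**2) * rr)

def d_eff_block(r_vec, r_cut=12.0, d_g=1.0):
    """Effective pair mobility per Eq. 7: exact evaluation inside the cutoff,
    grid-discretized evaluation (spacing d_g = 1.0a) outside it."""
    if np.linalg.norm(r_vec) <= r_cut:
        return rpy_far(r_vec)
    r_grid = np.round(r_vec / d_g) * d_g   # snap displacement to nearest grid point
    return rpy_far(r_grid)
```

The payoff of the grid approximation is that far-field blocks can be tabulated once per update interval and reused for all pairs that round to the same grid displacement.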
The conformationally averaged form of the TEA gives the Brownian noise at an accumulated strain $\epsilon$ as $$\tilde{\xi}_{l}^{(w)}(\tilde{t},\epsilon)=\tilde{\textrm{D}}_{ll}^{eff}(\tilde{t})\langle C_{l}\rangle^{(w-1)}(\epsilon)\langle\beta^{\prime}\rangle^{(w-1)}(\epsilon)\sum_{m=1}^{3N}\frac{\tilde{\textrm{D}}_{lm}^{eff}(\tilde{t})}{\tilde{\textrm{D}}_{ll}^{eff}(\tilde{t})}f_{m}(\tilde{t})$$ (8) Note the index $l$ gives an individual component of the size $3N$ noise vector and not three components $(x,y,z)$ for a bead $i$ as in Eqn. 6. The TEA is a pairwise approximation to the exact decomposition. The $\beta^{\prime}$ parameter describes the average hydrodynamic coupling and the coefficients $C_{l}$ ensure the beads experience the correct Stokes drag. The average quantities sampled from the previous iteration are evaluated transiently to account for startup and relaxation transience, leading to $$\langle\beta^{\prime}\rangle^{(w)}(\epsilon_{o})=\frac{1}{T}\sum_{\epsilon_{o}t_{\epsilon_{bin}}}^{(\epsilon_{o}+1)t_{\epsilon_{bin}}}\beta^{\prime(w)}(t)$$ (9) $$\langle C_{l}\rangle^{(w)}(\epsilon_{o})=\frac{1}{T}\sum_{\epsilon_{o}t_{\epsilon_{bin}}}^{(\epsilon_{o}+1)t_{\epsilon_{bin}}}C^{(w)}_{l}(t)$$ (10) where $\epsilon_{o}$ refers to the strain bin, and the sums indicate sampling of a strain interval during the stretching phase and a time interval during the relaxation phase as detailed in the authors’ previous work.
Young and Sing (2019) The instantaneous samples of the average are defined as $$\beta^{\prime}=\frac{1-\sqrt{1-3N(\varepsilon^{2}-\varepsilon)}}{3N(\varepsilon^{2}-\varepsilon)}$$ (11) where $\varepsilon$ is an average over the off-diagonal entries of the diffusion tensor $$\varepsilon=\frac{1}{(3N)^{2}}\sum_{l}\sum_{m\neq l}\frac{\tilde{\textrm{D}}_{lm}}{\tilde{\textrm{D}}_{ll}}$$ (12) The coefficients are given by $$C_{l}=\sqrt{\frac{1}{1+\beta^{\prime 2}\sum_{m\neq l}\frac{\tilde{\textrm{D}}_{lm}^{2}}{\tilde{\textrm{D}}_{ll}\tilde{\textrm{D}}_{mm}}}}$$ (13) Further details are given by Geyer and Winter Geyer and Winter (2009) and the authors’ previous work. Young and Sing (2019) We make minor modifications to the method for the blend case. In principle, there are 3 coefficients $C_{l}$ associated with each bead. Assuming polymers to be distinguishable only by their initial configurations, we previously used an ensemble-averaged set of $3N_{L}$ coefficients for each linear chain. For blends, there is an additional set of $3N_{R}$ coefficients for rings. The $\beta^{\prime}$ parameter is a solution-averaged quantity that is not specific to the chain architecture. We perform two iterations to obtain freely draining (FD, $w=0$) and hydrodynamically interacting (HI, $w=1$) results. The authors have shown that the second iteration ($w=1$) provides excellent agreement with traditional BD simulations, and a third ($w=2$) iteration does not significantly improve accuracy. Young and Sing (2019)

II.3 Experimental Methodology

To prepare ring-linear polymer blend solutions with trace molecules, small amounts of 45 kbp circular DNA molecules are first fluorescently labeled with an intercalating dye (YOYO-1, Molecular Probes, Thermo Fisher) at a dye-to-base pair ratio of 1:4 for $>$1 h in the dark at room temperature. Trace amounts of fluorescently labeled 45 kbp DNA are then added to background solutions of unlabeled 45 kbp semidilute ring-linear DNA blends.
Details regarding the preparation of 45 kbp circular DNA and semidilute ring-linear DNA blend solutions are described elsewhere. Robertson, Laib, and Smith (2006); Zhou et al. (2019) Single molecule fluorescence microscopy and imaging is performed using an inverted epifluorescence microscope (IX71, Olympus) coupled to an electron-multiplying charge coupled device (EMCCD) camera (iXon, Andor Technology), as described in detail before. Zhou and Schroeder (2016, 2018) In brief, labeled DNA blend solutions are introduced into a PDMS-based microfluidic cross-slot 300 $\mu$m in width and 100 $\mu$m in height. A 50 mW 488 nm laser directed through a 2.2 absorbance neutral density (N.D.) filter (Thorlabs, NJ, USA) is reflected by a 488 nm single edge dichroic mirror (ZT488rdc, Chroma) and used to illuminate the labeled DNA molecules. Fluorescence emission is collected by a 1.45 NA, 100$\times$ oil immersion objective lens (UPlanSApo, Olympus) followed by a 1.6$\times$ tube lens and a 525 nm single-band bandpass filter (FF03-525/50-25, Semrock) in the detection path. Fluorescence images are acquired by an Andor iXon EMCCD camera (512$\times$512 pixels, 16 $\mu$m pixel size) under frame transfer mode at a frame rate of 33 Hz (0.030 s per frame).

III Results

To investigate polymer dynamics, we primarily consider the fractional extension in the flow direction, $\Delta x/L$. In simulation, the extension is $\Delta x=\textrm{max}(x_{i})-\textrm{min}(x_{i})$. For linear polymers, the contour length is $L_{L}=(N_{L}-1)r_{max}$, whereas for ring polymers $L_{R}=(N_{R}-1)r_{max}/2$ due to the closed loop constraint. In experiment, the extension can be directly visualized and normalized by the ring contour length $L_{R}=10\ \mu$m.
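The extension observable just defined follows directly from the bead coordinates; a minimal sketch (the helper name is ours):

```python
import numpy as np

def fractional_extension(x, n_beads, r_max, ring=False):
    """Fractional extension Delta x / L in the flow direction.
    The ring contour length is halved by the closed-loop constraint."""
    L = (n_beads - 1) * r_max
    if ring:
        L /= 2.0
    return (np.max(x) - np.min(x)) / L
```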
We also probe the solution rheology via the reduced extensional viscosity, in which we have normalized by the monomer concentration to account for the linear concentration dependence and used $c^{*}$ as the reference concentration $$\eta_{r}=\frac{\eta_{p}c^{*}}{\eta_{s}c}$$ (14) where $\eta_{p}$ is the polymer contribution to the extensional viscosity $$\eta_{p}=-\frac{\tau_{p,xx}-\tau_{p,yy}}{\dot{\epsilon}}$$ (15) and $\tau_{p,\alpha\beta}$ is the polymer contribution to the stress tensor determined by the Kirkwood formula Doi and Edwards (1988) $$\tau_{p,\alpha\beta}=\frac{1}{V}\sum_{i}^{N}\sum_{j>i}^{N}r_{ij,\alpha}F_{ij,\beta}$$ (16) where $F_{ij,\beta}$ is the conservative force between particles $i$ and $j$ in the $\beta$ direction.

III.1 Comparison between simulation and experiment

While simulation variables are matched to experimental conditions as closely as possible, there are limitations which prevent quantitative comparison. First, we use a flexible chain with a FENE-WCA force law, as compared to the wormlike force-extension behavior of DNA. Polymer stiffness influences the extensional rheology of polymer solutions, Dinic and Sharma (2020) suggesting differences in conformational dynamics. The FENE-WCA force law was used for convenient implementation of topological constraints on the length scale of an individual Kuhn segment. Generally, WLC models used in simulation are coarse-grained, Marko and Siggia (1995) which is not sufficient for capturing hooking behavior. WLC models on the level of an individual Kuhn segment have been developed, Underhill and Doyle (2006); Saadat and Khomami (2016) but they are incompatible with steep LJ excluded volume potentials without the use of a predictor-corrector integrator, which is computationally inefficient in semidilute solutions.
Thus, alternative algorithms for preventing spring crossings, such as spring-spring repulsions Kumar and Larson (2001) or slip-links, Likhtman (2005); Uneyama and Masubuchi (2012); Ramírez-Hernández et al. (2015) would be required. Implementation in a semidilute solution of fine-grained polymers is non-trivial, however. It is also not clear if these approaches developed for coarse-grained models would be accurate in the current study. Second, simulations consider relatively short chains with $n_{K}\approx 83$ Kuhn segments per chain as compared to $n_{K}\approx 200$ for 45 kbp ring DNA and $\lambda$-DNA (48.5 kbp). Simulations are limited to short chains because of the size of the box, with the current $N_{R}=150$ systems using $N=19,200$. Higher molecular weight chains would require larger systems, which are intractable due to the $O(N^{2})$ computational scaling of the CA method. BD algorithms with improved computational scalings of $O(N)$ Fiore et al. (2017) or $O(N\textrm{log}N)$ Liu and Chow (2014); Saadat and Khomami (2015) may help in overcoming this limitation, although they have not been implemented in the CA method. Third, there are differences in the implementation of flow. Simulations consider a periodic domain in unbounded homogeneous planar extension. While we set the simulation cell size to limit artificial self-interactions of the chain, it is challenging to eliminate these effects because HI are long-ranged. Experiments are performed in a 300 $\times$ 300 $\times$ 100 $\mu$m cross-slot, and the accumulated strain at the stagnation point is $\epsilon_{H}\approx 7-10$. While the trapped ring is exposed to high strain, the boundary conditions of the cross-slot require continuous inflow of background solution. As we show in Sec. IV, the accumulated strain of the background polymers is important to threading dynamics. In contrast, all chains are exposed to the same applied flow in simulation.
Finally, while the applied flow is planar extensional in both simulations and experiment, it is not clear that the resulting flow in the presence of polymers is the same. In simulation, we observe modification of the solvent velocity due to the polymer disturbance. Flow measurements with simultaneous polymer trapping are challenging in experiment, so direct comparison is not made in this work. However, previous work has shown global asymmetric unsteady flow which is sensitive to channel shape, Arratia et al. (2006); Haward, McKinley, and Shen (2016); Cruz and Alves (2018) channel aspect ratio, Cruz et al. (2016) polymer concentration, and molecular weight. Haward, McKinley, and Shen (2016); Sousa et al. (2015) These observations are directly connected to the geometry of the cross-slot, and thus there may be qualitative differences compared to the unbounded flow implemented in molecular simulation. III.2 Relaxation after flow cessation The longest polymer relaxation time is determined by fitting a single exponential to the linear entropic regime Schroeder (2018) $\Delta x/L<0.3$ after cessation of constant strain rate flow following an applied strain $\epsilon=20$. Fig. 1 shows the ensemble average relaxation for varying blend ratio. In all cases we find a good fit to $\Delta x^{2}=(\Delta x_{0}^{2}-\Delta x_{\infty}^{2})\textrm{exp}(-t/\tau_{R})+\Delta x_{\infty}^{2}$. The resulting $\tau_{R}$ normalized by the dilute limit ring relaxation time $\tau_{R,d}$ is plotted versus blend ratio in the inset and compared to results from single molecule experiments. We find the relaxation time decreases weakly with blend fraction of rings $f_{R}$. Simulations agree with experiments to within the stochastic distribution of single molecule relaxation times. We fit the linear polymer relaxation trajectories in the same way and find a similar dependence on blend fraction.
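The single-exponential fit described above can be performed by linear regression on the logarithm of the shifted squared extension. The sketch below is our own illustration, not the production analysis code; the function name `fit_relaxation_time` and the synthetic decay data (with an assumed $\tau_{R}=2.0$ in arbitrary units) are hypothetical.

```python
import numpy as np

def fit_relaxation_time(t, dx2, dx2_inf):
    """Fit dx^2(t) = (dx0^2 - dxinf^2) exp(-t/tau_R) + dxinf^2 by
    linear regression on log(dx^2 - dxinf^2); returns tau_R."""
    y = np.log(dx2 - dx2_inf)
    slope, _ = np.polyfit(t, y, 1)   # y = const - t/tau_R
    return -1.0 / slope

# Synthetic decay with tau_R = 2.0 (hypothetical units), plateau 0.01
t = np.linspace(0.0, 5.0, 50)
dx2 = (0.09 - 0.01) * np.exp(-t / 2.0) + 0.01
tau_R = fit_relaxation_time(t, dx2, 0.01)
```

In practice one would restrict the fit window to the linear entropic regime $\Delta x/L<0.3$ before applying the regression.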
Generally, we find that at the same molecular weight, the linear polymer relaxation time $\tau_{L}$ is larger than $\tau_{R}$ ($\tau_{L}\approx 3.5\tau_{R}$) because rings are topologically constrained such that they cannot satisfy the lowest order Rouse mode boundary condition. Hsiao, Schroeder, and Sing (2016) We primarily refer to the Weissenberg number as defined using the ring polymer relaxation time $\textrm{Wi}_{R}$ to describe ring dynamics. However, when considering the solution average flow properties of blends, there is a spectrum of relaxation modes associated with the two components. Thus, we also report the linear polymer relaxation time when appropriate, such as in the solution flow modification for a trace ring in a semidilute linear background (Sec. IV.2.4). It may be useful to determine a nominal relaxation time from the decay of solution average properties including the extensional viscosity and the birefringence, but in this study we consider only the polymer conformational relaxation time. III.3 Transient molecular conformations Next we investigate the transient conformations of ring polymers in startup and steady state planar extensional flow. In Fig. 2 we present BD simulation results at a fixed strain rate and decreasing blend fraction of rings $f_{R}=0.02-1.00$. The ring polymer Weissenberg number $\textrm{Wi}_{R}\approx 1.5$ is approximately constant, with slight variations because the ring polymer relaxation time decreases with ring blend fraction. For a pure ring solution, polymers stretch in the flow direction and then exhibit small fluctuations around the steady state average. The ensemble average extension reaches a constant at $\epsilon\approx 4-5$. This steady state is achieved faster than in the case of dilute or semidilute linear polymer solutions, consistent with previous experiments and simulations of dilute ring polymer solutions. Li et al.
(2015); Hsiao, Schroeder, and Sing (2016) The faster stretching of ring polymers was ascribed to reduced molecular individualism among rings. Once steady state is reached, the dynamics are consistent with previous semidilute linear polymer solution simulations. Young and Sing (2019) For blends of ring and linear polymers, however, we find markedly different behavior. While the ensemble average fractional extension still plateaus at $\epsilon\approx 4-5$, a sub-population of rings stretches significantly beyond the average to $\Delta x/L\approx 0.5-0.6$. This behavior is most noticeable upon the startup of flow, although for majority linear polymer blends, rings can also reach highly stretched conformations $\Delta x/L>0.5$ at steady state. Additionally, after the steady state average extension is reached, rings can retract back to equilibrium levels of extension $\Delta x/L\approx 0.1-0.2$. These conformations are not steady. The extension of individual rings fluctuates significantly in time, consistent with the authors’ previous Zhou et al. (2019) and current experiments. Most rings fluctuate in the range of $\Delta x/L\approx 0.3-0.5$. These fluctuations grow in magnitude as the blend fraction of rings decreases, which we quantify in the following section. We also present results from experiments at similar conditions in Fig. 3, where $\textrm{Wi}_{R}\approx 1.5$ and $f_{R}=0.00-0.83$. The majority ring polymer solution $f_{R}=0.83$ is consistent with the $f_{R}=1$ simulation results. Rings stretch to the steady state average and exhibit small fluctuations. As the blend fraction of rings decreases, large fluctuations emerge as seen in simulation. While the trends are comparable there are noticeable quantitative differences, which we ascribe to the inconsistencies between simulation and experiment discussed in Section III.1.
Despite these discrepancies, we find that the qualitative agreement is sufficient that the detailed molecular information available from simulation is valuable in understanding the dynamics and rheology of ring-linear polymer blends. III.4 Average conformational fluctuations We quantify the conformational fluctuations described above via the ‘steady state’ ensemble average fluctuation quantity $$\langle\delta\rangle=\frac{\sum_{i=1}^{n}\sum_{\epsilon_{ss}}^{\epsilon_{cess}}\sqrt{(x_{i}(t)/L-\langle x_{i}/L\rangle)^{2}}}{n(\epsilon_{cess}-\epsilon_{ss})}$$ (17) where $n$ is the ensemble size, $\epsilon_{ss}$ is the accumulated strain at which the ensemble average fractional extension plateaus, and $\epsilon_{cess}$ corresponds to flow cessation. We thereby remove effects of initial transient stretching, although ring polymers in blends with linear chains never reach true steady state conformations. This definition is consistent for both ring and linear polymers, with the linear steady state strain $\epsilon_{ss}\approx 8-9$, as compared to $\epsilon_{ss}\approx 4-5$ for the rings. In Fig. 4, we report the fluctuation quantity as a function of Wi for a variety of architectures and blend ratios as determined from simulation and experiment. In all cases, the dimensionless flow rate Wi is determined using the relaxation time upon cessation of planar extensional flow at the relevant concentration and blend ratio. In addition to the simulations and experiments presented in this work, we have included results for the case of a pure linear solution at 1 $c^{*}$ and a dilute ring polymer solution from previous work. Young and Sing (2019); Hsiao et al. (2017) In simulation, the pure linear solution undergoes a maximum in the fluctuation quantity at $\textrm{Wi}\approx 0.5$, after which $\langle\delta\rangle$ decreases. This is expected because conformational fluctuations are largest at the coil-stretch transition.
de Gennes (1974) For dilute polymer solutions and semidilute solutions studied in previous work, Young and Sing (2019) we find consistent behavior with slight quantitative shifts (data not shown). The fluctuation quantity for pure linear solutions determined from experiment is similarly peaked at low Wi, although quantitatively smaller. We again attribute this to the inconsistencies between simulation and experiment, in addition to the relatively short duration of the experiments Hsiao et al. (2017) ($\epsilon_{cess}\approx 5-10$) as compared to simulations Young and Sing (2019) ($\epsilon_{cess}\approx 20-30$). Dilute ring polymer solutions exhibit significantly suppressed conformational fluctuations as compared to pure linear polymer solutions. This is also expected due to the constrained conformations of the ring. In dilute solution, the majority of linear polymer fluctuations arise from end retraction, which is absent in the ring case. This trend appears to be consistent for semidilute pure ring polymer solutions. Fluctuations are larger than in the dilute case due to intermolecular HI but are again peaked at low Wi, and both are lower than the linear polymer case. The semidilute ring polymer solution plateaus at high Wi and meets the pure linear solution result. A distinct departure from the behavior of either pure linear or ring polymer solutions is observed in the case of ring polymers in ring-linear semidilute blends. For all blend ratios presented here, fluctuations are not peaked at $\textrm{Wi}\approx 0.5$. Instead, they increase up to $\textrm{Wi}\approx 1.5$, with a weak decrease thereafter. Furthermore, fluctuations increase as the blend ratio shifts towards linear chains from $f_{R}\approx 0.83$ to $f_{R}\approx 0.02$, confirming the visual observations of molecular trajectories in Figs. 2 and 3. Simulation results show that most of this increase occurs from $f_{R}=1$ to $f_{R}=0.83$, continuing to $f_{R}=0.5$.
As more linear chains are added, however, further increases in fluctuations are small. This is largely consistent with experiments, which also show increasing $\langle\delta\rangle$ with blend fraction of linear polymers. For $\textrm{Wi}\approx 1.5$, experiments show a non-monotonic trend in fluctuations with blend ratio, with a maximum at $f_{R}=0.17$, although this trend is not found in simulations. To study the influence of HI, we perform ‘freely-draining’ (FD) simulations which neglect HI for semidilute solutions at blend fraction $f_{R}=1-0.17$. The relaxation time used to define $\textrm{Wi}_{R}$ is the Rouse relaxation time. Because the Rouse time is greater than the Zimm relaxation time obtained with HI, Doi and Edwards (1988); Rubinstein and Colby (2003), the strain rate is reduced to obtain the same values of $\textrm{Wi}_{R}$. In the absence of HI, ring fluctuations are significantly suppressed. The semidilute blend results quantitatively agree with the dilute ring solution case, suggesting the FD simulations are effectively non-interacting. This is consistent with previous BD simulations which have shown FD simulations exhibit weak concentration dependence in planar extensional flow. Stoltz, de Pablo, and Graham (2006); Young and Sing (2019) As compared to the ring polymer component of the blends, the linear polymer component exhibits conformational fluctuations nearly quantitatively consistent with the pure linear semidilute solution case. There is a slight quantitative increase in fluctuations with linear polymer fraction, but the trend is weak compared to the ring polymer case. Thus, to the knowledge of the authors, the overshoots in ring extension on startup of flow and the fluctuations at steady state appear unique to rings in semidilute blend with linear polymers. In the remainder of this section, we investigate other quantities commonly used to describe polymer solution dynamics for further evidence of this feature.
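In practice, the fluctuation quantity of Eq. (17) reduces to a mean absolute deviation of the fractional extension from its per-chain time average, computed over the steady-state strain window and averaged over the ensemble. The following is a minimal sketch; the function name and array layout are our own choices, not those of the production analysis.

```python
import numpy as np

def fluctuation_quantity(ext, i_ss):
    """<delta> per Eq. (17): mean absolute deviation of the fractional
    extension x_i(t)/L from its own time average <x_i/L>, taken over
    the steady-state window [i_ss:] and averaged over the ensemble.
    ext: array of shape (n_chains, n_times) holding x/L."""
    window = ext[:, i_ss:]
    mean_i = window.mean(axis=1, keepdims=True)   # <x_i/L> per chain
    return np.abs(window - mean_i).mean()
```

Note that $\sqrt{(x-\langle x\rangle)^{2}}=|x-\langle x\rangle|$, so the absolute value implements the summand of Eq. (17) exactly.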
III.5 Steady state conformations and rheology The steady state conformations and extensional viscosity of dilute polymer solutions are well understood for linear and ring architectures. Schroeder, Shaqfeh, and Chu (2004); Li et al. (2015) Both undergo a transition from an equilibrium coil to a stretched conformation at $\textrm{Wi}\approx 0.5$, although the ring transition is more gradual due to intramolecular hydrodynamics. Hsiao et al. (2017) As polymers stretch, the solution viscosity increases due to the entropic restoring force. Given the large instantaneous fluctuations observed for ring polymers in semidilute blends, we are motivated to determine if there is an effect of blend ratio on the ensemble average stretch and bulk viscosity. In Fig. 5a we plot the ensemble average steady state ring fractional extension after $\epsilon\approx 4-5$ versus the blend ratio $f_{R}$ and $\textrm{Wi}_{R}$. We also include results from pure linear solutions at $c^{*}$ for comparison. The data for the linear component of the blend is not shown because the linear polymer relaxation time is larger than the ring relaxation time, $\tau_{L}\approx 3.5\tau_{R}$. Thus the effective linear polymer Weissenberg number $\textrm{Wi}_{L}=\dot{\epsilon}\tau_{L}$ at the same strain rate is higher, and the linear chains are stretched for all strain rates presented here. We first observe that the more gradual coil-stretch transition found in dilute solution Hsiao et al. (2017) persists in semidilute solution. This is expected because HI is nominally unscreened at the overlap concentration, and intramolecular HI drives ring extension in the neutral $z$ direction. Hsiao, Schroeder, and Sing (2016) The open ring conformation is significant to both topological interactions and intermolecular HI, as shown in Section IV. Considering the effect of blend ratio, the average extension curves collapse nearly quantitatively.
This suggests that the dominant contribution to the average stretch is only the dimensionless flow strength $\textrm{Wi}_{R}$. In the following section we argue this is consistent with the large transient fluctuations. In particular, we show how intermolecular HI and topological interactions can drive instantaneous retraction and extension of the ring polymers. We consider the influence of blend ratio on the steady reduced extensional viscosity $\eta_{r,ss}$ in Fig. 5b. As expected, viscosity increases with decreasing fraction of rings because the stress in unentangled polymer solutions is dominated by stretching. Linear polymers are more stretched than rings at the same strain rate, so as $f_{R}$ decreases, the linear chain contribution to the polymer stress dominates. When we plot viscosity against $f_{R}$ at constant strain rate, we find a nearly linear relationship, suggesting the bulk viscosity is determined by simply mixing the linear and ring polymer contributions. III.6 Conformational distributions We conclude our characterization of ring polymer conformations with probability distributions of fractional extension, $P(\Delta x/L)$, in steady and startup flow. Conformational distributions have been widely used to quantify molecular individualism in dilute solution. Here, we investigate the effect of intermolecular interactions on molecular individualism. In Fig. 6, we plot the distributions of ring polymer extension for approximately fixed $\textrm{Wi}_{R}$ and varying blend ratio $f_{R}=0.17,0.83,1.00$. Results for blend fractions $f_{R}=0.02,0.50$ exhibit near quantitative matching with $f_{R}=0.17$ and have been omitted for visual clarity. At low flow rates $\textrm{Wi}_{R}\approx 0.4$, rings are relatively unperturbed from their equilibrium conformations, and the distribution is nearly quantitatively consistent for varying blend ratio.
In the blend cases, the linear chains are moderately stretched to $\Delta x/L\approx 0.4$ because the effective linear flow strength is $\textrm{Wi}_{L}\approx 1.4$. However, given the small change in ring conformations, it appears that interactions with linear polymers do not affect ring dynamics in weak flow. When exposed to stronger flows, $\textrm{Wi}_{R}=0.8-3.2$, rings are stretched from their equilibrium conformations. Notably, distributions broaden with decreasing fraction of ring polymers $f_{R}$ for all flow rates $\textrm{Wi}_{R}>0.5$. For comparison, we include distributions for linear polymers in a nearly pure linear blend ($f_{R}=0.02$) at matched effective flow strength $\textrm{Wi}_{R}\approx\textrm{Wi}_{L}$. The comparison clearly shows that the conformational distributions of ring polymers in pure ring solutions agree with those of linear polymers. Specifically, distributions become narrower with increasing flow strength. This is not the case for rings in blend with linear polymers, where distributions are broadest at $\textrm{Wi}_{R}\approx 1.6$, and broader than the pure polymer solution cases at $\textrm{Wi}_{R}=0.8$ and $\textrm{Wi}_{R}=3.2$. While the low extension tail is similar for all architectures and blend ratios, ring polymers in blends exhibit a high extension tail that becomes more pronounced with decreasing $f_{R}$. We also quantify molecular individualism upon startup flow transience in Fig. 7. In particular, we consider a constant strain rate $\textrm{Wi}_{R}\approx 1.6$ and varying blend ratio at increasing accumulated strain from $\epsilon=2$ to $\epsilon=6$. We co-plot instantaneous distributions with the steady state averages from Fig. 6, which show transient ring distributions match the steady state conformations after $\epsilon\approx 5$, consistent with the ensemble average extension plateau. 
First we note features present at all blend ratios: at low strain $\epsilon=2-4$, distributions are broad as some rings have already stretched to the steady state ensemble average extension, while others remain coiled. This is consistent with previous single molecule experiments and simulations, in which the ensemble can be divided into subpopulations that stretch at different rates depending on their initial conformations. As further strain is accumulated, the distributions narrow towards the steady state case as the majority of polymers become stretched. Transient distributions of ring-linear polymer blends are broader than those of pure ring solutions, as in the steady state results. This is apparent even at small blend ratios of linear chains, $f_{R}=0.83$. As the blend ratio of rings decreases further, the distribution broadens similar to the steady state case. However, we find another feature in startup of flow at $f_{R}=0.17,0.02$ which is absent at steady state. There is a small population of rings which are highly stretched to $\Delta x/L\approx 0.55-0.65$ for $\epsilon=2-5$. In a majority blend of linear chains ($f_{R}=0.17$) this effect is nearly negligible, but for a single ring in a linear semidilute background ($f_{R}=0.02$), the population is more pronounced. The transient distribution even appears shifted to the right of the steady state at $\epsilon=4-5,f_{R}=0.02$. In Section IV, we show that this occurs due to hooking of linear polymers with rings. Overall, conformational distributions are consistent with trends in the average fluctuation quantity $\langle\delta\rangle$ and observations from individual trajectories. Thus, we find a non-trivial dependence of dynamic ring conformations on blend ratio and flow strength despite the collapse of steady ensemble average fractional extension data. In the remainder of this work, we seek to understand these results on the basis of transient intermolecular interactions.
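The distributions $P(\Delta x/L)$ and the high-extension tail discussed above can be computed directly from sampled fractional extensions. The sketch below is illustrative only; the function names, the bin count, and the $\Delta x/L>0.5$ tail threshold are our own choices.

```python
import numpy as np

def extension_distribution(ext, bins=50):
    """P(dx/L): normalized histogram of fractional extension on [0, 1],
    with density=True so the histogram integrates to one."""
    p, edges = np.histogram(ext, bins=bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, p

def stretched_fraction(ext, threshold=0.5):
    """Fraction of sampled conformations in the tail dx/L > threshold."""
    return (np.asarray(ext) > threshold).mean()

# Demo on hypothetical samples: 9 coiled conformations and 1 stretched
demo = [0.1] * 9 + [0.6]
centers, p = extension_distribution(demo)
tail = stretched_fraction(demo)
```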
IV Discussion We consider two mechanisms for ring conformational fluctuations: i) intermolecular topological constraints, in which a ring polymer is ‘hooked’ or ‘threaded’ by a linear polymer or another ring, and ii) intermolecular hydrodynamic interactions, which fluctuate in space and time with the local concentration of linear and ring polymers. As discussed in the introduction, the first mechanism is well motivated by the observation of unique dynamics and rheology in ring polymer solutions and melts from bulk measurements, Kapnistos et al. (2008); Huang et al. (2019) single molecule experiments, Robertson and Smith (2007); Chapman et al. (2012) and molecular simulation. Tsalikis and Mavrantzas (2014); O’Connor et al. (2020) A focus of this work is to determine if flow increases the frequency and importance of ring-linear threading such that the ring dynamics are significantly altered, despite the fact that at equilibrium ring dynamics appear to be unaffected by threading. The second mechanism is motivated by the observation of flow instabilities in low Reynolds number extensional flow of non-dilute polymer solutions. Previous BD-HI simulations have focused on molecular conformations and solution rheology, but we suggest that similar flow modification as observed in experiment and continuum simulations also emerges in molecular simulation. We visualize these flow profiles as a function of time and make direct connections to fluctuations in ring polymer conformations. IV.1 Intermolecular hooking We observe three types of topological interactions: ring-linear hooks, ring-ring hooks, and linear-linear hooks. We find ring-linear hooks to be the most common. Linear-linear hooks are less common and thus require a larger ensemble to quantify. We use data from simulation of pure linear polymer solutions at 1 $c^{*}$ and 3 $c^{*}$ previously published by the authors Young and Sing (2019) while forgoing the blend cases due to insufficient sampling.
Ring-ring hooks are the least common, such that we are unable to quantify their frequency. IV.1.1 Observations of ring-linear and ring-ring hooks We first show simulation snapshots of ring-linear and ring-ring hooks to provide a qualitative description of the conformational dynamics. For examples of linear-linear hooks, we refer to our previous work. Young and Sing (2019) In the ring-linear case, a polymer is loosely threaded through a ring polymer upon the startup of flow (Fig. 8b). At equilibrium, the excluded volume force between the ring and linear polymer is weak due to the low concentration. At low strain $\epsilon=2.2$, the polymers have not yet collided so the ring stretches at approximately the same rate as the ensemble average (Fig. 8c). Upon further strain accumulation, however, the crossing constraint sets in, and the linear chain adopts a folded conformation around the outside loop of the ring, causing it to retract and reorient (Fig. 8d). The ring then becomes fully reoriented and overshoots the ensemble average fractional extension before the hook is released and the ring retracts to average levels of extension (Fig. 8e,f). For demonstrative purposes, we have highlighted a trajectory in which the ring both retracts and stretches due to the topological constraint. However, this is a rare occurrence. The majority of ring-linear hooks involve a linear polymer hooked ‘through’ the ring (Supplementary Information Movie X) rather than ‘around’, in which case the initial retraction does not occur. Additionally, in this example the linear polymer is already threaded with the ring upon startup of flow. This is the case in the majority of hooks we observe, although it is not required.
We include in the Supplemental Information a movie for a trajectory in which an initially unthreaded linear chain adopts a folded conformation, advects into the closed contour of the ring, forms a topological hook that causes an overshoot in ring extension, and then advects away from the ring once the linear chain becomes fully stretched and the constraint is released. We note that rings tend to hook only with linear chains that adopt ‘folded’ or ‘dumbbell’ conformations. Perkins, Smith, and Chu (1997) Linear chains adopting ‘kinked’ or ‘coiled’ conformations lack a hooked structure at their free ends which may advect into the ring. These latter conformations can still thread through the ring, but in this case the excluded volume force is weak because the low polymer concentration allows the ring to deform and relax the constraint. Thus, there appear to be aspects of ‘molecular individualism’ as first considered for dilute solution polymer stretching which are relevant in the semidilute case. Due to limited sampling and high computational expense, we are unable to study these problems in further detail here. The influence of Brownian noise is of particular interest for further study, given that we observe ring-linear threaded conformations upon the startup of flow which we would expect to form a strong hook, but do not because the linear polymer stretches before the two polymers collide. This connects to concepts of ‘molecular predestination’, which have been successful in predicting stretching trajectories upon startup of flow on the basis of initial conformations. Larson et al. (1999) Next we consider an example of ring-ring hooking. A weakly stretched ring polymer advects towards a stretched ring polymer with position fixed at the stagnation point (dark-orange and light-orange rings respectively in Fig. 9b). The unstretched advecting ring then loops around the stretched ring and pulls it towards a coiled conformation (Fig. 9c,d).
As strain accumulates, the constraint tightens and the ring fixed at the stagnation point adopts a hooked conformation as the advecting ring stretches beyond the ensemble average extension (Fig. 9e). Finally, both rings stretch beyond the ensemble average as the constraint is released (Fig. 9f), followed by relaxation to average levels of extension in both rings. As opposed to ring-linear hooks, we find ring-ring hooks form between rings which are initially unthreaded at equilibrium. After the ring ensemble average extension plateaus at $\epsilon\approx 4-5$, rings can collide and form strong topological constraints which are not encountered at equilibrium. The excluded volume forces associated with ring-ring hooks are considerably weaker than those of ring-linear hooks. The tension in the hook is related to the contour length via the flow gradient across the span of the ‘hooking’ polymer. Therefore, at half the contour length of the linear chains, the ‘hooking’ rings impose a weaker constraint and the extra stretching of the ‘hooked’ ring is small. It is not necessary that one of the rings is fixed at the stagnation point, as we show in another example in the Supplemental Information. However, we speculate that in the case of weak topological constraints, fixing the position of one polymer at the stagnation point may increase the frequency of intermolecular hooks. IV.1.2 Quantifying hooking behavior We now establish a procedure for detecting hooks and quantifying how often they occur. For the ring-linear and linear-linear hooks, we detect constraints by a combination of topological strand crossings and excluded volume interactions between hooked chains.
In particular, we use the writhe for two polymer chains $\alpha$ and $\beta$ $$\begin{split}Wr_{\alpha\beta}&=\frac{1}{4\pi}\int_{C_{\alpha}}\int_{C_{\beta}}d\bm{r}_{1}\times d\bm{r}_{2}\cdot\frac{\bm{r}_{1}-\bm{r}_{2}}{\left|\bm{r}_{1}-\bm{r}_{2}\right|^{3}}\\ &\approx\frac{1}{4\pi}\sum_{i=1}^{N_{\alpha}}\sum_{j=1}^{N_{\beta}}(d\bm{r}_{i}\times d\bm{r}_{j})\cdot\frac{\bm{r}_{i}-\bm{r}_{j}}{r_{ij}^{3}}\end{split}$$ (18) where the line integrals over the continuous polymer curves $C_{\alpha}$ and $C_{\beta}$ are approximated by sums over the discrete bead positions, with $d\bm{r}_{i}$ and $d\bm{r}_{j}$ the bond vectors. In the case of a ring, the curve is closed, whereas for a linear chain it is open. While the writhe gives a measure of topological crossings, we are primarily interested in cases where these threads drive changes in the polymer conformation through excluded volume interactions. Thus, we also consider the flow-direction total excluded volume force between the two chains $$F_{x,\alpha\beta}^{EV}=\sum_{i=1}^{N_{\alpha}}\sum_{j=1}^{N_{\beta}}F_{x,ij}^{EV}$$ (19) with the excluded volume force between segments given by Eq. 4. We set a threshold writhe $|Wr_{h}|=1$ and excluded volume force $|F_{x,h}^{EV}|=10kT/a$, which when both are met indicate the presence of a hook. We then determine the writhe and excluded volume force between all chains on a pairwise basis for the simulation trajectories presented in the previous section and assign them a hook status $h_{\alpha\beta}(t)$ as a function of time. We note that it is challenging to detect ring-ring hooks in this procedure because one half of the penetrating ring will form a positive crossing and the other half a negative crossing, yielding a writhe of zero. Instead, we use only the excluded volume condition for ring-ring hooks and confirm by visual observation.
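The discretized Gauss double integral can be evaluated directly from bead positions using bond vectors and bond midpoints. The following is a minimal sketch (the function name `pairwise_writhe` is ours, not from the production code); as a sanity check, it recovers $|Wr|\approx 1$ for two linked unit circles (a Hopf link) and $\approx 0$ for two well-separated, unlinked circles.

```python
import numpy as np

def pairwise_writhe(ra, rb, closed_a=True, closed_b=True):
    """Discretized Gauss double integral between chains a and b:
    bond vectors stand in for dr, bond midpoints for r."""
    def segments(r, closed):
        d = (np.roll(r, -1, axis=0) - r) if closed else (r[1:] - r[:-1])
        return d, r[: len(d)] + 0.5 * d          # bonds and midpoints
    da, ma = segments(np.asarray(ra), closed_a)
    db, mb = segments(np.asarray(rb), closed_b)
    wr = 0.0
    for di, mi in zip(da, ma):
        sep = mi - mb                            # r_1 - r_2 for all j
        dist3 = np.sum(sep * sep, axis=1) ** 1.5
        wr += np.sum(np.sum(np.cross(di, db) * sep, axis=1) / dist3)
    return wr / (4.0 * np.pi)

# Hopf link: unit circle in the xy-plane threaded by one in the xz-plane
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
zero = np.zeros_like(theta)
ring_a = np.stack([np.cos(theta), np.sin(theta), zero], axis=1)
ring_b = np.stack([1.0 + np.cos(theta), zero, np.sin(theta)], axis=1)
wr_linked = pairwise_writhe(ring_a, ring_b)
wr_unlinked = pairwise_writhe(ring_a, ring_b + np.array([4.0, 0.0, 0.0]))
```

For two closed curves this double sum approximates the (integer) linking number, which is why thresholding at $|Wr_{h}|=1$ is a natural hook criterion.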
We find ring-ring hooks are nearly negligible at 1 $c^{*}$, in agreement with equilibrium melt simulations showing ring-ring threads are significantly less probable than ring-linear threads. Tsalikis, Mavrantzas, and Vlassopoulos (2016) In particular, there are only $\sim$10 ring-ring hooks among the $\sim$1500 ring trajectories collected at all strain rates and blend fractions $f_{R}>0.02$ excluding the trace ring case. The topological constraint analysis reveals that once linear chains are fully stretched, ring-linear hooks are negligible. We explain this by the fact that a linear chain must have a significant end retraction of at least $\sim$1/3 of its steady state extension to be able to hook with a ring as in the startup flow case. As shown in Fig. 4, conformational fluctuations of linear polymers for $\textrm{Wi}_{L}>1$ are small due to the large flow gradient across their span. Because of the difference in ring and linear polymer relaxation times ($\tau_{L}\approx 3.5\tau_{R}$), the strain rates applied to the blends correspond to $\textrm{Wi}_{L}=1.4-11.0$. Therefore, we focus on the startup of flow $t=t_{0},\epsilon=0$ to $t=t_{ss},\epsilon=8$. In particular, we consider the number of ring-linear hooks per ring polymer, defined as $$\langle n_{h}\rangle_{t,n}=\frac{1}{n_{run}n_{R}(t_{ss}-t_{0})}\sum_{i=1}^{n_{run}}\sum_{\alpha=1}^{n_{R}(i)}\sum_{\beta=1}^{n_{L}(i)}\sum_{t_{0}}^{t_{ss}}h_{\alpha\beta}(i,t)$$ (20) where the average is taken over time for the ensemble of ring-linear pairs at a given blend fraction and strain rate. The first summation is taken over the number of independent simulation runs. This definition accounts for the possibility of multiply-hooked rings, although they are not found in our simulations. Fig. 10 shows the results of the procedure as a function of flow rate and blend fraction.
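Given pairwise writhe and excluded volume force trajectories, the hook status and the average of Eq. (20) reduce to thresholding and a normalized count. The sketch below is our own illustration; the function name and the array layout `(n_run, n_R, n_L, n_t)` are hypothetical conveniences.

```python
import numpy as np

def average_hooks_per_ring(writhe, f_ev, wr_h=1.0, f_h=10.0):
    """<n_h> per Eq. (20): mean number of ring-linear hooks per ring,
    averaged over runs and the startup time window. Inputs have shape
    (n_run, n_R, n_L, n_t); a pair is hooked at a frame when both the
    writhe and flow-direction EV force exceed their thresholds."""
    hooked = (np.abs(writhe) >= wr_h) & (np.abs(f_ev) >= f_h)
    n_run, n_R, _, n_t = hooked.shape
    return hooked.sum() / (n_run * n_R * n_t)

# Demo: 1 run, 2 rings, 3 linear chains, 4 frames; one pair hooked twice
wr = np.zeros((1, 2, 3, 4))
fe = np.zeros((1, 2, 3, 4))
wr[0, 0, 0, :2] = 1.5
fe[0, 0, 0, :2] = 12.0
n_h_demo = average_hooks_per_ring(wr, fe)
```

Summing the boolean array over linear partners before averaging would give the same result while also exposing multiply-hooked rings explicitly.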
First we note the small quantitative values, reaching a maximum of $\langle n_{h}\rangle\approx 0.03$ for the case of a single ring in a linear background, indicating that at a given time during startup of flow, only 3% of rings are hooked. Therefore, it appears that topological interactions alone cannot explain the large ring conformational fluctuations observed in simulation. Detailed quantitative study of the transient evolution of hook density and duration remains challenging, although several trends emerge. A clear result is that the average number of hooks increases with the blend fraction of linear chains. This is expected, as rings form hooks with linear chains more readily than other rings. Majority ring polymer blends $f_{R}=0.83,0.50$ exhibit nearly the same number of ring-linear hooks as linear-linear hooks in a pure linear polymer solution at the same concentration. For majority linear polymer blends $f_{R}=0.17,0.02$, the average number of hooks increases by as much as a factor of 5 at low flow rates. We emphasize that $\langle n_{h}\rangle$ is determined on a per ring basis, such that blends at lower $f_{R}$ do not necessarily exhibit a larger number of ring-linear hooks per solution volume. Another clear trend is the decreasing number of ring-linear hooks for solutions at 1 $c^{*}$ with flow rate. This is in distinct contrast to the pure linear polymer solution at 3 $c^{*}$. We suggest that a crossover in the behavior of topological constraints in strong flows occurs in the range of $c/c^{*}\approx 1-3$. At 1 $c^{*}$, linear polymers threaded through a ring upon startup of flow tend to become fully stretched on a faster time scale than a hook forms for $\textrm{Wi}_{R}>2$. Only initial configurations which are ‘predestined’ to hook due to the threading of a folded linear polymer through a ring, for example in Fig. 8, form constraints. At lower flow rates, linear chains remain relatively coiled, allowing the constraint time to deform the ring.
In the pure linear polymer solution at 3 $c^{*}$, the stretching of polymers to escape the constraint is limited by the surrounding chains. The entanglement concentration for the linear polymer solutions is $c_{e}\approx$ 8-10 $c^{*}$, and the equilibrium diffusion follows unentangled scaling at 3 $c^{*}$. Young, Marvin, and Sing (2018) Therefore, our results show diverse behavior as a function of polymer topology, concentration, and flow rate that is not captured by current theories. We note that Fig 10 does not consider the magnitude of the excluded volume force imposed by the constraint or the resulting change in ring extension. Further studies with higher molecular weight polymers may reveal the functional effect of ring-linear hooks upon ring conformation and solution stress. With the current simulation data, quantitative comparison of simulation results to experiments remains challenging. Given that the entanglement concentration is dependent on the polymer molecular weight and flexibility, Rubinstein and Colby (2003) the higher molecular weight DNA could exhibit more topological constraints than the simulated polymers. Additionally, while the single molecule experiments expose the trapped ring to a total strain of $\epsilon\approx 20-25$, the background is continuously flushed with fresh polymer solution that is exposed to a strain of $\epsilon\approx 7-10$ before reaching the stagnation point. Previous single molecule experiments and BD simulations have shown that a subpopulation of linear chains can remain coiled up to $\epsilon\approx 12-15$. Based on our observation that ring-linear hooks involving a stretched linear chain are nearly negligible, this could increase the probability of forming new constraints in the experimental system at steady state. Within the context of molecular simulations utilizing periodic boundary conditions, this case is challenging to reproduce.
Fluctuating immersed boundary simulations may be able to reproduce the boundary conditions of the experiment while retaining HI, but the simulation would exceed our computational resources given the number of polymers and level of fine-grained resolution required. IV.2 Flow modification by intermolecular hydrodynamic interactions We now consider the influence of intermolecular hydrodynamic interactions on ring conformational fluctuations. While rings adopt diverse shapes that cannot be completely described by the fractional extension in the flow direction, we observe several characteristic motions we refer to as i) overstretching, ii) retraction, and iii) tank-treading. The first two cases refer to fluctuations in fractional extension of more than three times the average fluctuation quantity quantified in Section III.4, $|\Delta x(t)/L-\langle\Delta x/L\rangle|>3\langle\delta\rangle$, either above or below the steady state average respectively. The third motion refers to rotation of the ring along its contour in the flow-neutral $z$-direction. These dynamics are observed at steady state $\epsilon_{ss}>8$ in the absence of topological constraints. We suggest that the fluctuations can be explained by a modification of the applied flow by intermolecular HI. In particular, we define the total effective flow surrounding a tagged ring polymer as $$\bm{v}^{*}(\bm{r}_{i})=\nabla\textbf{v}\cdot(\bm{r}_{i}-\bm{r}_{CoM})+\sum_{j}^{\prime}\textbf{D}(\bm{r}_{ij})\bm{F}_{j}$$ (21) where $\bm{r}_{i}$ is the displacement from the tagged ring center of mass $\bm{r}_{CoM}$, $\bm{r}_{ij}=\bm{r}_{j}-\bm{r}_{i}$, and $\bm{r}_{j}$ and $\bm{F}_{j}$ are the positions and total conservative force respectively of polymer bead $j$. The first term accounts for the applied planar extensional flow in the frame of reference of the ring center of mass.
Because the applied flow is homogeneous and unbounded, we can define a new origin $\bm{r}_{CoM}$ without qualitatively changing the flow measurement to provide a more intuitive flow field. The second term accounts for the polymer disturbance velocity, where the sum is over polymer bead indices and the prime indicates that beads on the tagged ring are excluded. Intramolecular HI, or the ring’s response to the flow, are excluded in order to visualize the forces which drive ring fluctuations. In the retraction case, this is not necessary, as the ring is unstretched and the tagged ring contribution to the disturbance is weak. In the overstretching and tank-treading cases, however, the ring can reach highly stretched conformations, yielding strong restoring forces and flow disturbances. The intermolecular HI which drive ring stretching are thus largely cancelled out and not easily visualized when the intramolecular component is included. The total flow $\bm{v}^{*}$ is then evaluated on a uniform rectilinear mesh grid at locations $\bm{r}_{i}$ surrounding the ring center of mass. We find that snapshots of the flow field on the time scale of a single time step $dt=5\times 10^{-4}\tau_{0}$ are noisy due to Brownian motion. To overcome this, we perform additional simulations which sample the disturbance velocity every 50 time steps for an interval of $50\tau_{0}$ or $0.17\tau_{R}$. Polymer advection is minimal on this time scale and the resulting streamlines are a close approximation to the instantaneous flow. The flow sampling increases the computational expense of the simulation by $An_{p}N$, where $A$ is a constant associated with evaluating the RPY tensor for a grid point-bead pair and $n_{p}$ is the number of grid points. As $n_{p}$ exceeds $N$ the simulation becomes computationally expensive, so we consider a volume extending only slightly past the extents of the tagged ring.
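As a concrete illustration of Eq. (21), the sketch below evaluates the applied planar extension plus the bead disturbance summed with the far-field Rotne-Prager-Yamakawa mobility tensor (valid for separations beyond two bead radii). Parameter values, units, and names here are illustrative assumptions, not the production simulation code:

```python
import numpy as np

def rpy_tensor(r_vec, a=1.0, kT=1.0, eta=1.0):
    """Far-field Rotne-Prager-Yamakawa mobility tensor (for r > 2a)."""
    r = np.linalg.norm(r_vec)
    rhat = np.outer(r_vec, r_vec) / r**2
    pref = kT / (8.0 * np.pi * eta * r)
    return pref * ((1.0 + 2.0 * a**2 / (3.0 * r**2)) * np.eye(3)
                   + (1.0 - 2.0 * a**2 / r**2) * rhat)

def effective_flow(grid, beads, forces, grad_v, r_com):
    """Eq. (21): applied flow about the ring COM plus the bead
    disturbance summed with the RPY tensor. `beads`/`forces` exclude
    the tagged ring (the primed sum)."""
    v = np.empty_like(grid)
    for k, r_i in enumerate(grid):
        v_k = grad_v @ (r_i - r_com)           # applied planar extension
        for r_j, f_j in zip(beads, forces):    # polymer disturbance
            v_k += rpy_tensor(r_j - r_i) @ f_j
        v[k] = v_k
    return v

# Planar extension with rate edot: vx = edot*x, vy = -edot*y, vz = 0.
edot = 0.1
grad_v = np.diag([edot, -edot, 0.0])
grid = np.array([[10.0, 0.0, 0.0]])           # one mesh point
beads = np.array([[0.0, 5.0, 0.0]])           # one off-ring bead
forces = np.array([[0.0, -1.0, 0.0]])         # its restoring force
v_star = effective_flow(grid, beads, forces, grad_v, r_com=np.zeros(3))
```

The double loop makes the $An_{p}N$ cost scaling explicit: each of the $n_{p}$ grid points requires one RPY evaluation per bead.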
Visualization of the full 3D flow field is an interesting area for further study but is unnecessary for understanding the dynamics of individual ring polymers. The effective velocity is normalized by a reference value $U=\textrm{max}(|\nabla\textbf{v}\cdot(\bm{r}_{i}-\bm{r}_{CoM})|)$ corresponding to the maximum applied flow magnitude on the mesh. For positions further from the ring, the applied flow becomes stronger relative to the polymer disturbance, but these flows are not relevant to the ring conformation. We now consider several specific examples of the characteristic motions described above and visualize the surrounding flow fields. IV.2.1 Overstretching In Fig. 11a we show the transient fractional extension of a ring in a majority linear blend $f_{R}=0.17$ at $\textrm{Wi}_{R}=1.6$ which fluctuates around the ensemble average extension. At $\epsilon\approx 12.0$, the ring becomes highly stretched, with $\Delta x(t)/L-\langle\Delta x/L\rangle>3\langle\delta\rangle$. We plot the streamlines for the effective velocity in the $xy$-plane at this time superimposed with the ring conformation and selected linear conformations in Fig 11b. The effective flow at the ring end $\tilde{x}\approx 50$ is $\sim$ 4.5 times stronger than the applied flow alone, causing the ring to stretch far beyond its average extension. We see that this clearly coincides with the position of a linear chain end. At the opposite end of the ring, $\tilde{x}\approx-50$, the ring is similarly stretched by a linear chain end, although this is not visible in the displayed slice because the ring is slightly stretched in the $z$-direction. Thus, we find that local fluctuations in the polymer concentration drive flow modifications and determine ring conformational dynamics. In particular, the proximity of linear chain ends to ring ‘ends’ can cause overstretching. While the highlighted linear chain ends do not completely reproduce the total flow, they contribute the essential features of strong extension.
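The $3\langle\delta\rangle$ criterion used to label overstretching and retraction events can be applied directly to a fractional-extension time series; a sketch with hypothetical numerical values:

```python
import numpy as np

def classify_events(ext, mean_ext, delta, k=3.0):
    """Label overstretching (+1) and retraction (-1) events in a
    fractional-extension time series using the k*<delta> threshold.

    ext      : array of Delta x(t)/L samples
    mean_ext : steady-state average <Delta x/L>
    delta    : average fluctuation quantity <delta>
    """
    dev = ext - mean_ext
    labels = np.zeros(len(ext), dtype=int)
    labels[dev > k * delta] = 1     # overstretching
    labels[dev < -k * delta] = -1   # retraction
    return labels

# Illustrative series: one excursion above and one below threshold.
ext = np.array([0.50, 0.52, 0.80, 0.49, 0.20])
print(classify_events(ext, mean_ext=0.5, delta=0.05))  # [ 0  0  1  0 -1]
```

Tank-treading, by contrast, cannot be detected from the extension alone and requires tracking bead motion along the ring contour.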
We note that only a section of the linear chains are shown; the full conformations extend in the flow direction away from the ring. If we approximate the linear chains as dumbbells, the importance of the linear chain ends becomes clear. The stresslet portion of the force dipole leads to a strong disturbance at the chain ends, while the disturbance at the chain center goes to zero. IV.2.2 Retraction and tumbling Next we consider an example of ring retraction at $f_{R}=0.5,\textrm{Wi}_{R}=1.6$. The ring is initially stretched in the flow direction at $\epsilon\approx 13$, at which point a rotational flow field emerges (Fig 12b). The flow causes the ring to retract and undergo ‘end-over-end’ tumbling, passing through a minimum in fractional extension at $\epsilon\approx 15.3$ (Fig 12c). Finally, the flow surrounding the ring returns to approximately planar extension with a stagnation point laterally displaced to $\tilde{x}\approx 40$ (Fig 12d). The ring then completes reorientation in the flow direction and stretches back to average levels of extension. To facilitate visualization of this process, we have colored one half of the ring conformation grey and the other blue, showing that from $\epsilon=14.3$ to $\epsilon=16.1$ the two halves essentially exchange places. Not all cases of ring retraction involve end-over-end tumbling. More often, the ring extension decreases by only $\Delta x(t)/L-\langle\Delta x/L\rangle\approx 1-2\langle\delta\rangle$ before restretching. This is due to the degree of flow modification, with weaker retractions corresponding to a mixed flow between rotation and extension, and end-over-end tumbling corresponding to rotational flow. The molecular mechanism for flow modification in this case is not clear. Both spatial and temporal variations of the flow field are important, due to advection of the ring and evolution of the microstructure respectively.
We speculate that the rotational flow observed in Fig 12b,c could emerge due to a superposition of rotlet portions of the linear chain force dipoles. Similarly, the mixed flows which drive smaller retractions could be due to a more complex superposition of stresslet fields from linear chains advecting in opposite directions with respect to the extension axis. However, we are not able to identify any minimal set of linear chains which reproduce the total flow as in the overstretching case seen in Fig 11. IV.2.3 Neutral direction gradients and tank-treading In both the overstretching and tumbling cases, we see that the ring is slightly stretched in the $z$-direction, which is neutral to the flow. This is not unexpected, as previous simulations have shown that the coupling of chain architecture and intramolecular HI in rings drives the chain to an open-loop conformation. Hsiao, Schroeder, and Sing (2016) This stretching of the ring is the cause of the mild coil-stretch transition and the delay in the critical Weissenberg number compared to linear polymers. In the semidilute case, this may be complicated by the screening of HI, which causes rings to compress in the neutral direction, as seen in dilute freely-draining simulations. Alternatively, the concentration gradients which lead to the mixed flows seen in the extension-compression plane in Figs. 11 and 12 could introduce flow gradients in the neutral direction which are absent in the dilute case. In Fig 13, we highlight a case where the ring is observed to ‘tank-tread’, meaning the ring rotates along its contour while remaining stretched in the flow direction. We include a simulation movie in the Supporting Information. Tank-treading has been reported for dilute rings in shear flow, where the driving force for rotation is the shear gradient on ring polymers adopting elliptical conformations in the flow-gradient plane.
Chen, Chen, and An (2013) Due to the deformability of the chain, tank-treading in rings is not continuous but coexists with the end-over-end tumbling motions first observed in linear polymers. For high Wi, clear instances of tank-treading were resolved via an angular autocorrelation function, and a scaling with Wi was found. In this work, the dynamics are markedly different. The applied flow is planar extensional, such that there is no component of rotation and the ring is compressed in the $y$-direction. The tank-treading motion can emerge only due to a combination of intramolecular HI driving the ring open in the neutral direction and intermolecular HI introducing a driving force for rotation. The origin of rotation is clearly seen in Fig 13a, where the polymer disturbance velocity causes rotational flow in the $xz$ (extension-neutral) plane. We note that the ring does retract slightly during the rotation, as seen in the Supporting Information movie, so the motion is not purely tank-treading. Additionally, we see that at a later time, when the ring has rotated around its contour by $\approx 90^{\circ}$, the rotational field has returned to primarily extensional flow, although with notable gradients still in the $z$-direction at the ring ends, $\tilde{x}\approx\pm 40$ (Fig 13b). This evolution of the flow field is directly connected to the motion of neighboring chains. In the simulation movie we highlight three linear polymers which primarily drive the flow. At $\epsilon=18.1$ (the start of the movie), two linear chains appear to overlap in the extension-compression plane but are separated in the neutral direction by approximately the ring stretch, $\Delta z_{R}\approx z_{L1}-z_{L2}$. The linear chains are highly stretched, with nearly constant $z$ positions along their contours. The two linear chains are advecting in opposite directions with respect to the axis of extension due to their center of mass positions in the applied flow.
The superposition of their disturbance velocities leads to the rotational field. Once the linear chains have advected away from the ring, their disturbance decays and the rotational field dissipates, leading to the gradient in the $z$-direction seen in Fig 13b. A third chain is highlighted which plays a minor role, and other chains which contribute to the total flow are made transparent for visual clarity. Note that at two points during the movie the polymer positions appear to undergo a large instantaneous translation. This is simply an artifact of visualizing the periodic boundary conditions in a deforming simulation box and does not represent the dynamics as they occur in simulation. More generally, the role of flow gradients in the $z$-direction is challenging to characterize. Tank-treading is rare because the spatiotemporal variations of the flow occur on time scales shorter than the tank-treading period. We conclude that the conformational fluctuations, as measured in a coarse-grained sense by the average fluctuations in the fractional extension $\langle\delta\rangle$, are due to diverse features of the flow which arise from the polymer disturbance velocity. As the blend fraction of rings $f_{R}$ decreases, the flow modification increases due to the larger disturbance velocity of the highly stretched linear chains as compared to the lightly stretched rings. However, isolating well defined mechanisms for the dynamics on the basis of intermolecular interactions requires further study. Lower concentration solutions, which have been shown by the authors to exhibit similar fluctuations, Zhou et al. (2019) may be more tractable. A possible direction is to connect local concentration changes to conformational fluctuations. A challenge is that it is not just the concentration but also the direction of the disturbance velocity that determines whether a polymer retracts or stretches.
Thus, a model must include the local concentration of polymer segments as well as the orientation of the restoring force on the segments. For example, in a field-based model, the relevant spatial and temporal distribution of segments is $\bm{\Psi}(\bm{r},t,c,\bm{f})$. IV.2.4 Bulk flow modification Interestingly, we find that many of the flow structures observed in simulation are qualitatively consistent with particle tracking velocimetry measurements of flow in non-dilute polymer solutions in microfluidic devices. In particular, Haward et al. have found transient rotational and shear flows in an optimized-shape cross-slot extensional rheometer (OSCER) for polymer solutions at $c/c^{*}=0.1-0.4$ and negligible Reynolds number Re. Haward, McKinley, and Shen (2016) The authors reported two instabilities in the applied planar extensional flow arising from the extra polymer stress. In particular, they find transient lateral displacement of the stagnation point at moderate $\textrm{Wi}\approx 0.75$ and transient global flow asymmetry at $\textrm{Wi}\approx 2.0$. In this work, we have focused on the flow profile in the Lagrangian frame of a tagged ring. However, if we consider the Eulerian frame, we observe the stagnation point displacement (Fig 14a) and transient flow asymmetry instabilities (Fig 14b,c) reported by Haward et al. at consistent Wi values. Haward, McKinley, and Shen (2016) There are qualitative differences in simulation because the flow is unbounded. Most notably, the flow asymmetry for $\textrm{Wi}>2$ decays at distances far from the stagnation point. Additionally, polymers in the OSCER experience finite strain $\epsilon\approx 7$ before flowing out of the hyperbolic region, and the accumulated strain is dependent on the $y$ position. In simulation, all fluid elements experience the same accumulated strain, which in principle is unlimited.
While the resemblance between the flow structures emerging in simulation and experiment is striking, we do not draw further conclusions because of the possibility of finite size effects imposed by using a relatively small periodic simulation box as compared to the polymer size. In the current study, a fine-grained polymer model was required to resolve the influence of topological interactions. Due to this limitation, increasing the system size further is computationally prohibitive and finite size effects cannot be directly investigated. However, if we neglect chain crossing constraints, coarse-grained bead-spring polymer models could be used to access larger length scales. Although originally used in dilute BD simulations, coarse-grained models have also proved useful in predicting detailed features of the solution stress, Prabhakar et al. (2017) and they may also be suitable for investigating the influence of intermolecular interactions on flow instabilities. IV.3 Test case: a bidisperse linear blend The observation that ring polymers exhibit large conformational fluctuations due to size and shape differences with the linear portion of the blend raises the question: what is the influence of chain architecture? Is there a unique feature of the ring polymers that causes these dynamics, or is it simply because their contour length is half that of the linear chain? To investigate this topic, we have performed simulations of bidisperse linear polymer solution blends of short $N_{S}=75$ and long $N_{L}=150$ chains. In this case, the contour length of the short chains is matched to the rings, $L_{S}=L_{R}=0.5L_{L}$, and the relaxation time is comparable ($\tau_{R}=274\tau_{0}$ vs $\tau_{S}=322\tau_{0}$ in pure solution $f_{R}=1$ and $f_{S}=1$ respectively at $c_{L}^{*}$). The polymer concentration by mass $c=N/V$ is matched to the ring-linear blend simulations.
We define the mass fraction of short linear chains $f_{S}=n_{S}N_{S}/(n_{S}N_{S}+n_{L}N_{L})$, and simulations are performed at $f_{S}=1,0.5$. Essentially, each ring polymer is replaced with two short linear chains, all chains are initialized as non-overlapping random walks, and the TEA parameters are modified appropriately. Otherwise, details of the simulation are the same as described in Section II. We define a Weissenberg number for the short linear polymers $\textrm{Wi}_{S}=\dot{\epsilon}\tau_{S}$, where $\tau_{S}$ is the short polymer relaxation time at the appropriate blend ratio, determined by the same procedure as for the ring-linear blends. In Fig. 15 we compare the results of ring-linear blends and bidisperse linear blends at varying blend fraction and flow rate Wi, where the relaxation time $\tau_{R}$ or $\tau_{S}$ at the relevant blend fraction is used. The two systems are in qualitative and in some cases quantitative agreement. The primary result is that the fluctuation quantity for the short linear chains follows the trend of the rings. In the pure short linear polymer solution $f_{S}=1$, $\langle\delta\rangle$ is peaked at $\textrm{Wi}=0.5$ and monotonically decreases with Wi, in agreement with the reference data. Young and Sing (2019) In the bidisperse linear blend $f_{S}=0.5$, however, the fluctuation quantity matches the pure solution case at $\textrm{Wi}\approx 0.5$, increases up to $\textrm{Wi}\approx 1.5$, and decreases for stronger flows. We also find that the fluctuations are quantitatively larger than in the ring-linear blend case. A clear explanation for this behavior is that at low Wi, the dynamics are weakly dependent on blend fraction. By nature, ring polymer conformational fluctuations are smaller due to the lack of free ends, so $\langle\delta\rangle$ for rings simply starts at a lower value. The end-free constraint is relevant in strong flows as well.
For example, short linear polymers in the bidisperse blend can undergo end-over-end tumbling due to flow gradients in the $z$-direction while remaining significantly compressed in the $y$-direction (see Supplementary Information movie for an example at $f_{S}=0.5,\textrm{Wi}_{S}=3.5$). This conformational pathway is unavailable to rings, however, which retract weakly or rotate along their contours as seen in Fig. 13, but cannot undergo ‘end-over-end’ tumbling while fully compressed in the $y$-direction. Instead, rings must swell in the $y$-direction to completely tumble (Fig 12), a pathway that is suppressed by the compressional component of the applied planar extension. Pure linear polymer solutions at matched contour length with the rings ($N_{S}=75,N_{R}=150$) are stretched to the same extension as rings at the same Wi. However, the bidisperse linear blend $f_{S}=0.5$ is less stretched. Interestingly, a similar result is obtained in experiment for ring-linear blends at $f_{R}=0.83$ vs $f_{R}=0.5,0.17$. We explain this result by the larger fluctuation quantity $\langle\delta\rangle$ for the bidisperse linear blends, which is close to the quantitative values for rings observed in experiment. At high Wi, chains cannot stretch far beyond their average extension due to finite extensibility. Therefore, conformational fluctuations at high Wi correspond to retractions. For sufficiently large $\langle\delta\rangle$, this results in a lower $\langle\Delta x/L\rangle_{ss}$. The extensional viscosity of the pure linear solution $f_{S}=1$ is slightly greater than that of the pure ring solution, while the 50/50 blends match quantitatively. This supports our previous conclusion that the stress arises from the polymer stretch. The small difference at $f_{S}=1,f_{R}=1$ is due to the difference in fractional extension, whereas for the blends the fractional extension matches.
Intermolecular hooking is virtually absent in the bidisperse linear blends $f_{S}=0.5$, supporting the conclusion that in both blends the increase in $\eta_{r}$ with increasing fraction of $N_{L}=150$ linear polymers is due to the higher stretch. Overall, the bidisperse linear blend results show that the large conformational fluctuations we observe for the ring-linear case may be a more general feature of polymer solution blends and polydisperse solutions. Polydisperse solutions are often modeled with two-state models, where polymers are assumed to be either stretched or coiled based on their relaxation time, and intermolecular interactions are neglected. Our results show that intermolecular interactions in semidilute solutions under strong flows are not screened. In fact, they can lead to qualitatively different dynamics. The introduction of non-linear polymer architectures further complicates the issue. While we observe topological constraints to be minimal in simulation, it is possible they play a more important role at higher molecular weights and concentrations, as suggested by the larger $\langle\delta\rangle$ in experiment. Even in the absence of topological constraints, the effect of architecture is not trivial. Polymer size and shape play an important role which is challenging to predict a priori given the large concentration fluctuations present in semidilute solutions. Further study is required to draw conclusions on the connection of bulk stress to the conformational dynamics observed here. Although the ring dynamics are surprising, they appear to have minimal influence on the extensional viscosity, which can be predicted by a simple linear interpolation between the pure ring and pure linear case via the chain restoring force. Exploration of a broader range of molecular weight and concentration may reveal structure-property relationships not observed here.
V Conclusions We have investigated the dynamics and rheology of ring-linear polymer blend solutions at the overlap concentration in planar extensional flow via Brownian dynamics simulations and single molecule experiments. Simulations and experiments both show that as the blend fraction of rings $f_{R}$ decreases from a pure ring solution, the conformational fluctuations of the ring polymer portion of the blend increase. Simulations reveal that the origin of these dynamics is a combination of intermolecular topological and hydrodynamic interactions. We show that application of strong flows $\textrm{Wi}_{R}>1$ can induce topological constraints in which a linear chain threads through a ring and forms an intermolecular ‘hook’ which strongly deforms the ring conformation. Equilibrium diffusion of rings at the overlap concentration is relatively insensitive to blend ratio, suggesting that flow introduces new dynamics. This is supported by the observation that pure linear solutions at 1 $c^{*}$ form significantly fewer intermolecular hooks than ring-linear blends at $f_{R}=0.83,0.99$. Hooking leads to overshoots in transient fractional extension on startup of flow for individual ring trajectories, which is quantified by conformational distributions. However, in simulation we find that the average number of hooks per chain upon startup of flow is low ($n_{h}<0.05$), and once linear chains stretch fully at $\epsilon\approx 8$, hooking is nearly negligible. Considering that experiments show larger fluctuations, this effect could be sensitive to molecular weight and details of the flow geometry. We show that steady state conformational fluctuations in simulation are driven by intermolecular HI. Three characteristic ring motions are observed: overstretching, retraction, and tank-treading. We measure the effective flow field due to the applied planar extensional flow plus the polymer disturbance velocity to explain these dynamics.
We find that fluctuations in local concentration modify the flow and introduce regions of shear, rotational, and enhanced extensional flows that drive ring dynamics. We suggest that the flow modification is stronger for majority linear polymer blends because of the stronger restoring force, causing fluctuations to increase with decreasing blend fraction of rings. We directly test the influence of polymer architecture, size, and shape by comparing the ring-linear systems to bidisperse linear polymer blends in which ring polymers are replaced with linear chains of matched contour length. The dynamics of the short linear chains are in good qualitative and quantitative agreement with the rings. Thus, our simulations have broader relevance to polymer solution blends and polydisperse solutions in which intermolecular HI could drive unexpected dynamics. The current work considered only solutions at 1 $c^{*}$. The entanglement concentration for $\lambda$-DNA is $c_{e}\approx$ 3 $c^{*}$, and for the simulation model we estimate $c_{e}\approx$ 8-10 $c^{*}$. Despite this low concentration, we see emergent flow-induced topological constraints which could lead to entanglement dynamics below the equilibrium $c_{e}$. A potential area for future study is to quantify the crossover to entangled dynamics in non-equilibrium solutions with non-linear architectures. More generally, it is of interest to qualitatively characterize non-equilibrium conformations and solution stress in ring-linear blends above $c^{*}$, as equilibrium single molecule experiments suggest unique dynamics emerge compared to pure ring or pure linear solutions. Robertson and Smith (2007); Chapman et al. (2012) A more detailed understanding of the influence of intermolecular HI is also essential to elucidate the ring dynamics and bulk flow properties. While equilibrium scaling theories normally neglect intermolecular HI below $c^{*}$, there is considerable evidence from bulk rheology Clasen et al. 
(2006); Dinic, Biagioli, and Sharma (2017); Dinic and Sharma (2020) molecular simulations, Stoltz, de Pablo, and Graham (2006); Prabhakar et al. (2017); Young and Sing (2019) and theory Prabhakar et al. (2016) that as the pervaded volume of the polymer increases with strain rate, intermolecular interactions become relevant at concentrations significantly below $c^{*}$. Furthermore, a detailed study of flow-concentration coupling in connection to molecular dynamics is of interest. Here, we clearly show that spatiotemporal concentration fluctuations modify the effective flow and drive conformational dynamics. The motions observed at the molecular scale may also connect to elastic flow instabilities. More generally, instabilities in mixed flows Cromer, Fredrickson, and Gary Leal (2017); Corona et al. (2018) and shear-induced demixing of entangled blends Hindawi, Higgins, and Weiss (1992); Peterson, Fredrickson, and Leal (2020) have also been reported. Recent developments in molecular simulation allow for unlimited strain accumulation in 2D mixed flows. Hunt, Bernardi, and Todd (2010); Jain et al. (2015) Thus, molecular simulations could provide a micromechanical mechanism for flow modification as well as detailed insight into the influence of macromolecular design variables including solvent quality, molecular weight, concentration, and chain architecture. Acknowledgements. This work was funded by the National Science Foundation under Grant No. CBET-1803757 for CES, a DuPont Science & Engineering fellowship for CDY, the National Science Foundation (NSF) Award CBET-1604038 for CMS and partially supported by the NSF through the University of Illinois at Urbana-Champaign Materials Research Science and Engineering Center (MRSEC) DMR-1720633 (YZ and CMS). The authors thank Sarit Dutta for helpful discussions. References Rubinstein (1986) M. Rubinstein, Physical review letters 57, 3023 (1986). McLeish (2002) T. McLeish, Science 297, 2005 (2002).
11institutetext: Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), National Observatory of Athens, I. Metaxa & Vas. Pavlou St., 15236 Penteli, Greece 11email: [email protected] 22institutetext: IRAP, Université Toulouse III—Paul Sabatier, CNRS, CNES, Toulouse, France 33institutetext: Space Physics and Astronomy Research Unit and Sodankylä Geophysical Observatory, University of Oulu, Oulu, Finland 44institutetext: Department of Physics and Astronomy, University of Turku, 20500 Turku, Finland 55institutetext: Institut für Experimentelle und Angewandte Physik, Christian-Albrechts-Universität zu Kiel, 24118 Kiel, Germany Abstract Aims: The first relativistic solar proton event of solar cycle 25 (SC25) was detected on 28 October 2021 by neutron monitors (NMs) on the ground and particle detectors onboard spacecraft in near-Earth space. This is the first ground level enhancement (GLE) of the current cycle. A detailed reconstruction of the NM response, together with the identification of the solar eruption that generated these particles, is investigated based on in-situ and remote-sensing measurements. Methods: In-situ proton observations from a few MeV to $\sim$500 MeV were combined with the detection of a solar flare in soft X-rays (SXRs), a coronal mass ejection (CME), radio bursts and extreme ultraviolet (EUV) observations to identify the solar origin of the GLE. Timing analysis was performed and a relation to the solar sources was outlined. Results: GLE73 reached a maximum particle rigidity of $\sim$2.4 GV and is associated with type III, type II and type IV radio bursts and an EUV wave. A diversity of time profiles recorded by NMs was observed. This points to an anisotropic nature of the event. The peak flux at E$>$10 MeV was only $\sim$30 pfu and remained at this level for several days. The release time of $\geq$1 GV particles was found to be $\sim$15:40 UT.
GLE73 had a moderately hard rigidity spectrum at very high energies ($\gamma\sim$5.5). Comparison of GLE73 to previous GLEs with similar solar drivers is performed. The First Ground Level Enhancement of Solar Cycle 25 on 28 October 2021 A. Papaioannou 11    A. Kouloumvakos 22    A. Mishev 33    R. Vainio 44    I. Usoskin 33    K. Herbst 55    A. P. Rouillard 22    A. Anastasiadis 11    J. Gieseler 44    R. Wimmer-Schweingruber 55    P. Kühl 55 Key Words.: solar–terrestrial relations – coronal mass ejections (CMEs) – solar energetic particles (SEPs) – solar flares – solar activity – ground level enhancements 1 Introduction Ground Level Enhancements (GLEs) represent the high-energy tail of Solar Energetic Particle (SEP) events. GLEs require acceleration processes capable of producing $\geq$ 1 GV (in rigidity) particles with sufficient intensity to allow their secondary products to reach the terrestrial ground and be detected by neutron monitors (NMs) (e.g. Poluianov et al., 2017, and references therein). Due to their fast propagation, relativistic protons in GLEs are particularly useful for the identification of SEP sources at the Sun (Aschwanden, 2012). The relationship between manifestations of solar activity and energetic protons has been investigated in a series of works (e.g. Belov et al., 2005; Gopalswamy et al., 2012; Mäkelä et al., 2015; Firoz et al., 2019; Kouloumvakos et al., 2019). However, given the relation of GLEs to both strong solar flares and fast and wide CMEs, their acceleration site usually cannot be unambiguously identified. Detailed studies of specific GLE events have been conducted (e.g. Bombardieri et al., 2008; Mishev et al., 2018), but the conditions and processes that lead to such strong SEP events are still not completely understood. GLEs usually have a gradual proton component with E$>$10 MeV that lasts for several days and leads to a significant SEP peak flux.
Hence GLEs are thought to be dominated by CME-driven shocks (see, e.g., Kahler et al., 2012; Nitta et al., 2012). On the other hand, studies of the timing of GLE events have shown evidence for two distinct components, with one being driven by re-connection processes leading to the so-called prompt component (PC) and the other associated with the expanding CME-driven shock that gives rise to the delayed component (DC) (e.g. Vashenyuk et al., 2006; McCracken et al., 2008; Moraal & McCracken, 2012). Therefore, the debate about the exact nature of GLE mechanisms is still ongoing (see e.g. Kouloumvakos et al., 2020; Kocharov et al., 2021). GLEs are rare (i.e. only 73 events in $\sim$ 80 years of observations)111https://gle.oulu.fi/ with a rate of $\sim$ 0.9 events per year (Vainio et al., 2017). These events have been primarily recorded by NMs on the ground, and their lower energy components were seen by spacecraft in near-Earth space. Thus, their analysis was hampered by the lack of identification at other vantage points within the heliosphere. However, in recent years, with the launch of the Solar Terrestrial Relations Observatory (STEREO) twin mission (Kaiser et al., 2008) and the landing of the Mars Science Laboratory (MSL) on Mars (Grotzinger et al., 2012), GLE71 (17 May 2012) & GLE72 (10 September 2017) have been identified and investigated as multi-spacecraft events (see e.g. Rouillard et al., 2016; Battarbee et al., 2018; Guo et al., 2018; Cohen & Mewaldt, 2018). Adding to this, GLEs have been investigated based on recordings made in the inner heliosphere for only a handful of cases (e.g. Cliver, 2006; Reames et al., 2013).
Nonetheless, present-day missions like Solar Orbiter (SolO; Müller et al., 2020), Parker Solar Probe (PSP; Fox et al., 2016) and BepiColombo (Benkhoff et al., 2010) may provide a new view on open scientific questions on the origin of relativistic particles, since they offer concurrent measurements of protons and complementary electromagnetic observations at a set of vantage points in the inner heliosphere. The present letter combines measurements of GLE73 –the first such event recorded in SC25– in near-Earth space and on the ground together with observations of the CME evolution, context solar information and modelling of SEPs based on NM recordings. 2 Observations 2.1 Overview The first GLE event (GLE73) of SC25 was observed by several neutron monitors around the Earth (see Table 2), on 28 October 2021. Figure 1 shows an overview of observations during the GLE73 event. The peak intensity was highest for the two conventional NM stations located on the Antarctic plateau, $\sim$7.3% for DOMC (Dome C NM at Concordia station) and 5.4% for SOPO (South Pole). Bare (lead-free) NMs at the same sites detected a higher response (14.0% for DOMB and 6.6% for SOPB). Energetic protons were also observed by the Solar and Heliospheric Observatory (SOHO)/Energetic and Relativistic Nuclei and Electron (ERNE) (Torsti et al., 1995) at a range of energies (see Appendix A). Figure 1 (b) depicts three proton channels of ERNE. GLE73 was associated with an X1.0 class flare starting at 15:17 UT and peaking at 15:35 UT (see Figure 1 (c)). The source active region NOAA AR12887 was located at W02S26 (in the Heliographic Stonyhurst (HGS) coordinate system at 15:20 UT) as observed from Earth’s viewpoint. In addition, type III, type II and type IV radio bursts were observed from metric to kilometric wavelengths in association with the solar event.
In particular, the start time of the first type III is marked at 15:28 UT (see Figure 1(d)), which further coincides with the start of a type II radio burst222http://soleil.i4ds.ch/solarradio/data/BurstLists/2010-yyyy_Monstein/2021/e-CALLISTO_2021_10.txt. The group of type III bursts is evident from $\sim$15:30-15:50 UT, whereas a metric type IV radio burst is also marked at $\sim$15:37 UT (see the inset in Figure 1(d)). 2.2 The CME and the EUV wave GLE73 was also associated with an Extreme Ultraviolet (EUV) wave that was observed in the low corona by EUV imagers (i.e. the Atmospheric Imaging Assembly (AIA) of the Solar Dynamics Observatory (SDO; Lemen et al., 2011) and the STEREO-A Extreme Ultraviolet Imager (EUVI; Howard et al., 2008)), and a CME and white light (WL) shock wave that were observed higher in the corona by the SOHO/LASCO (Brueckner et al., 1995) and STEREO-A coronagraphs (Howard et al., 2008). In Figure 2, we show remote-sensing observations during the solar event. EUV observations show a classic picture of an EUV wave, namely a circularly propagating bright front, forming at $\sim$15:28 UT. The EUV wave expanded coherently in all directions and evolved clearly as a global wave from the Earth’s viewpoint, engulfing the visible disk by 16:20 UT (see the complementary movie). The CME was well observed by two different spacecraft, namely STEREO-A and SOHO, that were separated by 38${}^{\circ}$ (see also Figure 6). At LASCO/C2 and STEREO-A/COR1 the CME was observed for the first time at 15:48 UT (at $\sim$2.83 $R_{\sun}$ and a Position Angle (PA): $\sim$185${}^{\circ}$) and at 15:36 UT (at $\sim$1.91 $R_{\sun}$ and PA:$\sim$230${}^{\circ}$), respectively. Both viewpoints reveal the emergence of a broad CME forming a halo and a clear pressure wave in front moving faster than the erupting plasma. The wave appears to interact with coronal streamers located on the CME flanks (see Figure 2).
There is also a narrow and slow CME that erupted a few hours before GLE73, from an AR located just behind the west solar limb. The west flank of the WL pressure/shock wave seems to interact with the southern section of this previous CME’s legs. From a linear fit to the height-time measurements in the combined LASCO/C2 & C3 field of view, we obtained a plane-of-sky CME speed at its leading edge (PA:$\sim$185${}^{\circ}$) of $\sim$1240$\pm$40 km/s (see Figure 1(c)). In the same direction we obtained a plane-of-sky speed for the WL shock of $\sim$1640$\pm$40 km/s. 2.3 Neutron Monitor Data During GLE73 differences in the time profiles of the cosmic-ray intensity are evident, as revealed by the Fort Smith (FSMT), Dome C Concordia (DOMC), South Pole (SOPO), Oulu (OULU) and Peawanuck (PWNK) NMs presented in Figure 1 (a). Herein, we use five-minute integrated de-trended NM data retrieved from the international GLE database333https://gle.oulu.fi/. NM data are also presented in the neutron monitor database (NMDB). One can see that the event revealed a typical gradual increase and moderate anisotropy (see details in Sections 3.1 & 3.4) during the onset, since a moderate count-rate increase is recorded by stations looking in the sunward direction (FSMT, PWNK, SOPO). As can be seen in Figure 1, during GLE73 the flux remained above the background level for almost 4.5 hours. The NMs situated at high-altitude polar stations, i.e. DOMC and SOPO, recorded the greatest count-rate increases. The rapid rise shown by the FSMT, SOPO and PWNK NM intensity time profiles (Figure 1) indicates that energetic protons had reasonable access to the Sun-Earth-connecting field lines. For twelve NMs and the two bare NMs the onset and peak times, as well as the maximum increase (in %), were calculated using the de-trended NM data (Usoskin et al., 2020), as discussed in Appendix C. All results are presented in Table 2.
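The plane-of-sky speed quoted above comes from a linear (constant-speed) fit to coronagraph height-time measurements. As an illustrative sketch, not the actual GLE73 data reduction, the fit amounts to a degree-one least-squares polynomial; the height-time points below are hypothetical values chosen only to yield a slope near the quoted $\sim$1240 km/s.

```python
import numpy as np

R_SUN_KM = 6.957e5  # solar radius in km

# Hypothetical leading-edge track: time since the first frame (s) and
# height in solar radii, mimicking a LASCO/C2 & C3 height-time sequence.
t_s = np.array([0.0, 720.0, 1440.0, 2160.0])
h_rsun = np.array([2.83, 4.11, 5.39, 6.67])

# Degree-1 least-squares fit of height (km) vs. time (s);
# the slope is the plane-of-sky speed in km/s.
slope, intercept = np.polyfit(t_s, h_rsun * R_SUN_KM, 1)
print(f"plane-of-sky speed ~ {slope:.0f} km/s")
```

For a real event the scatter of the measured points propagates into the quoted uncertainty on the slope (here $\pm$40 km/s).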
3 Results 3.1 Modeling the neutron monitor response The analysis of GLEs based on NM data consists of several consecutive steps (see Smart et al., 2000). The detailed description of the model used in this work is given in Mishev et al. (2014) and Mishev & Poluianov (2021), a method that has recently been applied to a series of GLEs (e.g. Mishev et al., 2017, 2018). Figure 3 shows the calculated viewing directions of the NMs used in this analysis at around the onset of GLE73 (15:50 UT) for particles of 1 to 5 GV, and 0.7–5 GV for the high-altitude polar NMs, whilst the analysis itself employed rigidities up to 20 GV. The FSMT, SOPO, PWNK and Nain (NAIN) NMs possess viewing directions that are close to the nominal sunward direction, whilst the Inuvik (INVK) NM had a viewing direction close to the nominal anti-sunward direction. The SOPO and FSMT NMs observed an earlier onset, with a more rapid rise being exhibited by FSMT and PWNK, while INVK revealed a gradual rise. Naturally, this is related to the locations of those stations. Employing the model presented in Appendix C, we derived the spectra (see Eq. (1)), pitch-angle distribution (PAD) and apparent source (see Eq. (2)) position of the solar protons during the main phase of GLE73. The spectra gradually softened in the course of the event, specifically during the initial and main phases, the latter corresponding to about 17:30–18:20 UT, that is, during the peak intensity of the event (e.g. see the discussion in Mishev et al., 2021). The results are presented in Figure 4 and the details are given in Table 3. The derived spectra are moderately hard with moderate steepening ($\delta\gamma$). Moreover, we derived a moderately anisotropic angular distribution fitted with a function similar to a Gaussian, without any signature of protons arriving from the anti-sunward direction, nor a complicated PAD as depicted in Mishev et al. (2014).
The derived angular distribution gradually broadens during the main phase of the event. 3.2 In-situ particles GLE73 was clearly recorded by particle instruments on near-Earth orbiting spacecraft such as ERNE onboard SOHO, the Space Environment in Situ Suite (SEISS) on the Geostationary Operational Environmental Satellite (GOES) (Kress et al., 2020) and the High Energy Telescope (HET) of the Energetic Particle Detector (EPD) on Solar Orbiter (SolO) (Rodríguez-Pacheco et al., 2020). Figure 1(b) shows the recordings from SOHO/ERNE for a set of three energy channels with effective energies of 15.4, 29.1 & 57.4 MeV. Figure 7 in Appendix A shows the 5-min averaged recordings of solar particles on GOES/SEISS [6.5-500 MeV], SOHO/ERNE [15.4-57.4 MeV] and SolO/HET [13.68-89.46 MeV] together with the recordings of the BCB-counter of SolO/HET [E$>$157 MeV; (see details in Freiherr von Forstner et al., 2021)]. In addition, Figure 6 shows the relative positions of spacecraft at 15:15 UT and indicates that, in addition to GOES and SOHO, SolO was also close to Earth at a distance of 0.80 AU (astronomical units). 3.3 Relation to solar sources For the first arriving particles it is possible to perform time-shifting analysis (TSA; Vainio et al., 2013) to infer their release time at the Sun (Solar Release Time; SRT). A low-end rigidity limit of particles recorded by a sea-level NM station is $\sim$1 GV (i.e. 433 MeV); thus the corresponding mean velocity for such energetic protons would be $u=0.73c$. For GLE73, particles with rigidities up to $\sim$2.4 GV (1.6 GeV) have been identified, with a mean velocity of $u=0.93c$. The length of the Parker spiral $L$ can be computed based on the solar wind speed during the event. During GLE73, the solar wind speed was slow ($V_{SW}$=300 km/s), leading to $L$ = 1.28 AU.
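The velocities and the spiral length quoted above follow from relativistic kinematics and the arc length of the nominal (Archimedean) Parker spiral. A minimal sketch of both computations; the constants and function names are our own, not from the paper's analysis pipeline:

```python
import math

M_P = 0.938272        # proton rest energy, GeV
AU_KM = 1.495979e8    # astronomical unit, km
OMEGA_SUN = 2.865e-6  # sidereal solar rotation rate, rad/s (~25.4 d period)

def beta_from_rigidity(r_gv, charge=1):
    """Speed (in units of c) of a particle with rigidity r_gv in GV."""
    pc = r_gv * charge           # momentum times c, in GeV
    e_tot = math.hypot(pc, M_P)  # total energy sqrt((pc)^2 + (m c^2)^2)
    return pc / e_tot

def parker_length_au(v_sw_kms, r_au=1.0):
    """Arc length of the nominal Parker spiral out to r_au, in AU."""
    k = OMEGA_SUN / v_sw_kms     # spiral winding constant, 1/km
    x = k * r_au * AU_KM         # dimensionless k*r at the outer radius
    # closed form of the integral of sqrt(1 + (k r)^2) dr from 0 to R
    return r_au * (x * math.sqrt(1.0 + x * x) + math.asinh(x)) / (2.0 * x)

print(beta_from_rigidity(1.0))   # ~0.73, i.e. the quoted u = 0.73c
print(beta_from_rigidity(2.4))   # ~0.93, i.e. the quoted u = 0.93c
print(parker_length_au(300.0))   # ~1.28 AU for V_SW = 300 km/s
```

The kinetic energy, `math.hypot(pc, M_P) - M_P`, likewise reproduces the quoted 433 MeV at 1 GV and $\sim$1.6 GeV at 2.4 GV.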
For the first arriving particles we assumed scatter-free propagation and calculated the expected SRT of the relativistic protons, $t_{rel}$, adding 500 s for comparison with remote-sensing measurements at 1 AU (e.g. radio observations) (Papaioannou et al., 2014). For the SOPO NM station, which registered the earliest onset, we obtained $t_{onset}$ = 15:45 UT (see Table 2). The travel time of the relativistic protons of $\sim$2.4 GV was calculated to be $\sim$11 min and the corresponding anticipated $t_{rel}$ $\sim$15:42 UT. For a set of rigidities $\geq$1 GV, $t_{rel}$ ranges from 15:39 to 15:42 UT. Since 5-min resolution NM data are used, there is a 5-min uncertainty in these calculations. From SDO/AIA images we track the expansion of the EUV wave toward the footpoints of the magnetic field lines connected to Earth, which we determined using the Potential Field Source Surface (PFSS) model and global photospheric magnetic maps (see Appendix B). We find that the footpoints were located $\sim$72${}^{\circ}$ west from AR12887. The release time of the relativistic particles for GLE73 seems to connect well to the time that the EUV wave passed by the location of the footpoints magnetically connected to Earth (see Figure 8 at $\sim$15:39 UT). Comparing with the SXR and radio observations, we find that the release of $\sim$2.4 GV particles ($\sim$15:42 UT) is $\sim$5 minutes after the flare peak time and 12 minutes after the start of the first type III and the type II radio burst (Figure 1). Around the release time of energetic protons (R$\geq$1 GV, which ranges between $\sim$15:39-15:42 UT) there is radio emission from a group of the type IIIs and a moving type IV radio burst (see Figure 1). At the release time of the $\sim$2.4 GV particles the WL shock is located at a height of $\sim$2.32 $R_{\sun}$. Table 1 provides a timeline of events during GLE73 based on the measurements and calculations.
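The TSA estimate itself is simple time arithmetic: subtract the proton travel time along the spiral from the observed onset, then add 500 s so the release time can be compared with electromagnetic emissions observed at 1 AU. A sketch using the values quoted above (onset 15:45 UT at SOPO, $u=0.93c$ for $\sim$2.4 GV, $L$ = 1.28 AU); the helper function is hypothetical:

```python
from datetime import datetime, timedelta

C_KM_S = 299792.458   # speed of light, km/s
AU_KM = 1.495979e8    # astronomical unit, km

def release_time(t_onset, beta, path_au):
    """Time-shifting analysis: onset time minus the particle travel time
    along the spiral path, plus 500 s for comparison with photons at 1 AU."""
    travel_s = path_au * AU_KM / (beta * C_KM_S)
    return t_onset - timedelta(seconds=travel_s) + timedelta(seconds=500)

t_onset = datetime(2021, 10, 28, 15, 45)     # SOPO onset, UT
t_rel = release_time(t_onset, 0.93, 1.28)

travel_min = (1.28 * AU_KM / (0.93 * C_KM_S)) / 60.0
print(f"travel time ~ {travel_min:.1f} min")  # ~11 min, as quoted
print(t_rel)  # within the 5-min NM resolution of the quoted ~15:42 UT
```

With 5-min integrated NM data, the onset time, and hence $t_{rel}$, carries an uncertainty of the same order.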
3.4 Comparison with other GLEs There are only 5 GLEs since 1976 that were associated with a $\leq$X1.0 SXR flare (i.e. GLE30, GLE32, GLE58, GLE62 & GLE71). However, only GLE58 is associated with a central (E09) X1.0 flare. Figure 5 shows the time distribution of all GLEs since 1976 with respect to the E$>$10 MeV proton peak flux $I_{P}$ detected by the series of GOES satellites. Despite the similar flare flux and position, GLE58 (orange square) has an $I_{P}$ 6.7 times larger than GLE73 (red square). GLE40 & GLE50 (purple squares) have similar $I_{P}$ ($\sim$30 pfu) but both were limb events ($>$W85). Around the time of release of the 2.4 GV particles the heights of the CME and the WL shock were $\sim$1.84 $R_{\sun}$ and $\sim$2.32 $R_{\sun}$, respectively. Both values are lower than the mean values reported for other poorly connected GLEs (see Gopalswamy et al., 2012). Also, the median plane-of-sky (projected) speed for GLEs is $\sim$1810 km/s (see Gopalswamy et al., 2012), whereas the GLE73 CME speed from LASCO C2 & C3 was estimated to be $\sim$1240 km/s and the plane-of-sky speed of the WL shock was $\sim$1640 km/s. 4 Conclusions In this work a summary of observations for GLE73, which took place on 28 October 2021 –the first such event of SC25– is presented. Detailed modeling and reconstruction of the spectral and angular characteristics of high-energy SEPs in the vicinity of the Earth was performed. Ground-based NM data, together with space-borne data, were employed in the corresponding data analysis. One of the characteristic aspects of this GLE is its association with a central-disk (W02) X1.0 flare (fairly atypical for GLEs) and a CME (of $\sim$1240 km/s) driving a WL shock (of $\sim$1640 km/s). The main results of the study are: 1. During the main phase of GLE73 the rigidity spectrum was moderately hard ($\gamma\sim$5.5), with significant steepness $\delta\gamma\sim$0.4.
During this stage of the event the derived PAD was relatively wide ($\sigma^{2}\approx$4.5 rad${}^{2}$). 2. The event exhibited a directional particle flux arriving from the sunward direction; hence GLE73 was characterized by a relatively strong anisotropy. 3. The SRT of the very high energy particles was found to be $\sim$15:40 UT and around this SRT the CME-driven shock was located at a height of $\sim$2.32 ($\pm 0.2$) $R_{\sun}$. 4. Timing of the EUV wave evolution towards the field lines magnetically connected to Earth and the inferred release time of high energy protons seem to be in good agreement. Acknowledgements. The authors acknowledge the International Space Science Institute and the support of International Team 441: HEROIC. AP acknowledges support from NASA/LWS project NNH19ZDA001N-LWS. AK, RV and JG acknowledge financial support from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101004159 (SERPENTINE). AK and AR acknowledge the ANR COROSHOCK project (ANR-17-CE31-0006-01). AM and AP acknowledge the support of the Academy of Finland (project 330064 QUASARE). KH acknowledges the support of the DFG priority program SPP 1992 “Exploring the Diversity of Extrasolar Planets (HE 8392/1-1)”. AP and KH also acknowledge the support of International Team 464: ETERNAL. IU acknowledges partial support from the Academy of Finland (project ESPERA No. 321882). RV and JG acknowledge the support of the Academy of Finland (FORESAIL, grants 312357 and 336809). We acknowledge the NMDB database (www.nmdb.eu), funded under the European Union’s FP7 programme (contract no. 213007), and the PIs of individual neutron monitors. The Italian polar program PNRA (via the LTCPAA PNRA 2015/AC3 and the BSRN PNRA OSS-06 projects), the French Polar Institute IPEV and FINNARP are acknowledged for hosting the DOMB/DOMC NMs. References Alken et al. (2021) Alken, P., Thébault, E., Beggan, C., et al.
2021, Earth, Planets and Space, 73
Arge, C. N., Henney, C. J., Hernandez, I. G., et al. 2013, in American Institute of Physics Conference Series, Vol. 1539, Solar Wind 13, 11–14
Aschwanden, M. J. 2012, Space Science Reviews, 171, 3
Battarbee, M., Guo, J., Dalla, S., et al. 2018, Astronomy & Astrophysics, 612, A116
Belov, A., Garcia, H., Kurt, V., Mavromichalaki, H., & Gerontidou, M. 2005, Solar Physics, 229, 135
Benkhoff, J., Van Casteren, J., Hayakawa, H., et al. 2010, Planetary and Space Science, 58, 2
Bombardieri, D., Duldig, M., Humble, J., & Michael, K. 2008, The Astrophysical Journal, 682, 1315
Brueckner, G., Howard, R., Koomen, M., et al. 1995, in The SOHO Mission (Springer), 357–402
Caballero-Lopez, R. 2016, Journal of Geophysical Research: Space Physics, 121, 7461
Cliver, E. 2006, The Astrophysical Journal, 639, 1206
Cohen, C. & Mewaldt, R. 2018, Space Weather, 16, 1616
Cooke, D., Humble, J., Shea, M., et al. 1991, Il Nuovo Cimento C, 14, 213
Cramp, J., Duldig, M., Flückiger, E., et al. 1997, Journal of Geophysical Research, 102, 24237
Desorgher, L. 2005, MAGNETOCOSMICS, http://cosray.unibe.ch/~laurent/magnetocosmics/
Firoz, K., Gan, W., Moon, Y.-J., Rodríguez-Pacheco, J., & Li, Y. 2019, The Astrophysical Journal, 883, 91
Fox, N., Velli, M., Bale, S., et al. 2016, Space Science Reviews, 204, 7
Freiherr von Forstner, J. L., Dumbović, M., Möstl, C., et al. 2021, Astronomy & Astrophysics, 656, A1
Gleeson, L. J. & Axford, W. 1968, The Astrophysical Journal, 154, 1011
Gopalswamy, N., Xie, H., Yashiro, S., et al. 2012, Space Science Reviews, 171, 23
Grotzinger, J. P., Crisp, J., Vasavada, A. R., et al. 2012, Space Science Reviews, 170, 5
Guo, J., Dumbović, M., Wimmer-Schweingruber, R. F., et al. 2018, Space Weather, 16, 1156
Howard, R. A., Moses, J., Vourlidas, A., et al. 2008, Space Science Reviews, 136, 67
Kahler, S., Cliver, E., Tylka, A., & Dietrich, W. 2012, Space Science Reviews, 171, 121
Kaiser, M. L., Kucera, T., Davila, J., et al. 2008, Space Science Reviews, 136, 5
Kocharov, L., Omodei, N., Mishev, A., et al. 2021, The Astrophysical Journal, 915, 12
Kouloumvakos, A., Rouillard, A. P., Share, G. H., et al. 2020, The Astrophysical Journal, 893, 76
Kouloumvakos, A., Rouillard, A. P., Wu, Y., et al. 2019, The Astrophysical Journal, 876, 80
Kress, B. T., Rodriguez, J. V., & Onsager, T. G. 2020, in The GOES-R Series (Elsevier), 243–250
Kurt, V., Belov, A., Kudela, K., et al. 2019, Solar Physics, 294, 1
Kuwabara, T., Bieber, J., Clem, J., et al. 2006, Space Weather, 4
Lara, A., Borgazzi, A., & Caballero-Lopez, R. 2016, Advances in Space Research, 58, 1441
Lemen, J. R., Akin, D. J., Boerner, P. F., et al. 2011, in The Solar Dynamics Observatory (Springer), 17–40
Mäkelä, P., Gopalswamy, N., Akiyama, S., Xie, H., & Yashiro, S. 2015, The Astrophysical Journal, 806, 13
McCracken, K., Moraal, H., & Stoker, P. 2008, Journal of Geophysical Research: Space Physics, 113
Mishev, A., Kocharov, L., & Usoskin, I. 2014, Journal of Geophysical Research: Space Physics, 119, 670
Mishev, A., Koldobskiy, S., Kocharov, L., & Usoskin, I. 2021, Solar Physics, 296
Mishev, A. & Poluianov, S. 2021, Solar Physics, 296
Mishev, A., Poluianov, S., & Usoskin, I. 2017, Journal of Space Weather and Space Climate, 7, A28
Mishev, A. & Usoskin, I. 2020, Journal of Space Weather and Space Climate, 10, 17
Mishev, A., Usoskin, I., Raukunen, O., et al. 2018, Solar Physics, 293, 1
Mishev, A. L., Koldobskiy, S. A., Kovaltsov, G. A., Gil, A., & Usoskin, I. G. 2020, Journal of Geophysical Research: Space Physics, 125, e2019JA027433
Moraal, H. & McCracken, K. 2012, Space Science Reviews, 171, 85
Müller, D., Cyr, O. S., Zouganelis, I., et al. 2020, Astronomy & Astrophysics, 642, A1
Nitta, N., Liu, Y., DeRosa, M., & Nightingale, R. 2012, Space Science Reviews, 171, 61
Papaioannou, A., Souvatzoglou, G., Paschalis, P., Gerontidou, M., & Mavromichalaki, H. 2014, Solar Physics, 289, 423
Poluianov, S., Usoskin, I., Mishev, A., Shea, M., & Smart, D. 2017, Solar Physics, 292, 176
Reames, D. V., Ng, C. K., & Tylka, A. J. 2013, Solar Physics, 285, 233
Rodríguez-Pacheco, J., Wimmer-Schweingruber, R., Mason, G., et al. 2020, Astronomy & Astrophysics, 642, A7
Rouillard, A. P., Pinto, R. F., Vourlidas, A., et al. 2020, Astronomy & Astrophysics, 642, A2
Rouillard, A. P., Plotnikov, I., Pinto, R. F., et al. 2016, The Astrophysical Journal, 833, 45
Smart, D., Shea, M., & Flückiger, E. 2000, Cosmic Rays and Earth, 305
Torsti, J., Valtonen, E., Lumme, M., et al. 1995, Solar Physics, 162, 505
Tsyganenko, N. A. 1989, Planetary and Space Science, 37, 5
Usoskin, I., Gil, A., Kovaltsov, G., Mishev, A., & Mikhailov, V. 2017, Journal of Geophysical Research, 122, 3875
Usoskin, I., Koldobskiy, S., Kovaltsov, G., et al. 2020, Astronomy & Astrophysics, 640, A17
Usoskin, I. G., Alanko-Huotari, K., Kovaltsov, G. A., & Mursula, K. 2005, Journal of Geophysical Research: Space Physics, 110
Vainio, R., Raukunen, O., Tylka, A. J., Dietrich, W. F., & Afanasiev, A. 2017, Astronomy & Astrophysics, 604, A47
Vainio, R., Valtonen, E., Heber, B., et al. 2013, Journal of Space Weather and Space Climate, 3, A12
Vashenyuk, E., Balabin, Y. V., Perez-Peraza, J., Gallegos-Cruz, A., & Miroshnichenko, L. 2006, Advances in Space Research, 38, 411
Vos, E. & Potgieter, M. 2015, The Astrophysical Journal, 815, 119

Appendix A Near-Earth measurements of the SEP event of 28 October 2021

At the time of GLE73, SolO, STEREO-A, and PSP were trailing Earth by -3${}^{\circ}$, -38${}^{\circ}$, and -54${}^{\circ}$, respectively, while BepiColombo was leading Earth by 90${}^{\circ}$. Figure 6 shows the positions of the various spacecraft in the heliosphere and the Parker spirals connecting to each location. A measured solar wind speed was used for each spacecraft when available (for SolO we used the solar wind speed during October 30, before the shock arrival); otherwise a speed of 350 km/s was assumed. Using the measured solar wind speeds shown in the legend of Figure 6 for Earth, STEREO-A, and SolO, we calculated the locations of the footpoints of the nominal Parker spirals. The footpoints connected to Earth, STEREO-A, and SolO were located at W81N05, W31N07, and W63N02, respectively (in the HGS system at 15:20 UT). GLE73 was clearly recorded by the near-Earth spacecraft GOES and SOHO, as well as by SolO, which was in a favorable position to record it. The analysis of GLE73 using all heliospheric vantage points is beyond the scope of this letter, and we concentrate instead on the near-Earth spacecraft and SolO, which is the least separated from Earth (see Figure 6).
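The footpoint longitudes quoted above can be reproduced to first order from the nominal Parker-spiral geometry. A minimal sketch, assuming a sidereal solar rotation rate of 2.86e-6 rad/s and neglecting the source-surface height (both assumptions, not values given in the text):

```python
import math

OMEGA_SUN = 2.86e-6      # solar sidereal rotation rate [rad/s] (assumed value)
AU = 1.496e11            # astronomical unit [m]

def footpoint_offset_deg(r_m, v_sw_kms):
    """Westward longitude offset of the nominal Parker-spiral footpoint
    relative to the observer: Delta_phi = Omega * r / v_sw."""
    return math.degrees(OMEGA_SUN * r_m / (v_sw_kms * 1e3))

# Earth at 1 AU with the 350 km/s fallback solar wind speed used in the text
print(round(footpoint_offset_deg(AU, 350.0), 1))  # -> 70.0
```

With a measured (faster or slower) wind speed the offset tightens or widens accordingly, which is why the text uses per-spacecraft measured speeds where available.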
The time history of SEP measurements during GLE73 as recorded (from top to bottom) by GOES/SEISS [6.5-500 MeV], SOHO/ERNE [15.4-57.4 MeV], and SolO/HET [13.68-89.46 MeV], together with the SolO BCB counter [E > 157 MeV; counts/min] (Freiherr von Forstner et al. 2021), is presented in Figure 7. High-energy protons at each spacecraft (indicated with a red line in each panel) show a prompt increase: GOES/P10 (275-500 MeV) has an onset time of 15:55 UT, SOHO/ERNE (at 57.4 MeV) records the event at 16:18 UT, and the BCB counter of SolO (Figure 7, third panel from the top; brown line) has an onset time of 15:40 UT. Note, however, that at the lowest energies there seems to be some high-energy contamination in the GOES channels (see Figure 7, top panel).

Appendix B Magnetic connectivity using the PFSS model

Since the magnetic configurations in the low corona are more complex than the simple Parker spiral model employed in Figure 2, we also used the PFSS model and global photospheric magnetic maps to calculate the magnetic field configuration in the low corona (http://connect-tool.irap.omp.eu/; Rouillard et al. 2020). This gives some further context to the magnetic connectivity of Earth. For the input magnetic maps we used those provided by the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model (Arge et al. 2013); the ADAPT maps are global magnetograms of the photospheric magnetic flux. We then used the PFSS model and the global maps of the radial magnetic field at the photosphere to calculate the magnetic field from the solar surface up to 3.0 R${}_{\sun}$, the assumed height of the source surface. From the location of the footpoint of the Parker spiral at the source surface we determined the field lines connected to Earth. We found that most of the footpoints of the magnetic field lines connected to Earth cluster to the west of AR12886, which was located at W59S19 (i.e., $\sim$57${}^{\circ}$ west of AR12887).
Specifically, we found that the average location of the footpoints was at W74S25, and they were spread over about 10${}^{\circ}$ around this location. In addition, Figure 8 provides the combined outputs of the PFSS model and the evolution of the EUV wave at the inferred release time (i.e. 15:39 UT) of the high-energy particles ($\geq$1 GV).

Appendix C Analysis of Neutron Monitor measurements

Measurements

Inspection of the NM data from various stations around the world indicated the presence of particles with a rigidity up to $\sim$2 GV. Newark NM, with a vertical cut-off rigidity of 2.4 GV, recorded an increase of marginal significance that may or may not be related to GLE73. The de-trended NM data (Usoskin et al. 2020) were used in the study. Essentially, the de-trended data account for smooth temporal variability in the baseline, allowing for a clear estimation of the contribution of solar particles in GLEs, free from the effect of short-time variability of galactic cosmic rays (GCRs) due to interplanetary transients and local anisotropy. Figure 9 illustrates the recordings of the Calgary (CALG) NM and shows that, in the recordings of this station, GLE73 occurred against the background of a strong diurnal wave caused by the local GCR anisotropy. Moreover, the anisotropy is usually assessed by a direct comparison of the count rates of northern and southern near-polar NMs. For GLE73 we compared the count rates of two sub-polar NMs, namely Thule (THUL) and Jang Bogo (JBGO), which have similar characteristics ($R_{C}$ = 0.30 GV; altitudes of 260 m and 30 m, respectively). This comparison directly indicates the presence (or absence) of a north-south anisotropy. As can be seen in Figure 10, the difference (red line) remained close to 0% (mean = 0.02%, median = 0.06%) during GLE73, and thus the comparison does not reveal any significant north-south anisotropy component.
Additionally, in the case of an isotropic GLE, high-latitude NMs situated at altitudes near sea level should display almost the same increases caused by SEPs. There are 11 such NM stations with a nominal cut-off rigidity $R_{C}<$1.4 GV (Kurt et al. 2019). Figure 11 shows a comparison of the averaged data of 8 of these stations (blue line) against the recordings of the FSMT NM (black line). The fact that FSMT is the only NM showing a larger increase compared to all other high-latitude stations indicates a moderate longitudinal anisotropy of GLE73 in the first $\sim$2 hrs of the event. As can be seen in Figure 11, the difference (red line) reached a maximum of $\sim$3($\pm$0.72)% during GLE73 at around $\sim$16:40-16:50 UT (mean = 0.92%, median = 0.87%). Table 2 provides the characteristics (onset time, peak time, and maximum increase) of GLE73. Column 1 gives the name (conventional acronym) of the NM used in the analysis, column 2 the GLE onset time (in UT), column 3 the peak time (also in UT), and column 4 the maximum increase (in %) at the NM station. All products were calculated from 5-min de-trended NM data (Usoskin et al. 2020). Although finer time-resolution data (i.e. 1-min) would in principle facilitate a better relation to the solar source, the statistical fluctuations for such a moderate GLE would be too large. DOMC and SOPO, high-altitude and high-latitude stations with a vertical cut-off rigidity of 0.10 GV, allow the registration of lower-energy particles compared to the bulk of NMs, that is, they are more sensitive (Kuwabara et al. 2006; Mishev & Poluianov 2021). As a result, these NMs recorded the most intense flux during GLE73 compared to all other NMs. At the same time, the bare NMs at these locations (i.e. DOMB & SOPB) recorded the most pronounced signals of solar particles for GLE73 (see Table 2).
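The percent increases and station differences discussed above are simple transformations of the count rates referenced to the de-trended GCR baseline; a minimal sketch, with purely illustrative values (the function names and sample numbers are not from the text):

```python
def percent_increase(counts, baseline):
    """NM count-rate increase over the (de-trended) GCR baseline, in percent."""
    return [100.0 * (c - b) / b for c, b in zip(counts, baseline)]

def station_difference(pct_a, pct_b):
    """Pointwise difference of two stations' percent increases, as used for
    the THUL vs JBGO north-south comparison (illustrative)."""
    return [a - b for a, b in zip(pct_a, pct_b)]

counts = [105.0, 110.0, 103.0]   # illustrative 5-min count rates
base = [100.0, 100.0, 100.0]     # illustrative de-trended baseline
print(percent_increase(counts, base))   # -> [5.0, 10.0, 3.0]
```

A difference series hovering near 0%, as for THUL minus JBGO, is what signals the absence of a north-south anisotropy component.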
Calculation of the asymptotic directions

A straightforward computation of the rigidity cut-offs and asymptotic directions of the allowed trajectories (Cooke et al. 1991) requires a combination of the International Geomagnetic Reference Field (IGRF) geomagnetic model (Alken et al. 2021) for the internal field with the Tsyganenko 89 model for the external field (Tsyganenko 1989). All computations of the particle transport in the geomagnetic field were performed with the MAGNETOCOSMICS code (Desorgher 2005). It is plausible to assume that the first nearly relativistic protons arriving in the vicinity of the Earth propagate along the interplanetary magnetic field (IMF). Therefore, a NM whose asymptotic cone is aligned nearly with the IMF is expected to register the earliest signal over the background, that is, the event onset, and possibly the greatest count-rate increase (Bombardieri et al. 2008; Papaioannou et al. 2014).

Modeling the response of neutron monitors

In the model employed in this work, a modified power-law rigidity spectrum of SEPs is assumed: $$J_{\parallel}(P)=J_{0}P^{-(\gamma+\delta\gamma(P-1))}$$ (1) where $J_{\parallel}(P)$ is the particle flux arriving from the Sun along the symmetry axis, whose direction is defined by the geographic coordinate angles $\Lambda$ and $\psi$, $\gamma$ is the power-law spectral exponent at a rigidity P = 1 GV, and $\delta\gamma$ is the rate of the spectrum steepening. The pitch angle distribution (PAD) is assumed to be similar to a Gaussian: $$G(\alpha(P))\sim\exp(-\alpha^{2}/\sigma^{2})$$ (2) where $\alpha$ is the pitch angle and $\sigma$ is the parameter that corresponds to the width of the pitch angle distribution. The pitch angle is defined as the angle between the asymptotic direction and the axis of anisotropy. Note that a steady convergence and reliable solution are usually obtained when the merit function $\mathcal{D}$, that is, the residual according to Mishev et al.
(2021), is $\sim$5, yet for weak events it can be about 12–15 (for details see Vashenyuk et al. 2006; Mishev et al. 2021, and the discussion therein) (see Table 3). For the GCR spectrum we employed a parametrisation based on the force-field model (Gleeson & Axford 1968); the full details are given in Usoskin et al. (2005), where the local interstellar spectrum (LIS) is considered according to Vos & Potgieter (2015). The modulation is considered following the procedure by Usoskin et al. (2017). Here, the modelling of the NM response is performed with a new altitude-dependent NM yield function (Mishev et al. 2020), that is, each NM is modelled with a yield function corresponding to the exact station altitude, leading to a significant improvement of the unfolding procedure compared to previous studies (e.g. Cramp et al. 1997; Mishev et al. 2021). Here we rescaled the DOMC/DOMB mini NMs to a standard 6NM64, similarly to Caballero-Lopez (2016) and Lara et al. (2016).
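The spectral form (1) and the PAD (2) are straightforward to evaluate once the fit parameters are known. A minimal sketch, using the sign convention that the spectrum falls with increasing rigidity and purely illustrative (not fitted) parameter values:

```python
import math

def sep_flux(P, J0, gamma, dgamma):
    """Modified power-law rigidity spectrum J(P) = J0 * P**(-(gamma + dgamma*(P-1))),
    with P in GV; J0, gamma, dgamma are illustrative (assumed) fit parameters."""
    return J0 * P ** (-(gamma + dgamma * (P - 1.0)))

def pad(alpha_rad, sigma_rad):
    """Gaussian-like pitch-angle distribution G(alpha) ~ exp(-alpha^2 / sigma^2)."""
    return math.exp(-alpha_rad**2 / sigma_rad**2)

# At P = 1 GV the spectrum reduces to J0 regardless of the steepening rate
print(sep_flux(1.0, 1e5, 5.0, 0.5))   # -> 100000.0
print(round(pad(0.0, 1.0), 3))        # -> 1.0
```

The $\delta\gamma$ term makes the local spectral slope steepen linearly with rigidity, which is what distinguishes this form from a pure power law.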
Equivalence of additive-combinatorial linear inequalities for Shannon entropy and differential entropy

Ashok Vardhan Makkuva and Yihong Wu

Ashok Vardhan Makkuva is with the Department of ECE and the Coordinated Science Lab, University of Illinois at Urbana-Champaign, Urbana, IL, email: [email protected]. Yihong Wu is with the Department of Statistics, Yale University, New Haven, CT 06511, email: [email protected].

Abstract

This paper addresses the correspondence between linear inequalities of Shannon entropy and differential entropy for sums of independent group-valued random variables. We show that any balanced (with the sum of coefficients being zero) linear inequality of Shannon entropy holds if and only if its differential entropy counterpart also holds; moreover, any linear inequality for differential entropy must be balanced. In particular, our result shows that the recently proved differential entropy inequalities of Kontoyiannis and Madiman [KM14] can be deduced from their discrete counterparts due to Tao [Tao10] in a unified manner. Generalizations to certain abelian groups are also obtained. Our proof of extending inequalities of Shannon entropy to differential entropy relies on a result of Rényi [Rén59] which relates the Shannon entropy of a finely discretized random variable to its differential entropy, and which also helps in establishing that the entropy of the sum of quantized random variables is asymptotically equal to that of the quantized sum; the converse uses the asymptotics of the differential entropy of convolutions with weak additive noise.
Contents

1 Introduction and main result
 1.1 Additive-combinatorial inequalities for cardinality and Shannon entropy
 1.2 Equivalence of Shannon and differential entropy inequalities
 1.3 Main results
 1.4 Organization
2 On sharp constants in additive-combinatorial entropy inequalities
3 Proof of Theorem 1
4 Proof of Theorem 2
5 Proofs of lemmas
 5.1 Proof of Lemma 1
 5.2 Proof of Lemma 2
 5.3 Proof of Lemma 4
 5.4 Proof of Lemma 5
6 Extensions to general groups
A Proof of Proposition 1

1 Introduction and main result

1.1 Additive-combinatorial inequalities for cardinality and Shannon entropy

Over the past few years, the field of additive combinatorics has attracted a great deal of mathematical activity; see [TV06] for a broad introduction. An important repository of tools in additive combinatorics is the sumset inequalities, relating the cardinalities of the sumset and the difference set $A\pm B=\{a\pm b:a\in A,b\in B\}$ to those of $A$ and $B$, where $A$ and $B$ are arbitrary subsets of integers, or, more generally, of any abelian group. One can consider the information-theoretic analogs of these additive-combinatorial inequalities by replacing the sets by (independent, discrete, group-valued) random variables and, correspondingly, the log-cardinality by the Shannon entropy. For example, the inequality $$\max\{|A|,|B|\}\leq|A+B|\leq|A||B|$$ translates to $$\max\left\{H\left(X\right),H\left(Y\right)\right\}\leq H\left(X+Y\right)\leq H\left(X\right)+H\left(Y\right),$$ (1) which follows from elementary properties of entropy.
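Inequality (1) can be checked numerically for any small discrete distributions; a sketch with two illustrative uniform pmfs (the distributions are assumptions for the demo, not taken from the text):

```python
from itertools import product
from math import log2
from collections import defaultdict

def H(pmf):
    """Shannon entropy in bits of a dict {value: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def sum_pmf(pX, pY):
    """Distribution of X + Y for independent discrete X, Y."""
    out = defaultdict(float)
    for (x, px), (y, py) in product(pX.items(), pY.items()):
        out[x + y] += px * py
    return dict(out)

# Independent uniform X on {0,1} and Y on {0,1,2}
pX = {0: 0.5, 1: 0.5}
pY = {0: 1/3, 1: 1/3, 2: 1/3}
hx, hy, hxy = H(pX), H(pY), H(sum_pmf(pX, pY))
assert max(hx, hy) <= hxy <= hx + hy   # inequality (1)
print(round(hxy, 4))
```

Here $H(X)=1$ bit, $H(Y)=\log_2 3$ bits, and $H(X+Y)=\tfrac{1}{3}+\log_2 3$ bits sits strictly between the two bounds.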
The motivation to consider these analogs comes from the interpretation that the Shannon entropy $$H\left(X\right)\triangleq\sum_{x}\mathbb{P}\left[X=x\right]\log\frac{1}{\mathbb{P}\left[X=x\right]}$$ of a discrete random variable $X$ can be viewed as the logarithm of the effective cardinality of the alphabet of $X$ in the sense of the asymptotic equipartition property (AEP) [CT06], which states that the random vector consisting of $n$ independent copies of $X$ is concentrated on a set of cardinality $\exp(n(H(X)+o(1)))$ as $n\to\infty$. While this observation was fruitful in deducing certain entropy inequalities, e.g., Han’s inequality [Han78], directly from their set counterparts (cf. [Ruz09a, p. 5]), it has not proven useful for inequalities dealing with sums, since the typical set of sums can be exponentially larger than sums of individual typical sets. Forgoing this soft approach and capitalizing on the submodularity property of entropy, in the past few years several entropy inequalities for sums and differences have been obtained [TV05, LP08, Mad08, Tao10, MK10, MMT12], such as the sum-difference inequality [Tao10, Eq. (2.2)] $$H(X+Y)\leq 3H(X-Y)-H(X)-H(Y),$$ (2) which parallels the following (cf., e.g., [GHR07, Eq. (4)]) $$|A+B|\leq\frac{|A-B|^{3}}{|A||B|}.$$ More recently, a number of entropy inequalities for integer linear combinations of independent random variables have been obtained in [WSV15, Appendix E], e.g., $$H(pX+qY)-H(X+Y)\leq(7{\left\lfloor{\log|p|}\right\rfloor}+7{\left\lfloor{\log|q|}\right\rfloor}+2)(2H(X+Y)-H(X)-H(Y)),$$ for non-zero integers $p,q$, which are counterparts of results on sums of dilated sets in [Buk08].
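The sum-difference inequality (2) can likewise be spot-checked numerically; a sketch with an arbitrary skewed pmf (the distributions are illustrative choices, not from the text):

```python
from itertools import product
from math import log2
from collections import defaultdict

def H(pmf):
    """Shannon entropy in bits of a dict {value: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def comb_pmf(pX, pY, a, b):
    """Distribution of a*X + b*Y for independent discrete X, Y."""
    out = defaultdict(float)
    for (x, px), (y, py) in product(pX.items(), pY.items()):
        out[a * x + b * y] += px * py
    return dict(out)

# Skewed example distributions
pX = {0: 0.5, 1: 0.3, 3: 0.2}
pY = {0: 0.6, 1: 0.4}
lhs = H(comb_pmf(pX, pY, 1, 1))                       # H(X+Y)
rhs = 3 * H(comb_pmf(pX, pY, 1, -1)) - H(pX) - H(pY)  # 3H(X-Y)-H(X)-H(Y)
print(lhs <= rhs)
```

The inequality typically holds with considerable slack for generic distributions; the interesting question, discussed in Section 2, is how tight such bounds can be made.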
It is worth noting that all of the aforementioned results for Shannon entropy are linear inequalities for entropies of weighted sums of independent random variables, which are of the general form: $$\sum_{i=1}^{n}\alpha_{i}H\left(\sum_{j=1}^{m}a_{ij}Z_{j}\right)\leq 0,$$ (3) with $a_{ij}\in\mathbb{Z}$, $\alpha_{i}\in\mathbb{R}$, and $Z_{1},\ldots,Z_{m}$ being independent discrete group-valued random variables.

1.2 Equivalence of Shannon and differential entropy inequalities

Recall that the differential entropy of a real-valued random vector $X$ with probability density function (pdf) $f_{X}$ is defined as $$h\left(X\right)=\int f_{X}(x)\log\frac{1}{f_{X}(x)}dx.$$ Again, in the sense of AEP, $h(X)$ can be interpreted as the log-volume of the effective support of $X$ [CT06]. In a similar vein, one can consider analogous additive-combinatorial inequalities for differential entropies on Euclidean spaces. Recently, Kontoyiannis and Madiman [KM14] and Madiman and Kontoyiannis [MK10, MK15] made important progress in this direction by showing that while the submodularity property, the key ingredient for proving discrete entropy inequalities, fails for differential entropy, several linear inequalities for Shannon entropy nevertheless extend verbatim to differential entropy; for example, the sum-difference inequality (2) admits an exact continuous analog [KM14, Theorem 3.7]: $$h(X+Y)\leq 3h(X-Y)-h(X)-h(Y).$$ (4) These results prompt us to ask the following question, which is the focus of this paper:

Question 1. Do all linear inequalities of the form (3) for discrete entropy extend to differential entropies, and vice versa?

A simple but instructive observation reveals that all linear inequalities for differential entropies are always balanced, that is, the sum of all coefficients must be zero.
In other words, should $$\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}Z_{j}\right)\leq 0,$$ (5) hold for all independent $\mathbb{R}^{d}$-valued $Z_{j}$’s, then we must have $\sum_{i=1}^{n}\alpha_{i}=0$. To see this, recall the fact that $h(aZ)=h(Z)+d\log a$ for any $a>0$; in contrast, Shannon entropy is scale-invariant. Therefore, whenever the inequality (5) is unbalanced, i.e., $\sum_{i=1}^{n}\alpha_{i}\neq 0$, scaling all random variables by $a$ and sending $a$ to either zero or infinity leads to a contradiction. For instance, in (1), the left inequality (balanced) extends to differential entropy but the right inequality (unbalanced) clearly does not. Surprisingly, as we show in this paper, a balanced linear inequality holds for Shannon entropy if and only if it holds for differential entropy, thereby fully resolving Question 1. This result, in a way, demystifies the striking parallel between discrete and continuous entropy inequalities. In particular, it shows that the results in [KM14, MK15], which are linear inequalities for mutual information (such as $I(X;X+Y)=h(X+Y)-h(Y)$ or the Ruzsa distance $\mathrm{dist}_{R}(X,Y)\triangleq h(X-Y)-\frac{1}{2}h(X)-\frac{1}{2}h(Y)$ [Ruz09a, Tao10, KM14]) and hence expressible as balanced linear inequalities for differential entropy, can be deduced from their discrete counterparts [Tao10] in a unified manner. While our results establish that all balanced linear inequalities for Shannon entropy extend to differential entropy and vice versa, it is worth pointing out that this does not hold for affine inequalities. Note that non-trivial affine inequalities for Shannon entropy do not exist, simply because one can set all random variables to be deterministic; however, this is not the case for differential entropy.
For instance, the following balanced affine inequality $$\displaystyle h(X+Y)\geq\frac{1}{2}\left(h(X)+h(Y)\right)+\frac{d}{2}\log 2$$ (6) holds for any independent $\mathbb{R}^{d}$-valued random variables $X$ and $Y$, which is a direct consequence of the entropy power inequality (see [Bar84, Lemma 3.1] for generalizations of (6)). However, the Shannon entropy analogue of (6), replacing all $h$ by $H$, is clearly false (consider deterministic $X$ and $Y$). On the other hand, there exists no unbalanced linear inequality for differential entropy, while the same is not true for Shannon entropy. Consider, for instance, the Shannon entropy inequality $$\displaystyle H(X+Y)\leq H(X)+H(Y),$$ which holds for any independent discrete random variables $X$ and $Y$ and follows directly from the elementary properties of Shannon entropy. However, the differential entropy counterpart, $h(X+Y)\leq h(X)+h(Y)$, can be shown to be false by taking $X$ and $Y$ to be independent Gaussian random variables with zero mean and variances $\frac{1}{2\pi e}$ and $1$, respectively. To explain our proof that discrete entropy inequalities admit continuous counterparts, we first note that the main tool for proving differential entropy inequalities in [MK10, KM14, MK15] is the data processing inequality of mutual information, replacing the submodularity of Shannon entropy exploited in [Tao10]. However, this method has been applied on a case-by-case basis, as there seems to be no principled way to recognize the correct data processing inequality that needs to be introduced. Instead, to directly deduce a differential inequality from its discrete version, our strategy is to rely on a result due to Rényi [Rén59] which gives the asymptotic expansion of the Shannon entropy of a finely quantized continuous random variable in terms of its differential entropy, namely, $$H({\left\lfloor{mX}\right\rfloor})=d\log m+h(X)+o(1),\quad m\to\infty$$ (7) for continuous $\mathbb{R}^{d}$-valued $X$.
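The Gaussian counterexample mentioned above can be verified directly from the closed-form entropy of a Gaussian, $h(\mathcal{N}(0,\sigma^2))=\tfrac{1}{2}\log(2\pi e\sigma^2)$; a quick check in nats:

```python
from math import log, pi, e

def h_gauss(var):
    """Differential entropy (nats) of a scalar Gaussian N(0, var)."""
    return 0.5 * log(2 * pi * e * var)

v1, v2 = 1.0 / (2 * pi * e), 1.0   # variances from the counterexample above
hx, hy = h_gauss(v1), h_gauss(v2)  # note h(X) = 0 for var = 1/(2*pi*e)
hsum = h_gauss(v1 + v2)            # sum of independent Gaussians is Gaussian
assert hsum > hx + hy              # h(X+Y) <= h(X) + h(Y) is violated
print(round(hsum - (hx + hy), 4))
```

The violation is small but strictly positive, which is all the scaling argument requires: an unbalanced inequality can always be broken by a suitable choice of scale.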
In fact, this approach has been discussed in [KM14] at the suggestion of a reviewer, where it was noted that differential entropy inequalities can be approximately obtained from their discrete counterparts via this quantization approach, since $H({\left\lfloor{mX}\right\rfloor}+{\left\lfloor{mY}\right\rfloor})$ and $H({\left\lfloor{m(X+Y)}\right\rfloor})$ can only differ by a few bits, which might be further improvable. Indeed, as we shall prove later in Lemma 1, this entropy difference is in fact vanishingly small, which enables the additive-combinatorial entropy inequalities to carry over exactly from discrete to Euclidean spaces and, even more generally, to connected abelian Lie groups. Interestingly, in addition to bridging the discrete and continuous notions of entropy, Rényi’s result also plays a key role in establishing the vanishing entropy difference. In establishing that all linear discrete entropy inequalities follow from their continuous analogs, the following are the two key ideas of our approach: First, we show that given any finite collection of discrete $\mathbb{R}^{d}$-valued random variables, we can embed them into a high-dimensional Euclidean space and project them back to $\mathbb{R}^{d}$ such that the Shannon entropy of any linear combination of the projected random variables is equal to an arbitrarily large multiple of that of the given random variables. Next, we add independent noise, e.g., Gaussian, with arbitrarily small variance to these projected discrete random variables and relate their Shannon entropy to the differential entropy of their noisy versions. Sending the variance to zero and then the dimension to infinity yields the desired inequality for discrete entropy.

1.3 Main results

Throughout the rest of the paper, to make the statements concise and exclude trivial cases, all differential entropies are assumed to exist and be finite. We now state our main results on linear entropy inequalities.

Theorem 1.
Let $\left(a_{ij}\right)\in\mathbb{Z}^{n\times m}$ be such that $a_{i1},\ldots,a_{im}$ are relatively prime for each $i=1,\ldots,n$. Let $\alpha_{1},\ldots,\alpha_{n}\in\mathbb{R}$ be such that $\sum_{i=1}^{n}\alpha_{i}=0$. Suppose for any independent $\mathbb{Z}^{d}$-valued random variables $U_{1},\ldots,U_{m}$, the following holds: $$\displaystyle\sum_{i=1}^{n}\alpha_{i}H\left(\sum_{j=1}^{m}a_{ij}U_{j}\right)\leq 0.$$ (8) Then for any independent $\mathbb{R}^{d}$-valued continuous random variables $X_{1},\ldots,X_{m}$, the following holds: $$\displaystyle\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}X_{j}\right)\leq 0.$$ (9) Remark 1. Without loss of generality, we can always assume that the coefficients of each linear combination of random variables in (8) are relatively prime. This is because for each $i$ we can divide $a_{i1},\ldots,a_{im}$ by their greatest common divisor so that the resulting entropy inequality remains the same, thanks to the scale invariance of the Shannon entropy. Theorem 2. Let $(a_{ij})\in\mathbb{R}^{n\times m}$ and $\alpha_{1},\ldots,\alpha_{n}\in\mathbb{R}$ be such that $\sum_{i=1}^{n}\alpha_{i}=0$. If $$\displaystyle\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}X_{j}\right)\leq 0$$ holds for any $\mathbb{R}^{d}$-valued independent and continuous random variables $X_{1},\ldots,X_{m}$, then $$\displaystyle\sum_{i=1}^{n}\alpha_{i}H\left(\sum_{j=1}^{m}a_{ij}U_{j}\right)\leq 0$$ holds for any $\mathbb{R}^{d}$-valued independent and discrete random variables $U_{1},\ldots,U_{m}$. Remark 2 (iid random variables). For additive-combinatorial entropy inequalities, when (some of) the random variables are further constrained to be identically distributed, a number of strengthened inequalities have been obtained.
For instance, if $U$ and $U^{\prime}$ are independent and identically distributed (iid) discrete random variables, then (cf., e.g., [MK10, Theorems 1.1 and 2.1]) $$\frac{1}{2}\leq\frac{H(U-U^{\prime})-H(U)}{H(U+U^{\prime})-H(U)}\leq 2$$ (10) and for iid continuous $X,X^{\prime}$, $$\frac{1}{2}\leq\frac{h(X-X^{\prime})-h(X)}{h(X+X^{\prime})-h(X)}\leq 2$$ (11) which are stronger than what would be obtained from (2) and (4) by substituting $Y=X^{\prime}$. As evident from the proof, both Theorem 1 and Theorem 2 apply verbatim to entropy inequalities involving independent random variables with arbitrary distributions. Consequently, (11) and (10) are in fact equivalent. Formally, fix a partition $S_{1},\ldots,S_{K}$ of $[m]\triangleq\{1,\ldots,m\}$. Then (8) holds for independent $U_{1},\ldots,U_{m}$ so that $\{U_{j}\}_{j\in S_{k}}$ are iid for $k\in[K]$ if and only if (9) holds for independent $X_{1},\ldots,X_{m}$ so that $\{X_{j}\}_{j\in S_{k}}$ are iid for $k\in[K]$. It is worth noting that this result is not a special case of Theorems 1 and 2; nevertheless, the proofs are identical. Remark 3. The nature of the equivalence results that we obtained in this paper for linear inequalities for weighted sums of independent random variables bear some similarity to a result established by Chan in [Cha03] for linear entropy inequalities of subsets of random variables, as opposed to sums of independent random variables. In particular, he established that the class of linear inequalities for Shannon entropy and differential entropy are equivalent provided the inequalities are “balanced” in the following sense. 
For example, consider the following entropy inequalities for discrete random variables $X_{1}$ and $X_{2}$: $$\displaystyle H(X_{1})+H(X_{2})-H(X_{1},X_{2})\geq 0,$$ (12) $$\displaystyle H(X_{1},X_{2})-H(X_{1})\geq 0.$$ (13) The inequality (12) is said to be balanced because the sum of the coefficients of the entropy terms in which $X_{1}$ appears equals zero, and the same is true for $X_{2}$ as well. However, the inequality (13) is unbalanced because $X_{2}$ appears only in the first term. Though the notion of balancedness considered in [Cha03] is different from ours, the technique employed for extending the discrete entropy inequalities to the continuous case is similar to ours, i.e., through discretization of continuous random variables; however, as discussed before, the key argument is to show that the entropy of the sum of quantized random variables is asymptotically equal to that of the quantized sum, a difficulty which is not present in dealing with subsets of random variables. To deduce the discrete inequality from its continuous counterpart, the method in [Cha03] is to assume, without loss of generality, that the discrete random variables are integer-valued and use the fact that $H(A)=h(A+U)$ for any $\mathbb{Z}$-valued $A$ and $U$ independently and uniformly distributed on $[0,1]$. Clearly this method does not apply to sums of independent random variables.

1.4 Organization

The rest of the paper is organized as follows. Before giving the proof of the main results, in Section 2 we pause to discuss the open problem of determining the sharp constants in additive-combinatorial entropy inequalities and the implications of our results. The proofs of the main theorems are given in Sections 3 and 4, with the technical lemmas proved in Section 5. Following [KM14], the notion of differential entropy can be extended to locally compact groups by replacing the reference measure (Lebesgue) by the corresponding Haar measure.
In Section 6 we generalize Theorem 1 to random variables taking values in connected abelian Lie groups.

2 On sharp constants in additive-combinatorial entropy inequalities

The entropy inequalities (10) and (11) can be viewed as the information-theoretic analogs of the following additive-combinatorial inequality proved by Ruzsa [Ruz91]: For any finite $A\subset\mathbb{Z}^{n}$ (or any abelian group), $$\displaystyle\log\frac{|A-A|}{|A|}\leq 2\log\frac{|A+A|}{|A|}.$$ (14) The constant $2$ in (14) is known to be sharp (see [HRY99] or [Ruz09b, p. 107]). The crucial idea for the construction is to approximate cardinality by volume by considering the lattice points inside a convex body. In particular, for any convex body $K$ in $\mathbb{R}^{n}$, denote its quantized version $\left[K\right]_{L}\triangleq K\cap(\frac{1}{L}\mathbb{Z}^{n})$, where $L\in\mathbb{N}$. The sum and difference sets of $\left[K\right]_{L}$ are related to those of $K$ through $\left[K\pm K\right]_{L}=\left[K\right]_{L}\pm\left[K\right]_{L}$. If we fix the dimension $n$ and let $L\rightarrow\infty$, it is well known that the cardinality of $\left[K\right]_{L}$ is related to the volume of $K$ via $|[K]_{L}|=\text{vol}(K)L^{n}(1+o(1))$. Thus, $$\displaystyle\frac{|[K]_{L}\pm[K]_{L}|}{|[K]_{L}|}=\frac{\text{vol}(K\pm K)}{\text{vol}(K)}(1+o(1)).$$ A classical result of Rogers and Shephard [RS57] states that for any convex body $K\subset\mathbb{R}^{n}$, $\text{vol}(K-K)\leq{2n\choose n}\text{vol}(K)$ with equality if and only if $K$ is a simplex. Since $K$ is convex, $K+K=2K$ and thus $\text{vol}(K+K)=2^{n}\text{vol}(K)$.
Now taking $K$ to be the standard simplex $\Delta_{n}=\left\{x\in\mathbb{R}^{n}_{+}:\sum_{i=1}^{n}x_{i}\leq 1\right\}$, we obtain $$\displaystyle\frac{\log\frac{|[\Delta_{n}]_{L}-[\Delta_{n}]_{L}|}{|[\Delta_{n}]_{L}|}}{\log\frac{|[\Delta_{n}]_{L}+[\Delta_{n}]_{L}|}{|[\Delta_{n}]_{L}|}}=\frac{\log\frac{{2n\choose n}}{n!}-\log\frac{1}{n!}+o_{L}(1)}{\log\frac{2^{n}}{n!}-\log\frac{1}{n!}+o_{L}(1)}=\frac{\log\binom{2n}{n}+o_{L}(1)}{n\log 2+o_{L}(1)},$$ where we used $\text{vol}(\Delta_{n})=\frac{1}{n!},\text{vol}(\Delta_{n}-\Delta_{n})=\frac{1}{n!}{2n\choose n}$ and $\text{vol}(\Delta_{n}+\Delta_{n})=\frac{2^{n}}{n!}$. Sending $L\rightarrow\infty$ followed by $n\rightarrow\infty$ yields the sharpness of (14). Analogously, one can investigate the best possible constants in the Shannon entropy inequality (10) as well as its continuous analog (11). It is unclear if the constants $1/2$ and $2$ are the best possible. However, as a consequence of Theorem 1 and Theorem 2, one can establish that the sharp constants for the discrete and continuous versions are the same, and dimension-free (see Appendix A for a proof):

Proposition 1. For i.i.d. $U$ and $U^{\prime}$ and i.i.d. $X$ and $X^{\prime}$, $$\displaystyle\frac{1}{2}\leq$$ $$\displaystyle\inf_{U\in\mathbb{Z}^{n}}\frac{H(U-U^{\prime})-H(U)}{H(U+U^{\prime})-H(U)}=\inf_{X\in\mathbb{R}^{n}}\frac{h(X-X^{\prime})-h(X)}{h(X+X^{\prime})-h(X)}$$ $$\displaystyle\leq$$ $$\displaystyle\sup_{X\in\mathbb{R}^{n}}\frac{h(X-X^{\prime})-h(X)}{h(X+X^{\prime})-h(X)}=\sup_{U\in\mathbb{Z}^{n}}\frac{H(U-U^{\prime})-H(U)}{H(U+U^{\prime})-H(U)}\leq 2.$$ Furthermore, the infimum and the supremum are independent of the dimension $n$.
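The limiting ratio in the simplex construction above, $\log\binom{2n}{n}/(n\log 2)\to 2$, can be evaluated directly to see how fast the sharpness of (14) is approached:

```python
from math import comb, log

def simplex_ratio(n):
    """Limiting (L -> infinity) value of the doubling ratio for the standard
    simplex: log C(2n, n) / (n * log 2), which tends to 2 as n grows."""
    return log(comb(2 * n, n)) / (n * log(2))

for n in (1, 10, 100, 1000):
    print(n, round(simplex_ratio(n), 4))
```

By Stirling's approximation $\log\binom{2n}{n}=2n\log 2-\tfrac{1}{2}\log(\pi n)+O(1/n)$, so the convergence to 2 is only logarithmically slow in $n$.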
It is worth pointing out that the dimension-freeness of the best Shannon entropy ratio follows from standard arguments (tensorization and linear embedding of $\mathbb{Z}^{n}$ into $\mathbb{Z}$), which have been previously used for proving analogous results for set cardinalities [HRY99]; however, it is unclear how to directly prove that the ratio of differential entropies is dimension-independent without resorting to Theorem 1. In view of the success of continuous approximation in proving the sharpness of (14), proving the sharpness of (11) for differential entropies might be more tractable than its discrete counterpart (10). 3 Proof of Theorem 1 We first introduce the notation used throughout the paper. For $x\in\mathbb{R}$, let $\lfloor x\rfloor\triangleq\max\{k\in\mathbb{Z}:k\leq x\}$ and $\{x\}=x-{\left\lfloor{x}\right\rfloor}$ denote its integer and fractional parts, respectively. For any $k\in\mathbb{N}$, define $$\left[x\right]_{k}\triangleq\frac{\lfloor 2^{k}x\rfloor}{2^{k}},\quad\left\{x\right\}_{k}\triangleq\frac{\{2^{k}x\}}{2^{k}}.$$ (15) Hence, $$\displaystyle x=\frac{\lfloor 2^{k}x\rfloor}{2^{k}}+\frac{\{2^{k}x\}}{2^{k}}=\left[x\right]_{k}+\{x\}_{k}.$$ For $x\in\mathbb{R}^{d}$, $\left[x\right]_{k}$ and $\left\{x\right\}_{k}$ are defined similarly by applying the above operations componentwise. For $N>0$, denote the hypercube $B_{N}^{(d)}\triangleq\left[-N,N\right]^{d}$. For an $\mathbb{R}^{d}$-valued random variable $X$, let $X^{(N)}$ denote a random variable distributed according to the conditional distribution $P_{X|{X\in B_{N}^{(d)}}}$. If $X$ has a pdf $f_{X}$, then $X^{(N)}$ has the following pdf: $$\displaystyle f_{X^{(N)}}(x)=\frac{f_{X}(x)\mathbbm{1}\{x\in B_{N}^{(d)}\}}{\mathbb{P}[X\in B_{N}^{(d)}]}.$$ (16) The following lemma is the key step to proving Theorem 1. Lemma 1. 
Let $X_{1},\ldots,X_{m}$ be independent $\left[0,1\right]^{d}$-valued continuous random variables such that both $h\left(X_{j}\right)$ and $H\left(\lfloor X_{j}\rfloor\right)$ are finite for each $j\in\left[m\right]$. Then for any $a_{1},\ldots,a_{m}\in\mathbb{Z}$ that are relatively prime, $$\lim_{k\rightarrow\infty}\left(H\bigg{(}\left[\sum_{i=1}^{m}a_{i}X_{i}\right]_{k}\bigg{)}-H\bigg{(}\sum_{i=1}^{m}a_{i}\left[X_{i}\right]_{k}\bigg{)}\right)=0.$$ The next lemma allows us to focus on bounded random variables. Lemma 2 (Truncation). Let $X_{1},\ldots,X_{m}$ be independent $\mathbb{R}^{d}$-valued random variables and $a_{1},\ldots,a_{m}\in\mathbb{R}$. If each $X_{j}$ has an absolutely continuous distribution and $h(X_{j})$ is finite, then $$\displaystyle\lim_{N\to\infty}h\left(\sum_{j=1}^{m}a_{j}X_{j}^{(N)}\right)=h\left(\sum_{j=1}^{m}a_{j}X_{j}\right).$$ The following lemma is a particularization of [Rén59, Theorem 1] (see (7)) to the dyadic subsequence $m=2^{k}$: Lemma 3. For any $\mathbb{R}^{d}$-valued random variable $X$ with an absolutely continuous distribution such that both $H\left(\lfloor X\rfloor\right)$ and $h\left(X\right)$ are finite, $$\lim_{k\rightarrow\infty}\left(H\left(\left[X\right]_{k}\right)-dk\log 2\right)=h\left(X\right).$$ We are now ready to prove Theorem 1. Proof. We start by considering the case where $X_{j}\in\left[0,1\right]^{d}$ for each $j\in\left[m\right]$. Since $X_{j}$’s are independent and $2^{k}\left[X_{j}\right]_{k}$ is $\mathbb{Z}^{d}$-valued for each $j\in\left[m\right]$, by assumption, $$\displaystyle\sum_{i=1}^{n}\alpha_{i}H\left(\sum_{j=1}^{m}a_{ij}\left[X_{j}\right]_{k}\right)\leq 0$$ (17) holds where $$\displaystyle\sum_{i=1}^{n}\alpha_{i}=0.$$ (18) By Lemma 3, $H\left(\left[X\right]_{k}\right)=dk\log 2+h\left(X\right)+o_{k}(1)$. 
Thus, $$\displaystyle h\left(\sum_{j=1}^{m}a_{ij}X_{j}\right)+dk\log 2+o_{k}(1)$$ $$\displaystyle=H\left(\left[\sum_{j=1}^{m}a_{ij}X_{j}\right]_{k}\right)$$ $$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}H\left(\sum_{j=1}^{m}a_{ij}\left[X_{j}\right]_{k}\right)+o_{k}(1),$$ where (a) follows from Lemma 1. Multiplying both sides by $\alpha_{i}$ and summing over $i$, and in view of (18), we have $$\displaystyle\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}X_{j}\right)+o_{k}(1)$$ $$\displaystyle=\sum_{i=1}^{n}\alpha_{i}H\left(\sum_{j=1}^{m}a_{ij}\left[X_{j}\right]_{k}\right).$$ By (17), sending $k$ to infinity yields the desired result. For the general case where $X_{j}\in\mathbb{R}^{d}$, let $Y_{i}=\sum_{j=1}^{m}a_{ij}X_{j}$ for $i\in\left[n\right]$. Let $\tilde{X}_{j}^{(N)}\triangleq\frac{X_{j}^{(N)}+N}{2N}$, which belongs to $\left[0,1\right]^{d}$. Thus, $$\displaystyle\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}\tilde{X}_{j}^{(N)}\right)$$ $$\displaystyle=\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}X_{j}^{(N)}\right)+\sum_{i=1}^{n}\alpha_{i}\cdot\log\left(\frac{1}{2N}\right)^{d}$$ $$\displaystyle=\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}{X}_{j}^{(N)}\right),$$ (19) where (19) follows from (18). Hence, $$\displaystyle\sum_{i=1}^{n}\alpha_{i}h\left(Y_{i}\right)$$ $$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}\lim_{N\rightarrow\infty}\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}X_{j}^{(N)}\right)$$ $$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\lim_{N\rightarrow\infty}\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}\tilde{X}_{j}^{(N)}\right)$$ $$\displaystyle\stackrel{{\scriptstyle(c)}}{{\leq}}0,$$ where $(a)$ follows from Lemma 2, $(b)$ follows from (19), and $(c)$ follows from the earlier result for $\left[0,1\right]^{d}$-valued random variables. The proof of Theorem 1 is now complete. ∎ 4 Proof of Theorem 2 Theorem 2 relies on the following two lemmas. 
The first result is a well-known asymptotic expansion of the differential entropy of a discrete random variable contaminated by weak additive noise. For completeness, we provide a short proof in Section 5.3. Lemma 4. Let $U$ be a discrete $\mathbb{R}^{d}$-valued random variable such that $H(U)<\infty$ and $Z$ be an $\mathbb{R}^{d}$-valued continuous random variable with $h(Z)>-\infty$. If $U$ and $Z$ are independent, then $$\displaystyle h(U+\varepsilon Z)=h(Z)+\log\varepsilon+H(U)+o_{\varepsilon}(1).$$ The following lemma, proved in Section 5.4, allows us to blow up the Shannon entropy of linear combinations of discrete random variables arbitrarily. Lemma 5. Let $U_{1},\ldots,U_{m}$ be $\mathbb{R}^{d}$-valued discrete random variables. Let $k\in\mathbb{N}$. Then for any $A=(a_{ij})\in\mathbb{R}^{n\times m}$, there exist $\mathbb{R}^{d}$-valued discrete random variables $U_{1}^{(k)},\ldots,U_{m}^{(k)}$ such that $$\displaystyle H\left(\sum_{j=1}^{m}a_{ij}U_{j}^{(k)}\right)=kH\left(\sum_{j=1}^{m}a_{ij}U_{j}\right),\forall i\in[n].$$ We now prove Theorem 2. Proof. Let $Z_{j}$ be independent $\mathbb{R}^{d}$-valued Gaussian random variables with zero mean and $U_{1},\ldots,U_{m}$ be independent $\mathbb{R}^{d}$-valued discrete random variables. Let $U_{1}^{(k)},\ldots,U_{m}^{(k)}$ be independent $\mathbb{R}^{d}$-valued discrete random variables such that $H\left(\sum_{j=1}^{m}a_{ij}U_{j}^{(k)}\right)=kH\left(\sum_{j=1}^{m}a_{ij}U_{j}\right)$ for each $i\in[n]$, guaranteed by Lemma 5. Let $\varepsilon>0$. For each $j\in[m]$, let $X_{j}=U_{j}^{(k)}+\varepsilon Z_{j}$. 
Then we have $$\displaystyle h\left(X_{j}\right)=H(U_{j}^{(k)})+h(Z_{j})+\log\varepsilon+o_{\varepsilon}(1).$$ Hence, for each $i\in[n]$, $$\displaystyle h\left(\sum_{j=1}^{m}a_{ij}X_{j}\right)$$ $$\displaystyle=h\left(\sum_{j=1}^{m}a_{ij}U_{j}^{(k)}+\varepsilon\sum_{j=1}^{m}a_{ij}Z_{j}\right)$$ $$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}H\left(\sum_{j=1}^{m}a_{ij}U_{j}^{(k)}\right)+h\left(\sum_{j=1}^{m}a_{ij}Z_{j}\right)+\log\varepsilon+o_{\varepsilon}(1)$$ $$\displaystyle=kH\left(\sum_{j=1}^{m}a_{ij}U_{j}\right)+h\left(\sum_{j=1}^{m}a_{ij}Z_{j}\right)+\log\varepsilon+o_{\varepsilon}(1),$$ where $(a)$ follows from Lemma 4. Since the $X_{j}$’s are independent, by assumption, $\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}X_{j}\right)\leq 0$ where $\sum_{i=1}^{n}\alpha_{i}=0$. Hence, $$\displaystyle k\sum_{i=1}^{n}\alpha_{i}H\left(\sum_{j=1}^{m}a_{ij}U_{j}\right)+\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}Z_{j}\right)+o_{\varepsilon}(1)\leq 0.$$ Thus, $$\displaystyle\sum_{i=1}^{n}\alpha_{i}H\left(\sum_{j=1}^{m}a_{ij}U_{j}\right)+\frac{\sum_{i=1}^{n}\alpha_{i}h\left(\sum_{j=1}^{m}a_{ij}Z_{j}\right)}{k}+\frac{o_{\varepsilon}(1)}{k}\leq 0.$$ The proof is completed by letting $\varepsilon\rightarrow 0$ followed by $k\rightarrow\infty$. ∎ 5 Proofs of lemmas 5.1 Proof of Lemma 1 Let $a_{1},\ldots,a_{m}\in\mathbb{Z}$ and $X_{1},\ldots,X_{m}$ be $\mathbb{R}^{d}$-valued random variables. 
Then $$\displaystyle\left[\sum_{i=1}^{m}a_{i}X_{i}\right]_{k}$$ $$\displaystyle=\frac{\Big{\lfloor}2^{k}\sum_{i=1}^{m}a_{i}X_{i}\Big{\rfloor}}{2^{k}}=\frac{\sum_{i=1}^{m}a_{i}\lfloor 2^{k}X_{i}\rfloor+\big{\lfloor}\sum_{i=1}^{m}a_{i}\{2^{k}X_{i}\}\big{\rfloor}}{2^{k}}$$ $$\displaystyle=\sum_{i=1}^{m}a_{i}[X_{i}]_{k}+\frac{\lfloor\sum_{i=1}^{m}a_{i}\{2^{k}X_{i}\}\rfloor}{2^{k}},$$ where the second equality holds because $\sum_{i=1}^{m}a_{i}\lfloor 2^{k}X_{i}\rfloor$ is integer-valued. Define $$\displaystyle A_{k}\triangleq 2^{k}\left[\sum_{i=1}^{m}a_{i}X_{i}\right]_{k},\quad B_{k}\triangleq 2^{k}\sum_{i=1}^{m}a_{i}\left[X_{i}\right]_{k},\quad Z_{k}\triangleq\Bigg{\lfloor}\sum_{i=1}^{m}a_{i}\{2^{k}X_{i}\}\Bigg{\rfloor}.$$ It is easy to see that $A_{k},B_{k},Z_{k}\in\mathbb{Z}^{d}$ and $A_{k}=B_{k}+Z_{k}$. Since $\{2^{k}X\}\in[0,1)^{d}$, each component of $Z_{k}$ takes integer values in the set $a_{1}[0,1)+\ldots+a_{m}[0,1)$ and hence $Z_{k}\in\mathcal{Z}\triangleq\{a,a+1,\ldots,b-1\}^{d}$, where $b\triangleq\sum_{i=1}^{m}a_{i}\mathbbm{1}_{\{a_{i}>0\}}$ and $a\triangleq\sum_{i=1}^{m}a_{i}\mathbbm{1}_{\{a_{i}<0\}}$. Hence $Z_{k}$ takes at most $(b-a)^{d}$ values, which is bounded for all $k$. Next we describe the outline of the proof: 1. The goal is to prove $|H(A_{k})-H(B_{k})|\to 0$. Since $A_{k}=B_{k}+Z_{k}$, we have $$\displaystyle H\left(A_{k}\right)-H\left(B_{k}\right)=I\left(Z_{k};A_{k}\right)-I\left(Z_{k};B_{k}\right).$$ (20) Hence it suffices to show that both mutual informations vanish as $k\to\infty$. 2. Lemma 9 proves $I\left(Z_{k};B_{k}\right)\to 0$ based on the data processing inequality and Lemma 6, which asserts asymptotic independence between the integer part $\lfloor 2^{k}X\rfloor$ and the fractional part $\{2^{k}X\}$, in the sense of vanishing mutual information. As will be evident in the proof of Lemma 6, this is a direct consequence of Rényi’s result (Lemma 3). 3. 
Since $Z_{k}$ takes a bounded number of values, $I(Z_{k};A_{k})\to 0$ is equivalent to the vanishing of the total variation between $P_{Z_{k},A_{k}}$ and $P_{Z_{k}}\otimes P_{A_{k}}$, known as the $T$-information [Csi96, PW16]. By the triangle inequality and data processing inequality for the total variation, this objective is further reduced to proving the convergence of two pairs of conditional distributions in total variation: one is implied by Pinsker’s inequality and Lemma 9, and the other one follows from an elementary fact on the total variation between a pdf and a small shift of itself (Lemma 8). Lemma 10 contains the full proof; notably, the argument crucially depends on the assumption that $a_{1},\ldots,a_{m}$ are relatively prime. We start with the following auxiliary result. Lemma 6. Let $X$ be a $\left[0,1\right]^{d}$-valued continuous random variable such that both $h\left(X\right)$ and $H\left(\lfloor X\rfloor\right)$ are finite. Then $$\displaystyle\lim_{k\rightarrow\infty}I(\lfloor 2^{k}X\rfloor;\{2^{k}X\})=0.$$ Proof. Since $X\in[0,1]^{d}$, we can write $X$ in terms of its binary expansion as: $$X=\sum_{i\geq 1}X_{i}2^{-i},X_{i}\in\{0,1\}^{d}.$$ In other words, $\lfloor 2^{k}X\rfloor=2^{k-1}X_{1}+\ldots+X_{k}$. Thus, $\lfloor 2^{k}X\rfloor$ and $\left(X_{1},\ldots,X_{k}\right)$ are in a one-to-one correspondence and so are $\{2^{k}X\}$ and $\left(X_{k+1},\ldots\right)$. So, $$\displaystyle I(\lfloor 2^{k}X\rfloor;\{2^{k}X\})$$ $$\displaystyle=I(X_{1}^{k};X_{k+1}^{\infty})\triangleq I\left(X_{1},\ldots,X_{k};X_{k+1},\ldots\right).$$ Then $I\left(X_{1}^{k};X_{k+1}^{\infty}\right)=\lim_{m\rightarrow\infty}I(X_{1}^{k};X_{k+1}^{k+m})$; cf. [PW15, Section 3.5]. Let $a_{k}\triangleq H\left(X_{1}^{k}\right)-dk\log 2-h\left(X\right)$. Then Lemma 3 implies $\lim_{k\rightarrow\infty}a_{k}=0$. 
Hence for each $k,m\geq 1$, we have $$\displaystyle I(X_{1}^{k};X_{k+1}^{k+m})$$ $$\displaystyle=H(X_{1}^{k})+H(X_{k+1}^{k+m})-H(X_{1}^{k+m})$$ $$\displaystyle=h(X)+dk\log 2+a_{k}-(h(X)+d(k+m)\log 2+a_{k+m})+H(X_{k+1}^{k+m})$$ $$\displaystyle=a_{k}-a_{k+m}+H(X_{k+1}^{k+m})-md\log 2$$ $$\displaystyle\leq a_{k}-a_{k+m},$$ (21) where (21) follows from the fact that $X_{k+1}^{k+m}$ can take only $2^{md}$ values. Since $I(X_{1}^{k};X_{k+1}^{k+m})\geq 0$, by (21), sending $m\rightarrow\infty$ first and then $k\rightarrow\infty$ completes the proof. ∎ Recall that the total variation distance between probability distributions $\mu$ and $\nu$ is defined as: $$\displaystyle d_{\mathrm{TV}}\left(\mu,\nu\right)\triangleq\sup_{F}|\mu(F)-\nu(F)|,$$ where the supremum is taken over all measurable sets $F$. Lemma 7. Let $X,Y,Z$ be random variables such that $Z=f\left(X\right)=f\left(Y\right)$, for some measurable function $f$. Then for any measurable $E$ such that $\mathbb{P}\left[Z\in E\right]>0$, $$d_{\mathrm{TV}}\left(P_{X|Z\in E},P_{Y|Z\in E}\right)\leq\frac{d_{\mathrm{TV}}\left(P_{X},P_{Y}\right)}{\mathbb{P}\left[Z\in E\right]}.$$ Proof. For any measurable $F$, $$\displaystyle\left|P_{X\in F|Z\in E}-P_{Y\in F|Z\in E}\right|=\frac{\left|\mathbb{P}\left[X\in F,f\left(X\right)\in E\right]-\mathbb{P}\left[Y\in F,f\left(Y\right)\in E\right]\right|}{\mathbb{P}\left[Z\in E\right]}\leq\frac{d_{\mathrm{TV}}\left(P_{X},P_{Y}\right)}{\mathbb{P}\left[Z\in E\right]}.$$ The claim now follows from taking supremum over all $F$. ∎ Lemma 8. If $X$ is a $\mathbb{R}$-valued continuous random variable, then: $$\displaystyle d_{\mathrm{TV}}(P_{X},P_{X+a})\rightarrow 0\mbox{ as }a\rightarrow 0.$$ Proof. Let $f$ be the pdf of $X$. Since continuous functions with compact support are dense in $\mathcal{L}^{1}(\mathbb{R})$, for any $\varepsilon>0$, there exists a continuous and compactly supported function $g$ such that $\|f-g\|_{1}<\frac{\varepsilon}{3}$. 
Because of the uniform continuity of continuous functions on compact sets, there exists a $\delta>0$ such that, whenever $|a|<\delta$, $\|g(\cdot+a)-g(\cdot)\|_{1}<\frac{\varepsilon}{3}$. Hence $\|f(\cdot+a)-f(\cdot)\|_{1}<2\|f(\cdot)-g(\cdot)\|_{1}+\|g(\cdot+a)-g(\cdot)\|_{1}<\varepsilon$. Hence the claim follows. ∎ Lemma 9. If $X_{1},\ldots,X_{m}$ are independent $\left[0,1\right]^{d}$-valued continuous random variables such that both $h\left(X_{j}\right)$ and $H\left(\lfloor X_{j}\rfloor\right)$ are finite for each $j\in\left[m\right]$, then $$\lim_{k\rightarrow\infty}I\left(Z_{k};B_{k}\right)=0.$$ Proof. We have $$\displaystyle I(Z_{k};B_{k})$$ $$\displaystyle=I\Big{(}\Big{\lfloor}\sum_{i=1}^{m}a_{i}\{2^{k}X_{i}\}\Big{\rfloor};\sum_{i=1}^{m}a_{i}\lfloor 2^{k}X_{i}\rfloor\Big{)}$$ $$\displaystyle=I\Big{(}\Big{\lfloor}\sum_{i=1}^{m}a_{i}\{2^{k}X_{i}\}\Big{\rfloor};\Big{\lfloor}\sum_{i=1}^{m}a_{i}\lfloor 2^{k}X_{i}\rfloor\Big{\rfloor}\Big{)}$$ $$\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}I\big{(}a_{1}\{2^{k}X_{1}\},\ldots,a_{m}\{2^{k}X_{m}\};a_{1}\lfloor 2^{k}X_{1}\rfloor,\ldots,a_{m}\lfloor 2^{k}X_{m}\rfloor\big{)}$$ $$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\sum_{i=1}^{m}I(\{2^{k}X_{i}\};\lfloor 2^{k}X_{i}\rfloor),$$ where $(a)$ follows from the data processing inequality and $(b)$ follows from the fact that $X_{1},\ldots,X_{m}$ are independent. Applying Lemma 6 to each $X_{i}$ finishes the proof. ∎ In view of (20), Lemma 1 follows from Lemma 9 and the next lemma: Lemma 10. Under the assumptions of Lemma 9 and if $a_{1},\ldots,a_{m}\in\mathbb{Z}$ are relatively prime, $$\lim_{k\rightarrow\infty}I(Z_{k};A_{k})=0.$$ Proof. 
Define the $T$-information between two random variables $X$ and $Y$ as follows: $$T(X;Y)\triangleq d_{\mathrm{TV}}(P_{XY},P_{X}P_{Y}).$$ By [PW16, Proposition 12], if a random variable $W$ takes values in a finite set $\mathcal{W}$, then $$\displaystyle I(W;Y)\leq\log(|\mathcal{W}|-1)T(W;Y)+h(T(W;Y)),$$ (22) where $h(x)=x\log\frac{1}{x}+(1-x)\log\frac{1}{1-x}$ is the binary entropy function. Since $Z_{k}$ takes at most $\left(b-a\right)^{d}$ values, by (22), it suffices to prove that $\lim_{k\rightarrow\infty}T(Z_{k};A_{k})=0$. It is well-known that the uniform fine quantization error of a continuous random variable converges to the uniform distribution (see, e.g., [JWW07, Theorem 4.1]). Therefore $\{2^{k}X_{i}\}\xrightarrow{\mathcal{L}}\mathrm{Unif}[0,1]^{d}$ for each $i\in[m]$. Furthermore, since $X_{i}$ are independent, $Z_{k}=\lfloor\sum_{i=1}^{m}a_{i}\{2^{k}X_{i}\}\rfloor\xrightarrow{\mathcal{L}}\lfloor\sum_{i=1}^{m}a_{i}U_{i}\rfloor$ where $U_{1},\dotsc,U_{m}$ are i.i.d. $\mathrm{Unif}[0,1]^{d}$ random variables. Let $\mathcal{Z}^{\prime}\triangleq\{z\in\mathcal{Z}:\mathbb{P}\left[\lfloor\sum_{i=1}^{m}a_{i}U_{i}\rfloor=z\right]>0\}$. Since $Z_{k}\xrightarrow{\mathcal{L}}\lfloor\sum_{i=1}^{m}a_{i}U_{i}\rfloor$, $\lim_{k\rightarrow\infty}\mathbb{P}\left[Z_{k}=z\right]>0$ for any $z\in\mathcal{Z}^{\prime}$ and $\lim_{k\rightarrow\infty}\mathbb{P}\left[Z_{k}=z\right]=0$ for any $z\in\mathcal{Z}\backslash\mathcal{Z}^{\prime}$. Since $$\displaystyle T(Z_{k};A_{k})$$ $$\displaystyle=\sum_{z\in\mathcal{Z}}\mathbb{P}\left[Z_{k}=z\right]d_{\mathrm{TV}}(P_{A_{k}},P_{A_{k}|Z_{k}=z})$$ $$\displaystyle\leq\sum_{z\in\mathcal{Z}^{\prime}}d_{\mathrm{TV}}(P_{A_{k}},P_{A_{k}|Z_{k}=z})+\sum_{z\in\mathcal{Z}\backslash\mathcal{Z}^{\prime}}\mathbb{P}\left[Z_{k}=z\right],$$ it suffices to prove that $d_{\mathrm{TV}}(P_{A_{k}},P_{A_{k}|Z_{k}=z})\to 0$ for any $z\in\mathcal{Z}^{\prime}$. 
Using the triangle inequality and the fact that $P_{A_{k}}=\sum_{z^{\prime}\in\mathcal{Z}}\mathbb{P}\left[Z_{k}=z^{\prime}\right]P_{A_{k}|Z_{k}=z^{\prime}}$, we have $$\displaystyle d_{\mathrm{TV}}(P_{A_{k}},P_{A_{k}|Z_{k}=z})$$ $$\displaystyle\leq\sum_{z^{\prime}\in\mathcal{Z}}\mathbb{P}\left[Z_{k}=z^{\prime}\right]d_{\mathrm{TV}}(P_{A_{k}|Z_{k}=z},P_{A_{k}|Z_{k}=z^{\prime}})$$ $$\displaystyle\leq\sum_{z^{\prime}\in\mathcal{Z}^{\prime}}d_{\mathrm{TV}}(P_{A_{k}|Z_{k}=z},P_{A_{k}|Z_{k}=z^{\prime}})+\sum_{z\in\mathcal{Z}\backslash\mathcal{Z}^{\prime}}\mathbb{P}\left[Z_{k}=z\right].$$ Thus it suffices to show that $d_{\mathrm{TV}}(P_{A_{k}|Z_{k}=z},P_{A_{k}|Z_{k}=z^{\prime}})\to 0$ for any $z,z^{\prime}\in\mathcal{Z}^{\prime}$. Since $A_{k}=B_{k}+Z_{k}$, we have $$\displaystyle d_{\mathrm{TV}}(P_{A_{k}|Z_{k}=z},P_{A_{k}|Z_{k}=z^{\prime}})$$ $$\displaystyle=d_{\mathrm{TV}}(P_{B_{k}+Z_{k}|Z_{k}=z},P_{B_{k}+Z_{k}|Z_{k}=z^{\prime}})$$ $$\displaystyle=d_{\mathrm{TV}}(P_{B_{k}+z|Z_{k}=z},P_{B_{k}+z^{\prime}|Z_{k}=z^{\prime}})$$ $$\displaystyle\leq d_{\mathrm{TV}}(P_{B_{k}+z|Z_{k}=z},P_{B_{k}+z|Z_{k}=z^{\prime}})+d_{\mathrm{TV}}(P_{B_{k}+z|Z_{k}=z^{\prime}},P_{B_{k}+z^{\prime}|Z_{k}=z^{\prime}})$$ $$\displaystyle=d_{\mathrm{TV}}(P_{B_{k}|Z_{k}=z},P_{B_{k}|Z_{k}=z^{\prime}})+d_{\mathrm{TV}}(P_{B_{k}+z|Z_{k}=z^{\prime}},P_{B_{k}+z^{\prime}|Z_{k}=z^{\prime}}).$$ (23) Thus it suffices to prove that each term on the right-hand side of (23) vanishes. 
For the first term, note that $$\displaystyle d_{\mathrm{TV}}(P_{B_{k}|Z_{k}=z},P_{B_{k}|Z_{k}=z^{\prime}})\leq d_{\mathrm{TV}}(P_{B_{k}|Z_{k}=z},P_{B_{k}})+d_{\mathrm{TV}}(P_{B_{k}|Z_{k}=z^{\prime}},P_{B_{k}}),$$ where $d_{\mathrm{TV}}(P_{B_{k}|Z_{k}=z},P_{B_{k}})\to 0$ for any $z\in\mathcal{Z}^{\prime}$ because, by Pinsker’s inequality, $$\displaystyle I(Z_{k};B_{k})$$ $$\displaystyle=\sum_{z\in\mathcal{Z}}\mathbb{P}\left[Z_{k}=z\right]D(P_{B_{k}|Z_{k}=z}\|P_{B_{k}})$$ $$\displaystyle\geq 2\sum_{z\in\mathcal{Z}}\mathbb{P}\left[Z_{k}=z\right]d_{\mathrm{TV}}^{2}(P_{B_{k}},P_{B_{k}|Z_{k}=z})$$ $$\displaystyle\geq 2\mathbb{P}\left[Z_{k}=z\right]d_{\mathrm{TV}}^{2}(P_{B_{k}},P_{B_{k}|Z_{k}=z}),$$ and $I(Z_{k};B_{k})\to 0$ by Lemma 9 and $\liminf_{k\rightarrow\infty}\mathbb{P}\left[Z_{k}=z\right]>0$ for any $z\in\mathcal{Z}^{\prime}$. Thus it remains to prove that the second term on the right-hand side of (23) vanishes for any $z,z^{\prime}\in\mathcal{Z}^{\prime}$. Since $a_{1},\ldots,a_{m}$ are relatively prime, for any $p\in\mathbb{Z}$, there exist $q_{1},\ldots,q_{m}\in\mathbb{Z}$ such that $p=\sum_{i=1}^{m}a_{i}q_{i}$. Hence, for any $z,z^{\prime}\in\mathbb{Z}^{d}$, there exist $b_{1},\ldots,b_{m}\in\mathbb{Z}^{d}$ such that $$z^{\prime}-z=\sum_{i=1}^{m}a_{i}b_{i}.$$ Then, $$\displaystyle B_{k}+\left(z^{\prime}-z\right)$$ $$\displaystyle=\sum_{i=1}^{m}a_{i}\lfloor 2^{k}X_{i}\rfloor+\sum_{i=1}^{m}a_{i}b_{i}=\sum_{i=1}^{m}a_{i}\Big{\lfloor}2^{k}(X_{i}+\frac{b_{i}}{2^{k}})\Big{\rfloor}.$$ By definition, $Z_{k}=\lfloor\sum_{i=1}^{m}a_{i}\{2^{k}X_{i}\}\rfloor=\lfloor\sum_{i=1}^{m}a_{i}\{2^{k}(X_{i}+\frac{b_{i}}{2^{k}})\}\rfloor$. Consider the second term on the right-hand side of (23). 
We have $$\displaystyle d_{\mathrm{TV}}(P_{B_{k}+z|Z_{k}=z^{\prime}},P_{B_{k}+z^{\prime}|Z_{k}=z^{\prime}})$$ $$\displaystyle=d_{\mathrm{TV}}(P_{B_{k}+\left(z^{\prime}-z\right)|Z_{k}=z^{\prime}},P_{B_{k}|Z_{k}=z^{\prime}})$$ $$\displaystyle=d_{\mathrm{TV}}\big{(}P_{\sum_{i=1}^{m}a_{i}\lfloor 2^{k}(X_{i}+\frac{b_{i}}{2^{k}})\rfloor|Z_{k}=z^{\prime}},P_{\sum_{i=1}^{m}a_{i}\lfloor 2^{k}X_{i}\rfloor|Z_{k}=z^{\prime}}\big{)}$$ $$\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}d_{\mathrm{TV}}(P_{X_{1}+\frac{b_{1}}{2^{k}},\ldots,X_{m}+\frac{b_{m}}{2^{k}}|Z_{k}=z^{\prime}},P_{X_{1},\ldots,X_{m}|Z_{k}=z^{\prime}})$$ $$\displaystyle\stackrel{{\scriptstyle(b)}}{{\leq}}\frac{1}{\mathbb{P}\left[Z_{k}=z^{\prime}\right]}d_{\mathrm{TV}}(P_{X_{1}+\frac{b_{1}}{2^{k}},\ldots,X_{m}+\frac{b_{m}}{2^{k}}},P_{X_{1},\ldots,X_{m}})$$ $$\displaystyle\stackrel{{\scriptstyle(c)}}{{\leq}}\frac{1}{\mathbb{P}\left[Z_{k}=z^{\prime}\right]}\sum_{i=1}^{m}d_{\mathrm{TV}}(P_{X_{i}+\frac{b_{i}}{2^{k}}},P_{X_{i}}),$$ where $(a)$ follows from the data processing inequality for total variation, $(b)$ follows from Lemma 7, and $(c)$ follows from the independence of $X_{1},\ldots,X_{m}$. Letting $k\rightarrow\infty$ in view of Lemma 8 finishes the proof. ∎ 5.2 Proof of Lemma 2 Proof. Let $X_{1},\ldots,X_{m}$ be independent $\mathbb{R}^{d}$-valued continuous random variables. Without loss of generality, we may assume $a_{i}\neq 0$. For each $i\in\left[m\right]$, $\mathbb{P}\left[X_{i}\in B_{N}^{(d)}\right]\xrightarrow{N\rightarrow\infty}1$. Recall the conditional pdf notation (16). 
For $x\in\mathbb{R}^{d}$, we have $$\displaystyle f_{a_{i}X_{i}^{(N)}}(x)=\frac{1}{|a_{i}|}f_{X_{i}^{(N)}}\left(\frac{x}{a_{i}}\right)$$ $$\displaystyle=\dfrac{\frac{1}{|a_{i}|}f_{X_{i}}\left(\frac{x}{a_{i}}\right)\mathbbm{1}\left\{\frac{x}{|a_{i}|}\in B_{N}^{(d)}\right\}}{\mathbb{P}\left[X_{i}\in B_{N}^{(d)}\right]}=\dfrac{f_{a_{i}X_{i}}(x)\mathbbm{1}\left\{\frac{x}{|a_{i}|}\in B_{N}^{(d)}\right\}}{\mathbb{P}\left[X_{i}\in B_{N}^{(d)}\right]}.$$ (24) By the independence of the $X_{i}$’s, the pdf of $\sum_{i=1}^{m}a_{i}X_{i}$ is given by: $$\displaystyle g(z)$$ $$\displaystyle\triangleq f_{a_{1}X_{1}+\ldots+a_{m}X_{m}}(z)$$ $$\displaystyle=\int_{\mathbb{R}^{d}\times\cdots\times\mathbb{R}^{d}}f_{a_{1}X_{1}}\left(x_{1}\right)\ldots f_{a_{m}X_{m}}\left(z-x_{1}-\ldots-x_{m-1}\right)dx_{1}\cdots dx_{m-1}.$$ Similarly, in view of (24), the pdf of $\sum_{i=1}^{m}a_{i}X_{i}^{(N)}$ is given by: $$\displaystyle g_{N}(z)$$ $$\displaystyle\triangleq f_{a_{1}X_{1}^{(N)}+\ldots+a_{m}X_{m}^{(N)}}(z)$$ $$\displaystyle=\int f_{a_{1}X_{1}^{(N)}}\left(x_{1}\right)\ldots f_{a_{m}X_{m}^{(N)}}\left(z-x_{1}-\ldots-x_{m-1}\right)dx_{1}\ldots dx_{m-1}$$ $$\displaystyle=\frac{1}{\prod_{i=1}^{m}\mathbb{P}\left[X_{i}\in B_{N}^{(d)}\right]}\cdot\int f_{a_{1}X_{1}}\left(x_{1}\right)\ldots f_{a_{m}X_{m}}\left(z-x_{1}-\ldots-x_{m-1}\right)$$ $$\displaystyle\quad\hskip 73.97733pt\cdot\mathbbm{1}\left\{\frac{x_{1}}{|a_{1}|}\in B_{N}^{(d)},\ldots,\frac{z-x_{1}-\ldots-x_{m-1}}{|a_{m}|}\in B_{N}^{(d)}\right\}dx_{1}\ldots dx_{m-1}.$$ Now taking the limit on both sides, we have $\lim_{N\rightarrow\infty}g_{N}(z)=g(z)$ a.e., which follows from the dominated convergence theorem and the fact that $g(z)$ is finite a.e. Next we prove that the differential entropy also converges. Let $N_{0}\in\mathbb{N}$ be so large that $$\prod_{i=1}^{m}\mathbb{P}\left[X_{i}\in B_{N}^{(d)}\right]\geq\frac{1}{2}$$ for all $N\geq N_{0}$. 
Now, $$\displaystyle\left|h\left(\sum_{j=1}^{m}a_{j}X_{j}\right)-h\left(\sum_{j=1}^{m}a_{j}X_{j}^{(N)}\right)\right|$$ $$\displaystyle=\left|\int_{\mathbb{R}^{d}}g\log\frac{1}{g}-\int_{\mathbb{R}^{d}}g_{N}\log\frac{1}{g_{N}}\right|$$ $$\displaystyle\leq\int g_{N}\log\frac{g_{N}}{g}+\int\left|\left(g-g_{N}\right)\log\frac{1}{g}\right|$$ $$\displaystyle=D\left(P_{\sum_{i=1}^{m}a_{i}X_{i}^{(N)}}\|P_{\sum_{i=1}^{m}a_{i}X_{i}}\right)+\int\left|\left(g-g_{N}\right)\log g\right|$$ $$\displaystyle\stackrel{{\scriptstyle(a)}}{{\leq}}\sum_{i=1}^{m}D\left(P_{X_{i}^{(N)}}\|P_{X_{i}}\right)+\int\left|\left(g-g_{N}\right)\log g\right|$$ $$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\log\frac{1}{\prod_{i=1}^{m}\mathbb{P}\left[X_{i}\in B_{N}^{(d)}\right]}+\int\left|\left(g-g_{N}\right)\log g\right|$$ $$\displaystyle\stackrel{{\scriptstyle(c)}}{{\to}}0\mbox{ as }{N\rightarrow\infty},$$ where $(a)$ follows from the data processing inequality and $(b)$ is due to $D\left(P_{X|X\in E}\|P_{X}\right)=\log\frac{1}{\mathbb{P}\left[X\in E\right]}$, and $(c)$ follows from the dominated convergence theorem since $\left|\left(g-g_{N}\right)\log g\right|\leq 3g\left|\log g\right|$ for all $N\geq N_{0}$ and $\int g\left|\log g\right|<\infty$ by assumption. This completes the proof. ∎ 5.3 Proof of Lemma 4 Proof. In view of the concavity and shift-invariance of the differential entropy, without loss of generality, we may assume that $h(Z)<\infty$. Since $U$ and $Z$ are independent, we have $$\displaystyle I\left(U;U+\varepsilon Z\right)$$ $$\displaystyle=h\left(U+\varepsilon Z\right)-h\left(U+\varepsilon Z|U\right)=h\left(U+\varepsilon Z\right)-h(Z)-\log\varepsilon.$$ Hence it suffices to show that $\lim_{\varepsilon\rightarrow 0}I(U;U+\varepsilon Z)=H(U)$. Notice that $I(U;U+\varepsilon Z)\leq H(U)$ for all $\varepsilon$. 
On the other hand, $(U,U+\varepsilon Z)\xrightarrow{\mathcal{L}}(U,U)$ and $U+\varepsilon Z\xrightarrow{\mathcal{L}}U$ in distribution, by the continuity of the characteristic function. By the weak lower semicontinuity of the divergence, we have $$\displaystyle\liminf_{\varepsilon\rightarrow 0}I(U;U+\varepsilon Z)$$ $$\displaystyle=\liminf_{\varepsilon\rightarrow 0}D\left(P_{U,U+\varepsilon Z}\|P_{U}P_{U+\varepsilon Z}\right)$$ $$\displaystyle\geq D\left(P_{U,U}\|P_{U}P_{U}\right)=H(U),$$ completing the proof. ∎ 5.4 Proof of Lemma 5 Proof. For any $\mathbb{R}^{d}$-valued discrete random variable $U$, let $U_{[k]}\triangleq\left(U_{(1)},\ldots,U_{(k)}\right)$, where $U_{(i)}$ are i.i.d. copies of $U$. Thus $H\left(U_{[k]}\right)=kH(U)$ and $\sum_{j=1}^{m}b_{j}(U_{j})_{[k]}=\left(\sum_{j=1}^{m}b_{j}U_{j}\right)_{[k]}$ for any $b_{1},\ldots,b_{m}\in\mathbb{R}$ and any discrete random variables $U_{1},\ldots,U_{m}\in\mathbb{R}^{d}$. Let $U_{1},\ldots,U_{m}$ be $\mathbb{R}^{d}$-valued discrete random variables and $A=(a_{ij})\in\mathbb{R}^{n\times m}$. Let $\mathcal{U}\subset\mathbb{R}^{d}$ be a countable set such that $\sum_{j=1}^{m}a_{ij}U_{j}\in\mathcal{U}$ for each $i\in[n]$. Let $f_{M}:\mathbb{R}^{d\times k}\to\mathbb{R}^{d}$ be given by $f_{M}(x_{1},\ldots,x_{k})=\sum_{i=1}^{k}x_{i}M^{i}$ for $M>0$. For any distinct $x=(x_{1},\ldots,x_{k})$ and $y=(y_{1},\ldots,y_{k})$ in $\mathcal{U}^{k}$, there are at most $k$ values of $M$ such that $f_{M}(x)=f_{M}(y)$. Since $\mathcal{U}^{k}$ is countable, $f_{M}$ is injective on $\mathcal{U}^{k}$ for all but at most countably many values of $M$. Fix an $M_{0}>0$ such that $f_{M_{0}}$ is injective on $\mathcal{U}^{k}$ and abbreviate $f_{M_{0}}$ by $f$. Let $U_{j}^{(k)}=f(\left(U_{j}\right)_{[k]})$ for each $j\in[m]$. 
Thus, for each $i\in[n]$, $$\displaystyle H\left(\sum_{j=1}^{m}a_{ij}U_{j}^{(k)}\right)$$ $$\displaystyle=H\left(\sum_{j=1}^{m}a_{ij}f\left(\left(U_{j}\right)_{[k]}\right)\right)\stackrel{{\scriptstyle(a)}}{{=}}H\left(f\left(\sum_{j=1}^{m}a_{ij}(U_{j})_{[k]}\right)\right)$$ $$\displaystyle=H\left(f\left(\left(\sum_{j=1}^{m}a_{ij}U_{j}\right)_{[k]}\right)\right)\stackrel{{\scriptstyle(b)}}{{=}}H\left(\left(\sum_{j=1}^{m}a_{ij}U_{j}\right)_{[k]}\right)$$ $$\displaystyle=kH\left(\sum_{j=1}^{m}a_{ij}U_{j}\right),$$ where $(a)$ follows from the linearity of $f$ and $(b)$ follows from the injectivity of $f$ on $\mathcal{U}^{k}$ and the invariance of Shannon entropy under injective maps. ∎ 6 Extensions to general groups We now consider a more general version of Theorem 1. To extend the notion of differential entropy to a more general setting, we need the following preliminaries. Let $G$ be a locally compact abelian group equipped with a Haar measure $\mu$. Let $X$ be a $G$-valued random variable whose distribution is absolutely continuous with respect to $\mu$. Following [MK15], we define the differential entropy of $X$ as: $$h\left(X\right)=\int f\log\frac{1}{f}d\mu=\mathbb{E}\left[\log\frac{1}{f(X)}\right],$$ where $f$ denotes the pdf of $X$ with respect to $\mu$. This extends both the Shannon entropy on $\mathbb{Z}^{d}$ (with $\mu$ being the counting measure) and the differential entropy on $\mathbb{R}^{d}$ (with $\mu$ being the Lebesgue measure). We now state a generalization of Theorem 1, which holds for connected abelian Lie groups. Note that inequalities proved in [MK15] using data processing inequalities hold for more general groups, such as locally compact groups on which Haar measures exist. Theorem 3. Under the assumptions of Theorem 1, suppose (8) holds for any independent random variables $Z_{1},\ldots,Z_{m}$ taking values in $\mathbb{Z}^{d}\times(\mathbb{Z}/2^{k}\mathbb{Z})^{n}$ for any $k,d,n\in\mathbb{N}$. 
Then (9) holds for any connected abelian Lie group $G^{\prime}$ and independent $G^{\prime}$-valued random variables $X_{1},\ldots,X_{m}$. We start by proving a special case of Theorem 3 with $G$ a finite cyclic group and $G^{\prime}$ the torus $\mathbb{T}^{n}$, where $\mathbb{T}$ denotes the unit circle in $\mathbb{C}$. Theorem 3 then follows easily since any connected abelian Lie group is isomorphic to a product of a torus and a Euclidean space. We need the following preliminary fact relating the Haar measures and differential entropies of random variables taking values on isomorphic groups. Lemma 11. Let $\phi:G^{\prime}\rightarrow G$ be a group isomorphism between abelian topological groups $(G,+)$ and $(G^{\prime},+)$ and $\mu^{\prime}$ be a Haar measure on $G^{\prime}$. Then the pushforward measure $\mu=\phi_{*}\mu^{\prime}$ (that is, $(\phi_{*}\mu^{\prime})(B)=\mu^{\prime}(\phi^{-1}(B))$ for any measurable subset $B$ of $G$) is a Haar measure on $G$. Furthermore, for any $G$-valued continuous random variable $X$, $$h(X)=h\left(\phi^{-1}(X)\right).$$ Proof. The first part is a standard exercise: For any measurable subset $A$ of $G$ and any $g\in G$, $$\mu(g+A)=\mu^{\prime}(\phi^{-1}(g+A))=\mu^{\prime}(\phi^{-1}(g)+\phi^{-1}(A))=\mu^{\prime}(\phi^{-1}(A))=\mu(A),$$ by the translation invariance of $\mu^{\prime}$. Similarly, using the fact that $\phi^{-1}$ is a homeomorphism one can verify that $\mu$ is finite on all compacts as well as its inner and outer regularity. If $f$ is the density function of $X$ with respect to the Haar measure $\phi_{*}\mu^{\prime}$ on $G$, then $f\circ\phi$ is the pdf of $\phi^{-1}\left(X\right)$ with respect to the Haar measure $\mu^{\prime}$ on $G^{\prime}$. 
Hence, $$\displaystyle h\left(X\right)$$ $$\displaystyle=\int f\log\frac{1}{f}d(\phi_{*}\mu^{\prime})$$ $$\displaystyle=\int f\circ\phi\log\frac{1}{f\circ\phi}d\mu^{\prime}$$ $$\displaystyle=h\left(\phi^{-1}\left(X\right)\right).\qed$$ As an example, consider the group $(\mathbb{R}^{+},\times)$ of strictly positive real numbers with real multiplication, which is isomorphic to $(\mathbb{R},+)$ via $x\mapsto\log x$. Then for any $X\in(\mathbb{R}^{+},\times)$, its differential entropy is given by $h(X)=h(\log X)$, with the latter defined in the usual manner. Define $\phi:[0,1)^{n}\rightarrow\mathbb{T}^{n}$ by $\phi(\theta_{1},\ldots,\theta_{n})=(e^{2\pi i\theta_{1}},\ldots,e^{2\pi i\theta_{n}})$. Let the Haar measure on $\mathbb{T}^{n}$ be the pushforward of Lebesgue measure under $\phi$. For $X\in\mathbb{T}^{n}$, let $\Theta=\phi^{-1}(X)$. Define the quantization operation of $X$ in terms of the angles $$\left[X\right]_{k}\triangleq\phi\left(\frac{\lfloor 2^{k}\Theta\rfloor}{2^{k}}\right),\quad[\Theta]_{k}=\frac{\lfloor 2^{k}\Theta\rfloor}{2^{k}}.$$ (25) Since $\phi$ is a bijection, $H\left([X]_{k}\right)=H\left(\lfloor 2^{k}\Theta\rfloor\right)$. We now state and prove Theorem 4. Theorem 4. Under the assumptions of Theorem 1, suppose (8) holds for independent random variables $Z_{1},\ldots,Z_{m}$ taking values in any finite cyclic group $G$. Then (9) holds for any $\mathbb{T}^{n}$-valued independent random variables $X_{1},\ldots,X_{m}$. Proof. Let $X_{1},\ldots,X_{m}$ be $\mathbb{T}^{n}$-valued continuous independent random variables. For each $i\in[m]$, let $\Theta_{i}=\phi^{-1}(X_{i})$. 
Since $\lfloor 2^{k}\Theta_{i}\rfloor$ is $\mathbb{Z}_{2^{k}}$-valued and $\mathbb{Z}_{2^{k}}$ is a cyclic group under modulo-$2^{k}$ addition, to prove Theorem 4 it suffices to prove the following: $$\displaystyle H\left(\left[X\right]_{k}\right)=kn\log 2+h\left(X\right)+o_{k}(1)$$ (26) for any $\mathbb{T}^{n}$-valued continuous random variable $X$, and $$\displaystyle H\left(\left[\sum_{i=1}^{m}a_{i}X_{i}\right]_{k}\right)=H\left(\sum_{i=1}^{m}a_{i}\left[X_{i}\right]_{k}\right)+o_{k}(1).$$ (27) Indeed, (26) follows from $$\displaystyle H\left(\left[X\right]_{k}\right)$$ $$\displaystyle=H\left([\Theta]_{k}\right)\stackrel{{\scriptstyle(a)}}{{=}}kn\log 2+h\left(\Theta\right)+o_{k}(1)\stackrel{{\scriptstyle(b)}}{{=}}kn\log 2+h\left(X\right)+o_{k}(1),$$ where $(a)$ is by Lemma 3, since $\Theta$ is a continuous $[0,1]^{n}$-valued random variable, and $(b)$ is by Lemma 11. To prove (27), recall that $\Theta_{i}=\phi^{-1}(X_{i})$ for each $i\in[m]$, and define $$\displaystyle A_{k}$$ $$\displaystyle\triangleq\left\lfloor 2^{k}\sum_{i=1}^{m}a_{i}\Theta_{i}\right\rfloor\ (\text{mod}\ 2^{k}),\quad A_{k}^{\prime}\triangleq\left\lfloor 2^{k}\sum_{i=1}^{m}a_{i}\Theta_{i}\right\rfloor,$$ $$\displaystyle B_{k}$$ $$\displaystyle\triangleq\sum_{i=1}^{m}a_{i}\left\lfloor 2^{k}\Theta_{i}\right\rfloor\ (\text{mod}\ 2^{k}),\quad B_{k}^{\prime}\triangleq\sum_{i=1}^{m}a_{i}\left\lfloor 2^{k}\Theta_{i}\right\rfloor,$$ $$\displaystyle Z_{k}$$ $$\displaystyle\triangleq\left\lfloor\sum_{i=1}^{m}a_{i}\left\{2^{k}\Theta_{i}\right\}\right\rfloor.$$ Our aim is to prove that $H(A_{k})-H(B_{k})=o_{k}(1)$. Since $A_{k}^{\prime}=B_{k}^{\prime}+Z_{k}$, we have $A_{k}=B_{k}+Z_{k}\ (\text{mod}\ 2^{k})$. Also, $H(A_{k})-H(B_{k})=I(Z_{k};A_{k})-I(Z_{k};B_{k})$, since $H(A_{k}\mid Z_{k})=H(B_{k}\mid Z_{k})$.
Hence, $$\displaystyle\left|H(A_{k})-H(B_{k})\right|\leq I(Z_{k};A_{k})+I(Z_{k};B_{k})\stackrel{{\scriptstyle(a)}}{{\leq}}I(Z_{k};A_{k}^{\prime})+I(Z_{k};B_{k}^{\prime})\stackrel{{\scriptstyle(b)}}{{\rightarrow}}0\mbox{ as }k\rightarrow\infty,$$ where $(a)$ follows from the data processing inequality and $(b)$ follows from Lemma 9 and Lemma 10. This completes the proof. ∎ Proof of Theorem 3. The proof is almost identical to that of Theorem 4. By the structure theorem for connected abelian Lie groups (cf. e.g. [AM07, Corollary 1.4.21]), $G^{\prime}$ is isomorphic to $\mathbb{R}^{d}\times\mathbb{T}^{n}$. By Lemma 11 and Lemma 2, we only need to prove the theorem for $\left[0,1\right]^{d}\times\mathbb{T}^{n}$-valued random variables. Along the lines of the proof of Theorem 4, it suffices to establish the counterparts of (26) for any $[0,1]^{d}\times\mathbb{T}^{n}$-valued continuous $X$, and of (27) for any $[0,1]^{d}\times\mathbb{T}^{n}$-valued independent and continuous $X_{1},\ldots,X_{m}$, where the quantization operations are defined componentwise by applying the usual uniform quantization (15) to the real-valued components of $X$ and the angular quantization (25) to the $\mathbb{T}^{n}$-component of $X$. The argument is the same as in the proof of Theorem 4 and is omitted for conciseness. ∎ Acknowledgment The authors are grateful to Yury Polyanskiy and Mohamed-Ali Belabbas for discussions pertaining to Theorem 3, and to Mokshay Madiman for bringing [Cha03] to our attention. The authors thank Adriano Pastore for pointing out a mistake in the previous version and for the reference [JWW07]. This work has been supported in part by NSF grants IIS-14-47879, CCF-14-23088 and CCF-15-27105 and the Strategic Research Initiative on Big-Data Analytics of the College of Engineering at the University of Illinois. Appendix A Proof of Proposition 1 Proof. The two equalities follow from Theorem 1 and Theorem 2.
Let $\alpha_{n}\triangleq\inf_{U\in\mathbb{Z}^{n}}\frac{H(U-U^{\prime})-H(U)}{H(U+U^{\prime})-H(U)}$. Clearly $\alpha_{n}\leq\alpha_{1}$ by the tensorization property of Shannon entropy. On the other hand, given $U\in\mathbb{Z}^{n}$ and $U^{\prime}$ its identical copy, using the same argument as in the proof of Lemma 5, there exists a linear embedding $f:\mathbb{Z}^{n}\to\mathbb{Z}$ that preserves the Shannon entropy of $U+U^{\prime}$, $U-U^{\prime}$, $U$ and $U^{\prime}$. Hence $$\displaystyle\frac{H(U-U^{\prime})-H(U)}{H(U+U^{\prime})-H(U)}$$ $$\displaystyle=\frac{H(f(U)-f(U^{\prime}))-H(f(U))}{H(f(U)+f(U^{\prime}))-H(f(U))}$$ and $\alpha_{1}\leq\alpha_{n}$. The result for the supremum follows from the same proof. ∎ References [AM07] Hossein Abbaspour and Martin A Moskowitz. Basic Lie Theory. World Scientific, 2007. [Bar84] Andrew R Barron. Monotonic central limit theorem for densities. Technical report, Stanford University, Department of Statistics, 1984. [Buk08] B. Bukh. Sums of dilates. Combinatorics, Probability and Computing, 17(5):627–639, 2008. [Cha03] Terence H Chan. Balanced information inequalities. IEEE Transactions on Information Theory, 49(12):3261–3267, 2003. [Csi96] Imre Csiszár. Almost independence and secrecy capacity. Prob. Peredachi Inform., 32(1):48–57, 1996. [CT06] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory, 2nd Ed. Wiley-Interscience, New York, NY, USA, 2006. [GHR07] Katalin Gyarmati, François Hennecart, and Imre Z Ruzsa. Sums and differences of finite sets. Funct. Approx. Comment. Math., 37(1):175–186, 2007. [Han78] Te Sun Han. Nonnegative entropy measures of multivariate symmetric correlations. Information and Control, 36(2):133–156, 1978. [HRY99] François Hennecart, Gilles Robert, and Alexander Yudin. On the number of sums and differences. Astérisque, (258):173–178, 1999. [JWW07] David Jimenez, Long Wang, and Yang Wang. White noise hypothesis for uniform quantization errors.
SIAM Journal on Mathematical Analysis, 38(6):2042–2056, 2007. [KM14] Ioannis Kontoyiannis and Mokshay Madiman. Sumset and inverse sumset inequalities for differential entropy and mutual information. IEEE Transactions on Information Theory, 60(8):4503–4514, 2014. [LP08] A. Lapidoth and G. Pete. On the entropy of the sum and of the difference of two independent random variables. Proc. IEEE 25th Conv. IEEEI, pages 623–625, December 2008. [Mad08] M. Madiman. On the entropy of sums. In Proceedings of 2008 IEEE Information Theory Workshop, pages 303–307, Porto, Portugal, 2008. [MK10] M. Madiman and I. Kontoyiannis. The entropies of the sum and the difference of two IID random variables are not too different. In Proceedings of 2010 IEEE International Symposium on Information Theory, pages 1369–1372, Austin, TX, June 2010. [MK15] Mokshay Madiman and Ioannis Kontoyiannis. Entropy bounds on abelian groups and the Ruzsa divergence. arXiv preprint arXiv:1508.04089, 2015. [MMT12] Mokshay Madiman, Adam W Marcus, and Prasad Tetali. Entropy and set cardinality inequalities for partition-determined functions. Random Structures & Algorithms, 40(4):399–424, 2012. [PW15] Yury Polyanskiy and Yihong Wu. Lecture Notes on Information Theory. Feb 2015. http://www.ifp.illinois.edu/~yihongwu/teaching/itlectures.pdf. [PW16] Yury Polyanskiy and Yihong Wu. Dissipation of information in channels with input constraints. IEEE Trans. Inf. Theory, 62(1):35–55, January 2016. Also arXiv:1405.3629. [Rén59] Alfréd Rényi. On the dimension and entropy of probability distributions. Acta Mathematica Hungarica, 10(1–2), Mar. 1959. [RS57] C.A. Rogers and G.C. Shephard. The difference body of a convex body. Archiv der Mathematik, 8(3):220–233, 1957. [Ruz91] Imre Z Ruzsa. On the number of sums and differences. Acta Mathematica Hungarica, 58(3-4):439–447, 1991. [Ruz09a] I. Z. Ruzsa. Entropy and sumsets. Random Structures and Algorithms, 34:1–10, Jan. 2009. [Ruz09b] Imre Z Ruzsa. Sumsets and structure.
In Combinatorial Number Theory and Additive Group Theory. Birkhäuser, Basel, Switzerland, 2009. [Tao10] T. Tao. Sumset and inverse sumset theory for Shannon entropy. Combinatorics, Probability & Computing, 19(4):603–639, 2010. [TV05] T. Tao and V. Vu. Entropy methods. Unpublished notes, http://www.math.ucla.edu/~tao/preprints/Expository/chapter_entropy.dvi, 2005. [TV06] Terence Tao and Van H Vu. Additive Combinatorics, volume 105. Cambridge University Press, 2006. [WSV15] Yihong Wu, Shlomo Shamai (Shitz), and Sergio Verdú. Information dimension and the degrees of freedom of the interference channel. IEEE Trans. Inf. Theory, 61(1):256–279, 2015.
Algorithms and Experiments Comparing Two Hierarchical Drawing Frameworks

Panagiotis Lionakis, Giorgos Kritikakis, and Ioannis G. Tollis (Computer Science Department, University of Crete, Greece)

Abstract We present algorithms that extend the path-based hierarchical drawing framework and give experimental results. Our algorithms run in $O(km)$ time, where $k$ is the number of paths and $m$ is the number of edges of the graph, and provide better upper bounds than the original path-based framework: e.g., the height of the resulting drawings is equal to the length of the longest path of $G$, instead of $n-1$, where $n$ is the number of nodes. Additionally, we extend this framework by bundling and drawing all the edges of the DAG in $O(m+n\log n)$ time, using minimum extra width per path. We also provide a comparison to a well-known hierarchical drawing framework, widely known as the Sugiyama framework, as a proof of concept. The experimental results show that our algorithms produce drawings that are better in area and number of bends, but worse in crossings for sparse graphs. Hence, our technique offers an interesting alternative for drawing hierarchical graphs. Finally, we present an $O(m+k\log k)$ time algorithm that computes a specific order of the paths in order to reduce the total edge length and the number of crossings and bends.

1 Introduction Hierarchical graphs are very important for many applications in several areas of research and business because they often represent hierarchical relationships between objects in a structure. They are directed (often acyclic) graphs and their visualization has received significant attention recently [4, 15, 18].
An experimental study of four algorithms specifically designed for DAGs was presented in [5]. DAGs are often used to describe processes containing long paths, such as in PERT applications; see, for example, [6, 7]. The paths can be either application based (e.g., critical paths) or user defined. If one desires automatically generated paths, there are several algorithms that compute a path decomposition of minimum cardinality [14, 17, 19, 24]. A new framework to visualize directed graphs and their hierarchies is introduced in [20, 21]. It computes readable hierarchical visualizations in two phases by "hiding" (abstracting) some selected edges while maintaining the complete reachability information of a graph. In this paper we present polynomial time algorithms that follow the main framework of [21], which is based on the idea of partitioning the vertices of a graph into paths/channels, drawing the vertices in each path vertically aligned on some $x$-coordinate, and then drawing the edges between vertices that belong to different paths. The produced drawings contain all edges of the input graph and attempt to optimize the height, width, and number of bends of the resulting drawing. This new framework departs from the typical Sugiyama framework [25] and consists of two phases: (a) cycle removal, and (b) the path/channel decomposition and hierarchical drawing step. The Sugiyama framework has been extensively used in practice, as manifested by the fact that various systems use it to implement hierarchical drawing techniques. Several systems such as AGD [22], da Vinci [8], GraphViz [11], Graphlet [13], dot [10], OGDF [3], and others implement this framework in order to draw directed graphs. Commercial software such as Tom Sawyer Software's TS Perspectives [1] and yWorks [2] essentially use this framework in order to offer automatic visualizations of directed graphs.
The comparative study of [5] concluded that the Sugiyama-style algorithms performed better in most of the metrics. For more recent information regarding this framework see [18]. Even though it is very popular, the Sugiyama framework has several limitations: as discussed below, most problems and subproblems that are used to optimize the results in various steps of each phase have turned out to be NP-hard. Additionally, several of the heuristics employed to solve these problems give results that are not bounded by any approximation guarantee. Furthermore, the required manipulations of the graph often substantially increase its complexity, e.g., up to $O(nm)$ dummy vertices may be inserted in a directed graph $G=(V,E)$ with $n$ vertices and $m$ edges. The overall time complexity of this framework (depending upon implementation) can be as high as $O((nm)^{2})$, or even higher if one chooses algorithms that require exponential time. Finally, another important limitation of this framework is the fact that heuristic solutions and decisions made during earlier phases (e.g., crossing reduction) severely influence the results obtained in later phases. Nevertheless, previous decisions cannot be changed in order to obtain better results. By contrast, in the main framework of [21] most problems of the second phase can be solved in polynomial time. If a path decomposition contains $k$ paths, the number of bends introduced is at most $O(kn)$ and the required area is at most $O(kn)$. In order to minimize the number of crossings between cross edges and path edges, the authors suggest checking all possible $k!$ permutations of the $k$ paths, which may be reasonable for small values of $k$ [20]. However, edges between non-consecutive vertices in a path, called path transitive edges, are not drawn in this framework.
Additionally, we offer experimental results comparing them to the results obtained by running the hierarchical drawing module of OGDF [3], which is based on the Sugiyama framework and is the most up-to-date research software that implements this framework. Since the cycle removal phase is required in both frameworks, we focus our experiments on the case where the input graph $G$ is acyclic (a DAG). Our algorithms run in $O(km)$ time and provide better upper bounds than the ones given in [21]: (a) the height of the resulting drawings is equal to the length of the longest path of $G$, which is often significantly lower than $n-1$; (b) the path transitive edges are drawn by our algorithms in such a way that the required extra number of columns is minimized for each path (see Section 3). The experimental results show that the drawings produced by our algorithms have a significantly lower number of bends and are much smaller in area than the ones produced by OGDF (see Section 4). On the other hand, the drawings of OGDF have a lower number of crossings when the input graphs are relatively sparse. However, when the graphs are a bit denser (e.g., average degree greater than five) our drawings have fewer crossings. Of course, it is expected that OGDF would be better than our algorithms in the number of crossings, since OGDF places a significant weight on minimizing crossings, whereas we do not explicitly minimize crossings. Thus our algorithms offer an interesting alternative for visualizing hierarchical graphs. Finally, we present an $O(m+k\log k)$ time algorithm that computes a specific order of the paths that further reduces the total edge length and the number of crossings and bends in sparse DAGs.
2 Overview of the Two Frameworks In order to motivate our discussion about the two frameworks considered in this paper we present Figure 1 that shows a DAG $G$ drawn by these two frameworks: Part (a) shows a drawing $\Gamma$ of $G$ computed by our algorithms that customize the path-based framework of [21]; it is implemented in Tom Sawyer Perspectives [1] (a tool of Tom Sawyer Software); part (b) shows the drawing of $G$ computed by OGDF. The graph consists of 31 nodes and 69 edges. The drawing computed by our algorithms has 74 crossings, 33 bends, width 14, height 16, and area 224. On the other hand, OGDF computes a drawing that has 72 crossings, 64 bends, width 42, height 16 and area 672. The width and height reported by OGDF are 961 and 2273, respectively. We had to normalize these figures in order to have a reasonable comparison, as will be discussed later. As can be observed by these two drawings, the two frameworks produce vastly different drawings with their own advantages and disadvantages. The Path Based Hierarchical Drawing Framework, call it Algorithm PBH, follows an approach to visualize directed acyclic graphs that “hides” some edges and focuses on maintaining their reachability information [21]. This framework is based on the idea of partitioning the vertices of the graph $G$ into (a minimum number of) channels/paths, that we call channel/path decomposition of $G$, which can be computed in polynomial time. Therefore, it is orthogonal to the Sugiyama framework in the sense that it is a vertical decomposition of $G$ into (vertical) paths/channels and it consists of only two steps: (a) the cycle removal step (if the directed graph contains cycles) and (b) the channel decomposition and hierarchical drawing step. Thus, most resulting problems are vertically contained, which makes them simpler, and reduces their time complexity. This framework does not introduce any dummy vertices and keeps the vertices of a path vertically aligned. 
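As a concrete illustration of the channel/path decomposition step described above, the following sketch greedily partitions the vertices of a DAG into vertex-disjoint directed paths. This is a minimal illustration only, not the minimum-cardinality decomposition algorithms of [14, 17, 19, 24]; the function names and the edge-list input format are ours.

```python
from collections import defaultdict, deque

def topological_order(n, edges):
    """Kahn's algorithm: return the vertices 0..n-1 of a DAG in topological order."""
    adj, indeg = defaultdict(list), [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

def greedy_path_decomposition(n, edges):
    """Partition the vertices into vertex-disjoint directed paths by greedily
    extending a path from each not-yet-covered vertex (no minimality claim)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    covered, paths = set(), []
    for u in topological_order(n, edges):
        if u in covered:
            continue
        path = [u]
        covered.add(u)
        while True:
            free = [w for w in adj[path[-1]] if w not in covered]
            if not free:
                break
            path.append(free[0])
            covered.add(free[0])
        paths.append(path)
    return paths
```

Every vertex ends up in exactly one path, which is the property the path-based framework requires of the decomposition $S_{p}$.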
By contrast, the Sugiyama framework performs a horizontal decomposition of a graph, even though the final result is a vertical (hierarchical) visualization. Let $S_{p}=\{P_{1},...,P_{k}\}$ be a path decomposition of $G$ such that every vertex $v\in V$ belongs to exactly one of the paths of $S_{p}$. Any path decomposition naturally splits the edges of $G$ into: (a) path edges that connect consecutive vertices in the same path, (b) cross edges that connect vertices that belong to different paths, and (c) path transitive edges that connect non-consecutive vertices in the same path. Given $S_{p}$, Algorithm PBH draws the vertices of each path $P_{i}$ vertically aligned on some $x$-coordinate depending on the order of path $P_{i}$. There is one column between paths that is reserved for the bends (if any) of some cross edges. Therefore, the total width of the resulting drawing is $2k-1$. The $y$-coordinate of each vertex is equal to its order in a topological sorting of $G$. Hence the height of the resulting drawing is $n-1$. In the algorithms of [21] path transitive edges are omitted from the final drawing. OGDF is a self-contained C++ library of graph algorithms, in particular for (but not restricted to) automatic graph drawing. The hierarchical drawing module of OGDF implements the Sugiyama framework following [9, 23], with the following default choices: for the first phase, it uses LongestPathRanking (a ranking module that determines the layering of the graph, i.e., the assignment of vertices into layers), which implements the well-known longest-path ranking algorithm. Next, it performs crossing minimization using BarycenterHeuristic. This module performs two-layer crossing minimization and is applied during the top-down and bottom-up traversals [3]. The crossing minimization is repeated 15 times, keeping the best result.
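The three-way edge classification induced by $S_{p}$ follows directly from the positions of the endpoints in the decomposition. A minimal sketch (the function name and input conventions are ours; a decomposition is given as a list of vertex lists):

```python
def classify_edges(paths, edges):
    """Split the edges of a DAG into path edges, cross edges, and path
    transitive edges, given a path decomposition (a list of vertex lists)."""
    path_of, pos_in_path = {}, {}
    for i, path in enumerate(paths):
        for j, v in enumerate(path):
            path_of[v], pos_in_path[v] = i, j
    path_edges, cross_edges, transitive_edges = [], [], []
    for u, v in edges:
        if path_of[u] != path_of[v]:
            cross_edges.append((u, v))        # endpoints on different paths
        elif pos_in_path[v] == pos_in_path[u] + 1:
            path_edges.append((u, v))         # consecutive in the same path
        else:
            transitive_edges.append((u, v))   # non-consecutive, same path
    return path_edges, cross_edges, transitive_edges
```

This single pass over the edge list runs in $O(n+m)$ time, matching the linear-time flavor of the framework's second phase.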
Each repetition (except for the first) starts with randomly permuted nodes on each layer. Finally, it computes the final coordinates with FastHierarchyLayout, which computes the final layout of the graph. The two hierarchical drawings shown in Figure 1 demonstrate the significant differences in philosophy between the two frameworks. 3 An Algorithm for Computing Compact Drawings We present an extension of the framework of [21] by (a) compacting the drawing in the vertical direction, and (b) drawing the path transitive edges that were not drawn in [21]. This approach naturally splits the edges of $G$ into three categories, path edges, cross edges, and path transitive edges, which are drawn differently. This clearly adds to the understanding of the user and allows a system to show the different categories separately without altering the user's mental map. 3.1 Compaction Let $G=(V,E)$ be a DAG with $n$ vertices and $m$ edges. Following the framework of [20, 21], each vertex of $V$ is placed at a unique $y$-coordinate, which is specified by a topological sorting. Let $T$ be the list of vertices of $V$ in ascending order based on their $y$-coordinates. We start from the bottom and visit each vertex in $T$ in ascending order. For every vertex $v$ in this order we assign a new $y$-coordinate, $y(v)$, following a simple rule that compacts the height of the drawing: "If $v$ has no incoming edges, set $y(v)$ to $0$; otherwise, set $y(v)$ equal to $a+1$, where $a$ is the highest $y$-coordinate among the vertices that have edges incoming into $v$." Algorithm 3.1 takes as input a DAG $G$ and a path based hierarchical drawing $\Gamma_{1}$ of $G$ computed by Algorithm PBH, and it produces as output a new, compacted, path based hierarchical drawing $\Gamma_{2}$ with height $L$, where $L$ is the length of a longest path in $G$. Clearly this simple algorithm can be implemented in $O(n+m)$ time.
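The compaction rule above is, in effect, longest-path layering. A minimal sketch, assuming the vertices are supplied in the order of $T$ (a topological ordering of $G$); the function name is ours:

```python
def compact_y(order, edges):
    """Assign each vertex its compacted y-coordinate: 0 for sources,
    otherwise one more than the highest y among its in-neighbors.
    `order` must be a topological ordering of the DAG."""
    preds = {v: [] for v in order}
    for u, v in edges:
        preds[v].append(u)
    y = {}
    for v in order:                  # bottom-up, in the order of T
        y[v] = 0 if not preds[v] else max(y[u] for u in preds[v]) + 1
    return y
```

The resulting height $\max_{v}y(v)$ equals the length of a longest path in the DAG, which is the guarantee stated for the compaction step.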
Figure 2 shows an example of two hierarchical drawings of the same graph: $\Gamma_{1}$ is before compaction and $\Gamma_{2}$ is after compaction.
Algorithm 3.1 (Compaction).
Input: A DAG $G=(V,E)$ and a path based hierarchical drawing $\Gamma_{1}$ of $G$ computed by Algorithm PBH.
Output: A compacted path based hierarchical drawing $\Gamma_{2}$ with height $L$, where $L$ is the length of a longest path in $G$.
1: for each $v\in V$, in the order of $T$, do
2:   let $E_{v}$ be the set of incoming edges $e=(w,v)$ into $v$
3:   if $E_{v}=\emptyset$ then
4:     $y(v)=0$
5:   else
6:     $y(v)=\max\{y(w):(w,v)\in E_{v}\}+1$
7:   end if
8: end for
Notice that the first case of the if-statement is executed only for the first vertex (source) of some paths. Clearly, the rest of the vertices have at least one incoming edge, since they belong to some path in which every vertex is connected to its predecessor; this is the case handled by the "else" branch. The compacted $y$-coordinate of each such vertex is always equal to the maximum $y$-coordinate of its in-neighbors plus one. Based on these statements and the fact that the drawing after compaction is also a path based hierarchical drawing, we have the next two simple lemmas. Lemma 3.1 Two vertices of the same path cannot have the same $y$-coordinate. Lemma 3.2 For every vertex $v$ with $y(v)\neq 0$, there is an incoming edge into $v$ that starts from a vertex $w$ such that $y(v)=y(w)+1$. Based on these lemmas, the height of the compacted drawing of the graph $G$ is at most $L$: Theorem 3.1 Let $G=(V,E)$ be a DAG with $n$ vertices and $m$ edges. Algorithm Compaction computes in $O(n+m)$ time a hierarchical drawing $\Gamma_{2}$ of $G$ with height $L$, where $L$ is equal to the length of a longest path in $G$. Proof.
It is clear that the height of the resulting drawing $\Gamma_{2}$ cannot be lower than $L$, the length of the longest path, due to Lemma 3.1 and the fact that all edges go from a vertex with a lower to a vertex with a higher $y$-coordinate. Similarly, the height of the resulting drawing $\Gamma_{2}$ cannot be higher than $L$, since that would imply that there is a $y$-coordinate that does not contain a vertex of a longest path. In this case, by the initial assumption and Lemma 3.2, there would be another path longer than $L$, a contradiction. Hence the height of the resulting drawing $\Gamma_{2}$ is equal to $L$. The time complexity of Algorithm Compaction is immediate from the fact that we visit each vertex exactly once, in the order specified by $T$, and consider all its incoming edges once. ∎ 3.2 Drawing the Path Transitive Edges An important aspect of our work is the preservation of the mental map of the user, which can be expressed by the reachability information of a DAG. At this point, we highlight that for every decomposition path we have a set of path transitive edges that are not drawn by the framework of [20, 21]. In this subsection we show how to draw these edges while preserving the user's mental map of the previous drawing. Additionally, one may interact with the drawings by hiding the path transitive edges at the click of a button without changing the user's mental map of the complete drawing. We now describe an algorithm that draws the path transitive edges using the minimum extra width (minimum extra number of columns) for each decomposition path. The steps of the algorithm are briefly described as follows: 1. For every vertex of each decomposition path we calculate the indegree and outdegree based only on path transitive edges, i.e., excluding path edges and cross edges. 2.
If all indegrees and outdegrees are zero, the algorithm terminates; if not, we select a vertex $v$ with the highest indegree or outdegree and bundle all the incoming or outgoing edges of $v$, respectively. These bundled edges are represented by an interval with starting and finishing points equal to the lowest and highest $y$-coordinates of the bundled vertices, respectively. 3. Next, we insert each interval on the left side of the path, in the first available column, such that the interval does not overlap with another interval (see details below). 4. We remove these edges from the set of path transitive edges, update the indegrees and outdegrees of the vertices, and repeat the selection process. 5. The intervals of the rightmost path are inserted on the right side of the path in order to avoid potential crossings with cross edges. 6. A final post-processing step can be applied, because some crossings between intervals/bundled edges can be removed by changing the order of the columns containing them. The above algorithm can be implemented to run in $O(m+n\log n)$ time by handling the updates of the indegrees and outdegrees carefully and placing the appropriate intervals in a (max-heap) priority queue. As expected, drawing the path transitive edges increases the number of bends, crossings, and area with respect to not drawing them. For each decomposition path, suppose we have a set of $b$ intervals such that each interval $I$ has a start point, $s_{I}$, and a finish point, $f_{I}$. The start point is the position of the vertex of the interval with the lowest $y$-coordinate. Similarly, the finish point is the position of the vertex of the interval with the highest $y$-coordinate. We follow a greedy approach in order to minimize the width (number of columns) for placing the bundled edges. The approach, similar to Task Scheduling [12], uses the optimum number of columns and runs in $O(b\log b)$ time for each path with $b$ intervals.
This is done by considering the intervals of each decomposition path in increasing order of their starting points. We select each interval (resp. task) according to its starting point and place it into the first column where it fits (i.e., does not intersect with another interval). If there are no available columns, we allocate a new column and place the interval there. Since the sum of all $b$'s over all paths in a path decomposition is at most $n$, we conclude that the algorithm runs in $O(n\log n)$ time. The proof of correctness is similar to the one for Task Scheduling in [12] and thus is omitted here. Theorem 3.2 Let $G=(V,E)$ be a DAG with $n$ vertices and $m$ edges. There is an algorithm that computes a drawing of $G$ bundling the path transitive edges for each path using the minimum number of columns (width) per path. The algorithm runs in $O(m+n\log n)$ time and computes a compact hierarchical drawing of $G$. 4 Experimental Results and Comparisons We performed experiments in order to compare the results produced by the two frameworks on different DAGs with varying numbers of nodes and edges. We use 20 DAGs that were produced in a random, but controlled, fashion in order to have small and large DAGs with a predefined average degree. Furthermore, in order to evaluate the performance of the two drawing frameworks, we use the following standard metrics:
• Number of crossings.
• Number of bends.
• Width of the drawing: the total number of distinct $x$-coordinates that are used by the framework.
• Height of the drawing: the total number of distinct $y$-coordinates that are used by the framework.
• Area of the drawing: the area of the enclosing rectangle.
Figure 4 shows a table that contains the results of our experiments based on these metrics for PBF as implemented in TS Perspectives [1] compared to the results produced by OGDF.
In order to be consistent with the experimental settings of OGDF, we used the default parameters. In the experiments that we present in this section we see that in all cases our approach gives better results than the ones produced by OGDF with respect to the number of bends, width, height, and, as expected, the total area of the drawings. For the number of bends, we observe that our proposed technique produces a number of bends that is a small fraction of $n$, whereas OGDF produces a number of bends proportional to $m$. The bar charts shown in Figure 5 show how the number of bends grows as the DAGs grow in size and average degree, and provide clear evidence that the number of bends for PBF is significantly lower than for OGDF in all cases. On the other hand, the drawings of OGDF have a lower number of crossings when the input graphs are relatively sparse. However, when the graphs are a bit denser (e.g., average degree higher than five) our drawings start having fewer crossings. Since the two frameworks use different coordinate systems, for a fair comparison between them we chose to count as the height of a drawing the number of different layers (or different $y$-coordinates) and as the width the number of different $x$-coordinates of nodes and bends used by each system. In other words, we normalize the two coordinate systems by mapping them onto a "grid." In general, our experiments show that PBF produces readable drawings with very good results in almost all metrics, except for the number of crossings. Additionally, it clearly partitions the edges into three distinct categories and vertically aligns certain paths, which can be user defined. This can be a great advantage in certain applications, and therefore it seems to be an interesting alternative, as also shown in Figure 6 for a larger example.
PBF does not perform any crossing reduction step, in contrast to OGDF, which offers crossing minimization algorithms by default (as required by the Sugiyama framework), which are run several times in order to keep the best result. 4.1 A Heuristic for Ordering the Paths As described in [20], one way to minimize the number of crossings between cross edges and path edges (and now path transitive edges) is to check all possible $k!$ permutations of the $k$ paths. In order to reduce the number of crossings, we implemented a heuristic that aims to reduce the number of paths crossed by cross edges. Our fast and simple approach is described below. We create an undirected path graph by placing a node for each path $P$. For any pair of paths $P_{1}$ and $P_{2}$ we find the total number of cross edges between them, $c$, and we insert an (undirected) edge between the nodes corresponding to paths $P_{1}$ and $P_{2}$ with weight equal to $c$. Hence, the weight $c$ of edge $(P_{1},P_{2})$ is the number of cross edges from $P_{1}$ to $P_{2}$ plus the number of cross edges from $P_{2}$ to $P_{1}$. We do this for all cross edges between all paths. Next, we order the paths following a greedy process: we find the maximum-weight edge and place the corresponding paths next to each other. We remove the edge from the path graph and continue this process until the path graph contains no edges. If we select an edge such that both paths are already placed, we simply delete this edge and proceed. If we select an edge such that one of the two paths is not already placed, then we place it at the rightmost (or leftmost) side of the placed path, depending upon which side has the fewest paths placed. This algorithm uses data structures similar to Kruskal's algorithm [16] for computing a minimum (maximum) spanning tree, and it can be implemented in $O(m+k\log k)$ time.
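The greedy ordering process above can be sketched as follows. The tie-breaking and the exact rule for which end of the current order receives a newly placed path are our interpretation of the description, and the function name and input conventions are ours:

```python
def order_paths(k, cross_edges, path_of):
    """Greedy path-ordering heuristic: repeatedly pick the heaviest pair of
    paths (the pair with the most cross edges between them) and try to place
    the two paths close to each other in the left-to-right order."""
    # Weight of a pair {p, q} = number of cross edges between paths p and q.
    weight = {}
    for u, v in cross_edges:
        pair = frozenset((path_of[u], path_of[v]))
        weight[pair] = weight.get(pair, 0) + 1
    order, placed = [], set()
    for pair in sorted(weight, key=weight.get, reverse=True):
        p, q = tuple(pair)
        if p in placed and q in placed:
            continue                      # both already positioned: drop edge
        if p not in placed and q not in placed:
            order.extend([p, q])          # place the pair adjacently
            placed.update([p, q])
            continue
        new, anchor = (q, p) if p in placed else (p, q)
        # Attach the new path to the end of the order nearer to its anchor.
        if order.index(anchor) < len(order) - 1 - order.index(anchor):
            order.insert(0, new)
        else:
            order.append(new)
        placed.add(new)
    order.extend(p for p in range(k) if p not in placed)  # untouched paths
    return order
```

Sorting the pairs by weight dominates the running time, matching the $O(m+k\log k)$ bound when a linear-time bucketing of the at most $m$ cross edges is used.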
We performed some limited experiments on sparse graphs (with average degree 1.25, 1.75, and 3) using this path ordering algorithm, and we found that the produced drawings have fewer crossings, fewer bends, and smaller edge length. Unfortunately, for denser graphs the results are inconclusive. 5 Conclusions and Open Problems We present algorithms and experimental results comparing two hierarchical drawing frameworks: (a) the path-based framework and (b) OGDF, which is based on the Sugiyama technique. Our compaction algorithm runs in $O(km)$ time and produces drawings with height equal to the length of a longest path of $G$, instead of $n-1$, which is the height of the drawings produced in [21]. In this implementation we present an algorithm to bundle and draw the path transitive edges of $G$ in $O(m+n\log n)$ time, which is an extension of the original path-based framework [21]. The experimental results show that the drawings produced by our algorithms have a significantly lower number of bends and are much smaller in area than the ones produced by OGDF, but they have more crossings for sparse graphs. Thus our algorithms offer an interesting alternative for visualizing hierarchical graphs. They focus on showing important aspects of the graph such as critical paths, path transitive edges, and cross edges. For this reason, this framework is particularly useful in graph visualization systems that encourage user interaction. There are several interesting open problems: 1) Find better algorithms to order the paths. 2) Find techniques to reduce the number of crossings. 3) Allow some extra vertical space between selected vertices in order to make the visualization more visually appealing. References [1] Tom Sawyer Software, www.tomsawyer.com [2] yWorks, www.yworks.com [3] Chimani, M., Gutwenger, C., Jünger, M., Klau, G.W., Klein, K., Mutzel, P.: The open graph drawing framework (OGDF). In: Handbook on Graph Drawing and Visualization, pp.
543–569 (2013) [4] Di Battista, G., Eades, P., Tamassia, R., Tollis, I.G.: Graph Drawing: Algorithms for the Visualization of Graphs. Prentice-Hall (1999) [5] Di Battista, G., Garg, A., Liotta, G., Parise, A., Tamassia, R., Tassinari, E., Vargiu, F., Vismara, L.: Drawing directed acyclic graphs: An experimental study. In: North, S.C. (ed.) Graph Drawing, Symposium on Graph Drawing, GD ’96, Berkeley, California, USA, September 18-20, Proceedings. Lecture Notes in Computer Science, vol. 1190, pp. 76–91. Springer (1996). https://doi.org/10.1007/3-540-62495-3_39 [6] Di Battista, G., Pietrosanti, E., Tamassia, R., Tollis, I.G.: Automatic layout of PERT diagrams with X-PERT. In: 1989 IEEE Workshop on Visual Languages. pp. 171–176. IEEE (1989) [7] Fisher, D.L., Goldstein, W.M.: Stochastic PERT networks as models of cognition: Derivation of the mean, variance, and distribution of reaction time using order-of-processing (OP) diagrams (1983) [8] Fröhlich, M., Werner, M.: Demonstration of the interactive graph-visualization system da Vinci. In: Graph Drawing, DIMACS International Workshop, GD ’94, Princeton, New Jersey, USA, October 10-12, 1994, Proceedings. pp. 266–269 (1994). https://doi.org/10.1007/3-540-58950-3_379 [9] Gansner, E.R., Koutsofios, E., North, S.C., Vo, K.P.: A technique for drawing directed graphs. IEEE Transactions on Software Engineering 19(3), 214–230 (1993) [10] Gansner, E.R., Koutsofios, E.E., North, S.C.: Drawing graphs with dot. https://www.graphviz.org/pdf/dotguide.pdf [11] Gansner, E.R., North, S.C.: An open graph visualization system and its applications to software engineering. Softw., Pract. Exper. 30(11), 1203–1233 (2000). https://doi.org/10.1002/1097-024X(200009)30:11<1203::AID-SPE338>3.0.CO;2-N [12] Goodrich, M.T., Tamassia, R.: Algorithm Design and Applications. Wiley Publishing, 1st edn.
(2014) [13] Himsolt, M.: Graphlet: design and implementation of a graph editor. Softw., Pract. Exper. 30(11), 1303–1324 (2000). https://doi.org/10.1002/1097-024X(200009)30:11<1303::AID-SPE341>3.0.CO;2-3 [14] Hopcroft, J.E., Karp, R.M.: An $n^{5/2}$ algorithm for maximum matchings in bipartite graphs. SIAM J. Comput. 2(4), 225–231 (1973). https://doi.org/10.1137/0202019 [15] Kaufmann, M., Wagner, D.: Drawing graphs: Methods and models. LNCS vol. 2025 (2001) [16] Kruskal, J.B.: On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society 7(1), 48–50 (1956) [17] Kuosmanen, A., Paavilainen, T., Gagie, T., Chikhi, R., Tomescu, A.I., Mäkinen, V.: Using minimum path cover to boost dynamic programming on DAGs: Co-linear chaining extended. In: Research in Computational Molecular Biology - 22nd Annual International Conference, RECOMB 2018, Paris, France, April 21-24, 2018. pp. 105–121 (2018). https://doi.org/10.1007/978-3-319-89929-9_7 [18] Nikolov, N.S., Healy, P.: Hierarchical Drawing Algorithms. In: Handbook of Graph Drawing and Visualization, ed. Roberto Tamassia. CRC Press (2014), pp. 409-453 [19] Orlin, J.B.: Max flows in O(nm) time, or better. In: Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013. pp. 765–774 (2013). https://doi.org/10.1145/2488608.2488705 [20] Ortali, G., Tollis, I.G.: Algorithms and bounds for drawing directed graphs. In: International Symposium on Graph Drawing and Network Visualization. pp. 579–592. Springer (2018) [21] Ortali, G., Tollis, I.G.: A new framework for hierarchical drawings. Journal of Graph Algorithms and Applications 23(3), 553–578 (2019). https://doi.org/10.7155/jgaa.00502 [22] Paulisch, F.N., Tichy, W.F.: EDGE: an extendible graph editor. Softw., Pract. Exper.
20(S1), S1 (1990) [23] Sander, G.: Layout of compound directed graphs. Tech. rep., Universität des Saarlandes (1996) [24] Schnorr, C.: An algorithm for transitive closure with linear expected time. SIAM J. Comput. 7(2), 127–133 (1978). https://doi.org/10.1137/0207011 [25] Sugiyama, K., Tagawa, S., Toda, M.: Methods for visual understanding of hierarchical system structures. IEEE Trans. Systems, Man, and Cybernetics 11(2), 109–125 (1981). https://doi.org/10.1109/TSMC.1981.4308636
Efficient Optimal Learning for Contextual Bandits Miroslav Dudik [email protected], Daniel Hsu [email protected], Satyen Kale [email protected], Nikos Karampatziakis [email protected], John Langford [email protected], Lev Reyzin [email protected], Tong Zhang [email protected] Abstract We address the problem of learning in an online setting where the learner repeatedly observes features, selects among a set of actions, and receives reward for the action taken. We provide the first efficient algorithm with optimal regret. Our algorithm uses a cost-sensitive classification learner as an oracle and has a running time $\mathrm{polylog}(N)$, where $N$ is the number of classification rules among which the oracle might choose. This is exponentially faster than all previous algorithms that achieve optimal regret in this setting. Our formulation also enables us to create an algorithm with regret that is additive rather than multiplicative in feedback delay, in contrast to all previous work. 1 INTRODUCTION The contextual bandit setting consists of the following loop repeated indefinitely: 1. The world presents context information as features $x$. 2. The learning algorithm chooses an action $a$ from $K$ possible actions. 3. The world presents a reward $r$ for the action. The key difference between the contextual bandit setting and standard supervised learning is that only the reward of the chosen action is revealed. For example, after always choosing the same action several times in a row, the feedback given provides almost no basis to prefer the chosen action over another action.
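To make the protocol concrete, the three-step loop above can be rendered as a minimal harness (the harness and the toy distribution are our own illustration, not an algorithm from the paper); note that the learner records only the reward of the chosen action:

```python
import random

def run_bandit(choose, sample, T, seed=0):
    """Run the contextual bandit loop for T rounds.
    sample(rng) -> (x, rewards): the world draws a context and a full
    reward vector, but only rewards[a] is revealed to the learner."""
    rng = random.Random(seed)
    history, total = [], 0.0
    for _ in range(T):
        x, rewards = sample(rng)     # world draws (x, r)
        a = choose(x, history)       # learner sees the context only
        r = rewards[a]               # bandit feedback: reward of a alone
        history.append((x, a, r))
        total += r
    return total

def sample(rng):
    # Toy world over K = 2 actions: action x always pays 1, the other 0.
    x = rng.randrange(2)
    return x, [1.0 if a == x else 0.0 for a in range(2)]

print(run_bandit(lambda x, h: x, sample, 100))      # -> 100.0 (pi(x) = x)
print(run_bandit(lambda x, h: 1 - x, sample, 100))  # -> 0.0 (always wrong)
```

The history stores no information about the unchosen action's reward, which is exactly why repeatedly playing one action teaches the learner nothing about the alternatives.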
In essence, the contextual bandit setting captures the difficulty of exploration while avoiding the difficulty of credit assignment as in more general reinforcement learning settings. The contextual bandit setting is a half-way point between standard supervised learning and full-scale reinforcement learning where it appears possible to construct algorithms with convergence rate guarantees similar to supervised learning. Many natural settings satisfy this half-way point, motivating the investigation of contextual bandit learning. For example, the problem of choosing interesting news articles or ads for users by internet companies can be naturally modeled as a contextual bandit setting. In the medical domain where discrete treatments are tested before approval, the process of deciding which patients are eligible for a treatment takes contexts into account. More generally, we can imagine that in a future with personalized medicine, new treatments are essentially equivalent to new actions in a contextual bandit setting. In the i.i.d. setting, the world draws a pair $(x,\vec{r})$ consisting of a context and a reward vector from some unknown distribution $D$, revealing $x$ in Step 1, but only the reward $r(a)$ of the chosen action $a$ in Step 3. Given a set of policies $\Pi=\{\pi:X\rightarrow A\}$, the goal is to create an algorithm for Step 2 which competes with the set of policies. We measure our success by comparing the algorithm’s cumulative reward to the expected cumulative reward of the best policy in the set. The difference of the two is called regret. All existing algorithms for this setting either achieve a suboptimal regret (Langford and Zhang, 2007) or require computation linear in the number of policies (Auer et al., 2002b; Beygelzimer et al., 2011). In unstructured policy spaces, this computational complexity is the best one can hope for. 
On the other hand, in the case where the rewards of all actions are revealed, the problem is equivalent to cost-sensitive classification, and we know of algorithms to efficiently search the space of policies (classification rules), such as cost-sensitive logistic regression and support vector machines. In these cases, the space of classification rules is exponential in the number of features, yet these problems can be efficiently solved using convex optimization. Our goal here is to efficiently solve the contextual bandit problem for similarly large policy spaces. We do this by reducing the contextual bandit problem to cost-sensitive classification. Given a supervised cost-sensitive learning algorithm as an oracle (Beygelzimer et al., 2009), our algorithm runs in time only $\mathrm{polylog}(N)$ while achieving regret $O(\sqrt{TK\ln N})$, where $N$ is the number of possible policies (classification rules), $K$ is the number of actions (classes), and $T$ is the number of time steps. This efficiency is achieved in a modular way, so any future improvement in cost-sensitive learning immediately applies here. 1.1 PREVIOUS WORK AND MOTIVATION All previous regret-optimal approaches are measure based: they work by updating a measure over policies, an operation which is linear in the number of policies. In contrast, regret guarantees scale only logarithmically in the number of policies. If not for the computational bottleneck, these regret guarantees would imply that we could dramatically increase performance in contextual bandit settings using more expressive policies. We overcome the computational bottleneck using an algorithm which works by creating cost-sensitive classification instances and calling an oracle to choose optimal policies. Actions are chosen based on the policies returned by the oracle rather than according to a measure over all policies.
This is reminiscent of AdaBoost (Freund and Schapire, 1997), which creates weighted binary classification instances and calls a “weak learner” oracle to obtain classification rules. These classification rules are then combined into a final classifier with boosted accuracy. Just as AdaBoost converts a weak learner into a strong learner, our approach converts a cost-sensitive classification learner into an algorithm that solves the contextual bandit problem. In a more difficult version of contextual bandits, an adversary chooses $(x,\vec{r})$ given knowledge of the learning algorithm (but not the learner's random numbers). All known regret-optimal solutions in the adversarial setting are variants of the EXP4 algorithm (Auer et al., 2002b). EXP4 achieves the same regret rate as our algorithm: $O\left(\sqrt{KT\ln N}\right)$, where $T$ is the number of time steps, $K$ is the number of actions available in each time step, and $N$ is the number of policies. Why not use EXP4 in the i.i.d. setting? After all, it is known that the algorithm can be modified to succeed with high probability (Beygelzimer et al., 2011), and also to handle VC classes when the adversary is constrained to i.i.d. sampling. There are two central benefits that we hope to realize by directly assuming i.i.d. contexts and reward vectors. 1. Computational Tractability. Even when the reward vector is fully known, adversarial regrets scale as $O\left(\sqrt{\ln N}\right)$ while computation scales as $O(N)$ in general. One attempt to get around this is the follow-the-perturbed-leader algorithm (Kalai and Vempala, 2005), which provides a computationally tractable solution for certain special-case structures. This algorithm has no mechanism for efficient application to arbitrary policy spaces, even given an efficient cost-sensitive classification oracle. An efficient cost-sensitive classification oracle has been shown effective in transductive settings (Kakade and Kalai, 2005).
Aside from the drawback of requiring a transductive setting, the regret achieved there is substantially worse than for EXP4. 2. Improved Rates. When the world is not completely adversarial, it is possible to achieve substantially lower regrets than are possible with algorithms optimized for the adversarial setting. For example, in supervised learning, it is possible to obtain regrets scaling as $O(\log(T))$ with a problem-dependent constant (Bartlett et al., 2007). When the feedback is delayed by $\tau$ rounds, lower bounds imply that the regret in the adversarial setting increases by a multiplicative factor of $\sqrt{\tau}$, while in the i.i.d. setting it is possible to achieve regret with only an additive term of $\tau$ (Langford et al., 2009). In a direct i.i.d. setting, the previous-best approach using a cost-sensitive classification oracle was given by the $\epsilon$-greedy and epoch-greedy algorithms (Langford and Zhang, 2007), which have a regret scaling as $O(T^{2/3})$ in the worst case. There have also been many special-case analyses. For example, the theory of the context-free setting is well understood (Lai and Robbins, 1985; Auer et al., 2002a; Even-Dar et al., 2006). Similarly, good algorithms exist when rewards are linear functions of features (Auer, 2002) or when actions lie in a continuous space with the reward function sampled according to a Gaussian process (Srinivas et al., 2010). 1.2 WHAT WE PROVE In Section 3 we state the PolicyElimination algorithm and prove the following regret bound for it. Theorem 4. For all distributions $D$ over $(x,\vec{r})$ with $K$ actions, for all sets of $N$ policies $\Pi$, with probability at least $1-\delta$, the regret of PolicyElimination (Algorithm 1) over $T$ rounds is at most $$16\sqrt{2TK\ln\frac{4T^{2}N}{\delta}}.$$ This result can be extended to deal with VC classes, as well as other special cases. It forms the simplest method we have of exhibiting the new analysis.
The key new element of this algorithm is the identification of a distribution over actions which simultaneously achieves small expected regret and allows estimating the value of every policy with small variance. The existence of such a distribution is shown nonconstructively by a minimax argument. PolicyElimination is computationally intractable and also requires exact knowledge of the context distribution (but not the reward distribution!). We show how to address these issues in Section 4 using an algorithm we call RandomizedUCB. Namely, we prove the following theorem. Theorem 5. For all distributions $D$ over $(x,\vec{r})$ with $K$ actions, for all sets of $N$ policies $\Pi$, with probability at least $1-\delta$, the regret of RandomizedUCB (Algorithm 2) over $T$ rounds is at most $$O\left(\sqrt{TK\log\left(TN/\delta\right)}+K\log(NK/\delta)\right).$$ RandomizedUCB’s analysis is substantially more complex, with a key subroutine being an application of the ellipsoid algorithm with a cost-sensitive classification oracle (described in Section 5). RandomizedUCB does not assume knowledge of the context distribution, and instead works with the history of contexts it has observed. Modifying the proof for this empirical distribution requires a covering argument over the distributions over policies which uses the probabilistic method. The net result is an algorithm with a similar top-level analysis as PolicyElimination, but with a running time only poly-logarithmic in the number of policies given a cost-sensitive classification oracle. Theorem 11. In each time step $t$, RandomizedUCB makes at most $O(\mathrm{poly}(t,K,\log(1/\delta),\log N))$ calls to the cost-sensitive classification oracle, and requires additional $O(\mathrm{poly}(t,K,\log N))$ processing time. Apart from a tractable algorithm, our analysis can be used to derive tighter regrets than would be possible in the adversarial setting.
For example, in Section 6 we consider a common setting where reward feedback is delayed by $\tau$ rounds. A straightforward modification of PolicyElimination yields a regret with an additive term proportional to $\tau$ compared with the delay-free setting. Namely, we prove the following. Theorem 12. For all distributions $D$ over $(x,\vec{r})$ with $K$ actions, for all sets of $N$ policies $\Pi$, and all delay intervals $\tau$, with probability at least $1-\delta$, the regret of DelayedPE (Algorithm 3) is at most $$16\sqrt{2K\ln\frac{4T^{2}N}{\delta}}\left(\tau+\sqrt{T}\right).$$ We continue with precise settings and definitions. 2 SETTING AND DEFINITIONS 2.1 THE SETTING Let $A$ be the set of $K$ actions, let $X$ be the domain of contexts $x$, and let $D$ be an arbitrary joint distribution over $(x,\vec{r})$. We denote the marginal distribution of $D$ over $X$ by $D_{X}$. We define $\Pi$ to be a finite set of policies $\{\pi:X\rightarrow A\}$, where each policy $\pi$, given a context $x_{t}$ in round $t$, chooses the action $\pi(x_{t})$. The cardinality of $\Pi$ is denoted by $N$. Let $\vec{r}_{t}\in[0,1]^{K}$ be the vector of rewards, where $r_{t}(a)$ is the reward of action $a$ on round $t$. In the i.i.d. setting, on each round $t=1\ldots T$, the world chooses $(x_{t},\vec{r}_{t})$ i.i.d. according to $D$ and reveals $x_{t}$ to the learner. The learner, having access to $\Pi$, chooses action $a_{t}\in\{1,\ldots,K\}$. Then the world reveals the reward $r_{t}(a_{t})$ (which we call $r_{t}$ for short) to the learner, and the interaction proceeds to the next round. We consider two modes of accessing the set of policies $\Pi$. The first option is the enumeration of all policies. This is impractical in general, but suffices for the illustrative purpose of our first algorithm. The second option is oracle access through an argmax oracle, corresponding to a cost-sensitive learner: Definition 1.
For a set of policies $\Pi$, an argmax oracle ($\mathcal{AMO}$ for short) is an algorithm which, for any sequence $\{(x_{t^{\prime}},\vec{r}_{t^{\prime}})\}_{t^{\prime}=1\dotsc t}$, $x_{t^{\prime}}\in X$, $\vec{r}_{t^{\prime}}\in\mathbb{R}^{K}$, computes $$\arg\max_{\pi\in\Pi}\sum_{t^{\prime}=1\dotsc t}r_{t^{\prime}}(\pi(x_{t^{\prime}}))\enspace.$$ The reason the above can be viewed as a cost-sensitive classification oracle is that the vectors of rewards $\vec{r}_{t^{\prime}}$ can be interpreted as negative costs, and hence the policy returned by $\mathcal{AMO}$ is the optimal cost-sensitive classifier on the given data. 2.2 EXPECTED AND EMPIRICAL REWARDS Let the expected instantaneous reward of a policy $\pi\in\Pi$ be denoted by $$\eta_{D}(\pi)\doteq\mathop{\mathbb{E}}_{(x,\vec{r})\sim D}[r(\pi(x))]\enspace.$$ The best policy $\pi_{\max}\in\Pi$ is the one that maximizes $\eta_{D}(\pi)$. More formally, $$\pi_{\max}\doteq\operatorname*{argmax}_{\pi\in\Pi}{\eta_{D}(\pi)}\enspace.$$ We define $h_{t}$ to be the history seen by the learner up to time $t$. Specifically, $$h_{t}=\bigcup_{t^{\prime}=1\ldots t}(x_{t^{\prime}},a_{t^{\prime}},r_{t^{\prime}},p_{t^{\prime}})\enspace,$$ where $p_{t^{\prime}}$ is the probability of the algorithm choosing action $a_{t^{\prime}}$ at time $t^{\prime}$. Note that $a_{t^{\prime}}$ and $p_{t^{\prime}}$ are produced by the learner, while $x_{t^{\prime}},r_{t^{\prime}}$ are produced by nature. We write $x\sim h$ to denote choosing $x$ uniformly at random from the $x$’s in history $h$. Using the history of past actions and the probabilities with which they were taken, we can form an unbiased estimate of the policy value for any $\pi\in\Pi$: $$\eta_{t}(\pi)\doteq\frac{1}{t}\sum_{(x,a,r,p)\in h_{t}}\frac{r\,\mathbb{I}(\pi(x)=a)}{p}.$$ The unbiasedness follows because $\mathop{\mathbb{E}}_{a\sim p}\frac{r\,\mathbb{I}(\pi(x)=a)}{p(a)}=\sum_{a}p(a)\frac{r\,\mathbb{I}(\pi(x)=a)}{p(a)}=r(\pi(x))$.
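The estimator $\eta_t(\pi)$ is a one-liner in code. The sketch below (the helper name is ours) evaluates it on a toy history in which actions were chosen uniformly at random ($p=1/2$), and recovers the true value of the policy $\pi(x)=x$ exactly, illustrating the unbiasedness computation above:

```python
def ips_value(policy, history):
    """Inverse-propensity estimate of eta_t(pi):
    (1/t) * sum over (x, a, r, p) of r * I(pi(x) == a) / p."""
    t = len(history)
    return sum(r * (policy(x) == a) / p for x, a, r, p in history) / t

# History of (context, chosen action, revealed reward, action probability).
h = [(0, 0, 1.0, 0.5), (0, 1, 0.0, 0.5),
     (1, 1, 1.0, 0.5), (1, 0, 0.0, 0.5)]
print(ips_value(lambda x: x, h))      # -> 1.0 (true value of pi(x) = x)
print(ips_value(lambda x: 1 - x, h))  # -> 0.0
```

Only the rounds where the logged action agrees with $\pi$ contribute, and dividing by the logging probability $p$ compensates for how often such agreement occurs.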
The empirically best policy at time $t$ is denoted $$\pi_{t}\doteq\operatorname*{argmax}_{\pi\in\Pi}{\eta_{t}(\pi)}.$$ 2.3 REGRET The goal of this work is to obtain a learner that has small regret relative to the expected performance of $\pi_{\max}$ over $T$ rounds, which is $$\sum_{t=1\dotsc T}\left(\eta_{D}(\pi_{\max})-r_{t}\right).$$ (2.1) We say that the regret of the learner over $T$ rounds is bounded by $\epsilon$ with probability at least $1-\delta$ if $$\Pr\left[\sum_{t=1\dotsc T}\left(\eta_{D}(\pi_{\max})-r_{t}\right)\leq\epsilon\right]\geq 1-\delta\enspace,$$ where the probability is taken with respect to the random pairs $(x_{t},\vec{r}_{t})\sim D$ for $t=1\dotsc T$, as well as any internal randomness used by the learner. We can also define notions of regret and empirical regret for policies $\pi$. For all $\pi\in\Pi$, let $$\Delta_{D}(\pi)=\eta_{D}(\pi_{\max})-\eta_{D}(\pi)\enspace,\qquad\Delta_{t}(\pi)=\eta_{t}(\pi_{t})-\eta_{t}(\pi)\enspace.$$ Our algorithms work by choosing distributions over policies, which in turn induce distributions over actions. For any distribution $P$ over policies $\Pi$, let $W_{P}(x,a)$ denote the induced conditional distribution over actions $a$ given the context $x$: $$W_{P}(x,a)\doteq\sum_{\pi\in\Pi:\pi(x)=a}P(\pi)\enspace.$$ (2.2) In general, we shall use $W$, $W^{\prime}$ and $Z$ as conditional probability distributions over the actions $A$ given contexts $X$, i.e., $W:X\times A\to[0,1]$ such that $W(x,\cdot)$ is a probability distribution over $A$ (and similarly for $W^{\prime}$ and $Z$). We shall think of $W^{\prime}$ as a smoothed version of $W$ with a minimum action probability of $\mu$ (to be defined by the algorithm), such that $$W^{\prime}(x,a)=(1-K\mu)W(x,a)+\mu\enspace.$$ Conditional distributions such as $W$ (and $W^{\prime}$, $Z$, etc.) correspond to randomized policies.
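Eq. (2.2) and the smoothing step are easy to state in code. The sketch below (our own naming) projects a distribution $P$ over three policies onto the induced action distribution $W_P(x,\cdot)$ for $K=2$, then applies the smoothing $W'(x,a)=(1-K\mu)W(x,a)+\mu$, which guarantees every action keeps probability at least $\mu$:

```python
def induced_action_dist(P, policies, x, K, mu):
    """W_P(x, .) from Eq. (2.2), then the smoothed W'(x, .):
    W'(x, a) = (1 - K*mu) * W_P(x, a) + mu."""
    W = [0.0] * K
    for prob, pi in zip(P, policies):
        W[pi(x)] += prob            # mass of policies choosing action a
    return [(1 - K * mu) * w + mu for w in W]

# Three policies on K = 2 actions, P = (0.5, 0.25, 0.25), mu = 0.1.
policies = [lambda x: 0, lambda x: 0, lambda x: 1]
Wp = induced_action_dist([0.5, 0.25, 0.25], policies, 0, 2, 0.1)
print([round(w, 3) for w in Wp])  # -> [0.7, 0.3]
```

Smoothing shrinks $W_P(x,\cdot)=(0.75,0.25)$ toward uniform while preserving a valid probability distribution; the $\mu$ floor is what keeps the importance weights $1/W'$ bounded later in the analysis.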
We define notions of true and empirical value and regret for them as follows: $$\eta_{D}(W)\doteq\mathop{\mathbb{E}}_{(x,\vec{r})\sim D}[\vec{r}\cdot W(x)]\,,\quad\eta_{t}(W)\doteq\frac{1}{t}\sum_{(x,a,r,p)\in h_{t}}\frac{rW(x,a)}{p}\,,\quad\Delta_{D}(W)\doteq\eta_{D}(\pi_{\max})-\eta_{D}(W)\,,\quad\Delta_{t}(W)\doteq\eta_{t}(\pi_{t})-\eta_{t}(W)\enspace.$$ 3 POLICY ELIMINATION The basic ideas behind our approach are demonstrated in our first algorithm: PolicyElimination (Algorithm 1). The key step is Step 1, which finds a distribution over policies that induces low variance in the estimates of the values of all policies. Below we use the minimax theorem to show that such a distribution always exists. How to find this distribution is not specified here, but in Section 5 we develop a method based on the ellipsoid algorithm. Step 2 then projects this distribution onto a distribution over actions and applies smoothing. Finally, Step 5 eliminates the policies that have been determined to be suboptimal (with high probability). ALGORITHM ANALYSIS We analyze PolicyElimination in several steps. First, we prove the existence of $P_{t}$ in Step 1, provided that $\Pi_{t-1}$ is non-empty. We recast the feasibility problem in Step 1 as a game between two players: Prover, who is trying to produce $P_{t}$, and Falsifier, who is trying to find a $\pi$ violating the constraints. We give more power to Falsifier and allow him to choose a distribution over $\pi$ (i.e., a randomized policy) which would violate the constraints. Note that any policy $\pi$ corresponds to a point in the space of randomized policies (viewed as functions $X\times A\to[0,1]$), with $\pi(x,a)\doteq\mathbb{I}(\pi(x)=a)$. For any distribution $P$ over policies in $\Pi_{t-1}$, the induced randomized policy $W_{P}$ then corresponds to a point in the convex hull of $\Pi_{t-1}$.
Denoting the convex hull of $\Pi_{t-1}$ by $\mathcal{C}$, Prover’s choice by $W$ and Falsifier’s choice by $Z$, the feasibility of Step 1 follows from the following lemma: Lemma 1. Let $\mathcal{C}$ be a compact and convex set of randomized policies. Let $\mu\in(0,1/K]$ and for any $W\in\mathcal{C}$, $W^{\prime}(x,a)\doteq(1-K\mu)W(x,a)+\mu$. Then for all distributions $D$, $$\min_{W\in\mathcal{C}}\max_{Z\in\mathcal{C}}\mathop{\mathbb{E}}_{x\sim D_{X}}\mathop{\mathbb{E}}_{a\sim Z(x,\cdot)}\left[\frac{1}{W^{\prime}(x,a)}\right]\leq\frac{K}{1-K\mu}\enspace.$$ Proof. Let $f(W,Z)\doteq\mathop{\mathbb{E}}_{x\sim D_{X}}\mathop{\mathbb{E}}_{a\sim Z(x,\cdot)}[1/W^{\prime}(x,a)]$ denote the inner expression of the minimax problem. Note that $f(W,Z)$ is: • everywhere defined: since $W^{\prime}(x,a)\geq\mu$, we obtain that $1/W^{\prime}(x,a)\in[0,1/\mu]$, hence the expectations are defined for all $W$ and $Z$. • linear in $Z$: linearity follows from rewriting $f(W,Z)$ as $$f(W,Z)=\mathop{\mathbb{E}}_{x\sim D_{X}}\sum_{a\in A}\left[\frac{Z(x,a)}{W^{\prime}(x,a)}\right].$$ • convex in $W$: note that $1/W^{\prime}(x,a)$ is convex in $W(x,a)$ by convexity of $1/(c_{1}w+c_{2})$ in $w\geq 0$, for $c_{1}\geq 0$, $c_{2}>0$. Convexity of $f(W,Z)$ in $W$ then follows by taking expectations over $x$ and $a$. Hence, by Theorem 14 (in Appendix B), min and max can be reversed without affecting the value: $$\min_{W\in\mathcal{C}}\max_{Z\in\mathcal{C}}f(W,Z)=\max_{Z\in\mathcal{C}}\min_{W\in\mathcal{C}}f(W,Z)\enspace.$$ The right-hand side can be further upper-bounded by $\max_{Z\in\mathcal{C}}f(Z,Z)$, and for any $Z$, $$f(Z,Z)=\mathop{\mathbb{E}}_{x\sim D_{X}}\sum_{a\in A}\left[\frac{Z(x,a)}{Z^{\prime}(x,a)}\right]\leq\mathop{\mathbb{E}}_{x\sim D_{X}}\!\!\!\!\!\sum_{\begin{subarray}{c}a\in A:\\ Z(x,a)>0\end{subarray}}\!\!\!\!\!\left[\frac{Z(x,a)}{(1-K\mu)Z(x,a)}\right]\leq\frac{K}{1-K\mu}\enspace.$$ ∎ Corollary 2.
The set of distributions satisfying the constraints of Step 1 is non-empty. Given the existence of $P_{t}$, we will see below that the constraints in Step 1 ensure low variance of the policy value estimator $\eta_{t}(\pi)$ for all $\pi\in\Pi_{t-1}$. The small variance is used to ensure the accuracy of policy elimination in Step 5, as quantified in the following lemma: Lemma 3. With probability at least $1-\delta$, for all $t$: 1. $\pi_{\max}\in\Pi_{t}$ (i.e., $\Pi_{t}$ is non-empty) 2. $\eta_{D}(\pi_{\max})-\eta_{D}(\pi)\leq 4b_{t}$ for all $\pi\in\Pi_{t}$ Proof. We will show that for any policy $\pi\in\Pi_{t-1}$, the probability that $\eta_{t}(\pi)$ deviates from $\eta_{D}(\pi)$ by more than $b_{t}$ is at most $2\delta_{t}$. Taking the union bound over all policies and all time steps, we find that with probability at least $1-\delta$, $$\left\lvert\eta_{t}(\pi)-\eta_{D}(\pi)\right\rvert\leq b_{t}$$ (3.1) for all $t$ and all $\pi\in\Pi_{t-1}$. Then: 1. By the triangle inequality, in each time step, $\eta_{t}(\pi)\leq\eta_{t}(\pi_{\max})+2b_{t}$ for all $\pi\in\Pi_{t-1}$, yielding the first part of the lemma. 2. Also by the triangle inequality, if $\eta_{D}(\pi)<\eta_{D}(\pi_{\max})-4b_{t}$ for $\pi\in\Pi_{t-1}$, then $\eta_{t}(\pi)<\eta_{t}(\pi_{\max})-2b_{t}$. Hence the policy $\pi$ is eliminated in Step 5, yielding the second part of the lemma. It remains to show Eq. (3.1). We fix the policy $\pi\in\Pi$ and time $t$, and show that the deviation bound is violated with probability at most $2\delta_{t}$. Our argument rests on Freedman’s inequality (see Theorem 13 in Appendix A). Let $$y_{t}=\frac{r_{t}\mathbb{I}(\pi(x_{t})=a_{t})}{W^{\prime}_{t}(a_{t})}\enspace,$$ i.e., $\eta_{t}(\pi)=(\sum_{t^{\prime}=1}^{t}y_{t^{\prime}})/t$. Let ${\mathbb{E}}_{t}$ denote the conditional expectation $\mathop{\mathbb{E}}[{}\cdot{}|\,h_{t-1}]$. To use Freedman’s inequality, we need to bound the range of $y_{t}$ and its conditional second moment ${\mathbb{E}}_{t}[y_{t}^{2}]$.
Since $r_{t}\in[0,1]$ and $W^{\prime}_{t}(a_{t})\geq\mu_{t}$, we have the bound $$0\leq y_{t}\leq 1/\mu_{t}\doteq R_{t}\enspace.$$ Next, $${\mathbb{E}}_{t}[y_{t}^{2}]=\mathop{\mathbb{E}}_{(x_{t},\vec{r}_{t})\sim D}\mathop{\mathbb{E}}_{a_{t}\sim W^{\prime}_{t}}\left[y_{t}^{2}\right]=\mathop{\mathbb{E}}_{(x_{t},\vec{r}_{t})\sim D}\mathop{\mathbb{E}}_{a_{t}\sim W^{\prime}_{t}}\left[\frac{r_{t}^{2}\mathbb{I}(\pi(x_{t})=a_{t})}{W^{\prime}_{t}(a_{t})^{2}}\right]\leq\mathop{\mathbb{E}}_{(x_{t},\vec{r}_{t})\sim D}\left[\frac{W^{\prime}_{t}(\pi(x_{t}))}{W^{\prime}_{t}(\pi(x_{t}))^{2}}\right]$$ (3.2) $$=\mathop{\mathbb{E}}_{x_{t}\sim D_{X}}\left[\frac{1}{W^{\prime}_{t}(\pi(x_{t}))}\right]\leq 2K\enspace,$$ (3.3) where Eq. (3.2) follows by the boundedness of $r_{t}$ and Eq. (3.3) follows from the constraints in Step 1. Hence, $$\sum_{t^{\prime}=1\dotsc t}{\mathbb{E}}_{t^{\prime}}[y_{t^{\prime}}^{2}]\leq 2Kt\doteq V_{t}\enspace.$$ Since $(\ln t)/t$ is decreasing for $t\geq 3$, we obtain that $\mu_{t}$ is non-increasing (by separately analyzing $t=1$, $t=2$, $t\geq 3$). Let $t_{0}$ be the first $t$ such that $\mu_{t}<1/2K$. Note that $b_{t}\geq 4K\mu_{t}$, so for $t<t_{0}$, we have $b_{t}\geq 2$ and $\Pi_{t}=\Pi$. Hence, the deviation bound holds trivially for $t<t_{0}$. Let $t\geq t_{0}$. For $t^{\prime}\leq t$, by the monotonicity of $\mu_{t}$, $$R_{t^{\prime}}=1/\mu_{t^{\prime}}\leq 1/\mu_{t}=\sqrt{\frac{2Kt}{\ln(1/\delta_{t})}}=\sqrt{\frac{V_{t}}{\ln(1/\delta_{t})}}\enspace.$$ Hence, the assumptions of Theorem 13 are satisfied, and $$\Pr\left[\left\lvert\eta_{t}(\pi)-\eta_{D}(\pi)\right\rvert\geq b_{t}\right]\leq 2\delta_{t}\enspace.$$ The union bound over $\pi$ and $t$ yields Eq. (3.1).
∎ This immediately implies that the cumulative regret is bounded as $$\sum_{t=1\dotsc T}\left(\eta_{D}(\pi_{\max})-r_{t}\right)\leq 8\sqrt{2K\ln\frac{4NT^{2}}{\delta}}\sum_{t=1}^{T}\frac{1}{\sqrt{t}}\leq 16\sqrt{2TK\ln\frac{4T^{2}N}{\delta}}$$ (3.4) and gives us the following theorem. Theorem 4. For all distributions $D$ over $(x,\vec{r})$ with $K$ actions, for all sets of $N$ policies $\Pi$, with probability at least $1-\delta$, the regret of PolicyElimination (Algorithm 1) over $T$ rounds is at most $$16\sqrt{2TK\ln\frac{4T^{2}N}{\delta}}\enspace.$$ 4 THE RANDOMIZED UCB ALGORITHM PolicyElimination is the simplest exhibition of the minimax argument, but it has some drawbacks: 1. The algorithm keeps explicit track of the space of good policies (like a version space), which is difficult to implement efficiently in general. 2. If the optimal policy is mistakenly eliminated by chance, the algorithm can never recover. 3. The algorithm requires perfect knowledge of the distribution $D_{X}$ over contexts. These difficulties are addressed by RandomizedUCB (or RUCB for short), an algorithm which we present and analyze in this section. Our approach is reminiscent of the UCB algorithm (Auer et al., 2002a), developed for the context-free setting, which keeps an upper confidence bound on the expected reward of each action. However, instead of choosing the highest upper confidence bound, we randomize over choices according to the value of their empirical performance. The algorithm has the following properties: 1. The optimization step required by the algorithm always considers the full set of policies (i.e., explicit tracking of the set of good policies is avoided), and thus it can be efficiently implemented using an argmax oracle. We discuss this further in Section 5. 2.
Suboptimal policies are implicitly used with decreasing frequency via a non-uniform variance constraint that depends on a policy's estimated regret. A consequence of this is a bound on the value of the optimization, stated in Lemma 7 below. 3. Instead of $D_{X}$, the algorithm uses the history of previously seen contexts. The effect of this approximation is quantified in Theorem 6 below. The regret of RandomizedUCB is bounded as follows: Theorem 5. For all distributions $D$ over $(x,\vec{r})$ with $K$ actions, for all sets of $N$ policies $\Pi$, with probability at least $1-\delta$, the regret of RandomizedUCB (Algorithm 2) over $T$ rounds is at most $$O\left(\sqrt{TK\log\left(TN/\delta\right)}+K\log(NK/\delta)\right).$$ The proof is given in Appendix D.4. Here, we present an overview of the analysis. 4.1 EMPIRICAL VARIANCE ESTIMATES A key technical prerequisite for the regret analysis is the accuracy of the empirical variance estimates. For a distribution $P$ over policies $\Pi$ and a particular policy $\pi\in\Pi$, define $$\displaystyle V_{P,\pi,t}$$ $$\displaystyle=\mathop{\mathbb{E}}_{x\sim D_{X}}\left[\frac{1}{(1-K\mu_{t})W_{P}(x,\pi(x))+\mu_{t}}\right]$$ $$\displaystyle\widehat{V}_{P,\pi,t}$$ $$\displaystyle=\frac{1}{t-1}\sum_{i=1}^{t-1}\frac{1}{(1-K\mu_{t})W_{P}(x_{i},\pi(x_{i}))+\mu_{t}}.$$ The first quantity $V_{P,\pi,t}$ is (a bound on) the variance incurred by an importance-weighted estimate of reward in round $t$ using the action distribution induced by $P$, and the second quantity $\widehat{V}_{P,\pi,t}$ is an empirical estimate of $V_{P,\pi,t}$ using the finite sample $\{x_{1},\dotsc,x_{t-1}\}\subseteq X$ drawn from $D_{X}$. We show that for all distributions $P$ and all $\pi\in\Pi$, $\widehat{V}_{P,\pi,t}$ is close to $V_{P,\pi,t}$ with high probability. Theorem 6.
For any $\epsilon\in(0,1)$, with probability at least $1-\delta$, $$V_{P,\pi,t}\leq(1+\epsilon)\cdot\widehat{V}_{P,\pi,t}+\frac{7500}{\epsilon^{3}}\cdot K$$ for all distributions $P$ over $\Pi$, all $\pi\in\Pi$, and all $t\geq 16K\log(8KN/\delta)$. The proof appears in Appendix C. 4.2 REGRET ANALYSIS Central to the analysis is the following lemma that bounds the value of the optimization in each round. It is a direct corollary of Lemma 24 in Appendix D.4. Lemma 7. If $\operatorname{OPT}_{t}$ is the value of the optimization problem (4.1) in round $t$, then $$\operatorname{OPT}_{t}\ \leq\ O\left(\sqrt{\frac{KC_{t-1}}{t-1}}\right)\ =\ O\left(\sqrt{\frac{K\log(Nt/\delta)}{t}}\right).$$ This lemma implies that the algorithm is always able to select a distribution over the policies that focuses mostly on the policies with low estimated regret. Moreover, the variance constraints ensure that good policies never appear too bad, and that only bad policies are allowed to incur high variance in their reward estimates. Hence, minimizing the objective in (4.1) is an effective surrogate for minimizing regret. The bulk of the analysis consists of analyzing the variance of the importance-weighted reward estimates $\eta_{t}(\pi)$ and showing how they relate to the actual expected rewards $\eta_{D}(\pi)$. The details are deferred to Appendix D. 5 USING AN ARGMAX ORACLE In this section, we show how to solve the optimization problem (4.1) using the argmax oracle ($\mathcal{AMO}$) for our set of policies. Namely, we describe an algorithm whose running time is polynomial and independent of the number of policies (or rather, depends only on $\log N$, the representation size of a policy), and which makes queries to $\mathcal{AMO}$ to compute a distribution over policies suitable for the optimization step of Algorithm 2. This algorithm relies on the ellipsoid method, a general technique for solving convex programs equipped with a separation oracle.
A separation oracle is defined as follows: Definition 2. Let $S$ be a convex set in $\mathbb{R}^{n}$. A separation oracle for $S$ is an algorithm that, given a point $x\in\mathbb{R}^{n}$, either declares correctly that $x\in S$, or produces a hyperplane $H$ such that $x$ and $S$ are on opposite sides of $H$. We do not describe the ellipsoid algorithm here (since it is standard), but only spell out its key properties in the following lemma. For a point $x\in\mathbb{R}^{n}$ and $r\geq 0$, we use the notation $B(x,r)$ to denote the $\ell_{2}$ ball of radius $r$ centered at $x$. Lemma 8. Suppose we are required to decide whether a convex set $S\subseteq\mathbb{R}^{n}$ is empty or not. We are given a separation oracle for $S$ and two numbers $R$ and $r$ such that $S\subseteq B(0,R)$ and, if $S$ is non-empty, there is a point $x^{\star}$ such that $S\supseteq B(x^{\star},r)$. The ellipsoid algorithm correctly decides whether $S$ is empty, executing at most $O(n^{2}\log(\frac{R}{r}))$ iterations, each involving one call to the separation oracle and additional $O(n^{2})$ processing time. We now write a convex program whose solution is the required distribution, and show how to solve it using the ellipsoid method by giving a separation oracle for its feasible set using $\mathcal{AMO}$. Fix a time period $t$. Let $\mathcal{X}_{t-1}$ be the set of all contexts seen so far, i.e., $\mathcal{X}_{t-1}=\{x_{1},x_{2},\ldots,x_{t-1}\}$. We embed all policies $\pi\in\Pi$ in $\mathbb{R}^{(t-1)K}$, with coordinates identified with $(x,a)\in\mathcal{X}_{t-1}\times A$. With abuse of notation, a policy $\pi$ is represented by the vector $\pi$ with coordinate $\pi(x,a)=1$ if $\pi(x)=a$ and $0$ otherwise. Let $\mathcal{C}$ be the convex hull of all policy vectors $\pi$.
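The feasibility version of the ellipsoid method invoked in Lemma 8 can be sketched in a few lines. Below is a minimal central-cut implementation in Python with NumPy; the function names and the simple box oracle in the usage note are our own illustrations, not part of the paper:

```python
import numpy as np

def ellipsoid_feasibility(oracle, n, R, max_iters):
    """Search for a point of a convex set S inside B(0, R), or give up.

    oracle(x) returns None if x is in S, otherwise a vector g such that
    g . x >= g . y for every y in S (the normal of a separating hyperplane).
    """
    c = np.zeros(n)              # current ellipsoid center
    A = (R ** 2) * np.eye(n)     # shape matrix: E = {x : (x-c)^T A^{-1} (x-c) <= 1}
    for _ in range(max_iters):
        g = oracle(c)
        if g is None:
            return c             # the center is feasible
        # standard central-cut update: keep the half-ellipsoid containing S
        g = g / np.sqrt(g @ A @ g)
        Ag = A @ g
        c = c - Ag / (n + 1)
        A = (n ** 2 / (n ** 2 - 1.0)) * (A - (2.0 / (n + 1)) * np.outer(Ag, Ag))
    return None                  # volume has shrunk enough to declare S empty
```

With the iteration budget of Lemma 8 this either returns a feasible point or certifies (up to the inner radius $r$) that $S$ is empty; for example, an oracle for the box $[0.2,0.8]^{2}$ yields a point of the box in a few dozen iterations.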
Recall that a distribution $P$ over policies corresponds to a point inside $\mathcal{C}$, i.e., $W_{P}(x,a)=\sum_{\pi:\pi(x)=a}P(\pi)$, and that $W^{\prime}(x,a)=(1-\mu_{t}K)W(x,a)+\mu_{t}$, where $\mu_{t}$ is as defined in Algorithm 2. Also define $\beta_{t}=\frac{t-1}{180C_{t-1}}$. In the following, we use the notation $x\sim h_{t-1}$ to denote a context drawn uniformly at random from $\mathcal{X}_{t-1}$. Consider the following convex program: $$\displaystyle\min\ s\text{ s.t.}$$ $$\displaystyle\Delta_{t-1}(W)\ \leq\ s$$ (5.1) $$\displaystyle W\ \in\ \mathcal{C}$$ (5.2) $$\displaystyle\forall Z\in\mathcal{C}:$$ $$\displaystyle\mathop{\mathbb{E}}_{x\sim h_{t-1}}\!\!\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\!\leq\max\{4K,\beta_{t}\Delta_{t-1}(Z)^{2}\}$$ (5.3) We claim that this program is equivalent to the RUCB optimization problem (4.1), up to finding an explicit distribution over policies corresponding to the optimal solution. This can be seen as follows. Since we require $W\in\mathcal{C}$, it can be interpreted as being equal to $W_{P}$ for some distribution $P$ over policies. The constraints (5.3) are equivalent to those of (4.1) by the substitution $Z=W_{Q}$. The above convex program can be solved by performing a binary search over $s$ and testing feasibility of the constraints. For a fixed value of $s$, the feasibility problem defined by (5.1)–(5.3) is denoted by $\mathcal{A}$. We now sketch how we construct a separation oracle for the feasible region of $\mathcal{A}$. The details of the algorithm are somewhat involved because we need to ensure that the feasible region, when non-empty, has non-negligible volume (recall the requirements of Lemma 8). This necessitates allowing a small error in satisfying the constraints of the program. We leave the details to Appendix E. Modulo these details, the construction of the separation oracle essentially implies that we can solve $\mathcal{A}$.
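The linear-optimization primitive the separation oracle relies on (Lemma 9 below) is mechanical: pack the weight vector into a fake reward sequence and hand it to the oracle. A small Python sketch, using a brute-force stand-in for $\mathcal{AMO}$ over an explicit policy list (the real oracle would be an efficient learner; all names here are our own):

```python
def argmax_oracle(policies, dataset):
    """Stand-in for AMO: return the policy maximizing total reward on the dataset.

    dataset is a list of (context, rewards) pairs, rewards a dict action -> value.
    """
    return max(policies, key=lambda pi: sum(rew[pi(x)] for x, rew in dataset))

def linear_opt_over_C(policies, contexts, actions, w):
    """Lemma 9: maximize w . pi over policy vectors (hence over vertices of C)
    with one oracle call, by setting the fake rewards r(a) = w[(x, a)]."""
    dataset = [(x, {a: w[(x, a)] for a in actions}) for x in contexts]
    return argmax_oracle(policies, dataset)
```

Since $w\cdot\pi=\sum_{x}w(x,\pi(x))$, maximizing the fake cumulative reward is exactly maximizing the linear objective, which is the content of Lemma 9.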
Before giving the construction of the separation oracle, we first show that $\mathcal{AMO}$ allows us to do linear optimization over $\mathcal{C}$ efficiently: Lemma 9. Given a vector $w\in\mathbb{R}^{(t-1)K}$, we can compute $\arg\max_{Z\in\mathcal{C}}w\cdot Z$ using one invocation of $\mathcal{AMO}$. Proof. The sequence for $\mathcal{AMO}$ consists of $x_{t^{\prime}}\in\mathcal{X}_{t-1}$ and $\vec{r}_{t^{\prime}}(a)=w(x_{t^{\prime}},a)$. The lemma now follows since $w\cdot\pi=\sum_{x\in\mathcal{X}_{t-1}}w(x,\pi(x))$. ∎ We need another simple technical lemma which explains how to get a separating hyperplane for violations of convex constraints: Lemma 10. For $x\in\mathbb{R}^{n}$, let $f(x)$ be a convex function of $x$, and consider the convex set $K$ defined by $K=\{x:\ f(x)\leq 0\}$. Suppose we have a point $y$ such that $f(y)>0$. Let $\nabla f(y)$ be a subgradient of $f$ at $y$. Then the hyperplane $f(y)+\nabla f(y)\cdot(x-y)=0$ separates $y$ from $K$. Proof. Let $g(x)=f(y)+\nabla f(y)\cdot(x-y)$. By the convexity of $f$, we have $f(x)\geq g(x)$ for all $x$. Thus, for any $x\in K$, we have $g(x)\leq f(x)\leq 0$. Since $g(y)=f(y)>0$, we conclude that $g(x)=0$ separates $y$ from $K$. ∎ Now given a candidate point $W$, a separation oracle can be constructed as follows. We check whether $W$ satisfies the constraints of $\mathcal{A}$. If any constraint is violated, then we find a hyperplane separating $W$ from all points satisfying the constraint. 1. First, for constraint (5.1), note that $\eta_{t-1}(W)$ is linear in $W$, and so we can compute $\max_{\pi}\eta_{t-1}(\pi)$ via $\mathcal{AMO}$ as in Lemma 9. We can then compute $\eta_{t-1}(W)$ and check if the constraint is satisfied. If not, then the constraint, being linear, automatically yields a separating hyperplane. 2. Next, we consider constraint (5.2). To check if $W\in\mathcal{C}$, we use the perceptron algorithm. 
We shift the origin to $W$ and run the perceptron algorithm with all points $\pi\in\Pi$ as positive examples. The perceptron algorithm aims to find a hyperplane putting all policies $\pi\in\Pi$ on one side. In each iteration of the perceptron algorithm, we have a candidate hyperplane (specified by its normal vector); if there is a policy $\pi$ on the wrong side of this hyperplane, we can find it by running a linear optimization over $\mathcal{C}$ in the negative normal-vector direction, as in Lemma 9. If $W\notin\mathcal{C}$, then in a bounded number of iterations (depending on the distance of $W$ from $\mathcal{C}$ and the maximum magnitude $\|\pi\|_{2}$) we obtain a separating hyperplane. In passing, we also note that if $W\in\mathcal{C}$, the same technique allows us to explicitly compute an approximate convex combination of policies in $\Pi$ that yields $W$. This is done by running the perceptron algorithm as before and stopping after the bound on the number of iterations has been reached. We then collect all the policies found during the run of the perceptron algorithm, and we are guaranteed that $W$ is close in distance to their convex hull. We can then find the closest point in the convex hull of these policies by solving a simple quadratic program. 3. Finally, we consider constraint (5.3). We rewrite $\eta_{t-1}(W)$ as $\eta_{t-1}(W)=w\cdot W$, where $w(x_{t^{\prime}},a)=r_{t^{\prime}}\mathbb{I}(a=a_{t^{\prime}})/W^{\prime}_{t^{\prime}}(a_{t^{\prime}})$. Thus, $\Delta_{t-1}(Z)=v-w\cdot Z$, where $v=\max_{\pi^{\prime}}\eta_{t-1}(\pi^{\prime})=\max_{\pi^{\prime}}w\cdot\pi^{\prime}$, which can be computed using $\mathcal{AMO}$ once. Next, using the candidate point $W$, compute the vector $u$ defined as $u(x,a)=\frac{n_{x}/t}{W^{\prime}(x,a)}$, where $n_{x}$ is the number of times $x$ appears in $h_{t-1}$, so that $\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]=u\cdot Z$.
Now the problem reduces to finding a point $Z\in\mathcal{C}$ which violates the constraint $$u\cdot Z\leq\max\{4K,\beta_{t}(w\cdot Z-v)^{2}\}.$$ Define $f(Z)=\max\{4K,\beta_{t}(w\cdot Z-v)^{2}\}-u\cdot Z$. Note that $f$ is a convex function of $Z$. Finding a point $Z$ that violates the above constraint is equivalent to solving the following (convex) program: $$\displaystyle f(Z)$$ $$\displaystyle\leq\ 0$$ (5.4) $$\displaystyle Z$$ $$\displaystyle\in\ \mathcal{C}$$ (5.5) To do this, we again apply the ellipsoid method, for which we need a separation oracle for the program. A separation oracle for constraint (5.5) can be constructed as in Step 2 above. For constraint (5.4), if the candidate solution $Z$ has $f(Z)>0$, then we can construct a separating hyperplane as in Lemma 10. Suppose that after solving the program we get a point $Z\in\mathcal{C}$ such that $f(Z)\leq 0$, i.e., $W$ violates constraint (5.3) for this $Z$. Then, since constraint (5.3) is convex in $W$, we can construct a separating hyperplane as in Lemma 10. This completes the description of the separation oracle. Working out the details carefully yields the following theorem, proved in Appendix E: Theorem 11. There is an iterative algorithm with $O(t^{5}K^{4}\log^{2}(\frac{tK}{\delta}))$ iterations, each involving one call to $\mathcal{AMO}$ and $O(t^{2}K^{2})$ processing time, that either declares correctly that $\mathcal{A}$ is infeasible or outputs a distribution $P$ over policies in $\Pi$ such that $W_{P}$ satisfies $$\displaystyle\forall Z\in\mathcal{C}:$$ $$\displaystyle\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\sum_{a}\frac{Z(x,a)}{W_{P}^{\prime}(x,a)}\right]\leq\max\{4K,\beta_{t}\Delta_{t-1}(Z)^{2}\}+5\epsilon$$ $$\displaystyle\Delta_{t-1}(W)\ \leq\ s+2\gamma,$$ where $\epsilon=\frac{8\delta}{\mu_{t}^{2}}$ and $\gamma=\frac{\delta}{\mu_{t}}$. 6 DELAYED FEEDBACK In a delayed feedback setting, we observe rewards with a $\tau$-step delay according to the following protocol: 1.
The world presents features $x_{t}$. 2. The learning algorithm chooses an action $a_{t}\in\{1,...,K\}$. 3. The world presents a reward $r_{t-\tau}$ for the action $a_{t-\tau}$ given the features $x_{t-\tau}$. We deal with delay by suitably modifying Algorithm 1 to incorporate the delay $\tau$, giving Algorithm 3. We can then prove the following theorem, which shows that the delay has an additive effect on regret. Theorem 12. For all distributions $D$ over $(x,\vec{r})$ with $K$ actions, for all sets of $N$ policies $\Pi$, and all delay intervals $\tau$, with probability at least $1-\delta$, the regret of DelayedPE (Algorithm 3) is at most $$16\sqrt{2K\ln\frac{4T^{2}N}{\delta}}\left(\tau+\sqrt{T}\right).$$ Proof. The proof is essentially the same as that of Theorem 4. The variance bound is unchanged because it depends only on the context distribution. Thus, it suffices to replace $\sum_{t=1}^{T}\frac{1}{\sqrt{t}}$ with $\tau+\sum_{t=\tau+1}^{T+\tau}\frac{1}{\sqrt{t-\tau}}=\tau+\sum_{t=1}^{T}\frac{1}{\sqrt{t}}$ in Eq. (3.4). ∎ Acknowledgements We thank Alina Beygelzimer, who helped in several formative discussions. References Auer (2002) Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3:397–422, 2002. Auer et al. (2002a) Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2–3):235–256, 2002a. Auer et al. (2002b) Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002b. Bartlett et al. (2007) Peter L. Bartlett, Elad Hazan, and Alexander Rakhlin. Adaptive online gradient descent. In NIPS, 2007. Beygelzimer et al. (2009) Alina Beygelzimer, John Langford, and Pradeep Ravikumar. Error correcting tournaments. In ALT, 2009. Beygelzimer et al. (2011) Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert E. Schapire.
Contextual bandit algorithms with supervised learning guarantees. In AISTATS, 2011. Even-Dar et al. (2006) Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079–1105, 2006. Freedman (1975) David A. Freedman. On tail probabilities for martingales. Annals of Probability, 3(1):100–118, 1975. Freund and Schapire (1997) Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. Kakade and Kalai (2005) Sham M. Kakade and Adam Kalai. From batch to transductive online learning. In NIPS, 2005. Kalai and Vempala (2005) Adam Tauman Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005. Lai and Robbins (1985) Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985. Langford et al. (2009) John Langford, Alexander Smola, and Martin Zinkevich. Slow learners are fast. In NIPS, 2009. Langford and Zhang (2007) John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In NIPS, 2007. Sion (1958) Maurice Sion. On general minimax theorems. Pacific Journal of Mathematics, 8(1):171–176, 1958. Srinivas et al. (2010) Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In ICML, 2010. Appendix A Concentration Inequality The following is an immediate corollary of Theorem 1 of Beygelzimer et al. (2011). It can be viewed as a version of Freedman's inequality (Freedman, 1975). Let $y_{1},\ldots,y_{T}$ be a sequence of real-valued random variables.
Let ${\mathbb{E}}_{t}$ denote the conditional expectation $\mathop{\mathbb{E}}[{}\cdot{}|\,y_{1},\ldots,y_{t-1}]$ and ${\mathbb{V}}_{t}$ the corresponding conditional variance. Theorem 13 (Freedman-style Inequality). Let $V,R\in\mathbb{R}$ be such that $\sum_{t=1}^{T}{\mathbb{V}}_{t}[y_{t}]\leq V$ and, for all $t$, $y_{t}-{\mathbb{E}}_{t}[y_{t}]\leq R$. Then for any $\delta>0$ such that $R\leq\sqrt{V/\ln(2/\delta)}$, with probability at least $1-\delta$, $$\left\lvert\sum_{t=1}^{T}y_{t}-\sum_{t=1}^{T}{\mathbb{E}}_{t}[y_{t}]\right\rvert\leq 2\sqrt{V\ln(2/\delta)}\enspace.$$ Appendix B Minimax Theorem The following is a continuous version of Sion's Minimax Theorem (Sion, 1958, Theorem 3.4). Theorem 14. Let $\mathcal{W}$ and $\mathcal{Z}$ be compact and convex sets, and let $f:\mathcal{W}\times\mathcal{Z}\to\mathbb{R}$ be a function which for all $Z\in\mathcal{Z}$ is convex and continuous in $W$, and for all $W\in\mathcal{W}$ is concave and continuous in $Z$. Then $$\min_{W\in\mathcal{W}}\max_{Z\in\mathcal{Z}}f(W,Z)=\max_{Z\in\mathcal{Z}}\min_{W\in\mathcal{W}}f(W,Z)\enspace.$$ Appendix C Empirical Variance Bounds In this section we prove Theorem 6. We first show uniform convergence for a certain class of policy distributions (Lemma 15), and then argue that each distribution $P$ is close to some distribution $\widetilde{P}$ from this class, in the sense that $V_{P,\pi,t}$ is close to $V_{\widetilde{P},\pi,t}$ and $\widehat{V}_{P,\pi,t}$ is close to $\widehat{V}_{\widetilde{P},\pi,t}$ (Lemma 16). Together, these imply the main uniform convergence result in Theorem 6. For each positive integer $m$, let $\mathsf{Sparse}[m]$ be the set of distributions $\widetilde{P}$ over $\Pi$ that can be written as $$\widetilde{P}(\pi)=\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}(\pi=\pi_{i})$$ (i.e., the average of $m$ delta functions) for some $\pi_{1},\dotsc,\pi_{m}\in\Pi$.
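The sparsification device used throughout this appendix, drawing $\widetilde{P}\sim P^{m}$, is plain Monte Carlo: sample $m$ i.i.d. policies from $P$ and average delta masses. A minimal sketch (function names and the dict representation of distributions are our own):

```python
import random
from collections import Counter

def sparsify(P, m, rng=None):
    """Draw P_tilde ~ P^m, an element of Sparse[m] approximating P.

    P is a dict policy -> probability. The result is a dict whose values are
    integer multiples of 1/m and whose support has size at most m.
    """
    rng = rng or random.Random(0)
    policies = list(P)
    weights = [P[pi] for pi in policies]
    draws = rng.choices(policies, weights=weights, k=m)   # m i.i.d. samples from P
    counts = Counter(draws)
    return {pi: c / m for pi, c in counts.items()}
```

The point of $\mathsf{Sparse}[m]$ is that it is a finite class of size at most $N^{m}$, so union bounds apply, while Lemma 16 shows the Monte Carlo approximation error is small for $m$ of order $1/(\gamma^{2}\mu_{t})$.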
In our analysis, we approximate an arbitrary distribution $P$ over $\Pi$ by a distribution $\widetilde{P}\in\mathsf{Sparse}[m]$ chosen randomly by independently drawing $\pi_{1},\dotsc,\pi_{m}\sim P$; we denote this process by $\widetilde{P}\sim P^{m}$. Lemma 15. Fix positive integers $(m_{1},m_{2},\dotsc)$. With probability at least $1-\delta$ over the random samples $(x_{1},x_{2},\dotsc)$ from $D_{X}$, $$\displaystyle V_{\widetilde{P},\pi,t}\leq(1+\lambda)\cdot\widehat{V}_{\widetilde{P},\pi,t}\\ \displaystyle+\left(5+\frac{1}{2\lambda}\right)\cdot\frac{(m_{t}+1)\log N+\log\frac{2t^{2}}{\delta}}{\mu_{t}\cdot(t-1)}$$ for all $\lambda>0$, all $t\geq 1$, all $\pi\in\Pi$, and all distributions $\widetilde{P}\in\mathsf{Sparse}[m_{t}]$. Proof. Let $$Z_{\widetilde{P},\pi,t}(x)\doteq\frac{1}{(1-K\mu_{t})W_{\widetilde{P}}(x,\pi(x))+\mu_{t}}$$ so that $V_{\widetilde{P},\pi,t}=\mathop{\mathbb{E}}_{x\sim D_{X}}[Z_{\widetilde{P},\pi,t}(x)]$ and $\widehat{V}_{\widetilde{P},\pi,t}=(t-1)^{-1}\sum_{i=1}^{t-1}Z_{\widetilde{P},\pi,t}(x_{i})$. Also let $$\displaystyle\varepsilon_{t}$$ $$\displaystyle\doteq\frac{\log(|\mathsf{Sparse}[m_{t}]|N2t^{2}/\delta)}{\mu_{t}\cdot(t-1)}$$ $$\displaystyle=\frac{(m_{t}+1)\log N+\log\frac{2t^{2}}{\delta}}{\mu_{t}\cdot(t-1)}.$$ We apply Bernstein's inequality and union bounds over $\widetilde{P}\in\mathsf{Sparse}[m_{t}]$, $\pi\in\Pi$, and $t\geq 1$, so that with probability at least $1-\delta$, $$V_{\widetilde{P},\pi,t}\leq\widehat{V}_{\widetilde{P},\pi,t}+\sqrt{2V_{\widetilde{P},\pi,t}\varepsilon_{t}}+(2/3)\varepsilon_{t}$$ for all $t\geq 1$, all $\pi\in\Pi$, and all distributions $\widetilde{P}\in\mathsf{Sparse}[m_{t}]$. The conclusion follows by solving the quadratic inequality for $V_{\widetilde{P},\pi,t}$ to get $$V_{\widetilde{P},\pi,t}\leq\widehat{V}_{\widetilde{P},\pi,t}+\sqrt{2\widehat{V}_{\widetilde{P},\pi,t}\varepsilon_{t}}+5\varepsilon_{t}$$ and then applying the AM/GM inequality. ∎ Lemma 16. Fix any $\gamma\in[0,1]$ and any $x\in X$.
For any distribution $P$ over $\Pi$ and any $\pi\in\Pi$, if $$m\doteq\left\lceil\frac{6}{\gamma^{2}\mu_{t}}\right\rceil,$$ then $$\displaystyle\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}\Biggl{|}\frac{1}{(1-K\mu_{t})W_{\widetilde{P}}(x,\pi(x))+\mu_{t}}\\ \displaystyle\qquad{}-\frac{1}{(1-K\mu_{t})W_{P}(x,\pi(x))+\mu_{t}}\Biggr{|}\\ \displaystyle\leq\frac{\gamma}{(1-K\mu_{t})W_{P}(x,\pi(x))+\mu_{t}}.$$ This implies that for all distributions $P$ over $\Pi$ and any $\pi\in\Pi$, there exists $\widetilde{P}\in\mathsf{Sparse}[m]$ such that for any $\lambda>0$, $$\displaystyle\left(V_{P,\pi,t}-V_{\widetilde{P},\pi,t}\right)+(1+\lambda)\left(\widehat{V}_{\widetilde{P},\pi,t}-\widehat{V}_{P,\pi,t}\right)\\ \displaystyle\leq\gamma(V_{P,\pi,t}+(1+\lambda)\widehat{V}_{P,\pi,t}).$$ Proof. We randomly draw $\widetilde{P}\sim P^{m}$, with $\widetilde{P}(\pi^{\prime})\doteq m^{-1}\sum_{i=1}^{m}\mathbb{I}(\pi^{\prime}=\pi_{i})$, and then define $$\displaystyle z$$ $$\displaystyle\doteq\sum_{\pi^{\prime}\in\Pi}P(\pi^{\prime})\cdot\mathbb{I}(\pi^{\prime}(x)=\pi(x))\quad\text{and}$$ $$\displaystyle\hat{z}$$ $$\displaystyle\doteq\sum_{\pi^{\prime}\in\Pi}\widetilde{P}(\pi^{\prime})\cdot\mathbb{I}(\pi^{\prime}(x)=\pi(x)).$$ We have $z=\mathop{\mathbb{E}}_{\pi^{\prime}\sim P}[\mathbb{I}(\pi^{\prime}(x)=\pi(x))]$ and $\hat{z}=m^{-1}\sum_{i=1}^{m}\mathbb{I}(\pi_{i}(x)=\pi(x))$. In other words, $\hat{z}$ is the average of $m$ independent Bernoulli random variables, each with mean $z$. Thus, $\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}[(\hat{z}-z)^{2}]=z(1-z)/m$ and $\Pr_{\widetilde{P}\sim P^{m}}[\hat{z}\leq z/2]\leq\exp(-mz/8)$ by a Chernoff bound.
We have $$\displaystyle\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}\left|\frac{1}{(1-K\mu_{t})\hat{z}+\mu_{t}}-\frac{1}{(1-K\mu_{t})z+\mu_{t}}\right|$$ $$\displaystyle\leq\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}\frac{(1-K\mu_{t})|\hat{z}-z|}{[(1-K\mu_{t})\hat{z}+\mu_{t}][(1-K\mu_{t})z+\mu_{t}]}$$ $$\displaystyle\leq\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}\frac{(1-K\mu_{t})|\hat{z}-z|\mathbb{I}(\hat{z}\geq 0.5z)}{0.5[(1-K\mu_{t})z+\mu_{t}]^{2}}$$ $$\displaystyle\quad{}+\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}\frac{(1-K\mu_{t})|\hat{z}-z|\mathbb{I}(\hat{z}\leq 0.5z)}{\mu_{t}[(1-K\mu_{t})z+\mu_{t}]}$$ $$\displaystyle\leq\frac{(1-K\mu_{t})\sqrt{\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}|\hat{z}-z|^{2}}}{0.5[(1-K\mu_{t})z+\mu_{t}]^{2}}$$ $$\displaystyle\quad{}+\frac{(1-K\mu_{t})z\Pr_{\widetilde{P}\sim P^{m}}(\hat{z}\leq 0.5z)}{\mu_{t}[(1-K\mu_{t})z+\mu_{t}]}$$ $$\displaystyle\leq\frac{(1-K\mu_{t})\sqrt{z/m}}{0.5[2\sqrt{(1-K\mu_{t})z\mu_{t}}][(1-K\mu_{t})z+\mu_{t}]}$$ $$\displaystyle\quad{}+\frac{(1-K\mu_{t})z\exp(-mz/8)}{\mu_{t}[(1-K\mu_{t})z+\mu_{t}]}$$ $$\displaystyle\leq\frac{\gamma\sqrt{1-K\mu_{t}}\sqrt{z/m}}{\sqrt{z(6/m)}[(1-K\mu_{t})z+\mu_{t}]}$$ $$\displaystyle\quad{}+\frac{(1-K\mu_{t})\gamma^{2}mz\exp(-mz/8)}{6[(1-K\mu_{t})z+\mu_{t}]},$$ where the third inequality follows from Jensen's inequality, and the fourth inequality uses the AM/GM inequality in the denominator of the first term and the previous observations in the numerators. The final expression simplifies to the first displayed inequality of the lemma by observing that $mz\exp(-mz/8)\leq 3$ for all $mz\geq 0$ (the maximum is achieved at $mz=8$).
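The numeric claim closing this step, that $u\,e^{-u/8}\leq 3$ for all $u\geq 0$ with the maximum at $u=8$ (where the value is $8/e\approx 2.943$), follows by calculus and is also quick to verify on a grid:

```python
import math

# Check that f(u) = u * exp(-u/8) stays below 3, with its maximum at u = 8,
# as used in the proof of Lemma 16. f'(u) = exp(-u/8) * (1 - u/8) vanishes at u = 8.
us = [i / 100.0 for i in range(0, 10001)]        # grid over [0, 100]
vals = [u * math.exp(-u / 8.0) for u in us]
peak_index = max(range(len(us)), key=lambda i: vals[i])
assert max(vals) <= 3.0
assert abs(us[peak_index] - 8.0) < 0.01
```

For $u>100$ the function is decreasing, so the grid check covers the relevant range.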
The second displayed inequality follows from the following facts: $$\displaystyle\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}|V_{P,\pi,t}-V_{\widetilde{P},\pi,t}|\leq\gamma V_{P,\pi,t},$$ $$\displaystyle\mathop{\mathbb{E}}_{\widetilde{P}\sim P^{m}}(1+\lambda)|\widehat{V}_{P,\pi,t}-\widehat{V}_{\widetilde{P},\pi,t}|\leq\gamma(1+\lambda)\widehat{V}_{P,\pi,t}.$$ Both inequalities follow from the first displayed bound of the lemma, by taking expectations with respect to the true (and empirical) distributions over $x$. The desired bound follows by adding the above two inequalities, which implies that the bound holds in expectation, and hence the existence of a $\widetilde{P}$ for which the bound holds. ∎ Now we can prove Theorem 6. Proof of Theorem 6. Let $$m_{t}\doteq\left\lceil\frac{6}{\lambda^{2}}\cdot\frac{1}{\mu_{t}}\right\rceil$$ (for some $\lambda\in(0,1/5)$ to be determined) and condition on the event of probability at least $1-\delta$ from Lemma 15 that $$\displaystyle V_{\widetilde{P},\pi,t}-(1+\lambda)\widehat{V}_{\widetilde{P},\pi,t}\\ \displaystyle\leq K\cdot\left(5+\frac{1}{2\lambda}\right)\cdot\frac{(m_{t}+1)\log(N)+\log(2t^{2}/\delta)}{K\mu_{t}\cdot(t-1)}\\ \displaystyle\leq K\cdot 5\left(1+\frac{1}{\lambda}\right)\cdot\frac{(m_{t}+1)\log(N)+\log(2t^{2}/\delta)}{K\mu_{t}\cdot t}$$ for all $t\geq 2$, all $\widetilde{P}\in\mathsf{Sparse}[m_{t}]$, and all $\pi\in\Pi$. Using the definitions of $m_{t}$ and $\mu_{t}$, the second term is at most $(40/\lambda^{2})\cdot(1+1/\lambda)\cdot K$ for all $t\geq 16K\log(8KN/\delta)$: the key here is that for $t\geq 16K\log(8KN/\delta)$, we have $\mu_{t}=\sqrt{\log(Nt/\delta)/(Kt)}\leq 1/(2K)$ and therefore $$\frac{m_{t}\log(N)}{K\mu_{t}t}\leq\frac{6}{\lambda^{2}}\quad\text{and}\quad\frac{\log(N)+\log(2t^{2}/\delta)}{K\mu_{t}t}\leq 2.$$ Now fix $t\geq 16K\log(8KN/\delta)$, $\pi\in\Pi$, and a distribution $P$ over $\Pi$.
Let $\widetilde{P}\in\mathsf{Sparse}[m_{t}]$ be the distribution guaranteed by Lemma 16 with $\gamma=\lambda$, satisfying $$V_{P,\pi,t}\leq\frac{V_{\widetilde{P},\pi,t}-(1+\lambda)\widehat{V}_{\widetilde{P},\pi,t}+(1+\lambda)^{2}\widehat{V}_{P,\pi,t}}{1-\lambda}.$$ Substituting the previous bound for $V_{\widetilde{P},\pi,t}-(1+\lambda)\widehat{V}_{\widetilde{P},\pi,t}$ gives $$\displaystyle V_{P,\pi,t}\leq\frac{1}{1-\lambda}\left(\frac{40}{\lambda^{2}}(1+1/\lambda)K+(1+\lambda)^{2}\widehat{V}_{P,\pi,t}\right).$$ This can be bounded as $(1+\epsilon)\cdot\widehat{V}_{P,\pi,t}+(7500/\epsilon^{3})\cdot K$ by setting $\lambda=\epsilon/5$. ∎ Appendix D Analysis of RandomizedUCB D.1 Preliminaries First, we define the following constants: • $\epsilon\in(0,1)$ is a fixed constant; • $\rho\doteq\frac{7500}{\epsilon^{3}}$ is the factor that appears in the bound from Theorem 6; • $\theta\doteq(\rho+1)/(1-(1+\epsilon)/2)=\frac{2}{1-\epsilon}\left(1+\frac{7500}{\epsilon^{3}}\right)\geq 5$ is a constant central to Lemma 21, which bounds the variance of the optimal policy's estimated rewards. Recall the algorithm-specific quantities $$\displaystyle C_{t}$$ $$\displaystyle\doteq 2\log\left(\frac{Nt}{\delta}\right)$$ $$\displaystyle\mu_{t}$$ $$\displaystyle\doteq\min\left\{\frac{1}{2K},\ \sqrt{\frac{C_{t}}{2Kt}}\right\}.$$ It can be checked that $\mu_{t}$ is non-increasing. We define the following time indices: • $t_{0}$ is the first round $t$ in which $\mu_{t}=\sqrt{C_{t}/(2Kt)}$. Note that $8K\leq t_{0}\leq 8K\log(NK/\delta)$.
• $t_{1}\doteq\lceil 16K\log(8KN/\delta)\rceil$ is the round given by Theorem 6 such that, with probability at least $1-\delta$, $$\displaystyle\mathop{\mathbb{E}}_{x_{t}\sim D_{X}}\left[\frac{1}{W_{t}^{\prime}(\pi(x_{t}))}\right]\\ \displaystyle\leq(1+\epsilon)\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\frac{1}{W_{P_{t},\mu_{t}}(x,\pi(x))}\right]+\rho K$$ (D.1) for all $\pi\in\Pi$ and all $t\geq t_{1}$, where $W_{P,\mu}(x,\cdot)$ is the distribution over $A$ given by $$W_{P,\mu}(x,a)\doteq(1-K\mu)W_{P}(x,a)+\mu,$$ and the notation $\mathop{\mathbb{E}}_{x\sim h_{t-1}}$ denotes expectation with respect to the empirical (uniform) distribution over $x_{1},\dotsc,x_{t-1}$. The following lemma shows the effect of allowing slack in the optimization constraints. Lemma 17. If $P$ satisfies the constraints of the optimization problem (4.1) with slack $K$ for each distribution $Q$ over $\Pi$, i.e., $$\displaystyle\mathop{\mathbb{E}}_{\pi\sim Q}\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\frac{1}{(1-K\mu_{t})W_{P}(x,\pi(x))+\mu_{t}}\right]\\ \displaystyle\leq\max\left\{4K,\frac{(t-1)\Delta_{t-1}(W_{Q})^{2}}{180C_{t-1}}\right\}+K$$ for all $Q$, then $P$ satisfies $$\displaystyle\mathop{\mathbb{E}}_{\pi\sim Q}\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\frac{1}{(1-K\mu_{t})W_{P}(x,\pi(x))+\mu_{t}}\right]\\ \displaystyle\leq\max\left\{5K,\frac{(t-1)\Delta_{t-1}(W_{Q})^{2}}{144C_{t-1}}\right\}$$ for all $Q$. Proof. Let $b\doteq\max\left\{4K,\frac{(t-1)\Delta_{t-1}(W_{Q})^{2}}{180C_{t-1}}\right\}$. Note that $\frac{b}{4}\geq K$, hence $b+K\leq\frac{5b}{4}$; since $\frac{5}{4}\cdot 4K=5K$ and $\frac{5}{4}\cdot\frac{1}{180}=\frac{1}{144}$, this gives the stated bound. ∎ Note that the allowance of slack $K$ is somewhat arbitrary; any $O(K)$ slack is tolerable provided that other constants are adjusted appropriately.
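The importance-weighted reward estimates $\eta_{t}(\pi)$ whose deviations the next subsection bounds are cheap to compute from logged interaction data. A minimal sketch, assuming a log of (context, action, reward, action-probability) tuples in which the probability $W^{\prime}_{\tau}(a_{\tau})$ was recorded at logging time (all interface names are ours):

```python
def ips_estimate(history, policy):
    """Importance-weighted (inverse propensity) estimate of a policy's reward.

    history: list of (context, action, reward, prob) tuples, where prob is
    the probability W'_tau(a_tau) with which the logged action was chosen.
    Returns the average of r_tau * I(policy(x_tau) = a_tau) / prob.
    """
    total = 0.0
    for x, a, r, p in history:
        if policy(x) == a:       # indicator I(pi(x_tau) = a_tau)
            total += r / p       # importance weight 1 / W'_tau(a_tau)
    return total / len(history)
```

Each term is unbiased for the policy's expected reward because the indicator fires with probability exactly `prob`; the variance of the terms is what the quantities $\bar{V}_{t}(\pi)$ defined next control.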
D.2 Deviation Bound for $\eta_{t}(\pi)$ For any policy $\pi\in\Pi$, define, for $1\leq t\leq t_{0}$, $$\bar{V}_{t}(\pi)\doteq K,$$ and for $t>t_{0}$, $$\bar{V}_{t}(\pi)\doteq K+\mathop{\mathbb{E}}_{x_{t}\sim D_{X}}\left[\frac{1}{W_{t}^{\prime}(\pi(x_{t}))}\right].$$ The quantity $\bar{V}_{t}(\pi)$ bounds the variances of the terms in $\eta_{t}(\pi)$. Lemma 18. Assume the bound in (D.1) holds for all $\pi\in\Pi$ and $t\geq t_{1}$. For all $\pi\in\Pi$: 1. If $t\leq t_{1}$, then $$K\leq\bar{V}_{t}(\pi)\leq 4K.$$ 2. If $t>t_{1}$, then $$\displaystyle\bar{V}_{t}(\pi)$$ $$\displaystyle\leq(1+\epsilon)\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\frac{1}{(1-K\mu_{t})W_{P_{t}}(x,\pi(x))+\mu_{t}}\right]$$ $$\displaystyle\quad{}+(\rho+1)K.$$ Proof. For the first claim, note that if $t<t_{0}$, then $\bar{V}_{t}(\pi)=K$, and if $t_{0}\leq t<t_{1}$, then $$\mu_{t}=\sqrt{\frac{\log(Nt/\delta)}{Kt}}\geq\sqrt{\frac{\log(Nt_{0}/\delta)}{16K^{2}\log(8KN/\delta)}}\geq\frac{1}{4K};$$ so $W_{t}^{\prime}(a)\geq\mu_{t}\geq 1/(4K)$. For the second claim, pick any $t>t_{1}$, and note that by the definition of $t_{1}$, for any $\pi\in\Pi$ we have $$\displaystyle\mathop{\mathbb{E}}_{x_{t}\sim D_{X}}\left[\frac{1}{W_{t}^{\prime}(\pi(x_{t}))}\right]$$ $$\displaystyle\leq(1+\epsilon)\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\frac{1}{(1-K\mu_{t})W_{P_{t}}(x,\pi(x))+\mu_{t}}\right]+\rho K.$$ The stated bound on $\bar{V}_{t}(\pi)$ now follows from its definition. ∎ Let $$\bar{V}_{\max,t}(\pi)\doteq\max\{\bar{V}_{\tau}(\pi):\ \tau=1,2,\ldots,t\}.$$ The following lemma gives a deviation bound for $\eta_{t}(\pi)$ in terms of these quantities. Lemma 19. Pick any $\delta\in(0,1)$. With probability at least $1-\delta$, for all pairs $\pi,\pi^{\prime}\in\Pi$ and all $t\geq t_{0}$, we have $$\displaystyle\Bigl{|}(\eta_{t}(\pi)-\eta_{t}(\pi^{\prime}))-(\eta_{D}(\pi)-\eta_{D}(\pi^{\prime}))\Bigr{|}\\ \displaystyle\leq 2\sqrt{\frac{(\bar{V}_{\max,t}(\pi)+\bar{V}_{\max,t}(\pi^{\prime}))\cdot C_{t}}{t}}.$$ (D.2) Proof.
Fix any $t\geq t_{0}$ and $\pi,\pi^{\prime}\in\Pi$. Let $\delta_{t}\doteq\exp(-C_{t})$. Pick any $\tau\leq t$. Let $$Z_{\tau}(\pi)\doteq\frac{r_{\tau}(a_{\tau})\mathbb{I}(\pi(x_{\tau})=a_{\tau})}{W_{\tau}^{\prime}(a_{\tau})}$$ so that $\eta_{t}(\pi)=t^{-1}\sum_{\tau=1}^{t}Z_{\tau}(\pi)$. It is easy to see that $$\mathop{\mathbb{E}}_{\begin{subarray}{c}(x_{\tau},\vec{r}_{\tau})\sim D,\\ a_{\tau}\sim W_{\tau}^{\prime}\end{subarray}}\left[Z_{\tau}(\pi)-Z_{\tau}(\pi^{\prime})\right]=\eta_{D}(\pi)-\eta_{D}(\pi^{\prime})$$ and $$\displaystyle\sum_{\tau=1}^{t}\mathop{\mathbb{E}}_{\begin{subarray}{c}(x_{\tau},\vec{r}_{\tau})\sim D,\\ a_{\tau}\sim W_{\tau}^{\prime}\end{subarray}}\left[(Z_{\tau}(\pi)-Z_{\tau}(\pi^{\prime}))^{2}\right]$$ $$\displaystyle\leq\sum_{\tau=1}^{t}\mathop{\mathbb{E}}_{x_{\tau}\sim D_{X}}\left[\frac{1}{W_{\tau}^{\prime}(\pi(x_{\tau}))}+\frac{1}{W_{\tau}^{\prime}(\pi^{\prime}(x_{\tau}))}\right]$$ $$\displaystyle\leq t\cdot(\bar{V}_{\max,t}(\pi)+\bar{V}_{\max,t}(\pi^{\prime})).$$ Moreover, with probability $1$, $$|Z_{\tau}(\pi)-Z_{\tau}(\pi^{\prime})|\leq\frac{1}{\mu_{\tau}}.$$ Now, note that since $t\geq t_{0}$, $\mu_{t}=\sqrt{\frac{C_{t}}{2Kt}}$, so that $t=\frac{C_{t}}{2K\mu_{t}^{2}}$. Further, both $\bar{V}_{\max,t}(\pi)$ and $\bar{V}_{\max,t}(\pi^{\prime})$ are at least $K$. Using these bounds, we get $$\displaystyle\sqrt{\frac{1}{\log(1/\delta_{t})}\cdot t\cdot(\bar{V}_{\max,t}(\pi)+\bar{V}_{\max,t}(\pi^{\prime}))}$$ $$\displaystyle\geq\sqrt{\frac{1}{C_{t}}\cdot\frac{C_{t}}{2K\mu_{t}^{2}}\cdot 2K}=\frac{1}{\mu_{t}}\geq\frac{1}{\mu_{\tau}}$$ for all $\tau\leq t$, since the $\mu_{\tau}$'s are non-increasing.
Therefore, by Freedman’s inequality (Theorem 13), we have $$\Pr\Biggl{[}\Bigl{|}(\eta_{t}(\pi)-\eta_{t}(\pi^{\prime}))-(\eta_{D}(\pi)-\eta_{D}(\pi^{\prime}))\Bigr{|}>2\sqrt{\frac{(\bar{V}_{\max,t}(\pi)+\bar{V}_{\max,t}(\pi^{\prime}))\cdot\log(1/\delta_{t})}{t}}\Biggr{]}\leq 2\delta_{t}.$$ The conclusion follows by taking a union bound over $t_{0}\leq t\leq T$ and all pairs $\pi,\pi^{\prime}\in\Pi$. ∎ D.3 Variance Analysis We define the following condition, which will be assumed by most of the subsequent lemmas in this section. Condition 1. The deviation bound (D.1) holds for all $\pi\in\Pi$ and $t\geq t_{1}$, and the deviation bound (D.2) holds for all pairs $\pi,\pi^{\prime}\in\Pi$ and $t\geq t_{0}$. The next two lemmas relate the $\bar{V}_{t}(\pi)$ to the $\Delta_{t}(\pi)$. Lemma 20. Assume Condition 1. For any $t\geq t_{1}$ and $\pi\in\Pi$, if $\bar{V}_{t}(\pi)>\theta K$, then $$\Delta_{t-1}(\pi)\geq\sqrt{\frac{72\bar{V}_{t}(\pi)C_{t-1}}{t-1}}.$$ Proof. By Lemma 18, the fact $\bar{V}_{t}(\pi)>\theta K$ implies that $$\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\frac{1}{(1-K\mu_{t})W_{P_{t}}(x,\pi(x))+\mu_{t}}\right]>\frac{1}{1+\epsilon}\left(1-\frac{\rho+1}{\theta}\right)\bar{V}_{t}(\pi)\geq\frac{1}{2}\bar{V}_{t}(\pi).$$ Since $\bar{V}_{t}(\pi)>\theta K\geq 5K$, Lemma 17 implies that in order for $P_{t}$ to satisfy the optimization constraint in (4.1) corresponding to $\pi$ (with slack $\leq K$), it must be the case that $$\Delta_{t-1}(\pi)\geq\sqrt{\frac{144C_{t-1}}{t-1}\cdot\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\frac{1}{(1-K\mu_{t})W_{P_{t}}(x,\pi(x))+\mu_{t}}\right]}.$$ Combining with the above, we obtain $$\Delta_{t-1}(\pi)\geq\sqrt{\frac{72\bar{V}_{t}(\pi)C_{t-1}}{t-1}}.$$ ∎ Lemma 21. Assume Condition 1. For all $t\geq 1$, $\bar{V}_{\max,t}(\pi_{\max})\leq\theta K$ and $\bar{V}_{\max,t}(\pi_{t})\leq\theta K$. Proof. By induction on $t$.
The claim for all $t\leq t_{1}$ follows from Lemma 18. So take $t>t_{1}$, and assume as the (strong) inductive hypothesis that $\bar{V}_{\max,\tau}(\pi_{\max})\leq\theta K$ and $\bar{V}_{\max,\tau}(\pi_{\tau})\leq\theta K$ for $\tau\in\{1,\dotsc,t-1\}$. Suppose for sake of contradiction that $\bar{V}_{t}(\pi_{\max})>\theta K$. By Lemma 20, $$\Delta_{t-1}(\pi_{\max})\geq\sqrt{\frac{72\bar{V}_{t}(\pi_{\max})C_{t-1}}{t-1}}.$$ However, by the deviation bounds, we have $$\Delta_{t-1}(\pi_{\max})+\Delta_{D}(\pi_{t-1})\leq 2\sqrt{\frac{(\bar{V}_{\max,t-1}(\pi_{t-1})+\bar{V}_{\max,t-1}(\pi_{\max}))C_{t-1}}{t-1}}\leq 2\sqrt{\frac{2\bar{V}_{t}(\pi_{\max})C_{t-1}}{t-1}}<\sqrt{\frac{72\bar{V}_{t}(\pi_{\max})C_{t-1}}{t-1}}.$$ The second inequality follows from our assumption and the induction hypothesis: $$\bar{V}_{t}(\pi_{\max})>\theta K\geq\bar{V}_{\max,t-1}(\pi_{t-1}),\bar{V}_{\max,t-1}(\pi_{\max}).$$ Since $\Delta_{D}(\pi_{t-1})\geq 0$, we have a contradiction, so it must be that $\bar{V}_{t}(\pi_{\max})\leq\theta K$. This proves that $\bar{V}_{\max,t}(\pi_{\max})\leq\theta K$. It remains to show that $\bar{V}_{\max,t}(\pi_{t})\leq\theta K$. So suppose for sake of contradiction that the inequality fails, and let $t_{1}<\tau\leq t$ be any round for which $\bar{V}_{\tau}(\pi_{t})=\bar{V}_{\max,t}(\pi_{t})>\theta K$.
By Lemma 20, $$\Delta_{\tau-1}(\pi_{t})\geq\sqrt{\frac{72\bar{V}_{\tau}(\pi_{t})C_{\tau-1}}{\tau-1}}.$$ (D.3) On the other hand, $$\Delta_{\tau-1}(\pi_{t})\leq\Delta_{D}(\pi_{\tau-1})+\Delta_{\tau-1}(\pi_{t})+\Delta_{t}(\pi_{\max})=\Bigl{(}\Delta_{D}(\pi_{\tau-1})+\Delta_{\tau-1}(\pi_{\max})\Bigr{)}+\Bigl{(}\eta_{\tau-1}(\pi_{\max})-\eta_{\tau-1}(\pi_{t})-\Delta_{D}(\pi_{t})\Bigr{)}+\Bigl{(}\Delta_{D}(\pi_{t})+\Delta_{t}(\pi_{\max})\Bigr{)}.$$ The parenthesized terms can be bounded using the deviation bounds, so we have $$\Delta_{\tau-1}(\pi_{t})\leq 2\sqrt{\frac{(\bar{V}_{\max,\tau-1}(\pi_{\tau-1})+\bar{V}_{\max,\tau-1}(\pi_{\max}))C_{\tau-1}}{\tau-1}}+2\sqrt{\frac{(\bar{V}_{\max,\tau-1}(\pi_{t})+\bar{V}_{\max,\tau-1}(\pi_{\max}))C_{\tau-1}}{\tau-1}}+2\sqrt{\frac{(\bar{V}_{\max,t}(\pi_{t})+\bar{V}_{\max,t}(\pi_{\max}))C_{t}}{t}}\leq 2\sqrt{\frac{2\bar{V}_{\tau}(\pi_{t})C_{\tau-1}}{\tau-1}}+2\sqrt{\frac{2\bar{V}_{\tau}(\pi_{t})C_{\tau-1}}{\tau-1}}+2\sqrt{\frac{2\bar{V}_{\tau}(\pi_{t})C_{t}}{t}}<\sqrt{\frac{72\bar{V}_{\tau}(\pi_{t})C_{\tau-1}}{\tau-1}}$$ where the second inequality follows from the following facts: (i) by the induction hypothesis, $\bar{V}_{\max,\tau-1}(\pi_{\tau-1}),\bar{V}_{\max,\tau-1}(\pi_{\max}),\bar{V}_{\max,t}(\pi_{\max})\leq\theta K$, while $\bar{V}_{\tau}(\pi_{t})>\theta K$; (ii) $\bar{V}_{\tau}(\pi_{t})\geq\bar{V}_{\max,t}(\pi_{t})$; and (iii) since $\tau$ is a round that achieves $\bar{V}_{\max,t}(\pi_{t})$, we have $\bar{V}_{\tau}(\pi_{t})\geq\bar{V}_{\max,\tau-1}(\pi_{t})$. This contradicts the inequality in (D.3), so it must be that $\bar{V}_{\max,t}(\pi_{t})\leq\theta K$. ∎ Corollary 22.
Under the assumptions of Lemma 21, $$\Delta_{D}(\pi_{t})+\Delta_{t}(\pi_{\max})\leq 2\sqrt{\frac{2\theta KC_{t}}{t}}$$ for all $t\geq t_{0}$. Proof. Immediate from Lemma 21 and the deviation bounds from (D.2). ∎ The following lemma shows that a policy $\pi$ with a large variance bound $\bar{V}_{\max,t}(\pi)$ must also have a large empirical regret $\Delta_{t}(\pi)$. Lemma 23. Assume Condition 1. Pick any $\pi\in\Pi$ and $t\geq t_{1}$. If $\bar{V}_{\max,t}(\pi)>\theta K$, then $$\Delta_{t}(\pi)>2\sqrt{\frac{2\bar{V}_{\max,t}(\pi)C_{t}}{t}}.$$ Proof. Let $\tau\leq t$ be any round in which $\bar{V}_{\tau}(\pi)=\bar{V}_{\max,t}(\pi)>\theta K$. We have $$\Delta_{t}(\pi)\geq\Delta_{t}(\pi)-\Delta_{t}(\pi_{\max})-\Delta_{D}(\pi_{\tau-1})=\Delta_{\tau-1}(\pi)+\Bigl{(}\eta_{t}(\pi_{\max})-\eta_{t}(\pi)-\Delta_{D}(\pi)\Bigr{)}+\Bigl{(}\eta_{D}(\pi_{\tau-1})-\eta_{D}(\pi)-\Delta_{\tau-1}(\pi)\Bigr{)}\geq\sqrt{\frac{72\bar{V}_{\tau}(\pi)C_{\tau-1}}{\tau-1}}-2\sqrt{\frac{(\bar{V}_{\max,t}(\pi)+\bar{V}_{\max,t}(\pi_{\max}))C_{t}}{t}}-2\sqrt{\frac{(\bar{V}_{\max,\tau-1}(\pi)+\bar{V}_{\max,\tau-1}(\pi_{\tau-1}))C_{\tau-1}}{\tau-1}}>\sqrt{\frac{72\bar{V}_{\max,t}(\pi)C_{\tau-1}}{\tau-1}}-2\sqrt{\frac{2\bar{V}_{\max,t}(\pi)C_{t}}{t}}-2\sqrt{\frac{2\bar{V}_{\max,t}(\pi)C_{\tau-1}}{\tau-1}}\geq 2\sqrt{\frac{2\bar{V}_{\max,t}(\pi)C_{\tau-1}}{\tau-1}}\geq 2\sqrt{\frac{2\bar{V}_{\max,t}(\pi)C_{t}}{t}}$$ where the second inequality follows from Lemma 20 and the deviation bounds, and the third inequality follows from Lemma 21 and the facts that $\bar{V}_{\tau}(\pi)=\bar{V}_{\max,t}(\pi)>\theta K\geq\bar{V}_{\max,t}(\pi_{\max}),\bar{V}_{\max,\tau-1}(\pi_{\tau-1})$, and $\bar{V}_{\max,t}(\pi)\geq\bar{V}_{\max,\tau-1}(\pi)$.
∎ D.4 Regret Analysis We now bound the value of the optimization problem (4.1), which then leads to our regret bound. The next lemma shows the existence of a feasible solution with a certain structure based on the non-uniform constraints. Recall from Section 5 that solving the optimization problem $\mathcal{A}$, i.e. constraints (5.1, 5.2, 5.3), for the smallest feasible value of $s$ is equivalent to solving the RUCB optimization problem (4.1). Recall that $\beta_{t}=\frac{t-1}{180C_{t-1}}$. Lemma 24. There is a point $W\in\mathbb{R}^{(t-1)K}$ such that $$\Delta_{t-1}(W)\leq 4\sqrt{\frac{K}{\beta_{t}}},$$ $$W\in\mathcal{C},$$ $$\forall Z\in\mathcal{C}:\ \mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\leq\max\{4K,\beta_{t}\Delta_{t-1}(Z)^{2}\}.$$ In particular, the value of the optimization problem (4.1), $\operatorname{OPT}_{t}$, is bounded by $8\sqrt{\frac{K}{\beta_{t}}}\leq 110\sqrt{\frac{KC_{t-1}}{t-1}}$. Proof. Define the sets $\{\mathcal{C}_{i}:\ i=1,2,\ldots\}$ such that $$\mathcal{C}_{i}:=\{Z\in\mathcal{C}:\ 2^{i+1}\kappa\leq\Delta_{t-1}(Z)\leq 2^{i+2}\kappa\},$$ where $\kappa=\sqrt{\frac{K}{\beta_{t}}}$. Note that since $\Delta_{t-1}(Z)$ is a linear function of $Z$, each $\mathcal{C}_{i}$ is a closed, convex, compact set. Also, define $\mathcal{C}_{0}=\{Z\in\mathcal{C}:\ \Delta_{t-1}(Z)\leq 4\kappa\}$. This is also a closed, convex, compact set. Note that $\mathcal{C}=\bigcup_{i=0}^{\infty}\mathcal{C}_{i}$. Let $I=\{i:\ \mathcal{C}_{i}\neq\emptyset\}$. For $i\in I\setminus\{0\}$, define $w_{i}=4^{-i}$, and let $w_{0}=1-\sum_{i\in I\setminus\{0\}}w_{i}$. Note that $w_{0}\geq 2/3$.
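The weight bookkeeping in this proof is a plain geometric series. A quick numerical sketch (with an arbitrary $\kappa$ and the infinite index set truncated at a finite $n$, both my choices for illustration) confirms that $w_{0}\geq 2/3$ and that the mixture obeys the final bound $\Delta_{t-1}(W)\leq 8\kappa$ even in the worst case where every $\mathcal{C}_{i}$ is non-empty and every $\Delta_{t-1}(W_{i})$ sits at its upper limit $2^{i+2}\kappa$:

```python
# Numerical check of the geometric weights in Lemma 24 (kappa is arbitrary).
kappa = 0.37
n = 60                                   # truncation of the infinite index set

w = [4.0 ** (-i) for i in range(1, n)]   # w_i = 4^{-i} for i >= 1
w0 = 1.0 - sum(w)                        # remaining weight on C_0

assert w0 >= 2.0 / 3.0 - 1e-12           # 1 - sum_{i>=1} 4^{-i} = 2/3

# Worst case: Delta(W_i) at its upper bound 2^{i+2} kappa (4 kappa for i = 0).
delta_mix = w0 * 4 * kappa + sum(
    4.0 ** (-i) * 2.0 ** (i + 2) * kappa for i in range(1, n)
)
assert delta_mix <= 8 * kappa
print(delta_mix / kappa)                 # about 6.67, comfortably below 8
```

The slack between $20\kappa/3$ and the stated $8\kappa$ comes from bounding each $w_{i}$ by $4^{-i}$, including $i=0$.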
By Lemma 1, for each $i\in I$, there is a point $W_{i}\in\mathcal{C}_{i}$ such that for all $Z\in\mathcal{C}_{i}$, we have $$\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\sum_{a}\frac{Z(x,a)}{W_{i}^{\prime}(x,a)}\right]\leq 2K.$$ Here we use the fact that $K\mu_{t}\leq 1/2$ to upper bound $\frac{K}{1-K\mu_{t}}$ by $2K$. Now consider the point $W=\sum_{i\in I}w_{i}W_{i}$. Since $\mathcal{C}$ is convex, $W\in\mathcal{C}$. Now fix any $i\in I$. For any $(x,a)$, we have $W^{\prime}(x,a)\geq w_{i}W^{\prime}_{i}(x,a)$, so that for all $Z\in\mathcal{C}_{i}$, we have $$\mathop{\mathbb{E}}_{x\sim h_{t-1}}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\leq\frac{1}{w_{i}}2K\leq 4^{i+1}K\leq\max\{4K,\beta_{t}\Delta_{t-1}(Z)^{2}\},$$ so the constraint for $Z$ is satisfied. Finally, since for all $i\in I$, we have $w_{i}\leq 4^{-i}$ and $\Delta_{t-1}(W_{i})\leq 2^{i+2}\kappa$, we get $$\Delta_{t-1}(W)=\sum_{i\in I}w_{i}\Delta_{t-1}(W_{i})\leq\sum_{i=0}^{\infty}4^{-i}\cdot 2^{i+2}\kappa\leq 8\kappa.$$ ∎ The value of the optimization problem (4.1) can be related to the expected instantaneous regret of a policy drawn randomly from the distribution $P_{t}$. Lemma 25. Assume Condition 1. Then $$\sum_{\pi\in\Pi}P_{t}(\pi)\Delta_{D}(\pi)\leq\left(220+4\sqrt{2\theta}\right)\cdot\sqrt{\frac{KC_{t-1}}{t-1}}+2\varepsilon_{\operatorname{opt},t}$$ for all $t>t_{1}$. Proof. Fix any $\pi\in\Pi$ and $t>t_{1}$. By the deviation bounds, we have $$\Bigl{(}\eta_{D}(\pi_{t-1})-\eta_{D}(\pi)\Bigr{)}\leq\Delta_{t-1}(\pi)+2\sqrt{\frac{(\bar{V}_{\max,t-1}(\pi)+\bar{V}_{\max,t-1}(\pi_{t-1}))C_{t-1}}{t-1}}\leq\Delta_{t-1}(\pi)+2\sqrt{\frac{\left(\bar{V}_{\max,t-1}(\pi)+\theta K\right)C_{t-1}}{t-1}},$$ by Lemma 21.
By Corollary 22 we have $$\Delta_{D}(\pi_{t-1})\leq 2\sqrt{\frac{2\theta KC_{t-1}}{t-1}}.$$ Thus, we get $$\Delta_{D}(\pi)\leq\Bigl{(}\eta_{D}(\pi_{t-1})-\eta_{D}(\pi)\Bigr{)}+\Delta_{D}(\pi_{t-1})\leq\Delta_{t-1}(\pi)+2\sqrt{\frac{\left(\bar{V}_{\max,t-1}(\pi)+\theta K\right)C_{t-1}}{t-1}}+2\sqrt{\frac{2\theta KC_{t-1}}{t-1}}.$$ If $\bar{V}_{\max,t-1}(\pi)\leq\theta K$, then we have $$\Delta_{D}(\pi)\leq\Delta_{t-1}(\pi)+4\sqrt{\frac{2\theta KC_{t-1}}{t-1}}.$$ Otherwise, Lemma 23 implies that $$\bar{V}_{\max,t-1}(\pi)\leq\frac{(t-1)\cdot\Delta_{t-1}(\pi)^{2}}{8C_{t-1}},$$ so $$\Delta_{D}(\pi)\leq\Delta_{t-1}(\pi)+2\sqrt{\frac{\Delta_{t-1}(\pi)^{2}}{8}+\frac{\theta KC_{t-1}}{t-1}}+2\sqrt{\frac{2\theta KC_{t-1}}{t-1}}\leq 2\Delta_{t-1}(\pi)+4\sqrt{\frac{2\theta KC_{t-1}}{t-1}}.$$ Therefore $$\sum_{\pi\in\Pi}P_{t}(\pi)\Delta_{D}(\pi)\leq 2\sum_{\pi\in\Pi}P_{t}(\pi)\Delta_{t-1}(\pi)+4\sqrt{\frac{2\theta KC_{t-1}}{t-1}}\leq 2\left(\operatorname{OPT}_{t}+\varepsilon_{\operatorname{opt},t}\right)+4\sqrt{\frac{2\theta KC_{t-1}}{t-1}}$$ where $\operatorname{OPT}_{t}$ is the value of the optimization problem (4.1). The conclusion follows from Lemma 24. ∎ We can now finally prove the main regret bound for RUCB. Proof of Theorem 5. The regret through the first $t_{1}$ rounds is trivially bounded by $t_{1}$.
In the event that Condition 1 holds, we have for all $t\geq t_{1}$, $$\sum_{a\in A}W_{t}^{\prime}(a)r_{t}(a)\geq\sum_{a\in A}(1-K\mu_{t})W_{P_{t}}(x_{t},a)r_{t}(a)\geq\sum_{a\in A}W_{P_{t}}(x_{t},a)r_{t}(a)-K\mu_{t}=\sum_{\pi\in\Pi}P_{t}(\pi)r_{t}(\pi(x_{t}))-K\mu_{t},$$ and therefore $$\mathop{\mathbb{E}}_{\begin{subarray}{c}(x_{t},\vec{r}_{t})\sim D\\ a_{t}\sim W_{t}^{\prime}\end{subarray}}\left[r_{t}(a_{t})\right]=\mathop{\mathbb{E}}_{(x_{t},\vec{r}_{t})\sim D}\left[\sum_{a\in A}W_{t}^{\prime}(a)r_{t}(a)\right]\geq\sum_{\pi\in\Pi}P_{t}(\pi)\eta_{D}(\pi)-K\mu_{t}\geq\eta_{D}(\pi_{\max})-O\left(\sqrt{\frac{KC_{t-1}}{t-1}}+\varepsilon_{\operatorname{opt},t}\right)$$ where the last inequality follows from Lemma 25. Summing the bound from $t=t_{1}+1,\dotsc,T$ gives $$\sum_{t=1}^{T}\mathop{\mathbb{E}}_{\begin{subarray}{c}(x_{t},\vec{r}_{t})\sim D\\ a_{t}\sim W_{t}^{\prime}\end{subarray}}\left[\eta_{D}(\pi_{\max})-r_{t}(a_{t})\right]\leq t_{1}+O\left(\sqrt{TK\log\left(NT/\delta\right)}\right).$$ By Azuma’s inequality, the probability that $\sum_{t=1}^{T}r_{t}(a_{t})$ deviates from its mean by more than $O(\sqrt{T\log(1/\delta)})$ is at most $\delta$. Finally, the probability that Condition 1 does not hold is at most $2\delta$ by Lemma 19, Theorem 6, and a union bound. The conclusion follows by a final union bound. ∎ Appendix E Details of Oracle-based Algorithm We show how to (approximately) solve $\mathcal{A}$ using the ellipsoid algorithm with $\mathcal{AMO}$. Fix a time period $t$. To avoid clutter, (only) in this section we drop the subscript $t-1$ from $\eta_{t-1}(\cdot)$, $\Delta_{t-1}(\cdot)$, and $h_{t-1}$, so that they become $\eta(\cdot)$, $\Delta(\cdot)$, and $h$, respectively.
In order to use the ellipsoid algorithm, we need to relax the program a little in order to ensure that the feasible region has non-negligible volume. To do this, we need perturbation bounds for the constraints of $\mathcal{A}$. The following lemma gives such bounds. For any $\delta>0$, we define $\mathcal{C}_{\delta}$ to be the set of all points within a distance of $\delta$ from $\mathcal{C}$. Lemma 26. Let $\delta\leq b/4$ be a parameter. Let $U,W\in\mathcal{C}_{2\delta}$ be points such that $\|U-W\|\leq\delta$. Then we have $$|\Delta(U)-\Delta(W)|\leq\gamma$$ (E.1) $$\forall Z\in\mathcal{C}_{1}:\ \left|\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{U^{\prime}(x,a)}\right]-\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\right|\leq\epsilon$$ (E.2) where $\epsilon=\frac{8\delta}{\mu_{t}^{2}}$ and $\gamma=\frac{\delta}{\mu_{t}}$. Proof. First, we have $$|\eta(U)-\eta(W)|\leq\frac{1}{t-1}\sum_{(x,a,r,p)\in h}\frac{r}{p}|U(x,a)-W(x,a)|\leq\frac{\delta}{\mu_{t}}=\gamma,$$ which implies (E.1). Next, for any $Z\in\mathcal{C}_{1}$, we have $$\left|\sum_{a}\frac{Z(x,a)}{U^{\prime}(x,a)}-\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right|\leq\sum_{a}|Z(x,a)|\frac{|U^{\prime}(x,a)-W^{\prime}(x,a)|}{U^{\prime}(x,a)W^{\prime}(x,a)}\leq\frac{8\delta}{\mu_{t}^{2}}=\epsilon.$$ In the last inequality, we use the Cauchy-Schwarz inequality together with the following facts (here, $Z(x,\cdot)$ denotes the vector $\langle Z(x,a)\rangle_{a}$, etc.): (i) $\|Z(x,\cdot)\|\leq 2$ since $Z\in\mathcal{C}_{1}$; (ii) $\|U^{\prime}(x,\cdot)-W^{\prime}(x,\cdot)\|\leq\|U(x,\cdot)-W(x,\cdot)\|\leq\delta$; and (iii) $U^{\prime}(x,a)\geq(1-bK)\cdot(-2\delta)+b\geq b/2$ for $\delta\leq b/4$, and similarly $W^{\prime}(x,a)\geq b/2$. This implies (E.2).
∎ We now consider the following relaxed form of $\mathcal{A}$. Here, $\delta\in(0,b/4)$ is a parameter. We want to find a point $W\in\mathbb{R}^{(t-1)K}$ such that $$\Delta(W)\leq s+\gamma,$$ (E.3) $$W\in\mathcal{C}_{\delta},$$ (E.4) $$\forall Z\in\mathcal{C}_{2\delta}:\ \mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\leq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+\epsilon,$$ (E.5) where $\epsilon$ and $\gamma$ are as defined in Lemma 26. Call this relaxed program $\mathcal{A}^{\prime}$. We apply the ellipsoid method to $\mathcal{A}^{\prime}$ rather than $\mathcal{A}$. Recall the requirements of Lemma 8: we need an enclosing ball of bounded radius for the feasible region, and the radius of a ball inscribed in the feasible region. The following lemma gives both. Lemma 27. The feasible region for $\mathcal{A}^{\prime}$ is contained in $B(0,\sqrt{t}+\delta)$, and if $\mathcal{A}$ is feasible, then it contains a ball of radius $\delta$. Proof. Note that for any $W\in\mathcal{C}_{\delta}$, we have $\|W\|\leq\sqrt{t}+\delta$, so the feasible region lies in $B(0,\sqrt{t}+\delta)$. Next, if $\mathcal{A}$ is feasible, let $W^{\star}\in\mathcal{C}$ be any feasible solution to $\mathcal{A}$. Consider the ball $B(W^{\star},\delta)$. Let $U$ be any point in $B(W^{\star},\delta)$. Clearly $U\in\mathcal{C}_{\delta}$.
By Lemma 26, since $\delta\leq b/4$, we have for all $Z\in\mathcal{C}_{2\delta}$, $$\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{U^{\prime}(x,a)}\right]\leq\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\star\prime}(x,a)}\right]+\epsilon\leq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+\epsilon.$$ Also $$\Delta(U)\leq\Delta(W^{\star})+\gamma\leq s+\gamma.$$ Thus, $U$ is feasible for $\mathcal{A}^{\prime}$, and hence the entire ball $B(W^{\star},\delta)$ is feasible for $\mathcal{A}^{\prime}$. ∎ We now give the construction of a separation oracle for the feasible region of $\mathcal{A}^{\prime}$ by checking for violations of the constraints. In the following, we use the word “iteration” to indicate one step of either the ellipsoid algorithm or the perceptron algorithm. Each such iteration involves one call to $\mathcal{AMO}$, and additional $O(t^{2}K^{2})$ processing time. Let $W\in\mathbb{R}^{(t-1)K}$ be a candidate point that we want to check for feasibility for $\mathcal{A}^{\prime}$. We can check for violation of the constraint (E.3) easily, and since it is a linear constraint in $W$, it automatically yields a separating hyperplane if it is violated. The harder constraints are (E.4) and (E.5). Recall that Lemma 9 shows that $\mathcal{AMO}$ allows us to do linear optimization over $\mathcal{C}$ efficiently. This immediately gives us the following useful corollary: Corollary 28. Given a vector $w\in\mathbb{R}^{(t-1)K}$ and $\delta>0$, we can compute $\arg\max_{Z\in\mathcal{C}_{\delta}}w\cdot Z$ using one invocation of $\mathcal{AMO}$. Proof. This follows directly from the following fact: $$\arg\max_{Z\in\mathcal{C}_{\delta}}w\cdot Z\ =\ \frac{\delta}{\|w\|}w+\arg\max_{Z\in\mathcal{C}}w\cdot Z.$$ ∎ Now we show how to use $\mathcal{AMO}$ to check for constraint (E.4): Lemma 29. Suppose we are given a point $W$.
Then in $O(\frac{t}{\delta^{2}})$ iterations, if $W\notin\mathcal{C}_{2\delta}$, we can construct a hyperplane separating $W$ from $\mathcal{C}_{\delta}$. Otherwise, we declare correctly that $W\in\mathcal{C}_{2\delta}$. In the latter case, we can find an explicit distribution $P$ over policies in $\Pi$ such that $W_{P}$ satisfies $\|W_{P}-W\|\leq 2\delta$. Proof. We run the perceptron algorithm with the origin at $W$ and all points in $\mathcal{C}_{\delta}$ being positive examples. The goal of the perceptron algorithm then is to find a hyperplane going through $W$ that puts all of $\mathcal{C}_{\delta}$ (strictly) on one side. In each iteration of the perceptron algorithm, we have a weight vector $w$ that is the normal to a candidate hyperplane, and we need to find a point $Z\in\mathcal{C}_{\delta}$ such that $w\cdot(Z-W)\leq 0$ (note that we have shifted the origin to $W$). To do this, we use $\mathcal{AMO}$ as in Lemma 9 to find $Z^{\star}=\arg\max_{Z\in\mathcal{C}_{\delta}}-w\cdot Z$. If $w\cdot(Z^{\star}-W)\leq 0$, we use $Z^{\star}$ to update $w$ using the perceptron update rule, $w\leftarrow w+(Z^{\star}-W)$. Otherwise, we have $w\cdot(Z-W)>0$ for all $Z\in\mathcal{C}_{\delta}$, and hence we have found our separating hyperplane. Now suppose that $W\notin\mathcal{C}_{2\delta}$, i.e. the distance of $W$ from $\mathcal{C}_{\delta}$ is more than $\delta$. Since $\|Z-W\|\leq 2\sqrt{t}+3\delta=O(\sqrt{t})$ for all $Z\in\mathcal{C}_{\delta}$ (assuming $\delta=O(\sqrt{t})$), the perceptron convergence guarantee implies that in $O(\frac{t}{\delta^{2}})$ iterations we find a separating hyperplane. If in $k=O(\frac{t}{\delta^{2}})$ iterations we haven’t found a separating hyperplane, then $W\in\mathcal{C}_{2\delta}$.
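The argument above can be played out on a toy instance. The sketch below is my illustration only: a tiny invented vertex set stands in for $\Pi$, and brute-force minimization replaces the $\mathcal{AMO}$ call (the real routine would set the iteration budget to $O(t/\delta^{2})$). It runs the shifted perceptron until it either produces a hyperplane through $W$ with every vertex strictly on one side, or exhausts its budget and declares $W$ (approximately) inside the hull:

```python
import numpy as np

def perceptron_separate(W, vertices, max_iters=10_000):
    """Toy version of the perceptron-based separation routine.

    Returns ("separate", w) with w . (Z - W) > 0 for every vertex Z,
    or ("inside", None) if no separator is found within max_iters updates.
    """
    W = np.asarray(W, dtype=float)
    verts = [np.asarray(v, dtype=float) for v in vertices]
    w = np.zeros_like(W)
    for _ in range(max_iters):
        # AMO stand-in: the vertex minimizing w . (Z - W).
        Z = min(verts, key=lambda v: float(w @ (v - W)))
        if w @ (Z - W) <= 0:
            w = w + (Z - W)        # perceptron update (origin shifted to W)
        else:
            return "separate", w   # every vertex has w . (Z - W) > 0
    return "inside", None

verts = [[0.0, 0.0], [1.0, 0.0]]
status, w = perceptron_separate([5.0, 5.0], verts)   # far outside the hull
print(status)                                        # -> separate
status2, _ = perceptron_separate([0.5, 0.0], verts, max_iters=500)
print(status2)                                       # on the hull -> inside
```

Because the perceptron's mistake bound scales with the squared diameter over the squared margin, a point at distance more than $\delta$ from $\mathcal{C}_{\delta}$ forces termination within $O(t/\delta^{2})$ updates, which is exactly how the lemma converts "no separator found" into a membership certificate.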
In fact the perceptron algorithm gives a stronger guarantee: if the $k$ policies found in the run of the perceptron algorithm are $\pi_{1},\pi_{2},\ldots,\pi_{k}\in\Pi$, then $W$ is within a distance of $2\delta$ from their convex hull, $\mathcal{C}^{\prime}=\text{conv}(\pi_{1},\pi_{2},\ldots,\pi_{k})$. This is because a run of the perceptron algorithm on $\mathcal{C}^{\prime}_{2\delta}$ would be identical to that on $\mathcal{C}_{2\delta}$ for $k$ steps. We can then compute the explicit distribution over policies $P$ by computing the Euclidean projection of $W$ on $\mathcal{C}^{\prime}$ in $\text{poly}(k)$ time using a convex quadratic program: $$\min_{P}\ \Bigl{\|}W-\textstyle{\sum}_{i=1}^{k}P_{i}\pi_{i}\Bigr{\|}^{2}\quad\text{subject to}\quad\sum_{i}P_{i}=1,\quad\forall i:\ P_{i}\geq 0.$$ Solving this quadratic program, we get a distribution $P$ over the policies $\{\pi_{1},\pi_{2},\ldots,\pi_{k}\}$ such that $\|W_{P}-W\|\leq 2\delta$. ∎ Finally, we show how to check constraint (E.5): Lemma 30. Suppose we are given a point $W$. In $O(\frac{t^{3}K^{2}}{\delta^{2}}\cdot\log(\frac{tK}{\delta}))$ iterations, we can either find a point $Z\in\mathcal{C}_{2\delta}$ such that $$\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\geq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+2\epsilon,$$ or else we conclude correctly that for all $Z\in\mathcal{C}$, we have $$\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\leq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+3\epsilon.$$ Proof. We first rewrite the empirical value as a linear function, $\eta(Z)=w\cdot Z$, where $w$ is the vector defined by $$w(x,a)=\frac{1}{t-1}\sum_{(x^{\prime},a^{\prime},r,p)\in h:\ x^{\prime}=x,a^{\prime}=a}\frac{r}{p}.$$ Thus, $\Delta(Z)=v-w\cdot Z$, where $v=\max_{\pi^{\prime}}\eta(\pi^{\prime})=\max_{\pi^{\prime}}w\cdot\pi^{\prime}$, which can be computed by using $\mathcal{AMO}$ once.
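The projection step in the proof of Lemma 29 is a small quadratic program. As an illustration only (the paper simply solves the QP directly), it can also be approximated with Frank-Wolfe, where the linear subproblem over the probability simplex again amounts to picking a single best vertex, mirroring how one oracle call handles linear optimization in this setting:

```python
import numpy as np

def project_onto_hull(W, vertices, iters=20_000):
    """Approximate min_P ||W - sum_i P_i v_i||^2 over the simplex
    via Frank-Wolfe (a sketch, not the paper's QP solver)."""
    V = np.asarray(vertices, dtype=float)   # k x d matrix of vertices
    W = np.asarray(W, dtype=float)
    P = np.full(V.shape[0], 1.0 / V.shape[0])   # start at the uniform mixture
    for t in range(1, iters + 1):
        grad = 2.0 * V @ (V.T @ P - W)      # gradient of the objective in P
        s = int(np.argmin(grad))            # best simplex vertex e_s
        step = 2.0 / (t + 2.0)              # standard Frank-Wolfe step size
        P *= (1.0 - step)
        P[s] += step
    return P

verts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
P = project_onto_hull([0.25, 0.25], verts)
W_P = np.asarray(verts, dtype=float).T @ P
# W = (0.25, 0.25) already lies in the hull, so its projection is itself.
print(np.round(W_P, 2))
```

Frank-Wolfe converges at rate $O(1/t)$ on this objective, which is ample for the $2\delta$ tolerance the lemma actually needs; an exact QP solver is the cleaner choice when $k$ is small.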
Next, using the candidate point $W$, compute the vector $u$ defined as $u(x,a)=\frac{n_{x}/t}{W^{\prime}(x,a)}$, where $n_{x}$ is the number of times $x$ appears in $h$, so that $\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]=u\cdot Z$. Now, the problem reduces to finding a point $Z\in\mathcal{C}$ which violates the constraint $$u\cdot Z\leq\max\{4K,\beta_{t}(w\cdot Z-v)^{2}\}+3\epsilon.$$ Define $$f(Z)=\max\{4K,\beta_{t}(w\cdot Z-v)^{2}\}+3\epsilon-u\cdot Z.$$ Note that $f$ is a convex function of $Z$. Checking for violation of the above constraint is equivalent to solving the following (convex) program: $$f(Z)\leq 0,$$ (E.6) $$Z\in\mathcal{C}.$$ (E.7) To do this, we again apply the ellipsoid method, but on the relaxed program $$f(Z)\leq\epsilon,$$ (E.8) $$Z\in\mathcal{C}_{\delta}.$$ (E.9) To run the ellipsoid algorithm, we need a separation oracle for the program. Given a candidate solution $Z$, we run the algorithm of Lemma 29, and if $Z\notin\mathcal{C}_{2\delta}$, we construct a hyperplane separating $Z$ from $\mathcal{C}_{\delta}$. Now suppose we conclude that $Z\in\mathcal{C}_{2\delta}$. Then we construct a separation oracle for (E.8) as follows. If $f(Z)>\epsilon$, then since $f$ is a convex function of $Z$, we can construct a separating hyperplane as in Lemma 10. Now we can run the ellipsoid algorithm with the starting ellipsoid being $B(0,\sqrt{t})$. If there is a point $Z^{\star}\in\mathcal{C}$ such that $f(Z^{\star})\leq 0$, then consider the ball $B(Z^{\star},\frac{4\delta}{5\sqrt{tK}\beta_{t}})$. For any $Y\in B(Z^{\star},\frac{4\delta}{5\sqrt{tK}\beta_{t}})$, we have $$|(u\cdot Z^{\star})-(u\cdot Y)|\leq\|u\|\|Z^{\star}-Y\|\leq\frac{\epsilon}{2},$$ since $\|u\|\leq\frac{\sqrt{K}}{\mu_{t}}$.
Also, $$\beta_{t}|(w\cdot Z^{\star}-v)^{2}-(w\cdot Y-v)^{2}|=\beta_{t}|(w\cdot Z^{\star}-w\cdot Y)(w\cdot Z^{\star}+w\cdot Y-2v)|\leq\beta_{t}\|w\|\|Z^{\star}-Y\|\bigl{(}\|w\|(\|Z^{\star}\|+\|Y\|)+2|v|\bigr{)}\leq\frac{\epsilon}{2},$$ since $\|w\|\leq\frac{1}{\mu_{t}}$, $\|Z^{\star}\|\leq\sqrt{t}$, $\|Y\|\leq\sqrt{t}+\delta\leq 2\sqrt{t}$, and $|v|\leq\|w\|\cdot\sqrt{t}\leq\frac{\sqrt{t}}{\mu_{t}}$. Thus, $f(Y)\leq f(Z^{\star})+\epsilon\leq\epsilon$, so the entire ball $B(Z^{\star},\frac{4\delta}{5\sqrt{tK}\beta_{t}})$ is feasible for the relaxed program. By Lemma 8, in $O(t^{2}K^{2}\cdot\log(\frac{tK}{\delta}))$ iterations of the ellipsoid algorithm, we obtain one of the following: (i) we find a point $Z\in\mathcal{C}_{2\delta}$ such that $f(Z)\leq\epsilon$, i.e. $$\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\geq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+2\epsilon;$$ or (ii) we conclude that the original convex program (E.6, E.7) is infeasible, i.e. for all $Z\in\mathcal{C}$, we have $$\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\leq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+3\epsilon.$$ The total number of iterations is bounded by $O(t^{2}K^{2}\cdot\log(\frac{tK}{\delta}))\cdot O(\frac{t}{\delta^{2}})=O(\frac{t^{3}K^{2}}{\delta^{2}}\cdot\log(\frac{tK}{\delta}))$. ∎ Lemma 31. Suppose we are given a point $Z\in\mathcal{C}_{2\delta}$ such that $$\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\geq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+2\epsilon.$$ Then we can construct a hyperplane separating $W$ from all feasible points for $\mathcal{A}^{\prime}$. Proof. For notational convenience, define the function $$f_{Z}(W):=\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]-\max\{4K,\beta_{t}\Delta(Z)^{2}\}-2\epsilon.$$ Note that it is a convex function of $W$.
Note that for any point $U$ that is feasible for $\mathcal{A}^{\prime}$, we have $f_{Z}(U)\leq-\epsilon$, whereas $f_{Z}(W)\geq 0$. Thus, by Lemma 10, we can construct the desired separating hyperplane. ∎ We can finally prove Theorem 11: Proof of Theorem 11. We run the ellipsoid algorithm starting with the ball $B(0,\sqrt{t}+\delta)$. At each point, we are given a candidate solution $W$ for program $\mathcal{A}^{\prime}$. We check for violation of constraint (E.3) first. If it is violated, the constraint, being linear, gives us a separating hyperplane. Else, we use Lemma 29 to check for violation of constraint (E.4). If $W\notin\mathcal{C}_{2\delta}$, then we can construct a separating hyperplane. Else, we use Lemmas 30 and 31 to check for violation of constraint (E.5). If there is a $Z\in\mathcal{C}$ such that $\mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\geq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+3\epsilon$, then we can find a separating hyperplane. Else, we conclude that the current point $W$ satisfies the following constraints: $$\Delta(W)\leq s+\gamma,$$ $$\forall Z\in\mathcal{C}:\ \mathop{\mathbb{E}}_{x\sim h}\left[\sum_{a}\frac{Z(x,a)}{W^{\prime}(x,a)}\right]\leq\max\{4K,\beta_{t}\Delta(Z)^{2}\}+3\epsilon,$$ $$W\in\mathcal{C}_{2\delta}.$$ We can then use the perceptron-based algorithm of Lemma 29 to “round” $W$ to an explicit distribution $P$ over policies in $\Pi$ such that $W_{P}$ satisfies $\|W_{P}-W\|\leq 2\delta$. Then Lemma 26 implies the stated bounds for $W_{P}$. By Lemma 8, in $O(t^{2}K^{2}\log(\frac{t}{\delta}))$ iterations of the ellipsoid algorithm, we find the point $W$ satisfying the constraints given above, or declare correctly that $\mathcal{A}$ is infeasible.
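The outer loop invoked throughout this proof is the standard central-cut ellipsoid method. As a self-contained toy, my own minimal implementation with an invented feasible region (a small ball around a made-up target `c`), and not the paper's code, here is that loop driven by exactly the kind of separation oracle constructed above:

```python
import numpy as np

def ellipsoid_feasibility(oracle, d, R, max_iters=5_000):
    """Minimal central-cut ellipsoid method. `oracle(x)` returns None if
    x is feasible, else a cut vector g with g . (z - x) <= 0 for every
    feasible z. The search starts from the ball B(0, R)."""
    x = np.zeros(d)
    A = (R ** 2) * np.eye(d)   # ellipsoid {z: (z - x)' A^{-1} (z - x) <= 1}
    for _ in range(max_iters):
        g = oracle(x)
        if g is None:
            return x                              # feasible point found
        g = g / np.sqrt(g @ A @ g)                # normalize in ellipsoid metric
        Ag = A @ g
        x = x - Ag / (d + 1.0)                    # move center past the cut
        A = (d * d / (d * d - 1.0)) * (A - (2.0 / (d + 1.0)) * np.outer(Ag, Ag))
    return None                # volume exhausted: declare infeasible

# Toy feasible region: a small ball around an invented target point c.
c = np.array([1.0, -2.0, 0.5])

def oracle(x):
    if np.linalg.norm(x - c) <= 0.1:
        return None
    return x - c               # gradient of ||x - c||: a valid separating cut

x = ellipsoid_feasibility(oracle, d=3, R=10.0)
print(np.round(x, 1))          # a point within 0.1 of c
```

Each iteration shrinks the ellipsoid's volume by a constant factor depending only on the dimension, which is what yields the polynomial iteration counts quoted in Lemmas 8, 29, and 30.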
In the worst case, we might have to run the algorithm of Lemma 30 in every iteration, leading to an upper bound of $O(t^{2}K^{2}\log(\frac{t}{\delta}))\times O(\frac{t^{3}K^{2}}{\delta^{2}}\cdot\log(\frac{tK}{\delta}))=O(\frac{t^{5}K^{4}}{\delta^{2}}\log^{2}(\frac{tK}{\delta}))$ on the number of iterations. ∎
Characterization of Extragalactic Point-Sources on E- and B-mode Maps of the CMB Polarization Abstract Although interesting in themselves, extragalactic sources emitting in the microwave range (mainly radio-loud active galactic nuclei and dusty galaxies) are also considered a contaminant from the point of view of Cosmic Microwave Background (CMB) experiments. These sources appear as unresolved point-like objects in CMB measurements because of the limited resolution of CMB experiments. Amongst other issues, point-like sources are known to obstruct the reconstruction of the lensing potential, and can hinder the detection of the Primordial Gravitational Wave Background for low values of $r$. Therefore, extragalactic point-source detection and subtraction is a fundamental part of the component separation process necessary to achieve some of the science goals set for the next generation of CMB experiments. As a previous step to their removal, in this work we have designed a filter based on steerable wavelets that allows the characterization of the emission of these extragalactic sources. Instead of the usual approach of working with polarization maps of the Stokes $Q$ and $U$ parameters, the proposed filter operates on E- and B-mode polarization maps. In this way, it benefits from the lower intensity that both the CMB and the galactic foreground emission present in B-modes to improve its performance. For the regions of fainter galactic foreground emission in the $30$ GHz and $155$ GHz bands of the future PICO satellite, and assuming that the sources were already detected by other means, we predict that our filter will be able to characterize sources down to a minimum polarization intensity of, respectively, $117$ pK and $8$ pK, which, adopting a $\Pi=0.02$ polarization degree, correspond to intensities of $119$ mJy and $164$ mJy. P. Diego-Palazuelos,${}^{a,b}$ P. Vielva,${}^{a}$ and D.
Herranz${}^{a}$

Prepared for submission to JCAP

Characterization of Extragalactic Point-Sources on E- and B-mode Maps of the CMB Polarization

${}^{a}$ Instituto de Física de Cantabria (CSIC-Universidad de Cantabria), Avda. de los Castros s/n, E-39005 Santander, Spain
${}^{b}$ Dpto. de Física Moderna, Universidad de Cantabria, Avda. los Castros s/n, E-39005 Santander, Spain

E-mail: [email protected], [email protected], [email protected]

Contents
1 Introduction
2 Filter design
2.1 Point-source profile in E- and B-mode polarization maps
2.2 Wavelet definition and parameter estimation
2.3 Calibration of pixelization effects
3 Test on simulations
3.1 Simulations description
3.2 Statistical characterization of the filter performance
4 Conclusions and future work

1 Introduction Although it is not their prime objective, Cosmic Microwave Background (CMB) experiments can also provide valuable information about the population of extragalactic sources that lies in the 20-800 GHz frequency range [e.g., 1]. Experiments at those frequencies open an observation window to the synchrotron emission coming from the relativistic jets of radio-loud active galactic nuclei and to dusty galaxies with a high star formation rate. In particular, the information contained in polarization allows the study of the strong magnetic fields present in both kinds of sources. Whereas the polarization degree of radio sources is well characterized at low frequencies, its nature is still poorly constrained at higher frequencies [2], and, in general, little is known about the polarization degree of dusty galaxies due to the complex structure of galactic magnetic fields. Therefore, our understanding of the physics of these sources will greatly benefit from the plethora of experiments centered on the CMB polarization proposed for the next generation, like the CMB Stage-IV [3] or the PICO satellite [4].
However, from the point of view of a cosmologist, extragalactic sources are just an additional contaminant obscuring the signal of the CMB. Because of the limited resolution of CMB experiments (of the order of arcminutes), extragalactic sources appear as unresolved point-like objects in CMB maps that, when uniformly distributed across the sky, behave like an additional white Gaussian noise at the angular power spectrum level. In this way, they can potentially become an important contaminant at small angular scales for frequencies up to $\sim 200$ GHz [5, 6, 7]. At higher frequencies, and for low flux densities, dusty galaxies tend to cluster together, which introduces additional correlations into their angular power spectrum. Therefore, in addition to the removal of diffuse galactic foreground emission [8, 9], and the delensing of the secondary B-modes induced by weak gravitational lensing [10, 11], extragalactic point-source detection and subtraction is also a fundamental part of the component separation process necessary to achieve the science goals set for the next generation of CMB experiments. In particular, extragalactic point-sources would significantly affect the reconstruction of the lensing potential [12], and consequently, severely limit the delensing of secondary B-modes. The lensing potential, which is the projection onto the sphere of the integrated mass distribution along the line-of-sight between us and the last scattering surface [e.g., 10], is an excellent probe of the matter distribution in the universe since it reaches much higher redshifts than conventional galaxy surveys. Amongst other applications (e.g., see the science goals pursued by [13], [3], [4] or [14]), a faithful estimate of the lensing potential could provide a measurement of the absolute mass scale of neutrinos [15, 16], and would help calibrate cluster masses to improve the interpretation of galaxy cluster surveys [17, 18, 19, 20].
Although it can also be reconstructed from other large-scale structure tracers [21, 22] (like galaxy surveys [23], the Cosmic Infrared Background [24, 25, 26, 15], or tomographic line intensity mapping [27]), for the next generation of experiments, the best lensing potential reconstructions are expected to come from CMB data [21]. When recovering the lensing potential through CMB measurements, there are at least two ways in which point-sources would affect the reconstruction. On the one hand, whether we use quadratic estimators [e.g., 28, 29] or maximum a posteriori reconstructions [e.g., 30, 31], the small angular scales of the polarization fields (especially those of the EB cross-correlation) are the ones that contribute the most to the estimation of the lensing potential. As was previously mentioned, these are precisely the scales where, if not mitigated, point-source emission would dominate over the CMB. On the other hand, given that point-sources are themselves tracers of the large-scale structure of the universe, they are also known to introduce spurious correlations between CMB fields and the lensing potential. Such correlations are further enhanced by the actual lensing of the point-sources' emission [32]: since they lie at cosmological distances, their emission is itself lensed by the rest of the matter distribution between them and us. Therefore, if they are not properly controlled, point-sources could lead to a poor and biased lensing potential reconstruction. Since they limit our ability to correctly estimate the lensing potential, point-sources would also compromise the delensing of the secondary B-modes generated by weak gravitational lensing [10, 11].
All in all, point-sources could become an important obstacle for the detection of the Primordial Gravitational Wave Background (PGWB) for low values of $r$ [5, 6, 7] due both to the noise-like signal they constitute in themselves, and to the reduction in delensing power they cause by degrading lensing potential reconstructions. The detection of such a PGWB, a relic background of stochastic gravitational waves that most inflationary models predict must have been produced during inflation (see e.g. [33] for a recent review), is one of the main science goals pursued by the next generation of CMB experiments. Physically well-motivated inflationary models [e.g., 34, 35] predict that the amplitude of the signal that the PGWB leaves on the B-mode polarization of the CMB [36, 37, 38], which is controlled by the ratio between tensor and scalar perturbations $r=\mathcal{P}_{t}(k)/\mathcal{P}_{s}(k)$, should be of about $r\sim 0.001$. Current constraints place an upper limit of $r<0.056$ [39]. Unfortunately, a PGWB B-mode signal of such amplitude would fall far below the emissions of diffuse galactic foregrounds and extragalactic sources, and the secondary lensed B-modes. In this way, as we previously discussed, the improvement of the lensing potential estimates available for delensing and of component separation techniques are both fundamental to achieve the goal of PGWB detection. As a previous step to their removal, in this paper we present an alternative to the usual approach of working in $Q$ and $U$ polarization maps [e.g., 40, 41, 42] by designing a filter based on steerable wavelets that is capable of characterizing extragalactic sources on E- and B-mode polarization maps. By working on maps of the B-mode polarization, we hope to take advantage of the lower intensity that both the CMB and the galactic foreground emission present in that channel in comparison to E-modes [43], or $Q$ and $U$ maps.
The filter has not been designed for blind source detection, but rather for the estimation of the polarization angle and intensity of already known point-sources. The work is structured as follows. In section 2 we specify the filter design and describe the methodology devised for parameter estimation, leaving the characterization of its performance on simulations of the proposed PICO satellite for section 3. Finally, the conclusions that can be drawn from this work and some possible lines of future work are discussed in section 4. 2 Filter design In this section we explain the details of the filter design and present the methodology that will allow us to characterize extragalactic sources. We start by introducing the mathematical expressions describing the profile of point-sources in E- and B-mode polarization maps in subsection 2.1. Inspired by those profiles, we then build a basis of steerable wavelet functions in subsection 2.2, and show a simple method to recover from them the polarization angle and intensity of the source. Finally, we discuss how working with discrete images affects the filter implementation in subsection 2.3. 2.1 Point-source profile in E- and B-mode polarization maps As with galactic foreground emission, the emission of extragalactic sources is linearly polarized [e.g., 8, 44], and is thus fully characterized by its polarization angle $\phi$ and intensity $P$. In addition, because of the limited resolution of telescopes in the microwave range (of the order of arcminutes), extragalactic sources appear as point-like objects rather than extended structures, since they cannot be resolved.
Therefore, in maps of the $Q$ and $U$ Stokes parameters, an extragalactic source located at a position $\vec{r}_{i}$ can be described as: $$Q(\vec{r})=\rho(\vec{r})P\cos 2\phi,\qquad U(\vec{r})=\rho(\vec{r})P\sin 2\phi,$$ (2.1) with a radial profile $\rho(\vec{r})=\delta(\vec{r}-\vec{r}_{i})$, and a polarization angle defined in $\phi\in[0,\pi)$. As is customary for the detection of compact sources in full-sky maps [e.g., 40, 41, 42], the filter will be applied to the projection onto the plane of a small square region of the sky containing the source. Working within this reduced surface allows for a better statistical characterization of the background surrounding the source, thus improving the performance of the filter. In addition, the small size and compact nature of point-sources ensure that, when pixels in the plane and in the sphere have approximately the same size, the projection will not introduce a significant distortion to the sources' shape. In a first approximation, most CMB experiments can be effectively modeled as having a circular Gaussian beam. Because of this instrumental response, point-sources adopt a Gaussian profile characterized by the beam's Full Width at Half Maximum, or alternatively its $\sigma$ (related by $\mathrm{FWHM}=2\sqrt{2\ln 2}\,\sigma$): $$\rho(\vec{r})=\frac{1}{2\pi\sigma^{2}}e^{-r^{2}/2\sigma^{2}}.$$ (2.2) In this last equation, the coordinate origin has been moved to the center of the source.
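To make the beam convention concrete, the following short Python sketch (ours, not part of any pipeline described in this work; all function names are our own) builds the $Q$ and $U$ maps of a beam-smoothed source directly from equations (2.1) and (2.2):

```python
import numpy as np

def beam_sigma(fwhm):
    """Gaussian width of a beam, from FWHM = 2*sqrt(2*ln 2)*sigma."""
    return fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def source_qu_maps(n_pix, pix_size, P, phi, fwhm):
    """Q and U maps, eq. (2.1), of a beam-smoothed source, eq. (2.2),
    centered on an n_pix x n_pix grid of the given pixel size."""
    # Cell-centered coordinates, symmetric about the source position.
    x = (np.arange(n_pix) - n_pix / 2 + 0.5) * pix_size
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    sigma = beam_sigma(fwhm)
    rho = np.exp(-r2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)  # eq. (2.2)
    return rho * P * np.cos(2.0 * phi), rho * P * np.sin(2.0 * phi)  # eq. (2.1)
```

On such a grid the discretized beam profile integrates to one, and the ratio $U/Q=\tan 2\phi$ holds pixel by pixel.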
Following, for instance, [37], when working in the plane, the source profile transforms from a pair of $Q$ and $U$ maps to the corresponding E- and B-mode maps as: $$\tilde{E}(\vec{q})=\frac{1}{2}[\cos 2\theta\,\tilde{Q}(\vec{q})+\sin 2\theta\,\tilde{U}(\vec{q})],\qquad \tilde{B}(\vec{q})=\frac{1}{2}[\sin 2\theta\,\tilde{Q}(\vec{q})-\cos 2\theta\,\tilde{U}(\vec{q})],$$ (2.3) where $\tilde{f}(\vec{q})$ stands for the Fourier Transform of a function $f(\vec{r})$, and $\vec{q}=(q,\theta)$ are the polar coordinates in reciprocal space. Hence, introducing the Fourier Transforms of the $Q(\vec{r})$ and $U(\vec{r})$ profiles discussed in (2.1), the E- and B-mode profiles of the source in reciprocal space are: $$\tilde{E}(\vec{q})=\frac{P}{2}[\cos 2\theta\cos 2\phi+\sin 2\theta\sin 2\phi]e^{-q^{2}\sigma^{2}/2},\qquad \tilde{B}(\vec{q})=\frac{P}{2}[\sin 2\theta\cos 2\phi-\cos 2\theta\sin 2\phi]e^{-q^{2}\sigma^{2}/2}.$$ (2.4) Calculating the inverse Fourier Transform of the previous expression results in the real space profile: $$E(\vec{r})=\frac{P}{\pi}[\cos 2\xi\cos 2\phi+\sin 2\xi\sin 2\phi]\tau(r),\qquad B(\vec{r})=\frac{P}{\pi}[\sin 2\xi\cos 2\phi-\cos 2\xi\sin 2\phi]\tau(r),$$ (2.5) where $\vec{r}=(r,\xi)$ are the polar coordinates in real space, and the radial dependence reads $$\tau(r)=\frac{1}{r^{2}}\left[e^{-r^{2}/2\sigma^{2}}\left(1+\frac{r^{2}}{2\sigma^{2}}\right)-1\right].$$ (2.6) To account for the different convention in the definition of E- and B-modes adopted by [37] in the plane, and by HEALPix (http://healpix.sourceforge.net) [45] in the sphere, the equations in (2.3) carry an extra factor of 2 (were the reader to repeat these calculations without it, they would obtain expressions like the ones shown in (2.5) but with a denominator of $2\pi$)
to make this formulation suitable for its application in real and simulated maps of the microwave sky. As can be seen in figure 1, point-sources present a hot-and-cold two-lobe profile in E- and B-mode maps. The position of the hot and cold lobes is the opposite of what would be expected by just looking at the angular component of equations (2.5), because of the negative amplitude of the radial component $\tau(r)$. The symmetry between the sine and cosine terms in these equations, both for the polar ($\xi$) and polarization angles, introduces $\pi/4$ rotation relationships between the E- and B-mode profiles. Fixing the polarization angle, a $\pi/4$ spatial rotation transforms the E-mode profile into the B-mode one, $E(r,\xi\pm\pi/4,\phi)=\mp B(r,\xi,\phi)$. This property manifests itself in the plots shown in figure 1, and could be useful in cross-matching mechanisms between E- and B-modes to verify detections. Another useful relationship is $E(r,\xi,\phi\pm\pi/4)=\pm B(r,\xi,\phi)$, the equality between E- and B-modes under a $\pi/4$ rotation of the polarization angle. These angular symmetries make $E(\vec{r})$ and $B(\vec{r})$ steerable functions, i.e., functions that can be written as linear combinations of rotated versions of themselves (see [46] for more insight into steerability conditions).
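The rotation symmetries quoted above can be checked directly from the analytic profiles (2.5) and (2.6). The following sketch (ours; function names are our own) evaluates them at a test point and confirms the negative sign of $\tau(r)$ near the center:

```python
import numpy as np

def tau(r, sigma=1.0):
    """Radial dependence (2.6), for r > 0 (tau -> 0 smoothly as r -> 0)."""
    z = r**2 / (2.0 * sigma**2)
    return (np.exp(-z) * (1.0 + z) - 1.0) / r**2

def E_profile(r, xi, phi, P=1.0, sigma=1.0):
    """E-mode profile of a source with polarization angle phi, eq. (2.5)."""
    ang = np.cos(2 * xi) * np.cos(2 * phi) + np.sin(2 * xi) * np.sin(2 * phi)
    return P / np.pi * ang * tau(r, sigma)

def B_profile(r, xi, phi, P=1.0, sigma=1.0):
    """B-mode profile of a source with polarization angle phi, eq. (2.5)."""
    ang = np.sin(2 * xi) * np.cos(2 * phi) - np.cos(2 * xi) * np.sin(2 * phi)
    return P / np.pi * ang * tau(r, sigma)
```

At any point, $E(r,\xi+\pi/4,\phi)=-B(r,\xi,\phi)$ and $E(r,\xi,\phi+\pi/4)=B(r,\xi,\phi)$, while $\tau(r)<0$ for small $r$, which flips the lobes as described.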
Choosing as basis the $E(\vec{r})$ source profile for polarization angles 0 and $\pi/4$, $$P_{x}(\vec{r})=E(\vec{r},\phi=0)=\frac{P}{\pi}\cos 2\xi\,\tau(r),\qquad P_{y}(\vec{r})=E(\vec{r},\phi=\pi/4)=\frac{P}{\pi}\sin 2\xi\,\tau(r),$$ (2.7) it is immediate to see that the source profile for any other polarization angle is just a rotation of this basis: $$E(\vec{r})=\cos 2\phi\,P_{x}(\vec{r})+\sin 2\phi\,P_{y}(\vec{r}),\qquad B(\vec{r})=\cos 2\phi\,P_{y}(\vec{r})-\sin 2\phi\,P_{x}(\vec{r}).$$ (2.8) The $x$ and $y$ nomenclature for the basis functions was chosen to reflect the axis along which the cold lobes of the source fall. Here we have defined the basis functions starting from the E-mode profile of the source, but, thanks to their angular symmetries, the very same basis could have been obtained from the B-mode profile: $P_{x}(\vec{r})=B(\vec{r},\phi=3\pi/4)$ and $P_{y}(\vec{r})=B(\vec{r},\phi=0)$. 2.2 Wavelet definition and parameter estimation Looking at $E(\vec{r})$ and $B(\vec{r})$ as written in equations (2.8), one could intuitively think that a filter relying on the $P_{x}(\vec{r})$ and $P_{y}(\vec{r})$ functions may be the simplest approach to recover the polarization angle and intensity of the source.
Going back to the definition of $\tilde{E}(\vec{q})$ in equations (2.4), imposing the 0 and $\pi/4$ polarization angles previously used to obtain the $P_{x}(\vec{r})$ and $P_{y}(\vec{r})$ basis, and computing the inverse Fourier Transform, leads us to a basis of (properly normalized) real space filtering functions: $$\psi_{x}(\vec{r},R)=\frac{1}{2\pi R^{2}}\cos 2\xi\,e^{-r^{2}/2R^{2}},\qquad \psi_{y}(\vec{r},R)=\frac{1}{2\pi R^{2}}\sin 2\xi\,e^{-r^{2}/2R^{2}}.$$ (2.9) The same basis of filtering functions could have been reached from the $\tilde{B}(\vec{q})$ profile in equations (2.4) if the $3\pi/4$ and $0$ polarization angles were chosen instead. These $\psi_{x}$ and $\psi_{y}$ filtering functions are just the 0 and $\pi/4$ rotations of the mother wavelet: $$\Psi(\vec{r},R)=\frac{1}{2\pi R^{2}}\cos 2\xi\,e^{-r^{2}/2R^{2}}.$$ (2.10) We call this function a wavelet not only because of the introduction of the scale $R$, but also because it is a compensated function: $$\int_{0}^{\infty}\int_{0}^{2\pi}\Psi(\vec{r},R)\,r\,dr\,d\xi=\frac{1}{2\pi R^{2}}\int_{0}^{\infty}re^{-r^{2}/2R^{2}}dr\int_{0}^{2\pi}\cos 2\xi\,d\xi=0.$$ (2.11) Following [47], compensated functions satisfy the admissibility condition, which in turn ensures the fulfilment of the synthesis condition. Therefore, $\Psi(\vec{r},R)$ fulfills all the conditions required to be a wavelet in the plane.
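As an illustration (our own sketch, not the authors' code), the basis (2.9) can be sampled on a pixel grid and its compensation, eq. (2.11), verified numerically:

```python
import numpy as np

def wavelet_basis(n_pix, pix_size, R):
    """Sample psi_x and psi_y of eq. (2.9) on a centered pixel grid."""
    x = (np.arange(n_pix) - n_pix / 2 + 0.5) * pix_size
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    xi = np.arctan2(yy, xx)                       # polar angle of each pixel
    g = np.exp(-r2 / (2.0 * R**2)) / (2.0 * np.pi * R**2)
    return np.cos(2 * xi) * g, np.sin(2 * xi) * g  # (psi_x, psi_y)

psi_x, psi_y = wavelet_basis(128, 0.1, 1.0)
```

On a centered grid both functions sum to zero (compensation), and $\psi_{x}$ is even under a $180^{\circ}$ rotation, as expected from $\cos 2\xi$.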
In this way, we can study the source profile in E- and B-modes in terms of its decomposition into the wavelet coefficients of our two basis functions ($i=x,y$): $$\omega_{i}^{E}(\vec{r},R)=\iint E(\vec{r}^{\prime})\psi_{i}(\vec{r}^{\prime}-\vec{r},R)\,d\vec{r}^{\prime},\qquad \omega_{i}^{B}(\vec{r},R)=\iint B(\vec{r}^{\prime})\psi_{i}(\vec{r}^{\prime}-\vec{r},R)\,d\vec{r}^{\prime}.$$ (2.12) Since we have defined $\psi_{x}(\vec{r},R)$ and $\psi_{y}(\vec{r},R)$ to have the same functional form as $P_{x}(\vec{r})$ and $P_{y}(\vec{r})$, we have a steerable wavelet that makes possible the reconstruction of the source profile for any polarization angle through a linear combination of wavelet coefficients: $$\omega_{E}(\vec{r},\hat{\phi},R)=\cos 2\hat{\phi}\,\omega_{x}^{E}(\vec{r},R)+\sin 2\hat{\phi}\,\omega_{y}^{E}(\vec{r},R),\qquad \omega_{B}(\vec{r},\hat{\phi},R)=\cos 2\hat{\phi}\,\omega_{y}^{B}(\vec{r},R)-\sin 2\hat{\phi}\,\omega_{x}^{B}(\vec{r},R).$$ (2.13) Therefore, the only remaining step of the filtering process is to find a way to estimate a value $\hat{\phi}$ for the polarization angle. The best method we found to estimate both the polarization angle and the intensity relies on the relationship between these magnitudes and the central value of the wavelet coefficients.
For the E-mode source profile, the $\omega_{x}^{E}(\vec{r},R)$ and $\omega_{y}^{E}(\vec{r},R)$ wavelet coefficients are: $$\omega_{x}^{E}(\vec{r},R)=\frac{P}{8\pi^{2}}\frac{R^{2}}{\sigma^{2}+R^{2}}\Big[\cos 2\phi\,e^{-z}+\Big(\cos 4\xi\cos 2\phi+\sin 4\xi\sin 2\phi\Big)\lambda(z,R)\Big],\qquad \omega_{y}^{E}(\vec{r},R)=\frac{P}{8\pi^{2}}\frac{R^{2}}{\sigma^{2}+R^{2}}\Big[\sin 2\phi\,e^{-z}+\Big(\sin 4\xi\cos 2\phi-\cos 4\xi\sin 2\phi\Big)\lambda(z,R)\Big],$$ (2.14) where the radial dependence reads $$\lambda(z,R)=\frac{1}{2z^{2}}\Big[e^{-z}\Big(z(z+4)+6\Big)+2(z-3)\Big],\qquad z=\frac{r^{2}}{2(\sigma^{2}+R^{2})}.$$ (2.15) For both coefficients, if we focus our attention on the center of the image (when $r\rightarrow 0$, the radial terms tend to $\lambda(z,R)\rightarrow 0$ and $e^{-z}\rightarrow 1$), we are left with: $$\omega_{x}^{E}(\vec{0},R)=\frac{P}{8\pi^{2}}\frac{R^{2}}{\sigma^{2}+R^{2}}\cos 2\phi,\qquad \omega_{y}^{E}(\vec{0},R)=\frac{P}{8\pi^{2}}\frac{R^{2}}{\sigma^{2}+R^{2}}\sin 2\phi.$$ (2.16) Therefore, an estimate of the polarization angle can easily be computed through the ratio of central wavelet coefficients: $$\hat{\phi}^{E}(R)=\frac{1}{2}\arctan\left(\frac{\omega_{y}^{E}(\vec{0},R)}{\omega_{x}^{E}(\vec{0},R)}\right).$$ (2.17) As would be expected, the wavelet coefficients for B-modes are just a $\pi/4$ rotation of $\omega_{x}^{E}(\vec{r},R)$ and $\omega_{y}^{E}(\vec{r},R)$: $$\omega_{x}^{B}(\vec{r},R)=\frac{P}{8\pi^{2}}\frac{R^{2}}{\sigma^{2}+R^{2}}\Big[-\sin 2\phi\,e^{-z}+\Big(\sin 4\xi\cos 2\phi-\cos 4\xi\sin 2\phi\Big)\lambda(z,R)\Big],\qquad \omega_{y}^{B}(\vec{r},R)=\frac{P}{8\pi^{2}}\frac{R^{2}}{\sigma^{2}+R^{2}}\Big[\cos 2\phi\,e^{-z}-\Big(\cos 4\xi\cos 2\phi+\sin 4\xi\sin 2\phi\Big)\lambda(z,R)\Big].$$ (2.18) Hence, an estimate of the polarization angle can be obtained from B-modes as: $$\hat{\phi}^{B}(R)=\frac{1}{2}\arctan\left(\frac{-\omega_{x}^{B}(\vec{0},R)}{\omega_{y}^{B}(\vec{0},R)}\right).$$ (2.19) Now that we have the $\hat{\phi}^{E,B}$ estimates of the polarization angle, we can go back to equations (2.13) to reconstruct the total wavelet coefficients $\omega_{E,B}$. An estimate of the polarization intensity of the source can also be obtained by looking at the central point of the combined coefficients, since $$\omega_{E,B}(\vec{0},\hat{\phi},R)=\frac{P}{8\pi^{2}}\frac{R^{2}}{\sigma^{2}+R^{2}}\cos 2\Big(\phi-\hat{\phi}^{E,B}(R)\Big).$$ (2.20) Consequently, if the $\hat{\phi}^{E,B}$ estimate is unbiased such that $\phi-\hat{\phi}^{E,B}\approx 0$, then the polarization intensity can be simply recovered from $$\hat{P}^{E,B}(\hat{\phi},R)=8\pi^{2}\frac{\sigma^{2}+R^{2}}{R^{2}}\omega_{E,B}(\vec{0},\hat{\phi},R).$$ (2.21) We now have two independent estimates of the source's polarization angle and intensity, which, in principle, should give similar values. However, this will not be the case when we apply the filter to the real microwave sky, since the backgrounds present in E-modes, both galactic foregrounds [43] and the CMB itself, are known to be higher than those in B-modes. Therefore, the results coming from the filtering of B-mode maps are expected to lead to a more accurate estimate. Indeed, this fact alone is enough to justify the present study, since the ratio of background intensities between $Q$ and $U$ polarization maps and B-modes should be similar to that of E- and B-modes, and thus working in B-modes should also prove advantageous with respect to the standard approach of working in $Q$ and $U$ maps.
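As a numerical illustration (ours, not the authors' implementation), the estimator (2.17) can be reproduced by filtering an analytic E-mode source with $\psi_{x}$ and $\psi_{y}$ and taking the ratio of the central coefficients. Since only the ratio enters, any overall normalization convention cancels:

```python
import numpy as np

def source_E_map(n_pix, pix_size, P, phi, sigma):
    """E-mode profile of a source, eqs. (2.5)-(2.6), on a centered grid."""
    x = (np.arange(n_pix) - n_pix / 2 + 0.5) * pix_size
    xx, yy = np.meshgrid(x, x)
    r2, xi = xx**2 + yy**2, np.arctan2(yy, xx)
    z = r2 / (2.0 * sigma**2)
    tau = (np.exp(-z) * (1.0 + z) - 1.0) / r2      # r > 0 everywhere on this grid
    ang = np.cos(2 * xi) * np.cos(2 * phi) + np.sin(2 * xi) * np.sin(2 * phi)
    return P / np.pi * ang * tau

def estimate_phi_E(E_map, pix_size, R):
    """Central wavelet coefficients of eq. (2.12) at r = 0, then eq. (2.17)."""
    n = E_map.shape[0]
    x = (np.arange(n) - n / 2 + 0.5) * pix_size
    xx, yy = np.meshgrid(x, x)
    r2, xi = xx**2 + yy**2, np.arctan2(yy, xx)
    g = np.exp(-r2 / (2.0 * R**2)) / (2.0 * np.pi * R**2)
    w_x = np.sum(E_map * np.cos(2 * xi) * g) * pix_size**2  # omega_x^E(0, R)
    w_y = np.sum(E_map * np.sin(2 * xi) * g) * pix_size**2  # omega_y^E(0, R)
    return 0.5 * np.arctan(w_y / w_x)
```

For a noiseless source with $|\phi|<\pi/4$, the input angle is recovered to within the (tiny) discretization error of the grid.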
We could also provide a joint estimate of the polarization angle and intensity of the source by combining the information from E- and B-mode maps. Exploiting the $\pi/4$ rotation symmetries between wavelet coefficients, $\omega_{x}^{E}(r,\xi\pm\pi/4,\phi)=\omega_{y}^{B}(r,\xi,\phi)$ and $\omega_{y}^{E}(r,\xi\pm\pi/4,\phi)=-\omega_{x}^{B}(r,\xi,\phi)$, we could rotate the $x$ and $y$ E-mode coefficients to match those from B-modes, and then stack them to create two new effective joint coefficients $\omega_{x}^{J}$ and $\omega_{y}^{J}$. Going a step further, we could account for the aforementioned differences in background amplitude between E- and B-modes by weighting the sum of wavelet coefficients: $$\omega_{y}^{J}(\vec{r},R)=\frac{\alpha^{E}_{x}}{\alpha^{E}_{x}+\alpha^{B}_{y}}\omega^{E}_{x}(r,\xi+\pi/4,R)+\frac{\alpha^{B}_{y}}{\alpha^{E}_{x}+\alpha^{B}_{y}}\omega^{B}_{y}(\vec{r},R),\qquad \omega_{x}^{J}(\vec{r},R)=-\frac{\alpha^{E}_{y}}{\alpha^{B}_{x}+\alpha^{E}_{y}}\omega^{E}_{y}(r,\xi+\pi/4,R)+\frac{\alpha^{B}_{x}}{\alpha^{B}_{x}+\alpha^{E}_{y}}\omega^{B}_{x}(\vec{r},R),$$ (2.22) with indices $M=E,B$ and $i=x,y$. The $\alpha^{M}_{i}$ weights are defined from the dispersion of the wavelet coefficients as $\alpha^{M}_{i}=\left(\sigma_{\omega^{M}_{i}}\right)^{-2}$. Using these weights, we give more importance to the patches where the source's signal is best defined against the background. To properly calculate $\sigma_{\omega^{M}_{i}}$ and prevent the source from artificially boosting the variance, we exclude an $8\sigma$ circular region around the source before computing the dispersion of $\omega^{M}_{i}$.
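The inverse-variance weighting in (2.22) reduces, for a single pair of already-aligned coefficients, to a standard weighted mean. A minimal sketch (ours; the function name is our own):

```python
import numpy as np

def joint_coefficient(w_a, w_b, sigma_a, sigma_b):
    """Combine two estimates of the same wavelet coefficient with
    inverse-variance weights alpha = sigma**-2, as in eq. (2.22)."""
    alpha_a, alpha_b = sigma_a**-2, sigma_b**-2
    return (alpha_a * w_a + alpha_b * w_b) / (alpha_a + alpha_b)
```

With equal background dispersions this is the plain average; when one channel is much noisier, its contribution is suppressed accordingly.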
Since E-modes were rotated into B-modes to create $\omega^{J}$, these joint wavelet coefficients effectively behave like B-modes, and thus the joint estimates of the polarization angle and intensity should be computed as: $$\hat{\phi}^{J}(R)=\frac{1}{2}\arctan\left(\frac{-\omega_{x}^{J}(\vec{0},R)}{\omega_{y}^{J}(\vec{0},R)}\right),\qquad \hat{P}^{J}(\hat{\phi},R)=8\pi^{2}\frac{\sigma^{2}+R^{2}}{R^{2}}\left(\cos 2\hat{\phi}^{J}\,\omega_{y}^{J}(\vec{0},R)-\sin 2\hat{\phi}^{J}\,\omega_{x}^{J}(\vec{0},R)\right).$$ (2.23) By combining E- and B-modes at the wavelet coefficient stage, and then submitting them to the same parameter estimation logic, we ensure that polarization angles are correctly defined inside the $\phi\in[0,\pi)$ interval and that polarization intensities are always positive. Otherwise we could not guarantee the correct definition of our joint estimates. 2.3 Calibration of pixelization effects All the equations presented in the previous sections rely on continuous functions. However, digital imaging discretizes information into pixels, limiting the resolution of these functions to the number of pixels used. Therefore, a proper filter implementation must consider, and if necessary correct, possible pixelization-induced distortions. Since our filter is implemented in the plane, the first step of the filtering process is to project the target region of the sky (i.e., part of a spherical surface) onto the Cartesian plane. To guarantee that no significant distortions are introduced to the shape of sources by projecting, projections must be limited to a small neighborhood of the source, and pixels in the plane must have a similar (or smaller) size than pixels on the sphere.
Since the fineness of the sphere pixelization depends on the particular resolution granted by the instrumental specifications of each experiment, the plane's pixelization has to be tailored to the analysis of the specific data at hand. In particular, in this work we will be testing the performance of the filter on PICO-like simulations, so our pixelization is designed to fit the current instrumental specifications given in [4]. Table 1 collects the main parameters describing both the sphere and plane pixelizations of the two PICO channels we will be simulating. Like most CMB experiments, PICO maps will be built using HEALPix, the Hierarchical Equal Area isoLatitude Pixelation of the sphere proposed by [45]. In this pixelization scheme, the sphere is divided into $12\times\mathrm{nside}^{2}$ rhomboid pixels. By fixing the patch extent to $64\times 64$ pixels to keep the computational cost of the filtering process at bay, and forcing pixels in the plane to have the same size as pixels in the sphere, the size of the square regions to project is immediately set to $7.33^{\circ}\times 7.33^{\circ}$ and $1.83^{\circ}\times 1.83^{\circ}$, respectively, for the 30 GHz and 155 GHz channels. Patches of that size are large enough to offer a good representation of the statistical properties of background emissions, and small enough to ensure that the flat approximation of the sphere's surface still holds, thus avoiding the introduction of distortions during projection. As shown in figure 2, pixelization limits the angular resolution of the source's profile. The compact nature of the source limits its extension to the smallest values of $r$, where, no matter how fine the pixelization is, only the angles $\xi=\{0^{\circ},45^{\circ},90^{\circ},135^{\circ},180^{\circ},225^{\circ},270^{\circ},315^{\circ}\}$ will be perfectly defined. In contrast, the angles right between those are the ones most affected by pixelization distortions.
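As a consistency check of the patch sizes quoted above (our own sketch; the nside values below are our assumption, chosen because they reproduce the quoted numbers), a $64\times 64$ patch whose pixels match the HEALPix pixel size gives:

```python
import numpy as np

def healpix_pixel_size_deg(nside):
    """Side, in degrees, of a square with the area of one HEALPix pixel
    (total sphere area 4*pi divided into 12 * nside**2 equal pixels)."""
    pix_area_sr = 4.0 * np.pi / (12.0 * nside**2)
    return np.degrees(np.sqrt(pix_area_sr))

# Assumed (hypothetical) map resolutions for the two channels:
patch_30 = 64 * healpix_pixel_size_deg(512)    # 30 GHz  -> about 7.33 deg
patch_155 = 64 * healpix_pixel_size_deg(2048)  # 155 GHz -> about 1.83 deg
```

The quoted $7.33^{\circ}$ and $1.83^{\circ}$ patches thus correspond to pixel sizes of roughly 6.9 and 1.7 arcmin on the sphere.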
Therefore, it is only for the polarization angles $\phi=\{0^{\circ},45^{\circ},90^{\circ},135^{\circ}\}$ (when the lobes of point-sources fall along the direction of the $x$ and $y$ axes and the diagonals) that the filter will be free of bias in the estimation of $\hat{\phi}$, while for the angles just in the middle, $\phi=\{22.5^{\circ},67.5^{\circ},112.5^{\circ},157.5^{\circ}\}$, the largest biases are expected. These pixelization-imposed restrictions clearly manifest themselves in the determination of the polarization angle when applying the filter to a naked source, as shown in figure 3. Since the estimation of the polarization intensity depends on the accuracy of the $\hat{\phi}$ estimate as $\hat{P}\propto\cos 2(\phi-\hat{\phi})$, in turn, the largest biases in the recovered polarization intensity are shifted to $\phi=\{0^{\circ},45^{\circ},90^{\circ},135^{\circ}\}$. In addition, the accuracy in the determination of $P$ is also limited by how well the discrete points in the pixel grid can sample continuous functions. Naively modeling this discrepancy as $\hat{P}=P-\epsilon$, where the value of $\epsilon$ progressively decreases for finer pixel grids, and the bias in the polarization angle estimate simply as $A\cos 2\phi$, as the left panel of figure 3 suggests we can, the relative bias committed in the determination of the polarization intensity would behave as: $$\frac{P-\hat{P}}{P}\propto 1-\left(1-\frac{\epsilon}{P}\right)\cos(2A\cos 2\phi).$$ (2.24) Since $\epsilon/P$ is very small but different from zero, this toy model explains why the relative bias in the determination of the polarization intensity seen in figure 3, perhaps surprisingly, does not oscillate around zero. Moreover, when $A$ and $\epsilon$ are given the actual values they take in these scenarios, the model precisely reproduces the relative biases displayed for the discrete polarization intensity.
A finer pixelization allows for both a better angular resolution and a more precise approximation of the value of continuous functions at all points, decreasing the induced biases in polarization angle and intensity. Once the pixelization is fixed, the only free parameter altering the resolution of the source's profile is the FWHM/pix ratio. An increase in the FWHM/pix ratio has the effect of smoothing the profile of the source. As variations in the value of the source's profile are then smaller from pixel to pixel, the ability of the filter to distinguish one polarization angle from another is also diminished. Therefore, increasing the FWHM/pix ratio aggravates the biases committed in polarization angle and intensity determination, as can be seen in figure 3. Albeit not illustrated here, varying the $R$ filter scale has the same effect as increasing or decreasing the FWHM/pix ratio. Since these biases in polarization angle and intensity determination are exclusively caused by known parameters of the image pixelization, filter definition and instrument resolution, they can be easily corrected. In our case, a multiplicative calibration function suffices to correct the initial estimates of $\hat{P}$ and $\hat{\phi}$. We can obtain such calibration functions from the initial outputs recovered when applying the filter to a naked source: $$f_{E,B}(\hat{\phi},\mathrm{FWHM/pix},R/\sigma)=\frac{\phi}{\hat{\phi}_{E,B}(\phi,\mathrm{FWHM/pix},R/\sigma)},\qquad g_{E,B}(\hat{\phi},\mathrm{FWHM/pix},R/\sigma)=\frac{P}{\hat{P}_{E,B}(\phi,\mathrm{FWHM/pix},R/\sigma)}.$$ (2.25) Calibration functions have been computed in this way for polarization angles $\phi\in[0^{\circ},180^{\circ})$, with a one degree step between them, and for the FWHM/pix and $R/\sigma$ ratios that will be used later to test the filter performance.
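The tabulate-and-correct logic of (2.25) can be sketched as follows (ours; the calibration curve `f_tab` below is a TOY example, and the use of SciPy's periodic cubic spline for the angles between the one-degree knots is an illustrative choice):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Tabulate a multiplicative angle-calibration function on a one-degree grid.
phi_tab = np.arange(0.0, 181.0, 1.0)                     # degrees
f_tab = 1.0 + 0.02 * np.cos(np.deg2rad(2.0 * phi_tab))   # toy f(phi), 180 deg period
f_tab[-1] = f_tab[0]                                     # enforce exact periodicity
calib = CubicSpline(phi_tab, f_tab, bc_type="periodic")

# Apply the correction multiplicatively to an uncalibrated estimate (degrees).
phi_hat = 22.4
phi_tilde = calib(phi_hat) * phi_hat   # phi_tilde = f x phi_hat
```

At the tabulated knots the spline reproduces the table exactly, and between knots the interpolation error is negligible for a smooth calibration curve.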
For polarization angles not tabulated, the value of the calibration function is interpolated using cubic splines. Once stored, the calibration functions are used to correct the filter's initial response simply as $\tilde{\phi}_{E,B}=f_{E,B}\times\hat{\phi}_{E,B}$ and $\tilde{P}_{E,B}=g_{E,B}\times\hat{P}_{E,B}$. Since the $\hat{\phi}^{J}$ and $\hat{P}^{J}$ joint estimates effectively behave like B-modes, the same calibration is applied to them. After calibration, the remaining residual errors are only due to numerical precision (of the order of $10^{-10}$ arcsec and $10^{-14}\%$, respectively, for $\tilde{\phi}$ and $\tilde{P}$). 3 Test on simulations With the filter defined and calibrated, we now proceed to test its performance on realistic simulations of the microwave sky, where sources are immersed in a background of CMB and galactic foreground emissions, and can also be hidden below instrumental noise. Before statistically characterizing the filter's performance in subsection 3.2, we first describe our simulations of the microwave sky in subsection 3.1. 3.1 Simulations description We decided to test the filter performance on simulations of the future PICO satellite, an ideal experiment for point-source detection since it will combine high resolution with low instrumental noise. Amongst the 21 frequency bands envisioned in the [4] mission concept study, we chose to work with the 30 GHz and 155 GHz channels. We selected these bands for the diverse experimental conditions they will allow us to explore: from different beam sizes ($28$ vs. $6$ arcmin, as indicated in table 1), to contrasting backgrounds (see table 3). On the one hand, galactic foregrounds have a similar amplitude in E- and B-modes at 155 GHz, while at 30 GHz, the amplitude of galactic E-modes is larger than that of B-modes.
On the other hand, B-modes will still be noise-dominated at 30 GHz, whereas, thanks to the low instrumental noise planned for the 155 GHz band, B-modes will be foreground-dominated at 155 GHz (see table 2). Foreground emission was simulated using the Planck Sky Model [48], publicly available software (https://pla.esac.esa.int/#plaavi_psm) that allows us to generate random realizations of the microwave sky in agreement with current observational constraints. Our simulations include the lensed CMB (assuming $r=0$), synchrotron and thermal dust emission, and point-sources below the detection threshold expected for PICO in intensity (4 mJy and 7 mJy, respectively, for the 30 GHz and 155 GHz channels). The sources to characterize will be added directly on the plane once patches are projected. We will simulate sources of different fluxes, ranging from tens of mJy to 10 Jy in intensity, as indicated in the $I_{i}$ column of table 2. The aim of this particular flux selection is not to make a full characterization of the filter performance for all fluxes above the detection threshold, but rather to show the potential of the methodology through a few archetypal fluxes. To translate fluxes into polarization intensities, we will assume sources to have a constant $\Pi=0.02$ polarization degree, independent of their flux or the frequency band, as [49] and [50] suggest. Accounting also for the conversion factor between intensity and thermodynamic units (more about unit conversion can be found in [51]), the polarization intensity of the sources will be $$P(\mu K)\approx\Pi\,I(\mathrm{Jy})\,\frac{\sinh^{2}(x/2)}{24.8\,x^{4}},\qquad x\approx\frac{\nu}{56.8\,\mathrm{GHz}}.$$ (3.1) Foreground emission varies greatly across the sky, so to better assess the filter performance, we divided the sky into three separate regions based on the intensity of foreground emission.
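Equation (3.1) above translates into a one-line conversion function (our sketch; the function name is our own). With $\Pi=0.02$, a 164 mJy source at 155 GHz comes out at roughly 8 pK, consistent with the value quoted in the abstract for that band:

```python
import numpy as np

def polarization_intensity_uK(flux_jy, freq_ghz, pol_degree=0.02):
    """Eq. (3.1): polarization intensity in microkelvin (thermodynamic
    units) of a source of given flux (Jy) at a given frequency (GHz)."""
    x = freq_ghz / 56.8
    return pol_degree * flux_jy * np.sinh(x / 2.0)**2 / (24.8 * x**4)

p_155 = polarization_intensity_uK(0.164, 155.0)   # about 8e-6 uK = 8 pK
```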
To ensure the sampling of the whole sky, we start by projecting a total of 768 square patches (with a $7.33^{\circ}$ side for the 30 GHz channel, and $1.83^{\circ}$ for the 155 GHz one) centered around the positions of HEALPix's $\mathrm{nside}=8$ pixels. The three distinct regions are then defined as a function of the dispersion $\sigma_{\mathrm{patch}}$ that foreground emission presents on each patch, so that Zone I contains the first $40\%$ of patches of lowest dispersion, Zone II comprises the next $35\%$, and Zone III collects the next $22\%$ (see figure 4 for an example). The remaining $3\%$ of the patches are discarded, as they correspond to the regions of largest foreground emission inside the galactic plane. As an additional condition, we require this classification to be spatially coherent across polarization modes, meaning that, for a certain patch to belong to a given region, its dispersions in both E- and B-modes need to fall into that zone. We could also impose a spatial coherence across frequencies, but that would be of little use given the very different nature of foregrounds in the two frequency bands: the 30 GHz band is mainly composed of synchrotron radiation, while thermal dust emission dominates in the 155 GHz band. For this reason, regions are defined independently for each frequency. Finally, table 3 shows the $\sigma_{\mathrm{patch}}$ range defining the three regions thus constructed, both for E- and B-modes, and for the two frequency bands. The dispersion of the simulated CMB component is also included for reference. 3.2 Statistical characterization of the filter performance To characterize the filter performance, we randomly select a hundred patches from each of the regions defined in table 3, and add to them a realization of isotropic Gaussian noise. At the center of each patch, we place a source directly on the plane for 36 different polarization angles and the four fluxes shown in table 2.
By applying the filter to all of them, we obtain 3600 estimates of $\tilde{\phi}$ and $\tilde{P}$ for the statistical analysis of the filter performance for each flux and region. The $\sigma_{\phi}$ and $\sigma_{P}$ uncertainties on parameter estimation are then calculated as the standard deviation of the values recovered for all patches, averaged over the different orientations of the source. However, the intrinsic nature of the source's parameters complicates such statistical analysis. On the one hand, the polarization angle is a bounded ($\phi\in[0,\pi)$) and periodic (the source's profile is symmetric under a $\phi\pm\pi$ rotation) quantity. This means that for $\phi^{*}$ angles close to the edges of the definition interval, the filter is equally likely to return $\phi^{*}\pm\pi$, since both of them are actually equivalent and angles outside the definition interval are not allowed. This feature does not pose a problem in the sense that we know errors should also be interpreted periodically (i.e., a $\phi^{*}=5^{\circ}\pm 10^{\circ}$ uncertainty means that $\phi^{*}$ is compatible with all angles contained in the $[0^{\circ},15^{\circ}]\cup[175^{\circ},180^{\circ})$ interval), but it can artificially increase the dispersion of the recovered $\tilde{\phi}$ for angles near the edges of the definition interval. To solve this problem, we should take into account the periodicity of polarization angles when determining $\sigma_{\phi}$, which we do by using, in the calculation of the standard deviation, the angle $\Phi\in\{\tilde{\phi},\tilde{\phi}-\pi,\tilde{\phi}+\pi\}$ that minimizes the difference $|\phi-\Phi|$. On the other hand, polarization intensity is a positive quantity by definition (i.e., $P>0$ always). This also restricts the interval of allowed values, and for the faintest of sources, it has the effect of skewing the distribution of recovered $\tilde{P}$ towards larger polarization intensities.
Therefore, there is a limiting value of $P$ below which the filter starts to systematically overestimate polarization intensity, as can be seen in figure 5, where the mean recovered $\tilde{P}$ is plotted against the actual polarization intensity. For this reason, to properly characterize its performance, we should also give a measurement of how much the filter tends to overestimate polarization intensity. The parameter we chose to quantify this is the bias $b_{P}$, calculated as the mean error in the estimation of $P$, averaged over all patches and over all orientations of the source: $b_{P}=\langle\langle\tilde{P}-P\rangle_{\mathrm{patch}}\rangle_{\phi}$. As a general rule, sources will only be correctly characterized as long as $b_{P}<\sigma_{P}$. Taking into account all these peculiarities, tables 4 and 5 collect the typical errors (presented as $\sigma_{\phi}$; $b_{P}\pm\sigma_{P}$ triplets) obtained for each frequency band, sky region and polarization intensity tested. Let us focus on the results for the 30 GHz band. For the lowest of the polarization intensities tested ($P_{1}$ is only ten times above the detection threshold in intensity), the filter is not able to characterize point-sources in any of the sky regions. Increasing the polarization intensity up to $P_{2}$ (corresponding to a $500$ mJy intensity), characterization starts to be possible for B-modes in Zones I and II of low foreground emission. Although still high, biases also improve for E-modes, reaching $b_{P}<\sigma_{P}$ values. For sources of $P_{3}$ polarization intensity, characterization is now possible in all sky regions for B-modes, and only in Zones I and II of low foreground amplitude for E-modes. As would be expected, characterization is possible in all regions, and in both E- and B-modes, for very bright sources ($P_{4}$ corresponds to a $10$ Jy intensity).
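The two statistics used above, the period-aware dispersion $\sigma_{\phi}$ and the bias $b_{P}$, can be sketched as follows (a minimal illustration; the function and array names are ours, not from the paper):

```python
import numpy as np

def angle_dispersion(phi_true, phi_rec):
    """Standard deviation of recovered angles, treating phi as pi-periodic:
    for each estimate, pick the representative in
    {phi, phi - pi, phi + pi} closest to the true angle."""
    candidates = np.stack([phi_rec, phi_rec - np.pi, phi_rec + np.pi])
    best = candidates[np.argmin(np.abs(candidates - phi_true), axis=0),
                      np.arange(len(phi_rec))]
    return np.std(best - phi_true)

def intensity_bias(p_true, p_rec):
    """Bias b_P: mean error of the polarization-intensity estimates."""
    return np.mean(p_rec - p_true)
```

Per the rule above, a source would be flagged as correctly characterized only while the value returned by `intensity_bias` stays below the corresponding dispersion $\sigma_{P}$.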
A similar pattern can be seen in the 155 GHz band, although this time, a good characterization of point-source emission is harder to reach because of the larger foreground emission present in this band (see table 3). In this case, apart from very bright sources, the filter is only able to correctly characterize sources of $P_{2}$ and $P_{3}$ polarization intensities in the region of lowest foreground emission, and only in the B-mode channel. In both bands, all three $\sigma_{\phi}$, $b_{P}$ and $\sigma_{P}$ parameters prove to be smaller for B-modes than for E-modes, with the differences between the two modes decreasing as the amplitude of foreground emission increases. It is only for Zone III of the 155 GHz band, where foregrounds become equally important for both modes (see table 3), that working on B-modes does not offer any advantage compared to E-modes. Such results confirm our initial hypothesis that B-mode polarization maps would be the best channel for the study of point-sources because of the lower amplitude of the backgrounds found there. Meanwhile, the joint parameter estimation tends to return results very similar to those coming from B-modes in the 30 GHz band, since the lower amplitude of foregrounds there makes B-modes the main contributor to $\hat{\phi}$ and $\hat{P}$. In contrast, for the 155 GHz band, where the amplitude of foreground emission in E- and B-modes becomes comparable, the joint analysis starts to systematically yield better results than what a B-mode-only analysis would. The small negative biases recovered in some regions for the highest of fluxes reflect how, in those scenarios, $P$ is so far from zero that the recovered $\hat{P}$ values can be symmetrically distributed around the true $P$, and thus underestimating the polarization intensity becomes a possibility again.
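The bias-versus-dispersion comparison above also suggests a simple numerical recipe for locating the polarization intensity at which $b_{P}$ drops below $\sigma_{P}$: tabulate both statistics on a grid of simulated intensities and interpolate the crossing. A sketch with illustrative numbers (not values from the paper):

```python
import numpy as np

# Illustrative (not from the paper) bias and dispersion curves,
# tabulated at a few simulated polarization intensities (in muK).
p_grid  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
b_p     = np.array([0.90, 0.60, 0.35, 0.15, 0.05])  # bias shrinks with P
sigma_p = np.array([0.40, 0.38, 0.36, 0.34, 0.33])  # dispersion roughly flat

# The characterization threshold is where b_P - sigma_P changes sign.
diff = b_p - sigma_p
i = np.flatnonzero(np.diff(np.sign(diff)))[0]  # index bracketing the crossing
# Linear interpolation of the zero crossing inside [p_grid[i], p_grid[i+1]].
p_lim = p_grid[i] - diff[i] * (p_grid[i + 1] - p_grid[i]) / (diff[i + 1] - diff[i])
```

Sources fainter than `p_lim` would fail the $b_{P}<\sigma_{P}$ criterion and hence be considered too biased to characterize.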
As a final result of this statistical study of the filter performance, we give in table 6 the polarization intensity below which the filter will not be able to correctly characterize the properties of extragalactic sources. This threshold is obtained by finding the source polarization intensity at which $b_{P}=\sigma_{P}$ (equivalent to finding the point in figure 5 where $\langle\hat{P}\rangle-\sigma_{P}$ intersects the $P=P_{\mathrm{true}}$ line). Again, these results show that a better characterization is possible in maps of the B-mode polarization. Although we saw in tables 4 and 5 that the joint parameter estimation yielded better biases and uncertainties than a B-mode-only estimate for faint sources in the region of most intense foreground emission, such a simultaneous reduction of the $b_{P}$ and $\sigma_{P}$ values also has the effect of moving the $b_{P}=\sigma_{P}$ threshold towards higher polarization intensities, as happens in the Zone III column of table 6. 4 Conclusions and future work In this work we have designed a filter based on steerable wavelets that allows the characterization of extragalactic point-sources on the E- and B-mode maps of the CMB polarization. The initial motivation for working in E- and B-mode maps, instead of following the conventional approach of working in maps of the Stokes $Q$ and $U$ parameters, where sources have a simpler profile, was to try to exploit the lower amplitude that the background of microwave emissions presents in B-modes. The application of the filter to realistic simulations of the microwave sky proved that, indeed, a better determination of the sources' properties was possible in B-modes than in E-modes. Moreover, since the ratio of background amplitudes between $Q$ and $U$ maps and B-modes is similar to that between E- and B-modes, our results also indicate that B-modes are the optimum polarization channel for point-source characterization.
Throughout this work, the filter scale has always been fixed to match the size of the source ($R=\sigma$). Nevertheless, filter performance could be enhanced by finding the optimum wavelet scale at which to operate it. Following the example of other wavelet-based filters designed for point-source detection on CMB temperature maps [e.g., 52], a possible approach to optimum scale determination would be to identify the optimum filter scale as the one that maximizes the amplification, a quantity proportional to the quotient between the central wavelet coefficient amplitude and the dispersion of the wavelet coefficients. However, this maximum amplification criterion might not be the best approach for our characterization problem, since it is meant for detection: it identifies the scale at which the source stands out most from the background, and there is no guarantee that the scale that maximizes amplification would be the one to minimize the error of the polarization angle estimate, an estimate that depends on a non-linear function like the arctangent. Although finding the optimum wavelet scale was outside the scope of the present work, it will be a topic of future study. The ultimate goal of this project would be to apply the designed filter to real data. For instance, it would be very interesting to compare the polarization angle and intensity reported in public catalogs, such as the Second Planck Catalogue of Compact Sources [42], with the results our filter yields from E- and B-modes. However, we must leave this endeavor to a future work, since some possible sources of systematic error still have to be tested before applying the filter to real data. In particular, two of these possible sources of error are the small distortions in the source profile introduced by the projection onto the plane, and the uncertainty in the source's location.
For convenience, in our analysis we simulated point-sources directly on the plane instead of projecting them from the sphere. Although distortions are expected to be minimal, we still need to verify that the projection onto the plane does not introduce any additional bias to the polarization angle estimation. In addition, during our analysis we always assumed to know the exact position of the point source in the plane patch, a statement that will no longer hold when working with real observations. During our study we noticed that polarization angle and intensity estimates can be very sensitive to the position and orientation of the source within the background, which suggests that the filter performance could also be significantly affected if it were not applied exactly at the center of the source. Given the positive results obtained when studying extragalactic sources on E- and B-mode maps, the natural progression would be to extend the presented formalism from the characterization of known sources to an actual blind detection scheme operating on E- and B-mode maps. Catalogs made from E- and B-mode maps have the advantage of directly detecting polarization intensities, which requires no assumptions about the underlying polarization degree. In contrast, the usual strategy of detecting in intensity and subsequently looking at its counterpart in polarization tends to favor a certain range of polarization degrees, leaving out of the catalog sources with high polarization degrees that could have been detected in polarization despite not reaching the detection threshold in intensity. In this way, blind detection on E- and B-mode maps could help produce unbiased polarization-degree catalogs. Acknowledgments PDP acknowledges partial financial support from the Formación del Profesorado Universitario (FPU) programme of the Spanish Ministerio de Ciencia, Innovación y Universidades.
DH acknowledges partial financial support from the Spanish Ministerio de Ciencia, Innovación y Universidades project PGC2018-101814-B-I00. We make use of HEALPix [45], and the numpy and matplotlib [53] Python packages. References [1] G. De Zotti, G. Castex, J. González-Nuevo, M. Lopez-Caniego, M. Negrello, Z. Y. Cai et al., Extragalactic sources in Cosmic Microwave Background maps, J. Cosmology Astropart. Phys. 2015 (2015) 018 [1501.02170]. [2] V. Galluzzi and M. Massardi, The polarimetric multi-frequency radio sources properties, International Journal of Modern Physics D 25 (2016) 1640005 [1611.08159]. [3] K. Abazajian, G. Addison, P. Adshead, Z. Ahmed, S. W. Allen, D. Alonso et al., CMB-S4 Science Case, Reference Design, and Project Plan, arXiv e-prints (2019) arXiv:1907.04473 [1907.04473]. [4] S. Hanany, M. Alvarez, E. Artis, P. Ashton, J. Aumont, R. Aurlien et al., PICO: Probe of Inflation and Cosmic Origins, arXiv e-prints (2019) arXiv:1902.10541 [1902.10541]. [5] M. Tucci, E. Martínez-González, P. Vielva and J. Delabrouille, Limits on the detectability of the CMB B-mode polarization imposed by foregrounds, MNRAS 360 (2005) 935 [astro-ph/0411567]. [6] G. Puglisi, V. Galluzzi, L. Bonavera, J. Gonzalez-Nuevo, A. Lapi, M. Massardi et al., Forecasting the Contribution of Polarized Extragalactic Radio Sources in CMB Observations, ApJ 858 (2018) 85 [1712.09639]. [7] T. Trombetti, C. Burigana, G. De Zotti, V. Galluzzi and M. Massardi, Average fractional polarization of extragalactic sources at Planck frequencies, A&A 618 (2018) A29 [1712.08412]. [8] C. Dickinson, CMB foregrounds - A brief review, arXiv e-prints (2016) arXiv:1606.03606 [1606.03606]. [9] J. Errard, S. M. Feeney, H. V. Peiris and A. H. Jaffe, Robust forecasts on fundamental physics from the foreground-obscured, gravitationally-lensed CMB polarization, J. Cosmology Astropart. Phys. 2016 (2016) 052 [1509.06770]. [10] A. Lewis and A. Challinor, Weak gravitational lensing of the CMB, Phys. Rep. 
429 (2006) 1 [astro-ph/0601594]. [11] P. Diego-Palazuelos, P. Vielva, E. Martínez-González and R. B. Barreiro, Comparison of delensing methodologies and assessment of the delensing capabilities of future experiments, arXiv e-prints (2020) arXiv:2006.12935 [2006.12935]. [12] N. Sailer, E. Schaan and S. Ferraro, Lower bias, lower noise CMB lensing with foreground-hardened estimators, arXiv e-prints (2020) arXiv:2007.04325 [2007.04325]. [13] P. Ade, J. Aguirre, Z. Ahmed, S. Aiola, A. Ali, D. Alonso et al., The Simons Observatory: science goals and forecasts, J. Cosmology Astropart. Phys. 2019 (2019) 056 [1808.07445]. [14] A. Challinor, R. Allison, J. Carron, J. Errard, S. Feeney, T. Kitching et al., Exploring cosmic origins with CORE: Gravitational lensing of the CMB, J. Cosmology Astropart. Phys. 2018 (2018) 018 [1707.02259]. [15] Planck Collaboration, N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi et al., Planck 2018 results. VIII. Gravitational lensing, arXiv e-prints (2018) arXiv:1807.06210 [1807.06210]. [16] J. Lesgourgues and S. Pastor, Massive neutrinos and cosmology, Phys. Rep. 429 (2006) 307 [astro-ph/0603494]. [17] U. Seljak and M. Zaldarriaga, Lensing-induced Cluster Signatures in the Cosmic Microwave Background, ApJ 538 (2000) 57 [astro-ph/9907254]. [18] W. Hu, S. DeDeo and C. Vale, Cluster mass estimators from CMB temperature and polarization lensing, New Journal of Physics 9 (2007) 441 [astro-ph/0701276]. [19] J.-B. Melin and J. G. Bartlett, Measuring cluster masses with CMB lensing: a statistical approach, A&A 578 (2015) A21 [1408.5633]. [20] E. J. Baxter, R. Keisler, S. Dodelson, K. A. Aird, S. W. Allen, M. L. N. Ashby et al., A Measurement of Gravitational Lensing of the Cosmic Microwave Background by Galaxy Clusters Using Data from the South Pole Telescope, ApJ 806 (2015) 247 [1412.7521]. [21] A. Manzotti, Future cosmic microwave background delensing with galaxy surveys, Phys. Rev. D 97 (2018) 043527 [1710.11038]. [22] B. Yu, J. C. 
Hill and B. D. Sherwin, Multitracer CMB delensing maps from Planck and WISE data, Phys. Rev. D 96 (2017) 123511 [1705.02332]. [23] T. Namikawa, D. Yamauchi, B. Sherwin and R. Nagata, Delensing cosmic microwave background B modes with the Square Kilometre Array Radio Continuum Survey, Phys. Rev. D 93 (2016) 043527 [1511.04653]. [24] B. D. Sherwin and M. Schmittfull, Delensing the CMB with the cosmic infrared background, Phys. Rev. D 92 (2015) 043005 [1502.05356]. [25] P. Larsen, A. Challinor, B. D. Sherwin and D. Mak, Demonstration of Cosmic Microwave Background Delensing Using the Cosmic Infrared Background, Phys. Rev. Lett. 117 (2016) 151102 [1607.05733]. [26] A. Manzotti, K. T. Story, W. L. K. Wu, J. E. Austermann, J. A. Beall, A. N. Bender et al., CMB Polarization B-mode Delensing with SPTpol and Herschel, ApJ 846 (2017) 45 [1701.04396]. [27] K. S. Karkare, Delensing degree-scale B-mode polarization with high-redshift line intensity mapping, Phys. Rev. D 100 (2019) 043529 [1908.08128]. [28] C. M. Hirata and U. Seljak, Reconstruction of lensing from the cosmic microwave background polarization, Phys. Rev. D 68 (2003) 083002 [astro-ph/0306354]. [29] T. Okamoto and W. Hu, Cosmic microwave background lensing reconstruction on the full sky, Phys. Rev. D 67 (2003) 083002 [astro-ph/0301031]. [30] J. Carron and A. Lewis, Maximum a posteriori CMB lensing reconstruction, Phys. Rev. D 96 (2017) 063510 [1704.08230]. [31] M. Millea, E. Anderes and B. D. Wandelt, Bayesian delensing of CMB temperature and polarization, Phys. Rev. D 100 (2019) 023509 [1708.06753]. [32] N. Mishra and E. Schaan, Bias to CMB lensing from lensed foregrounds, Phys. Rev. D 100 (2019) 123504 [1908.08057]. [33] C. Caprini and D. G. Figueroa, Cosmological backgrounds of gravitational waves, Classical and Quantum Gravity 35 (2018) 163001 [1801.04268]. [34] L. P. Grishchuk, Amplification of gravitational waves in an isotropic universe, Soviet Journal of Experimental and Theoretical Physics 40 (1975) 409. 
[35] A. A. Starobinskiǐ, Spectrum of relict gravitational radiation and the early state of the universe, Soviet Journal of Experimental and Theoretical Physics Letters 30 (1979) 682. [36] A. G. Polnarev, Polarization and Anisotropy Induced in the Microwave Background by Cosmological Gravitational Waves, Soviet Ast. 29 (1985) 607. [37] P. Cabella and M. Kamionkowski, Theory of Cosmic Microwave Background Polarization, arXiv e-prints (2004) [astro-ph/0403392]. [38] W. Zhao and Y. Zhang, Analytic approach to the CMB polarization generated by relic gravitational waves, Phys. Rev. D 74 (2006) 083006 [astro-ph/0508345]. [39] Planck Collaboration, Y. Akrami, F. Arroja, M. Ashdown, J. Aumont, C. Baccigalupi et al., Planck 2018 results. X. Constraints on inflation, arXiv e-prints (2018) arXiv:1807.06211 [1807.06211]. [40] F. Argüeso, J. L. Sanz, D. Herranz, M. López-Caniego and J. González-Nuevo, Detection/estimation of the modulus of a vector. Application to point-source detection in polarization data, MNRAS 395 (2009) 649 [0906.0893]. [41] Planck Collaboration, P. A. R. Ade, N. Aghanim, F. Argüeso, C. Armitage-Caplan, M. Arnaud et al., Planck 2013 results. XXVIII. The Planck Catalogue of Compact Sources, A&A 571 (2014) A28 [1303.5088]. [42] Planck Collaboration, P. A. R. Ade, N. Aghanim, F. Argüeso, M. Arnaud, M. Ashdown et al., Planck 2015 results. XXVI. The Second Planck Catalogue of Compact Sources, A&A 594 (2016) A26 [1507.02058]. [43] Planck Collaboration, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi, M. Ballardini et al., Planck 2018 results. XI. Polarized dust foregrounds, arXiv e-prints (2018) arXiv:1801.04945 [1801.04945]. [44] M. Tucci, E. Martínez-González, L. Toffolatti, J. González-Nuevo and G. De Zotti, Predictions on the high-frequency polarization properties of extragalactic radio sources and implications for polarization measurements of the cosmic microwave background, MNRAS 349 (2004) 1267 [astro-ph/0307073]. [45] K. M. Górski, E. Hivon, A. J.
Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke et al., HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere, ApJ 622 (2005) 759 [astro-ph/0409513]. [46] W. T. Freeman and E. H. Adelson, The design and use of steerable filters, IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (1991) 891. [47] Y. Wiaux, L. Jacques and P. Vandergheynst, Correspondence Principle between Spherical and Euclidean Wavelets, ApJ 632 (2005) 15 [astro-ph/0502486]. [48] J. Delabrouille, M. Betoule, J. B. Melin, M. A. Miville-Deschênes, J. Gonzalez-Nuevo, M. Le Jeune et al., The pre-launch Planck Sky Model: a model of sky emission at submillimetre to centimetre wavelengths, A&A 553 (2013) A96 [1207.3675]. [49] L. Bonavera, J. González-Nuevo, F. Argüeso and L. Toffolatti, Statistics of the fractional polarization of compact radio sources in Planck maps, MNRAS 469 (2017) 2401 [1703.09952]. [50] N. Gupta, C. L. Reichardt, P. A. R. Ade, A. J. Anderson, M. Archipley, J. E. Austermann et al., Fractional polarization of extragalactic sources in the 500 deg${}^{2}$ SPTpol survey, MNRAS 490 (2019) 5712 [1907.02156]. [51] M. P. Hobson, A. W. Jones, A. N. Lasenby and F. R. Bouchet, Foreground separation methods for satellite observations of the cosmic microwave background, MNRAS 300 (1998) 1 [astro-ph/9806387]. [52] P. Vielva, E. Martínez-González, L. Cayón, J. M. Diego, J. L. Sanz and L. Toffolatti, Predicted Planck extragalactic point-source catalogue, MNRAS 326 (2001) 181 [astro-ph/0104077]. [53] J. D. Hunter, Matplotlib: A 2D Graphics Environment, Computing in Science and Engineering 9 (2007) 90.
Gluing Theorems for Subharmonic Functions Bulat N. Khabibullin111This study was financially supported by the Russian Science Foundation (project no. 18-01-00002).    Enzhe Menshikova Abstract In our articles of recent years, the technique of gluing two subharmonic functions turned out to be very useful in studying the distribution of the roots or masses of holomorphic or subharmonic functions, respectively. Here we develop and improve this technique. Its applications will be given in our further works. MSC 2010: 31B05, 31A05, 31C05, 32A60 Keywords: subharmonic function, Green’s function, potential, Riesz measure, harmonic continuation, plurisubharmonic function 1 Introduction As usual, $\mathbb{N}:=\{1,2,\dots\}$, $\mathbb{R}$ and $\mathbb{C}$ are the sets of all natural, real and complex numbers, respectively; $\mathbb{N}_{0}:=\{0\}\cup\mathbb{N}$ is the French natural series, and $\mathbb{Z}:=(-\mathbb{N}_{0})\cup\mathbb{N}_{0}$. For $d\in\mathbb{N}$ we denote by $\mathbb{R}^{d}$ the $d$-dimensional real Euclidean space with the standard Euclidean norm $|x|:=\sqrt{x_{1}^{2}+\dots+x_{d}^{2}}$ for $x=(x_{1},\dots,x_{d})\in\mathbb{R}^{d}$ and the distance function $\operatorname{dist}(\cdot,\cdot)$. For a subset $S\subset\mathbb{R}^{d}$, we denote by $\operatorname{har}(S)$ and $\operatorname{sbh}(S)$ the classes of all harmonic (affine for $d=1$) and subharmonic (locally convex for $d=1$) functions on an open set $O\supset S$, respectively. The basis of our note is Gluing Theorem A ([14, Theorem 2.4.5], [9, Corollary 2.4.4]). Let $\mathcal{O}$ be an open set in $\mathbb{R}^{d}$, and let $\mathcal{O}_{0}$ be an open subset of ${\mathcal{O}}$.
If $u\in\operatorname{sbh}({\mathcal{O}})$, $u_{0}\in\operatorname{sbh}({\mathcal{O}}_{0})$, and $$\limsup_{y\to x}u_{0}(y)\leq u(x)\quad\text{for each $x\in{\mathcal{O}}\cap\partial{\mathcal{O}}_{0}$},$$ (1.1) then the formula $$U:=\begin{cases}\max\{u,u_{0}\}&\text{ on ${\mathcal{O}}_{0}$},\\ u&\text{ on ${\mathcal{O}}\setminus{\mathcal{O}}_{0}$}\end{cases}$$ (1.2) defines a subharmonic function on ${\mathcal{O}}$. Important applications of Theorem A can be found in our articles [10], [11], [13]. 2 Basic definitions, notations and conventions The reader can skip this Section 2 and return to it only if necessary. 2.1 Sets, order, topology. For the real line $\mathbb{R}$ with the Euclidean norm-module $|\cdot|$, $$\mathbb{R}_{-\infty}:=\{-\infty\}\cup\mathbb{R},\quad\mathbb{R}_{+\infty}:=\mathbb{R}\cup\{+\infty\},\quad|\pm\infty|:=+\infty,\quad\mathbb{R}_{\pm\infty}:=\mathbb{R}_{-\infty}\cup\mathbb{R}_{+\infty}$$ (2.1${}_{\infty}$) is the extended real line in the end topology with two ends $\pm\infty$, with the order relation $\leq$ on $\mathbb{R}$ complemented by the inequalities $-\infty\leq x\leq+\infty$ for $x\in\mathbb{R}_{\pm\infty}$, with the positive real axis $$\mathbb{R}^{+}:=\{x\in\mathbb{R}\colon x\geq 0\},\quad x^{+}:=\max\{0,x\},\quad x^{-}:=(-x)^{+}\quad\text{for $x\in\mathbb{R}_{\pm\infty}$},$$ (2.1${}^{+}$) $$S^{+}:=\{x\geq 0\colon x\in S\},\quad S_{*}:=S\setminus\{0\}\quad\text{for $S\subset\mathbb{R}_{\pm\infty}$},\quad\mathbb{R}_{*}^{+}:=(\mathbb{R}^{+})_{*},$$ (2.1${}_{*}^{+}$) $$x\cdot(\pm\infty):=\pm\infty=:(-x)\cdot(\mp\infty)\quad\text{for $x\in\mathbb{R}_{*}^{+}\cup\{+\infty\}$},$$ (2.1${}_{\pm}$) $$\frac{x}{\pm\infty}:=0\quad\text{for $x\in\mathbb{R}$},\quad\text{but}\quad 0\cdot(\pm\infty):=0$$ (2.1${}_{0}$) unless otherwise specified.
An open connected (sub-)set of $\mathbb{R}_{\pm\infty}$ is a (sub-)interval of $\mathbb{R}_{\pm\infty}$. The Alexandroff one-point compactification of $\mathbb{R}^{d}$ is denoted by $\mathbb{R}^{d}_{\infty}:=\mathbb{R}^{d}\cup\{\infty\}$. The same symbol $0$ is used, depending on the context, to denote the number zero, the origin, the zero vector, the zero function, the zero measure, etc. Given $x\in\mathbb{R}^{d}$ and $r\in\mathbb{R}_{+\infty}$, we set $$B(x,r):=\{x^{\prime}\in\mathbb{R}^{d}\colon|x^{\prime}-x|<r\},\quad\overline{B}(x,r):=\{x^{\prime}\in\mathbb{R}^{d}\colon|x^{\prime}-x|\leq r\},$$ (2.2B) $$B(\infty,r):=\{x\in\mathbb{R}_{\infty}^{d}\colon|x|>1/r\},\quad\overline{B}(\infty,r):=\{x\in\mathbb{R}_{\infty}^{d}\colon|x|\geq 1/r\},$$ (2.2${}_{\infty}$) $$B(r):=B(0,r),\quad\mathbb{B}:=B(0,1),\quad\overline{B}(r):=\overline{B}(0,r),\quad\overline{\mathbb{B}}:=\overline{B}(0,1),$$ (2.2${}_{1}$) $$B_{\circ}(x,r):=B(x,r)\setminus\{x\},\quad\overline{B}_{\circ}(x,r):=\overline{B}(x,r)\setminus\{x\}.$$ (2.2${}_{\circ}$) Thus, a basis of open (respectively, closed) neighborhoods of a point $x\in\mathbb{R}_{\infty}^{d}$ is given by the open (respectively, closed) balls $B(x,r)$ (respectively, $\overline{B}(x,r)$) centered at $x$ with radius $r>0$. Given a subset $S$ of $\mathbb{R}^{d}_{\infty}$, the closure $\operatorname{clos}S$, the interior $\operatorname{int}S$ and the boundary $\partial S$ will always be taken relative to $\mathbb{R}^{d}_{\infty}$. For $S^{\prime}\subset S\subset\mathbb{R}^{d}_{\infty}$ we write $S^{\prime}\Subset S$ if $\operatorname{clos}S^{\prime}\subset\operatorname{int}S$. An open connected (sub-)set of $\mathbb{R}^{d}_{\infty}$ is a (sub-)domain of $\mathbb{R}^{d}_{\infty}$. 2.2 Functions. Let $X$ and $Y$ be sets. We denote by $Y^{X}$ the set of all functions $f\colon X\to Y$.
The value $f(x)\in Y$ of an arbitrary function $f\in Y^{X}$ is not necessarily defined for all $x\in X$. The restriction of a function $f$ to $S\subset X$ is denoted by $f\bigm{|}_{S}$. We set $$\mathbb{R}_{-\infty}^{X}:=(\mathbb{R}_{-\infty})^{X},\quad\mathbb{R}_{+\infty}^{X}:=(\mathbb{R}_{+\infty})^{X},\quad\mathbb{R}_{\pm\infty}^{X}:=(\mathbb{R}_{\pm\infty})^{X}.$$ (2.3) A function $f\in\mathbb{R}_{\pm\infty}^{X}$ is said to be extended numerical. For extended numerical functions $f$, we set $$\operatorname{Dom}_{-\infty}f:=f^{-1}(\mathbb{R}_{-\infty})\subset X,\quad\operatorname{Dom}_{+\infty}f:=f^{-1}(\mathbb{R}_{+\infty})\subset X,\quad\operatorname{Dom}f:=f^{-1}(\mathbb{R}_{\pm\infty})=\operatorname{Dom}_{-\infty}f\cup\operatorname{Dom}_{+\infty}f\subset X,\quad\operatorname{dom}f:=f^{-1}(\mathbb{R})=\operatorname{Dom}_{-\infty}f\cap\operatorname{Dom}_{+\infty}f\subset X.$$ (2.4) For $f,g\in\mathbb{R}_{\pm\infty}^{X}$ we write $f=g$ if $\operatorname{Dom}f=\operatorname{Dom}g=:D$ and $f(x)=g(x)$ for all $x\in D$, and we write $f\leq g$ if $f(x)\leq g(x)$ for all $x\in D$. For $f\in\mathbb{R}_{\pm\infty}^{X}$, $g\in\mathbb{R}_{\pm\infty}^{Y}$ and a set $S$, we write ‘‘$f=g$ on $S$’’ or ‘‘$f\leq g$ on $S$’’ if $f\bigm{|}_{S\cap D}=g\bigm{|}_{S\cap D}$ or $f\bigm{|}_{S\cap D}\leq g\bigm{|}_{S\cap D}$, respectively. For $f\in F\subset\mathbb{R}_{\pm\infty}^{X}$, we set $f^{+}\colon x\mapsto\max\{0,f(x)\}$, $x\in\operatorname{Dom}f$, and $F^{+}:=\{f\geq 0\colon f\in F\}$. So, $f$ is positive on $X$ if $f=f^{+}$, and we write ‘‘$f\geq 0$ on $X$’’.
The class $\operatorname{sbh}(O)$ contains the minus-infinity function $\boldsymbol{-\infty}\colon x\mapsto-\infty$, identically equal to $-\infty$; $$\operatorname{sbh}_{*}(O):=\operatorname{sbh}(O)\setminus\{\boldsymbol{-\infty}\},\quad\operatorname{sbh}^{+}(O):=(\operatorname{sbh}(O))^{+}.$$ (2.5) If $o\notin O\ni\infty$, then we can use the inversion in the sphere $\partial B(o,1)$ centered at the point $o\in\mathbb{R}^{d}$: $$\star_{o}\colon x\longmapsto x^{\star_{o}}:=\begin{cases}o&\text{for $x=\infty$},\\ o+\frac{1}{|x-o|^{2}}\,(x-o)&\text{for $x\neq o,\infty$},\\ \infty&\text{for $x=o$},\end{cases}\qquad\star:=\star_{0}=:\star_{\infty},$$ (2.6$\star$) together with the Kelvin transform [8, Ch. 2, 6; Ch. 9] $$u^{\star_{o}}(x^{\star_{o}})=|x-o|^{d-2}u(x),\quad x^{\star_{o}}\in O^{\star_{o}}:=\{x^{\star_{o}}\colon x\in O\},$$ (2.6u) $$\Bigl{(}u\in\operatorname{sbh}(O)\Bigr{)}\Longleftrightarrow\Bigl{(}u^{\star_{o}}\in\operatorname{sbh}(O^{\star_{o}})\Bigr{)}.$$ (2.6s) For a subset $S\subset\mathbb{R}_{\infty}^{d}$, the classes $\operatorname{har}(S)$ and $\operatorname{sbh}(S)$ consist of the restrictions to $S$ of harmonic and subharmonic functions in some (in general, its own for each function) open set $O\subset\mathbb{R}_{\infty}^{d}$ containing $S$. The class $\operatorname{sbh}_{*}(S)$ is defined similarly to the class in (2.5). By ${\rm const}_{a_{1},a_{2},\dots}\in\mathbb{R}$ we denote constants, and also constant functions, that in general depend on $a_{1},a_{2},\dots$ and, unless otherwise specified, only on them; the dependence on the dimension $d$ of $\mathbb{R}_{\infty}^{d}$ will not be specified or discussed; ${\rm const}^{+}_{\dots}\geq 0$. 3 General Gluing Theorems Gluing Theorem 1.
Let $O$ and $O_{0}$ be a pair of open subsets in $\mathbb{R}^{d}$, and let $v\in\operatorname{sbh}(O)$ and $v_{0}\in\operatorname{sbh}(O_{0})$ be a pair of functions such that $$\limsup_{\stackrel{{\scriptstyle y\to x}}{{y\in O_{0}\cap O}}}v(y)\leq v_{0}(x)\quad\text{for each $x\in O_{0}\cap\partial O$},$$ (3.1${}_{0}$) $$\limsup_{\stackrel{{\scriptstyle y\to x}}{{y\in O_{0}\cap O}}}v_{0}(y)\leq v(x)\quad\text{for each $x\in O\cap\partial O_{0}$}.$$ (3.1${}_{1}$) Then the function $$V:=\begin{cases}v_{0}&\text{ on $O_{0}\setminus O$},\\ \sup\{v_{0},v\}&\text{ on $O_{0}\cap O$},\\ v&\text{ on $O\setminus O_{0}$,}\end{cases}$$ (3.2) is subharmonic on $O_{0}\cup O$. Proof. It is enough to apply Gluing Theorem A twice: [O${}_{0}$] to one pair of functions $$u:=v_{0}\in\operatorname{sbh}(O_{0}),\quad{\mathcal{O}}:=O_{0};\qquad u_{0}:=v\bigm{|}_{O\cap O_{0}}\in\operatorname{sbh}(O\cap O_{0}),\quad{\mathcal{O}}_{0}:=O\cap O_{0}\subset O_{0},$$ under condition (3.1${}_{0}$) realizing condition (1.1); [O] to another pair of functions $$u:=v\in\operatorname{sbh}(O),\quad{\mathcal{O}}:=O;\qquad u_{0}:=v_{0}\bigm{|}_{O_{0}\cap O}\in\operatorname{sbh}(O_{0}\cap O),\quad{\mathcal{O}}_{0}:=O_{0}\cap O\subset O,$$ under condition (3.1${}_{1}$) realizing condition (1.1). These two glued subharmonic functions coincide on the open intersection $O\cap O_{0}$ and give the subharmonic function $V$ defined in (3.2). ∎ Gluing Theorem 2 (quantitative version).
Let $O$ and $O_{0}$ be a pair of open subsets in $\mathbb{R}^{d}$, and $v\in\operatorname{sbh}(O)$ and $g\in\operatorname{sbh}(O_{0})$ be a pair of functions such that $$\displaystyle-\infty<m_{v}\leq$$ $$\displaystyle\inf_{x\in O\cap\partial O_{0}}v(x),$$ (3.5m) $$\displaystyle\sup_{x\in O_{0}\cap\partial O}\limsup_{\stackrel{{\scriptstyle y% \to x}}{{y\in O_{0}\cap O}}}v(y)$$ $$\displaystyle\leq M_{v}<+\infty,$$ (3.5M) $$\displaystyle-\infty<\sup_{x\in O\cap\partial O_{0}}\limsup_{\stackrel{{% \scriptstyle y\to x}}{{y\in O\cap O_{0}}}}g(y)\leq m_{g}$$ $$\displaystyle<M_{g}\leq\inf_{x\in O_{0}\cap\partial O}g(x)<+\infty.$$ (3.5g) If we choose the function $$v_{0}:=\frac{M_{v}^{+}+m_{v}^{-}}{M_{g}-m_{g}}(2g-M_{g}-m_{g})\in\operatorname% {sbh}(O_{0}),$$ (3.6) then the function $V$ from (3.2) is subharmonic on $O_{0}\cup O$. Proof. The function $v_{0}$ from definition (3.6) is subharmonic on $O_{0}$ since this function $v_{0}$ has a form ${\rm const}^{+}g+{\rm const}$ with ${\rm const}^{+}\in\mathbb{R}^{+}$, ${\rm const}\in\mathbb{R}$. In addition, by construction (3.6), for each $x\in O_{0}\cap\partial O$, we obtain $$\displaystyle\limsup_{\stackrel{{\scriptstyle y\to x}}{{y\in O_{0}\cap O}}}v(y% )\overset{\eqref{g01vOm}-\eqref{g01vOM}}{\leq}M_{v}^{+}+m_{v}^{-}=\frac{M_{v}^% {+}+m_{v}^{-}}{M_{g}-m_{g}}(2M_{g}-M_{g}-m_{g})\\ \displaystyle\overset{\eqref{g01g}}{\leq}\frac{M_{v}^{+}+m_{v}^{-}}{M_{g}-m_{g% }}\Bigl{(}2\inf_{x\in O_{0}\cap\partial O}g(x)-M_{g}-m_{g}\Bigr{)}\\ \displaystyle=\inf_{x\in O_{0}\cap\partial O}\frac{M_{v}^{+}+m_{v}^{-}}{M_{g}-% m_{g}}\Bigl{(}2g(x)-M_{g}-m_{g}\Bigr{)}\\ \displaystyle\overset{\eqref{v0g}}{=}\inf_{O_{0}\cap\partial O}v_{0}\leq v_{0}% (x),\quad\forall x\in O_{0}\cap\partial O.$$ Thus, we have (3.1${}_{0}$). 
Besides, by construction (3.6), for each $x\in O\cap\partial O_{0}$, we obtain $$\limsup_{\stackrel{{\scriptstyle y\to x}}{{y\in O_{0}\cap O}}}v_{0}(y)\leq\frac{M_{v}^{+}+m_{v}^{-}}{M_{g}-m_{g}}\biggl{(}2\limsup_{\stackrel{{\scriptstyle y\to x}}{{y\in O_{0}\cap O}}}g(y)-M_{g}-m_{g}\biggr{)}\leq\frac{M_{v}^{+}+m_{v}^{-}}{M_{g}-m_{g}}(2m_{g}-M_{g}-m_{g})=-(M_{v}^{+}+m_{v}^{-})\leq-m_{v}^{-}\leq m_{v}\leq\inf_{x\in O\cap\partial O_{0}}v(x)\leq v(x),\quad\forall x\in O\cap\partial O_{0},$$ where the second inequality uses (3.5g) and the last inequalities use (3.5m). Thus, we have (3.1${}_{1}$), and our Gluing Theorem 2 follows from Gluing Theorem 1. ∎ Remark 1. The theorems of this section carry over easily to the cone of plurisubharmonic functions [9, Corollary 2.9.15]. We have sought to formulate our theorems and their proofs so as to allow their rapid transfer to plurisubharmonic functions and to abstract potential theories with more general constructions, based on the theories of harmonic spaces and sheaves in the spirit of the books [4], [3], [5], [2], [1], etc. 4 Gluing with the Green Function Definition 1 ([14], [7], [12]).
For $q\in\mathbb{R}$, we set $$k_{q}(t):=\begin{cases}\log t&\text{ if $q=0$},\\ -\operatorname{sgn}(q)t^{-q}&\text{ if $q\in\mathbb{R}_{*}$},\end{cases}\qquad t\in\mathbb{R}_{*}^{+},$$ (4.1k) $$K_{d-2}(x,y):=\begin{cases}k_{d-2}\bigl{(}|x-y|\bigr{)}&\text{ if $x\neq y$},\\ -\infty&\text{ if $x=y$ and $d\geq 2$},\\ 0&\text{ if $x=y$ and $d=1$},\end{cases}\quad(x,y)\in\mathbb{R}^{d}\times\mathbb{R}^{d},$$ (4.1K) so that $-K_{d-2}(x,o)\to+\infty$ as $x\to o$ for $d\geq 2$, in agreement with the asymptotics (4.5o) below. Recall that a set $E\subset\mathbb{R}^{d}$ is called polar if there is a function $u\in\operatorname{sbh}_{*}(\mathbb{R}^{d})$ such that $$\Bigl{(}E\subset(-\infty)_{u}:=\{x\in\mathbb{R}^{d}\colon u(x)=-\infty\}\Bigr{)}\Longleftrightarrow\Bigl{(}\text{Cap}^{*}E=0\Bigr{)},$$ (4.2) where the set $(-\infty)_{u}$ is the minus-infinity $G_{\delta}$-set of the function $u$, and $$\text{Cap}^{*}(S):=\inf_{S\subset O=\operatorname{int}O}\sup_{\stackrel{{\scriptstyle C=\operatorname{clos}C\Subset O}}{{\mu\in\operatorname{Meas}^{1+}(C)}}}k_{d-2}^{-1}\left(\iint K_{d-2}(x,y)\,{\rm d}\mu(x)\,{\rm d}\mu(y)\right)$$ is the outer capacity of $S\subset\mathbb{R}^{d}$. Let $\mathcal{O}$ be an open proper subset of $\mathbb{R}_{\infty}^{d}$. Consider a point $o\in\mathbb{R}^{d}$ and subsets $S_{0},S\subset\mathbb{R}_{\infty}^{d}$ such that $$\mathbb{R}^{d}\ni o\in\operatorname{int}S_{0}\subset S_{0}\Subset S\subset\operatorname{int}\mathcal{O}=\mathcal{O}\subset\mathbb{R}^{d}_{\infty}\neq\mathcal{O}.$$ (4.3) Let $D$ be a domain in $\mathbb{R}_{\infty}^{d}$ such that $$o\in\operatorname{int}S_{0}\subset S_{0}\Subset D\Subset S\subset\mathcal{O}.$$ (4.4) Such a domain $D$ possesses the generalized Green’s function $g_{D}(\cdot,o)$ (see [7, 5.7.2], [8, Ch.
5, 2]) with pole at the point $o\in D$ from (4.4), described by the following properties: $$g_{D}(\cdot,o)\in\operatorname{sbh}^{+}\bigl{(}\mathbb{R}_{\infty}^{d}\setminus\{o\}\bigr{)}\subset\operatorname{sbh}^{+}\bigl{(}\mathcal{O}\setminus\{o\}\bigr{)},$$ (4.5s) $$g_{D}(\cdot,o)=0\quad\text{ on $\mathbb{R}_{\infty}^{d}\setminus\operatorname{clos}D\supset\mathcal{O}\setminus\operatorname{clos}D\supset\mathcal{O}\setminus S$},$$ (4.5${}_{0}$) $$g_{D}(\cdot,o)\in\operatorname{har}\bigl{(}D\setminus\{o\}\bigr{)}\subset\operatorname{har}\bigl{(}S_{0}\setminus\{o\}\bigr{)}\subset\operatorname{har}\bigl{(}B_{\circ}(o,r_{o})\bigr{)}$$ (4.5h) with a number $r_{o}\in\mathbb{R}_{*}^{+}$, $g_{D}(o,o):=+\infty$, and $$g_{D}(x,o)=-K_{d-2}(x,o)+O(1)\quad\text{when $o\neq x\to o$}.$$ (4.5o) Moreover, the strictly positive number $$0<M_{g}:=\inf_{x\in\partial S_{0}}g_{D}(x,o)={\rm const}^{+}_{o,S_{0},D,S}$$ (4.5M) depends only on $S_{0},S,D$ and the pole $o$, and, by the minimum principle, we have $$g_{D}(x,o)-M_{g}\geq 0\quad\text{for all $x\in S_{0}\setminus\{o\}$}.$$ (4.5M+) Properties (4.5) of the generalized Green’s function $g_{D}(\cdot,o)$ from (4.3)–(4.4) are well known [14, 4.4], [7, 5.7], and the strict positivity of $M_{g}$ in (4.5M) follows from the fact that $0<g_{D}(\cdot,o)\in C\bigl{(}D\setminus\{o\}\bigr{)}$ on $D\setminus\{o\}$ and $\partial S_{0}$ is a compact subset of $D\setminus\{o\}$. Gluing Theorem 3.
Under conditions (4.3), suppose that a function $v\in\operatorname{sbh}(\mathcal{O}\setminus S_{0})$ satisfies the two-sided constraints $$-\infty<m_{v}\leq\inf_{S\setminus S_{0}}v\leq\sup_{S\setminus S_{0}}v\leq M_{v}<+\infty.$$ (4.6) Every domain $D$ with inclusions (4.4) possesses the generalized Green’s function $g_{D}(\cdot,o)$ with pole $o\in\operatorname{int}S_{0}$, properties (4.5) and constant $M_{g}$ of (4.5M) such that the choice of the function $$v_{0}:=\frac{M_{v}^{+}+m_{v}^{-}}{M_{g}}\bigl{(}2g_{D}(\cdot,o)-M_{g}\bigr{)}\in\operatorname{sbh}\bigl{(}\mathbb{R}_{\infty}^{d}\setminus\{o\}\bigr{)}\subset\operatorname{sbh}\bigl{(}\operatorname{int}S\setminus\{o\}\bigr{)}$$ (4.7v) defines the subharmonic function $$V:=\begin{cases}v_{0}&\text{ on $S_{0}$},\\ \sup\{v_{0},v\}&\text{ on $S\setminus S_{0}$},\\ v&\text{ on $\mathcal{O}\setminus S$,}\end{cases}\qquad\text{from $\operatorname{sbh}_{*}\bigl{(}\mathcal{O}\setminus\{o\}\bigr{)}$,}$$ (4.7V) satisfying the conditions $$V\in\operatorname{har}\bigl{(}S_{0}\setminus\{o\}\bigr{)}\subset\operatorname{har}\bigl{(}B_{\circ}(o,r_{o})\bigr{)}\quad\text{with a number $r_{o}\in\mathbb{R}_{*}^{+}$},$$ (4.7h) $$V\geq 0\quad\text{on $S_{0}$,}$$ (4.7+) $$V(x)=-2\frac{M_{v}^{+}+m_{v}^{-}}{M_{g}}K_{d-2}(x,o)+O(1)\quad\text{when $o\neq x\to o$}.$$ (4.7o) Proof. It is enough to apply Gluing Theorem 2 with $$O:=\mathcal{O}\setminus\operatorname{clos}S_{0},\quad O_{0}:=\operatorname{int}S\setminus\{o\},\quad g:=g_{D}(\cdot,o),\quad m_{g}:=0,$$ where conditions (3.5m)–(3.5g) follow from (4.6) and the properties (4.5) of $g_{D}(\cdot,o)$.
∎ Given $S\subset\mathbb{R}^{d}$ and $r\in\mathbb{R}^{+}$, the set $$S^{\cup r}:=S\bigcup\bigcup\limits_{x\in S}B(x,r)$$ (4.8) is called an outer $r$-parallel set [15, Ch. I, § 4]. It is easy to establish the following Proposition 1. Let a subset $S_{0}\Subset\mathbb{R}^{d}$ be connected, and let $r\in\mathbb{R}_{*}^{+}$. Then $S_{0}^{\cup r}$ is connected, $S_{0}\Subset S_{0}^{\cup r}$, and there is a domain $D\subset S_{0}^{\cup r}$, regular for the Dirichlet problem, such that $$S_{0}^{\cup(r/3)}\Subset\partial D\Subset S_{0}^{\cup(2r/3)}.$$ (4.9) For $v\in L^{1}\bigl{(}\partial B(x,r)\bigr{)}$, we define the average value of $v$ at the point $x$ as $$v^{\circ r}(x):=\int_{\partial B(x,r)}v\,{\rm d}\sigma_{d-1},$$ (4.10) where $\sigma_{d-1}$ is the surface measure on the sphere $\partial B(x,r)$, normalized so that $\sigma_{d-1}\bigl{(}\partial B(x,r)\bigr{)}=1$. Gluing Theorem 4. Let $\mathcal{O}\subset\mathbb{R}^{d}$ be an open subset, and let $S_{0}\subset\mathbb{R}^{d}$ be a connected set such that there is a point $$o\in\operatorname{int}S_{0}\subset S_{0}\Subset\mathcal{O}.$$ (4.11) Let $r\in\mathbb{R}^{+}$ be a number such that $$0<r<\operatorname{dist}(S_{0},\partial\mathcal{O}),$$ (4.12) and let $D$ be a domain from Proposition 1 satisfying (4.9). Let $v\in\operatorname{sbh}_{*}(\mathcal{O}\setminus S_{0})$ be a function satisfying the upper constraint $$v\leq M_{v}<+\infty\quad\text{on $S_{0}^{\cup r}\setminus S_{0}$},$$ (4.13M) and set $$m_{v}:=\inf\bigl{\{}v^{\circ(r/3)}(x)\colon x\in S_{0}^{\cup(2r/3)}\setminus S_{0}^{\cup(r/3)}\bigr{\}}.$$ (4.13m) Then $m_{v}>-\infty$, and there is a function $V\in\operatorname{sbh}_{*}(\mathcal{O}\setminus\{o\})$ satisfying (4.7h)–(4.7+), i.
e., $$V\in\operatorname{har}^{+}\bigl{(}S_{0}\setminus\{o\}\bigr{)},$$ (4.14h) $$V=v\quad\text{on $\mathcal{O}\setminus\operatorname{clos}S_{0}^{\cup r}$},$$ (4.14=) and such that $$V(x)=-2\frac{M_{v}^{+}+m_{v}^{-}}{M_{g}}K_{d-2}(x,o)+O(1)\quad\text{when $o\neq x\to o$},$$ (4.14o) where $$0<M_{g}:=\inf_{x\in\partial S_{0}^{\cup(r/3)}}g_{D}(x,o)={\rm const}^{+}_{o,S_{0},r,D}.$$ (4.14g) Proof. We have $m_{v}>-\infty$ since the function $v^{\circ(r/3)}$ is continuous on $S_{0}^{\cup(2r/3)}\setminus S_{0}^{\cup(r/3)}$ [8, Theorem 1.14]. The function $v$ can be transformed by the Perron–Wiener–Brelot method (on the open ‘‘layer’’ $S_{0}^{\cup r}\setminus\operatorname{clos}S_{0}$, from the boundary of this layer) into a new subharmonic function $\widetilde{v}\geq v$ on $\mathcal{O}\setminus S_{0}$ such that $\widetilde{v}\in\operatorname{har}(S_{0}^{\cup r}\setminus\operatorname{clos}S_{0})$ and $\widetilde{v}=v$ on $\mathcal{O}\setminus\operatorname{clos}S_{0}^{\cup r}$. It follows from the principle of subordination (domination) for harmonic continuations and the maximum principle that $$-\infty<m_{v}\leq\widetilde{v}\quad\text{on $S_{0}^{\cup(2r/3)}\setminus S_{0}^{\cup(r/3)}$},\qquad\widetilde{v}\leq M_{v}\quad\text{on $S_{0}^{\cup r}\setminus S_{0}$}.$$ (4.15) If, in Gluing Theorem 3, we take the set $S_{0}^{\cup(r/3)}$ in the role of $S_{0}$ and the set $S_{0}^{\cup(2r/3)}$ instead of $S$, then, by construction (4.7v)–(4.7V) and conditions (4.7h)–(4.7o), we get the series of conclusions (4.14) of Theorem 4. ∎ References [1] Arsove M., Leutwiler H. Algebraic Potential Theory, Memoirs of the American Mathematical Society, 23:226, Providence, Rhode Island, 1980. [2] Bliedner J., Hansen W. Potential Theory. An Analytic and Probabilistic Approach to Balayage, Berlin: Springer, 1986.
[3] Bauer H. Harmonische Räume und ihre Potentialtheorie, Lecture Notes in Mathematics, 22, Berlin–Heidelberg–New York: Springer, 1966. [4] Boboc N., Bucur G. Order and convexity in potential theory. In: Potential Theory: Surveys and Problems, Lecture Notes in Mathematics, 1344, Berlin–Heidelberg: Springer, 1988. [5] Boboc N., Bucur G., Cornea A., Höllein H. Order and Convexity in Potential Theory: H-Cones, Lecture Notes in Mathematics, 853, Berlin: Springer-Verlag, 1981. [6] Constantinescu C., Cornea A. Potential Theory on Harmonic Spaces, Berlin–Heidelberg–New York: Springer, 1972. [7] Hayman W. K., Kennedy P. B. Subharmonic Functions, 1, London etc.: Acad. Press, 1976. [8] Helms L. L. Introduction to Potential Theory, New York: Wiley-Interscience, 1969. [9] Klimek M. Pluripotential Theory, London Mathematical Society Monographs, 6, New York: Clarendon Press, 1991. [10] Khabibullin B. N., Khabibullin F. B. On the Distribution of Zero Sets of Holomorphic Functions. III. Inversion Theorems, Funktsional. Anal. i Prilozhen., 53:2 (2019), 42–58. [11] Khabibullin B. N., Rozit A. P. On the Distribution of Zero Sets of Holomorphic Functions, Funktsional. Anal. i Prilozhen., 52:1 (2018), 26–42; Funct. Anal. Appl., 52:1 (2018), 21–34. [12] Landkof N. S. Foundations of Modern Potential Theory, New York: Springer-Verlag, 1972. [13] Menshikova E. B., Khabibullin B. N. On the Distribution of Zero Sets of Holomorphic Functions. II, Funktsional. Anal. i Prilozhen., 53:1 (2019), 84–87; Funct. Anal. Appl., 53:1 (2019). [14] Ransford Th. Potential Theory in the Complex Plane, Cambridge: Cambridge University Press, 1995. [15] Santaló L. A. Integral Geometry and Geometric Probability, Addison-Wesley, Advanced Book Program, 1976. Russian Federation, Bashkortostan, Ufa, Bashkir State University. E-mail: [email protected], [email protected]
Dynamical effects of interactions and the Tully-Fisher relation for Hickson compact groups C. Mendes de Oliveira 11affiliation: Universidade de São Paulo, IAG, Departamento de Astronomia, Rua do Matão 1226, 05508-900, São Paulo, SP, Brazil 22affiliation: Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching b. München, Germany 33affiliation: Universitäts-Sternwarte der LMU, Scheinerstrasse 1, D-81679 München, Germany , P. Amram 44affiliation: Observatoire Astronomique Marseille-Provence & Laboratoire d’Astrophysique de Marseille, 2 Place Le Verrier, 13248 Marseille Cedex 04, France , H. Plana 55affiliation: Universidade Estadual de Santa Cruz, Dept. Ciencias Exatas e Tecnologicas, Rodovia Ilheus-Itabuna, km. 16, 45650-000, Ilhéus, BA, Brazil C. Balkowski 66affiliation: Observatoire de Paris, GEPI, CNRS and Universite Paris 7, 5 Place Jules Janssen, F-92195 Meudon Cedex, France [email protected] Abstract We investigate the properties of the $B$-band Tully-Fisher (T-F) relation for 25 compact group galaxies, using $V_{\rm max}$ derived from 2-D velocity maps. Our main result is that the majority of the Hickson Compact Group galaxies lie on the T-F relation. However, about 20% of the galaxies, including the lowest-mass systems, have higher $B$ luminosities for a given mass, or alternatively, a mass which is too low for their luminosities. We favour a scenario in which outliers have been brightened due to either enhanced star formation or merging. Alternatively, the T-F outliers may have undergone truncation of their dark halo due to interactions. It is possible that in some cases, both effects contribute. The fact that the $B$-band T-F relation is similar for compact group and field galaxies tells us that these galaxies show common mass-to-size relations and that the halos of compact group galaxies have not been significantly stripped inside $R_{25}$. 
We find that 75% of the compact group galaxies studied (22 out of 29) have highly peculiar velocity fields. Nevertheless, a careful choice of inclination, position angle and center, obtained from the velocity field, and an average of the velocities over a large sector of the galaxy enabled the determination of fairly well-behaved rotation curves for the galaxies. However, two of the compact group galaxies which are the most massive members in M51–like pairs, HCG 91a and HCG 96a, have very asymmetric rotation curves, with one arm rising and the other one falling, indicating, most probably, a recent perturbation by the small close companions. Galaxies: individual ( galaxies: kinematics and dynamics — galaxies: interactions — galaxies: ISM — galaxies: intergalactic medium — instrumentation: interferometers) 1 Introduction The influence of environmental effects on the internal dynamics and matter distribution of compact group galaxies has not yet been clearly established, mostly due to lack of reliable kinematic data. A recent study of the internal kinematics of 30 galaxies by Nishiura et al. (2000, hereafter N2000) found that asymmetric and peculiar rotation curves are more frequently seen in the Hickson Compact Groups (HCG) spiral galaxies than in field or cluster spirals and the dynamical properties of the galaxies do not seem to correlate with any group or galaxy parameter. An older but very influential study is that by Rubin et al. (1991, hereafter R1991), who analyzed rotation curves for 32 Hickson compact group galaxies and found that 2/3 of them had peculiar rotation curves. 
For the subsample of galaxies for which rotation curves could be derived, they found a large offset of the T-F relation with respect to the field relation, in the sense that galaxies in compact groups have “too low velocities for their luminosities or, alternatively, luminosities which are overbright for their rotation velocities.” This suggested to the authors that spiral galaxies in compact groups have mass-to-light ratios about 30% lower than those of field galaxies, which, in turn, could be explained if compact group galaxies have smaller dark halos than their field counterparts. Given that compact groups are environments where tidal encounters are common, it may be expected that interactions have stripped or disrupted the galaxy dark halos at some level. These conclusions have important consequences for the determination of group lifetimes and for understanding how compact groups evolve and eventually merge. We revisit this important problem using a new dataset for 25 galaxies, of which 13 are in common with the samples of R1991, N2000, or both. In this paper we re-examine the T-F relation for compact group galaxies. Our study differs from the previous work in that it uses rotation curves obtained from two-dimensional velocity fields. In a number of cases this allows a more accurate determination of the rotation curves than was possible with slit spectroscopy. Additionally, a fuller characterization of the kinematics of each galaxy is possible. We show that, with these new data, the T-F relation for compact group galaxies is similar to that for galaxies in less dense environments. The exceptions to this conclusion are seen for some of the least massive galaxies in our sample. This paper is organized as follows. In Section 2 we present the set of rotation curves that are used.
In Section 3 we illustrate and discuss comparisons of rotation curves for several galaxies in common with R1991 and N2000, and we show that 2-D spectroscopy is needed if one wants to accurately describe the kinematics of interacting galaxies. Sections 4 and 5 present the results on the T-F relation and a discussion, respectively. 2 Data The data used in this paper are gathered from three publications which studied the kinematics of galaxies in ten compact groups (HCG 10, 16, 19, 87, 88, 89, 90, 91, 96, 100). A detailed description of the observations and data reduction can be found in Mendes de Oliveira et al. (1998), Plana et al. (2003) and Amram et al. (2003). In summary, our dataset consists of H$\alpha$ emission-line velocity maps obtained with a Fabry-Perot system, from which rotation curves were obtained. For the T-F study we excluded from the sample all the elliptical galaxies. We also excluded galaxies HCG 10a, 10c, 16d and 100a, because their rotation curves have a very short extension, well short of R${}_{25}$, and galaxies HCG 16b and HCG 100b, due to their extremely peculiar rotation curves. We included HCG 19a in our sample, although Hickson (1993) classifies it as an E2 galaxy, because we have reclassified it as an S0 based on its kinematic properties. In addition, we included in our sample unpublished data for two other galaxies (HCG 07c and HCG 79d). We show in Figs. 1a and 1b the rotation curves for the galaxies in the sample studied in this paper. The x-axis plots r/R${}_{25}$, the galaxy radius along its major axis normalized by R${}_{25}$, the length of the major axis at the 25 mag arcsec${}^{-2}$ isophote (as given by Hickson 1993; note, however, that in Plana et al. 2003 and Amram et al. 2003 the value for R${}_{25}$ was taken from de Vaucouleurs et al. 1991, the RC3, resulting in slightly different numbers from those used here). In order to obtain a V${}_{max}$ for use in the T-F relation (represented by a black dot in each subpanel of Figs.
1a and 1b) we have computed the maximum velocity of the average velocity curve (single average of the values for the receding and approaching sides). V${}_{max}$ is not the best kinematic parameter to be used in the T-F relation, since V${}_{flat}$, the velocity of the flat portion of the curve or V${}_{2d}$, the velocity at two times the galaxy scale length are known to yield less scatter in the T-F relation. However, the use of V${}_{max}$ allowed us to use the largest possible number of compact group galaxies. The control sample used for the T-F comparison was the sample of Sb and Sc galaxies from Courteau (1997), where we used their “Vmax” as the equivalent to the maximum velocity determined by us. We also used the sample of galaxies in the Ursa Major cluster (Verheijen 2001), in order to fill in the lower end of the mass sequence in the T-F diagram. We note that our values of V${}_{max}$ are adjusted by the cosmological correction (1+z) as are also the values given for the galaxies in the control samples. All velocities were corrected for the same Virgocentric infall (Paturel et al. 1997) when distances were obtained using the Hubble law (for our sample and Courteau’s sample). We note that in Fig. 1 the rotation of HCG 87a is one-sided because half of this almost edge-on galaxy is not observed due to a strong dust lane. 3 Comparisons with previous long-slit spectroscopy The 2-D velocity maps obtained with a Fabry-Perot instrument allowed us to mimic a slit through our data in order to recover the rotation curves obtained from long-slit spectroscopy. In this way, we could make a direct comparison with previous rotation curves obtained by other authors in order to investigate the reason for existing differences in the shapes and amplitudes of the rotation curves between different studies, for a given galaxy. We give an example in Fig. 
2, where it is clear why rotation curves derived with long-slit spectroscopy do not always correspond to the overall kinematics of the galaxies. The rotation curve derived by R1991 for HCG 88a (reproduced in Fig. 2) shows asymmetries with disagreement between the two sides of the galaxy (a bifurcation of the curve at a radius of 10 arcsec). We overplot on Fig. 2 the Fabry-Perot data for HCG 88a, with values restricted to a narrow cone around the galaxy major axis (to mimic a slit), using the particular values of center, inclination and position angle derived by R1991 photometrically. Inspecting the figure, we see that for radii larger than 10 arcsec there is good agreement between the slit-spectroscopy and Fabry-Perot data, if the photometrically derived center, inclination and position angle are used. However, the overall shape of the curve changes when such parameters are derived from the velocity map and a large sector of the galaxy is averaged (the resulting rotation curve is shown in Plana et al. 2003 and in Fig. 1). In particular, if the kinematic parameters are used and velocities over a large sector of the galaxy are averaged, the bifurcation present in Fig. 2, at r $\sim$ 10 arcsec, disappears, and the galaxy has normal kinematics, with both sides of the curve matching. In order to understand the magnitude of the variations of the parameters measured from the kinematic and photometric data, we list in Table 2 values for the inclination, PA and differences between centers measured from the continuum images and velocity fields. A detailed discussion of the method used to obtain the center, PA of the major axis, and inclination of the galaxies is given in Amram et al. (1996). We included in the table not only the galaxies used in the Tully-Fisher relation but also those that were considered too peculiar or for which the gas extension was too short. We list in the last four columns the values of inclination and PA given by R1991 and N2000, when available.
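To make concrete how a rotation curve is extracted from a 2-D velocity map, here is a minimal synthetic sketch (ours, not the authors' reduction pipeline; the velocity-field model, grid scale, and function names are illustrative assumptions): circular rotation projected as v_obs = v_sys + V_rot(r) cos(theta) sin(i) is deprojected and averaged in radial bins over a wide sector around the major axis; shrinking `sector_deg` to a few degrees mimics a long slit.

```python
import numpy as np

def model_velocity_field(n=201, pix=0.2, v_flat=150.0, r_turn=5.0,
                         inc_deg=45.0, v_sys=0.0):
    """Synthetic observed velocity field of a circularly rotating disk seen
    at inclination i, major axis along x:
    v_obs = v_sys + V_rot(r) * cos(theta) * sin(i)."""
    half = n // 2
    y_pix, x_pix = np.mgrid[-half:half + 1, -half:half + 1] * pix  # arcsec
    i = np.radians(inc_deg)
    x_d, y_d = x_pix, y_pix / np.cos(i)            # deproject the minor axis
    r = np.hypot(x_d, y_d)                         # radius in the disk plane
    cos_t = np.divide(x_d, r, out=np.zeros_like(r), where=r > 0)
    v_rot = v_flat * r / np.hypot(r, r_turn)       # rises, then flattens
    return r, cos_t, v_sys + v_rot * cos_t * np.sin(i)

def rotation_curve(r, cos_t, v_obs, inc_deg=45.0, v_sys=0.0,
                   sector_deg=45.0, nbins=20):
    """Deproject |v_obs - v_sys| / (|cos(theta)| sin(i)) and average it in
    radial bins, keeping only pixels within `sector_deg` of the major axis.
    A small `sector_deg` (a few degrees) mimics a long slit."""
    i = np.radians(inc_deg)
    keep = np.abs(cos_t) > np.cos(np.radians(sector_deg))
    v_dep = np.abs(v_obs[keep] - v_sys) / (np.abs(cos_t[keep]) * np.sin(i))
    r_keep = r[keep]
    edges = np.linspace(0.5, r_keep.max(), nbins + 1)
    idx = np.digitize(r_keep, edges)
    return np.array([v_dep[idx == k].mean()
                     for k in range(1, nbins + 1) if np.any(idx == k)])
```

With the defaults, the sector-averaged curve recovers the flat part of the input curve; a 5-degree pseudo-slit uses far fewer pixels per radial bin, which is one reason localized peculiar motions distort slit-based curves more easily.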
A comparison of the inclinations derived from the Fabry-Perot map and from the continuum images was done by subtracting columns 2 and 3 of Table 2 (normalizing by one of the two). The differences spread approximately around zero, with an rms of 20% (for the galaxies used in the Tully-Fisher relation). This indicates that there is no systematic error introduced in the determination of the maximum velocities due to the measurement of the inclination from either photometric or kinematic data. However, a similar comparison with the smaller subsample in common with R1991 and N2000 shows a slight overestimate of the inclinations obtained by these authors, as compared to our estimates. This would indicate that the V${}_{max}$ derived by them would be expected to be lower on average than those obtained by us (and in fact, the values of V${}_{max}$ derived by R1991 in particular tend to be lower than those derived in our study). A comparison between columns 4 and 5 of Table 2, of the kinematic and photometric PA’s for the galaxies used in the T-F analysis, shows that about 1/3 of the galaxies have misalignments (greater than 10 degrees) between the kinematic and photometric axes. This may be an indication that the gas is not in equilibrium in these galaxies (HCG 10d, 16c, 19a, 88b, 89d, 91a, 91c, 96a). If that is the case, for example if the gas is collapsing and/or there is additional dispersion, then the V${}_{max}$ we compute using the rotation curve of the gas is a lower limit to the real V${}_{max}$ (indicated by the total mass). In other words, the real mass of the galaxy would then be higher than what we infer from our measurements. We note that four of the six galaxies that were too peculiar or had too short gas extensions to enter the T-F analysis also present misalignments between the kinematic and photometric axes (HCG 16b, 16d, 10c, 100b).
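The sense of the inclination bias discussed above can be checked with two lines of algebra: the observed line-of-sight amplitude is V_max sin(i_true), and deprojecting it with an assumed inclination divides by sin(i_assumed), so overestimating the inclination lowers the derived V_max. A minimal sketch (ours; the numerical values are purely illustrative):

```python
import numpy as np

def derived_vmax(v_true, inc_true_deg, inc_assumed_deg):
    """V_max we would infer if the true inclination is inc_true_deg but the
    deprojection assumes inc_assumed_deg."""
    v_los = v_true * np.sin(np.radians(inc_true_deg))   # observed amplitude
    return v_los / np.sin(np.radians(inc_assumed_deg))  # inferred V_max

# True V_max = 200 km/s at i = 50 deg; assuming i = 65 deg (an overestimate)
# biases the derived V_max low, in the sense discussed in the text.
v_biased = derived_vmax(200.0, 50.0, 65.0)   # about 169 km/s
```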
Similar results are obtained when comparing the kinematic PA’s to the corresponding photometric values derived from R1991 and/or N2000 for the smaller subsample of galaxies in common (comparing columns 4 and 8 and columns 4 and 10 of Table 2). We are not able to make a direct comparison of the centers we measure with those measured by R1991 and/or N2000, since we do not know the exact value they used (this problem is related to the fact that we do not have information from our Fabry-Perot data about the systemic velocities of the galaxies, given the scanning nature of the instrument, Amram et al. 1992, and also to the fact that we do not know exactly where they positioned the slits). We then list in column 6 of Table 2 only the difference in arcsec between the measured kinematic and photometric center. The center of the continuum image was taken as the point of maximum intensity. The center of the velocity field was chosen to be a point along the major axis which made the rotation curve symmetric or as symmetric as possible (with similar amplitudes for the receding and approaching sides). This freedom with the Fabry-Perot data was, in fact, one reason why it was possible to construct fairly well-behaved rotation curves even when the velocity maps were highly peculiar due to localized non-circular motions. The values for $|\Delta_{kin-cont}|$ varied from zero to six arcseconds. Our conclusion is that the velocity fields of compact group galaxies are, in several cases, significantly affected by non-circular motions, local asymmetries and misalignments between the kinematic and stellar axes. A fine tuning of map parameters (center, inclination and position angle of the velocity fields) and an average of the velocities over a large sector of the galaxy are needed to derive reliable rotation curves and representative values of V${}_{max}$. A few other comparisons with data from R1991 and N2000 are exemplified in Fig. 3.
In all cases we plot rotation curves resulting from a cut through the Fabry-Perot data, using the parameters given by the authors (photometric parameters). An important point becomes clear when inspecting Fig. 3, especially Figs. 3a, 3b and 3f. In these cases the rotation curves derived from the 2-D maps extend beyond the long-slit ones, which is another reason for the discrepant values of V${}_{max}$ derived with these two different techniques, Fabry-Perot or long-slit. We should bear in mind, however, that an over/underestimation of the inclination leads to an over/underestimation of the extension of the rotation curves. Another point we should note is that the rotation curves derived by Fabry-Perot spectroscopy are much better behaved, in the sense that the two sides of the curves more often approximately agree, and they are either flat or rising at the last measured point. In fact, we have no example of a galaxy with a rotation curve that drops on both sides, as one would expect in the Keplerian case. 4 Results Table 1 summarizes the main parameters for each galaxy. In columns (1) and (2) we show the identification and the length of the semi-major axis of the galaxy, as given by Hickson (1993). The morphological classifications for the galaxies, taken from Hickson (1993) and from NED, are given in columns (3) and (4), respectively. The values for the total B magnitudes of the galaxies, listed in column (5), B${}_{Tcor}$, were obtained from the B${}_{T}$ listed in Hickson et al. (1989), corrected by galactic extinction (Schlegel et al. 1998) and by the extinction due to inclination (Tully et al. 1998). The same corrections were also applied to the magnitudes of the galaxies in the control samples. The velocities listed in (6) are the maximum rotational velocities obtained from the average velocity curves, as shown in Fig. 1. In (7) we list $v_{vir}$, the heliocentric radial velocity of the galaxy, corrected for the Local Group infall onto Virgo, from the Hyperleda database.
The absolute magnitudes in column (8) were obtained from $M=B_{Tcor}-5\log(v_{vir}/75)-25$. Finally, in column (9) we marked with a $P$ those galaxies whose velocity fields were considered highly disturbed either by Amram et al. (2003), Plana et al. (2003) or Mendes de Oliveira et al. (1998). The morphological types are represented in the sample in the following proportion $S0:Sa:Sb:Sc:Sd:Sm:I$=1:2:3:12:3:2:2, with a preponderance of the Sc and Scd types (both counted in the Sc bin). We note that two galaxies, HCG 16c and HCG 96d, were classified as “irregular” by Hickson (1993), indicating that they had peculiar morphologies and not that they are truly irregular galaxies. The magnitudes of the galaxies range from $M_{B}$ $\sim$ –18.0 to –22.0 $+$ 5 log $h_{75}$. We show in Fig. 4 the T-F relation in the $B$ band for the Hickson compact group galaxies, for the Sb and Sc galaxies in the lower density environments studied by Courteau (1997), and for the Sb-Sd galaxies in the Ursa Major cluster studied by Verheijen (2001). The $R$ magnitudes given in Courteau (1996) were transformed to $B$ magnitudes using the observational relation between morphological type index and $B-R$ colour (de Jong 1996). The solid line in Fig. 4 represents a least-squares fit to Courteau’s data (M${}_{B}$ = –7.05 log (V${}_{max}$) – 4.57). The broken lines indicate the $\pm$ one sigma dispersion (rms=0.63) for their data. The dispersion of the HCG galaxies around the solid line is rms=0.82 for the whole sample of 25 galaxies and rms=0.65 (similar to that for the control sample of Courteau 1997) if the outliers HCG 89d, 91c, 96c, 96d are not considered (see Section 5.1 for a discussion of why these galaxies could have a peculiar location in the T-F relation). The sample of Verheijen (2001) for the Ursa Major cluster spirals presents a smaller dispersion around the solid line, of rms=0.43, as expected for cluster galaxies.
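The absolute magnitudes in column (8) follow from the standard distance modulus. As a minimal sketch (our assumptions: pure Hubble flow with $H_{0}=75$ km s$^{-1}$ Mpc$^{-1}$, the usual $-25$ constant for distances in Mpc, and a hypothetical galaxy rather than an actual entry of Table 1):

```python
import math

def absolute_magnitude(b_tcor, v_vir, h0=75.0):
    """Absolute B magnitude from the corrected apparent magnitude and the
    Virgo-infall-corrected velocity (km/s), assuming pure Hubble flow:
    d = v/H0 in Mpc and the standard distance modulus mu = 5*log10(d) + 25."""
    d_mpc = v_vir / h0
    return b_tcor - 5.0 * math.log10(d_mpc) - 25.0

# hypothetical galaxy (not from Table 1): B_Tcor = 14.0 at v_vir = 4500 km/s
print(round(absolute_magnitude(14.0, 4500.0), 2))  # -19.89
```

The result falls in the $M_{B}\sim-18$ to $-22$ range quoted for the sample.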
Our main result is that galaxies in compact groups follow the T-F relation, with a few exceptions. This result is in contrast with an earlier result by R1991, who found an offset in the T-F relation for most galaxies in compact groups, in the sense that galaxies in compact groups have lower velocities for a given luminosity. The disagreement is explained by the differences in the amplitudes of the derived rotation curves, V${}_{max}$, as discussed in Section 3, mainly due to a different choice of galactic parameters (center, inclination and position angle) and a less extended rotation curve in the case of curves obtained from long-slit spectroscopy. Generally, a choice of parameters guided by photometry alone (when no kinematic maps are available) will result in a lower V${}_{max}$ for a galaxy, affecting the T-F relation in the direction of the R1991 result. 5 Discussion and future prospects The fact that the $B$-band T-F relation is similar for compact group and field galaxies tells us that these galaxies share common mass-to-size relations. However, since the parameters for the T-F relation are mostly derived in the inner parts of the galaxy, this agreement does not tell us much about the dark matter, a dominant component in the outer parts (although for the latest-type galaxies we do expect a significant contribution of the dark component in the inner regions as well, Blais-Ouellette et al. 1999). Since the internal velocity dispersions of the compact group galaxy members (250 km s${}^{-1}$) are of the same order as their orbital velocities, interactions in the compact-group environment are likely to lead to mergers. In fact, N-body simulations of compact group formation and evolution (Barnes 1989) show that the fate of a compact group is to merge within a few crossing times.
In order to avoid fast merging, the theoretical expectation is that the compact group environment should change the shapes of the dark matter halos of galaxies that traverse the group, specifically by transforming the single-galaxy halos into a common halo around the group (e.g. Athanassoula, Makino & Bosma, 1997). Our observations cannot test this hypothesis directly, but do show that the galaxy halos have not been significantly stripped inside R${}_{25}$. Nevertheless, mass models using a common halo for the groups added to the individual star distributions of each galaxy should be constructed to test this scenario. 5.1 The outliers of the Tully-Fisher relation It is clear from Fig. 4 that a few of the lowest-mass members of compact groups lie well “above” the relationship. If they once had T-F parameters similar to those of galaxies in less dense environments, then to get to their present position they could have brightened by 1 to 2 magnitudes in the $B$ band, they could have lost a substantial amount of mass, or they could have undergone a combination of these processes. A natural way in which the smaller members could have brightened is by forming stars, induced by the interactions the galaxy may have suffered within the group. For a given rate of star formation, the brightening of a galaxy will be much more noticeable for a low-mass than for a high-mass galaxy. As an example, for a large spiral galaxy with a mass of 10${}^{11}$ solar masses, an episode of star formation that generates, say, 10 solar masses per year would increase the luminosity of the galaxy in the $B$ band by only about 20%, while for a galaxy with a mass ten times lower, the same episode of star formation would triple the luminosity of the galaxy, brightening it by 1.2 mag. Moreover, the two galaxies at the lowest-mass end of the diagram (HCG 89d and 96d) may have large quantities of gas, given their late morphological types.
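The magnitude changes quoted above follow directly from the magnitude-luminosity relation $\Delta m = 2.5\log_{10}(L_{\rm new}/L_{\rm old})$; a minimal check of both numbers:

```python
import math

def delta_mag(lum_ratio):
    """Magnitude change corresponding to a luminosity increase by lum_ratio."""
    return 2.5 * math.log10(lum_ratio)

print(round(delta_mag(1.2), 2))  # +20% in L_B -> 0.2 mag
print(round(delta_mag(3.0), 2))  # tripled L_B -> 1.19 mag, i.e. the ~1.2 mag quoted
```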
In fact, for HCG 96d, for which an HI study has been made by Verdes-Montenegro et al. (1997), the HI gas contributes half of the total mass. HCG 89d and HCG 96d have masses (obtained from their maximum velocities inside R${}_{25}$ and the virial theorem) well below 10${}^{10}$ solar masses and their colours are quite blue (B–R=0.8 and 0.97 respectively, Hickson 1993 and Verdes-Montenegro et al. 1997). It is therefore quite probable that HCG 89d and HCG 96d are “above” the T-F relation mainly due to brightening caused by star formation. HCG 96c, which may deviate from the T-F relation for similar reasons, is the least massive galaxy in an M51-like pair (with the Seyfert galaxy HCG 96a). HCG 96c is noted by Laurikainen and Moles (1988) as the galaxy with the highest star-formation rate per unit area among interacting galaxies. Its colours (B-V)=0.95 and (U-B)=0.38 are consistent with this scenario (Laurikainen and Moles 1988). Another galaxy that lies “above” the T-F relation is HCG 91c, which has a colour of B–R=1.08 (Hickson 1993) and a mass inside R${}_{25}$ of 2.5 $\times$ 10${}^{10}$ M${}_{\odot}$. This galaxy clearly shows two kinematic components with quite similar rotation curves (plotted in Fig. 1 as C1 and C2; see also Amram et al. 2003). There are two points we would like to make about this galaxy. First, given the double kinematic component, we suspect that this galaxy is the result of a merger of two similar-mass galaxies, although we see no signs of this in the photometric profile of the object. In the T-F relation plotted in Fig. 4 we chose to represent this galaxy with the V${}_{max}$ of the C1 component and the total B luminosity of the system as a whole. In a naive scenario we therefore expect the galaxy to lie higher in the T-F relation by at least 0.75 magnitudes (the brightening corresponding to a doubling of the luminosity), in addition to any brightening due to star formation, which would explain why this galaxy lies above the T-F relation.
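The 0.75 mag figure is what a merger of two equal-luminosity galaxies gives, since doubling the luminosity brightens the system by $2.5\log_{10}2$; a one-line check:

```python
import math

# merging two equal-luminosity galaxies doubles the total luminosity,
# which brightens the system by 2.5*log10(2) magnitudes
print(round(2.5 * math.log10(2.0), 2))  # 0.75 mag
```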
The second point, which is related to the first, is that given our arbitrary choice to use only one component (C1) to represent a galaxy which clearly has two equally important kinematic structures, V${}_{max}$ for HCG 91c is most probably an underestimate in Fig. 4, and a correction for this would move the galaxy towards the T-F mean line. A second mechanism that could leave the outliers where they are in the T-F relation is mass loss within R${}_{25}$. This could be caused, for example, by disruption or ablation of part of the dark halo due to strong interactions in the group. If a galaxy loses mass, to first approximation it moves to the left of the diagram. We then calculated how much mass the outliers had to lose in order to move from the position they should have (to lie on the T-F relation) to their actual position in the diagram, if mass loss alone were acting. The answer is an impossible number – 300-500% of the total mass – making this mechanism very unlikely. We conclude that the reason why the low-mass galaxies are off the T-F relation is most probably enhanced star formation rather than mass loss, though a combination of both cannot be excluded. Two galaxies which may have suffered truncation are HCG 89d and HCG 96d, the most deviant galaxies from the T-F relation, which have morphological types Sm and Im respectively. Given their late morphological types we suspect that they contain a significant contribution from a dark component even in their inner regions, although detailed modelling has to be done to check this possibility. Their position in the T-F diagram could then be the result of stripping of this inner dark halo combined with brightening of the galaxy due to star formation (given their blue colours). A trend of galaxies being overbright for their mass was seen, at the low-mass end of the relation, when the T-F diagram of binary galaxies was plotted by Márquez et al. (2002).
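The mass-loss estimate above can be sketched under two simplifying assumptions that are ours, not the paper’s: the T-F offset is absorbed entirely by a drop in V${}_{max}$ along the fitted slope $-7.05$, and the enclosed mass scales as $V^{2}$ (a rough virial scaling at fixed radius). Even this crude version shows the required loss rapidly approaching and exceeding 100% of the remaining mass, in line with the text’s conclusion; the 300-500% figure quoted comes from the paper’s own, more detailed calculation.

```python
import math

TF_SLOPE = 7.05   # |slope| of the B-band T-F fit quoted in the text

def mass_loss_fraction(offset_mag):
    """Mass (as a fraction of the remaining mass) that must be shed to explain
    an offset of offset_mag above the T-F relation by mass loss alone,
    assuming M_enc ~ V^2 at fixed radius (our rough assumption)."""
    v_ratio = 10.0 ** (offset_mag / TF_SLOPE)   # required drop in V_max
    return v_ratio ** 2 - 1.0                   # (M_initial - M_final) / M_final

for dm in (1.0, 1.5, 2.0):
    print(dm, round(100.0 * mass_loss_fraction(dm)), "%")
```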
Three of the least massive galaxies in interacting pairs showed a deviation from the T-F relation in the same sense as observed by us for galaxies in compact groups. Moreover, Kannappan et al. (2002), studying a sample of nearby galaxies brighter than M${}_{R}=-18$, noted that interacting galaxies systematically lay above, while Sa galaxies fell below, the T-F relation. In fact, they proposed that there is a correlation between the residuals of the T-F relation and the star formation history of the galaxy, and that when a second parameter such as colour or the equivalent width of H$\alpha$ is taken into account, the scatter in the T-F relation is significantly diminished. Note that the lowest-mass HCG members deviate from the T-F relation in the opposite sense to those gas-rich galaxies studied by McGaugh et al. (2000), for which a “baryonic correction” (to take the gas mass into account in the “luminosity” of the galaxy) was necessary. 5.2 Fraction of HCG galaxies with peculiar velocity fields In the last column of Table 1 we marked with a $P$ the galaxies whose velocity fields were considered highly disturbed either by Amram et al. (2003), Plana et al. (2003) or Mendes de Oliveira et al. (1998). A total of 17 of the 24 galaxies (all in Table 1 but HCG 87a, which could not be classified) are marked peculiar. If we add to this sample those that were not included in the Tully-Fisher analysis because of their strong peculiarities or short gas extension, we find that 22 out of 29 galaxies, or 75% of the sample of studied galaxies, have highly disturbed velocity fields. With this high rate of peculiar kinematics, it is striking that rotation curves could be obtained at all and that, on average, galaxies in compact groups follow the T-F relation. 5.3 Asymmetric rotation curves We can see in Fig. 1 that the rotation curves of HCG 91a (NGC 7214) and HCG 96a (NGC 7674) present strong asymmetries, with one arm rising and the other one falling.
Both HCG 91a and HCG 96a are the most massive members of M51-like pairs and both are known to be Seyfert galaxies. We have checked the rotation curves of the most massive galaxies in other M51-like pairs, in the sample of Márquez et al. (2002), and we have found several examples of similar rotation curves: NGC 3395/3396, NGC 5395/5394 and NGC 5774/5775. In fact, most galaxies in pairs of unequal mass seem to have rotation curves with either one side rising and the other one falling, or both falling. Examples of the latter class are NGC 4496a/4496b and NGC 2535/2536 (Amram et al. 1989). Other binary galaxies from Márquez’s sample show a rotation curve with high scatter and a trend for asymmetry (e.g. NGC 3769/3769A). Such rotation curves are rare among isolated galaxies. On the other hand, some rotation curves of galaxies in pairs of nearly equal mass (e.g. NGC 5257/5258, Fuentes-Carrera 2003) curiously do not seem to show such peculiarities, presenting instead more symmetric rotation curves. If there were mass stripping from a galaxy and the system had time to relax, then the potential should be axisymmetric and both sides of the rotation curve should fall. We observe no such rotation curves among compact group galaxies. However, if the interaction is very recent and the system has not had time to relax, the potential could be non-axisymmetric (triaxial) and the resulting rotation curve could have the observed shape. Another possibility is that the halo could be just shaken (not stripped), in which case we would also expect to observe non-axisymmetric rotation curves such as those we observe among compact group galaxies. More N-body simulations are necessary to further clarify these issues.
The prospects for the future are (1) to use near-infrared imaging to check this result, (2) to combine rotation curves with surface brightness profiles to model what role the dark matter plays in the mass distribution, especially to detect galaxies that are dominated by dark matter in their central parts, if there are any, and (3) to use other test particles more external to the group to probe the halo properties at larger radii (e.g. planetary nebulae, globular clusters and/or dwarf galaxies). Finally, a detailed comparison of the results presented in this paper with tidal stripping of dark matter halos in cosmological N-body simulations of compact groups would be of great interest. CMdO deeply acknowledges the funding and hospitality of the Max-Planck-Institut für extraterrestrische Physik and the Universitaets-Sternwarte der Ludwig-Maximilians-Universität, where this work was finalized. CMdO would also like to thank the Brazilian FAPESP (projeto temático 01/07342-7), the PICS program for financial support of two visits to the Observatoire de Marseille and Paris/Meudon, and the Brazilian PRONEX program. H.P. acknowledges financial support from the Brazilian CNPq under contract 150089/98-8. The authors thank J. Boulesteix and J.L. Gach for helping during the observations and Mike Bolte for very useful comments on the manuscript. We made use of the Hyperleda database and the NASA/IPAC Extragalactic Database (NED). The latter is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. References Athanassoula et al. (1997) Athanassoula, E., Makino, J. & Bosma, A., 1997, MNRAS, 286, 825 Amram et al. (1992) Amram, P., Le Coarer, E., Marcelin, M., Balkowski, C., Sullivan, W.T., III, Cayatte, V., 1992, A&AS, 94, 175 Amram et al. (1996) Amram, P., Balkowski, C., Boulesteix, J., Cayatte, V., Marcelin, M., & Sullivan, W. T., III. 1996, A&A, 310, 737 Amram et al.
(2003) Amram P., Plana H., Mendes de Oliveira C., Balkowski C., Boulesteix J., 2003, A&A, 402, 865 Amram et al. (1989) Amram P., Marcelin M., Boulesteix J., Le Coarer E., 1989, A&AS, 81, 59 Barnes (1989) Barnes J., 1989, Nature, 338, 123 Blais-Ouellette et al. (1999) Blais-Ouellette S., Carignan C., Amram P., Côté S., 1999, AJ, 118, 2123 Courteau (1996) Courteau S., 1996, ApJS, 103, 363 Courteau (1997) Courteau S., 1997, AJ, 114, 2402 de Jong (1996) de Jong R.S., 1996, A&AS, 118, 557 de Vaucouleurs et al. (1991) de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., Buta, J., Paturel, G. and Fouqué, P., 1991, Third Reference Catalogue of Bright Galaxies, Springer Verlag (RC3) Fuentes-Carrera (2003) Fuentes-Carrera, I., 2003, Ph.D. Thesis Hickson et al. (1989) Hickson P., Kindl E., Auman J. 1989, ApJS, 70, 687 Hickson (1993) Hickson P. 1993, Astrophys. Lett. Comm., 29, 1 Kannappan et al. (2002) Kannappan, S.J., Fabricant, D.G. and Franx, M., 2002, AJ, 123, 2358 Márquez et al. (2002) Márquez, I., Masegosa, J., Moles, M., Varela, J., Bettoni, D., Galletta, G., 2002, A&A, 393, 389 McGaugh et al. (2000) McGaugh S.S., Schombert J.M., Bothun G.D., de Blok W.J.G., 2000, ApJ, 533, L99 Mendes de Oliveira et al. (1998) Mendes de Oliveira C., Plana H., Amram P., Balkowski C., Boulesteix J. 1998, ApJ, 507, 691 Nishiura et al. (2000) Nishiura S., Shimada M., Ohyama Y., Murayama T., Taniguchi Y. 2000, AJ, 120, 169 (N2000) Paturel et al. (1997) Paturel G., Gouguenheim L., Lanoix P., Marthinet M., Petit C., Rousseau J., Theureau G., Vauglin I. 1997, A&ASS, 124, 109 Plana et al. (1998) Plana H., Mendes de Oliveira C., Amram P., Boulesteix J., 1998, AJ, 116, 2123 Plana et al. (2003) Plana H., Amram P., Mendes de Oliveira C., Balkowski C., Boulesteix J., 2003, AJ, 125, 1736 Rubin et al. (1991) Rubin V.C., Hunter D.A., Ford W.K.Jr. 1991, ApJS, 76, 153 (R1991) Schlegel et al. (1998) Schlegel, D., Finkbeiner, D. P., Davis, M.
1998, ApJ, 500, 525 Tully et al. (1998) Tully, R. B., Pierce, M. J., Huang, J. S., Saunders, W., Verheijen, M. A. W., & Witchalls, P. L. 1998, AJ, 115, 2264 Tully & Pierce (2000) Tully, R.B. & Pierce, M.J. 2000, ApJ, 533, 744 Verdes-Montenegro et al. (1997) Verdes-Montenegro L., del Olmo A., Perea J., Athanassoula E., Márquez I., Augarde R., 1997, A&A, 321, 409 Verheijen (2001) Verheijen M.A.W., 2001, ApJ, 563, 694
Thermodynamic quantum critical behavior of the anisotropic Kondo necklace model D. Reyes${}^{1}$, M. A. Continentino${}^{2}$, Han-Ting Wang${}^{3}$ ${}^{1}$Centro Brasileiro de Pesquisas Físicas - Rua Dr. Xavier Sigaud, 150-Urca, 22290-180, RJ-Brazil ${}^{2}$Instituto de Física, Universidade Federal Fluminense, Campus da Praia Vermelha, Niterói, RJ, 24.210-340, Brazil ${}^{3}$Beijing National Laboratory of Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100080, People’s Republic of China [email protected] Abstract The Ising-like anisotropy parameter $\delta$ in the Kondo necklace model is analyzed using the bond-operator method at zero and finite temperatures for arbitrary $d$ dimensions. A decoupling scheme for the double-time Green’s functions is used to find the dispersion relation for the excitations of the system. At zero temperature and on the paramagnetic side of the phase diagram, we determine the spin gap exponent $\nu z\approx 0.5$ in three dimensions for anisotropies in the range $0\leq\delta\leq 1$, a result consistent with the dynamic exponent $z=1$ for the Gaussian character of the bond-operator treatment. At low but finite temperatures, in the antiferromagnetic phase, the line of Néel transitions is calculated for $\delta\ll 1$ and $\delta\approx 1$. For $d>2$ it is only renormalized by the anisotropy parameter and varies with the distance $|g|$ to the quantum critical point (QCP) as $T_{N}\propto|g|^{\psi}$, where the shift exponent is $\psi=1/(d-1)$. In two dimensions, however, long-range magnetic order occurs only at $T=0$ for any $\delta$. In the paramagnetic phase, we find a power-law temperature dependence of the specific heat along the quantum critical trajectory $J/t=(J/t)_{c}$, $T\rightarrow 0$. It behaves as $C_{V}\propto T^{d}$ for $\delta\ll 1$ and $\delta\approx 1$, in agreement with the scaling theory for $z=1$.
1 Introduction Quantum phase transitions (QPT) from an antiferromagnetic $AF$ ordered state to a nonmagnetic Fermi liquid (NFL) in heavy fermion (HF) systems have received considerable attention from both theoretical[1] and experimental[2] points of view. In contrast to classical phase transitions (CPT), driven by temperature, QPT can be driven by tuning a temperature-independent control parameter (magnetic field, external pressure, or doping). The physics of HF is mainly due to the competition of two main effects: the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction between the magnetic ions, which favors long-range magnetic order, and the Kondo effect, which tends to screen the local moments and produce a nonmagnetic ground state. These effects are contained in the Kondo lattice model (KLM) Hamiltonian, in which only spin degrees of freedom are considered. Here we investigate a simplified version, the so-called Kondo necklace model[3] (KNM), which for most purposes can be considered to yield results similar to those of the original model. While the ground state properties of this model have been investigated rather extensively by a variety of methods[4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], the thermodynamic and finite temperature critical properties close to a magnetic instability remain an open issue. This open issue was our first motivation for studying the quantum critical properties of this model as a function of the distance to the quantum critical point $|g|$ at zero and low temperatures[1, 15]. We now extend this treatment to finite inter-site anisotropy $\delta$, with $0\leq\delta\leq 1$; the $\delta=1$ case is appropriate to describe compounds where the ordered magnetic phase has a strong Ising component. However, the main reason for considering anisotropy $\delta$ in the KNM is to try to describe its effects in the neighborhood of a magnetic quantum critical point (QCP) in HF systems, rather than a symmetry problem[12].
Several theories have already been formulated to explain the unusual properties of HF systems[16, 17, 18]. Besides, in an earlier work we found that a Néel line appears once a geometric anisotropy is turned on[19], stressing that anisotropy is an inherent ingredient in real HF systems. Henceforth, the model will be called the anisotropic Kondo necklace model (AKNM). This model was already investigated using the real-space renormalization group machinery[20], but only in one dimension and at zero temperature. We use the bond-operator approach introduced by Sachdev and Bhatt[21], which was previously applied to both the KLM[22] and the KNM[11], but always at $(T,\delta)=(0,0)$. We find that this method yields the shift exponent that characterizes the shape of the critical line in the neighborhood of the QCP, as well as the power-law temperature dependence of the specific heat along the so-called quantum critical trajectory $J/t=(J/t)_{c}$, $T\rightarrow 0$. We consider the following AKNM: $$H=t\sum_{<i,j>}(\tau^{x}_{i}\tau^{x}_{j}+(1-\delta)\tau^{y}_{i}\tau^{y}_{j})+J\sum_{i}\mathbf{S}_{i}\cdot\mathbf{\tau}_{i},$$ (1) where $\tau_{i}$ and $\mathbf{S}_{i}$ are independent sets of spin-1/2 Pauli operators, representing the conduction electron spin and the localized spin operators, respectively. The sum over $\langle i,j\rangle$ denotes summation over nearest-neighbor sites. The first term mimics electron propagation with strength $t$, and the second term is the magnetic interaction between conduction electrons and localized spins $\mathbf{S}_{i}$ via the Kondo exchange coupling $J$ $(J>0)$. The Ising-like anisotropy parameter $\delta$ varies from the fully anisotropic case $\delta=1$ to the well-established case $\delta=0$.
Considering the bond-operator representation for two spins $S=1/2$, $\tau_{i}(S_{i})^{\alpha}=\mp\frac{1}{2}(s_{i}^{\dagger}t_{i,\alpha}+t_{i,\alpha}^{\dagger}s_{i}\pm i\epsilon_{\alpha\beta\gamma}t_{i,\beta}^{\dagger}t_{i,\gamma})$ $(\alpha=x,y,z)$[21], the Hamiltonian above, at half-filling, i.e., with one conduction electron per site, can be simplified, and the resulting effective Hamiltonian $H_{mf}$ with only quadratic operators is sufficient to describe exactly the quantum phase transition from the disordered Kondo spin liquid to the AF phase, as discussed below. Then, we have a mean-field Hamiltonian: $$H_{mf}=N\left(-\frac{3}{4}J\overline{s}^{2}+\mu\overline{s}^{2}-\mu\right)+\omega_{0}\sum_{{\bf k}}t_{{\bf k},z}^{\dagger}t_{{\bf k},z}+\sum_{\bf k}\left[\Lambda_{{\bf k}}t_{{\bf k},x}^{\dagger}t_{{\bf k},x}+\Delta_{{\bf k}}\left(t_{{\bf k},x}^{\dagger}t_{-{\bf k},x}^{\dagger}+t_{{\bf k},x}t_{-{\bf k},x}\right)\right]+\sum_{\bf k}\left[\Lambda_{{\bf k}}^{\prime}t_{{\bf k},y}^{\dagger}t_{{\bf k},y}+\Delta_{{\bf k}}^{\prime}\left(t_{{\bf k},y}^{\dagger}t_{-{\bf k},y}^{\dagger}+t_{{\bf k},y}t_{-{\bf k},y}\right)\right],$$ (2) where $\Lambda_{{\bf k}}=\omega_{0}+2\Delta_{{\bf k}}$, $\Lambda_{{\bf k}}^{\prime}=\omega_{0}+2\Delta_{{\bf k}}^{\prime}$, $\Delta_{{\bf k}}=\frac{1}{4}t\overline{s}^{2}\lambda({\bf k})$, $\Delta_{{\bf k}}^{\prime}=\frac{1}{4}t\overline{s}^{2}\lambda({\bf k})(1-\delta)$ and $\lambda({\bf k})=\sum_{s=1}^{d}\cos k_{s}$. Here $\overline{s}$ is the singlet order parameter, consistent with the strong coupling limit $J/t\rightarrow\infty$, where the model becomes trivial, since each $\mathbf{S}$ spin captures a conduction electron spin to form a singlet, and where the ground state corresponds to a direct product of those singlets.
The chemical potential $\mu$ was introduced to impose the constraint condition of single occupancy, $N$ is the number of lattice sites and $Z$ is the total number of nearest neighbors on the hyper-cubic lattice. The wavevectors $k$ are taken in the first Brillouin zone and the lattice spacing is assumed to be unity. This mean-field Hamiltonian can be solved using Green’s functions to obtain the thermal averages of the singlet and triplet correlation functions. These are given by $$\ll t_{{\bf k},x};t_{{\bf k},x}^{{\dagger}}\gg=\frac{(\omega^{2}-\omega_{k}^{\prime 2})(\omega+\Lambda_{k})}{2\pi\xi},\qquad\ll t_{{\bf k},y};t_{{\bf k},y}^{{\dagger}}\gg=\frac{(\omega^{2}-\omega_{k}^{2})(\omega+\Lambda_{k}^{\prime})}{2\pi\xi},\qquad\ll t_{{\bf k},z};t_{{\bf k},z}^{{\dagger}}\gg=\frac{1}{2\pi(\omega-\omega_{0})},$$ (3) where $\xi=(\omega^{2}-\omega_{k}^{2})(\omega^{2}-\omega_{k}^{\prime 2})$. The poles of the Green’s functions determine the excitation energies of the system: $\omega_{0}=\left(\frac{J}{4}+\mu\right)$, which is the dispersionless spectrum of the longitudinal spin triplet states, $\omega_{k}=\pm\sqrt{\Lambda_{k}^{2}-(2\Delta_{k})^{2}}$, which corresponds to the excitation spectrum of the $x$-transverse spin triplet states, and $\omega_{k}^{\prime}=\pm\sqrt{\Lambda_{k}^{\prime 2}-(2\Delta_{k}^{\prime})^{2}}$, which corresponds to the $y$-transverse one.
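As a quick numerical sanity check (with arbitrary illustrative parameter values of our choosing, not fits to any material), the $x$-branch pole $\omega_{k}=\sqrt{\Lambda_{k}^{2}-(2\Delta_{k})^{2}}$ factorizes as $(\Lambda_{k}-2\Delta_{k})(\Lambda_{k}+2\Delta_{k})=\omega_{0}(\omega_{0}+4\Delta_{k})$, so it collapses to $\omega_{0}\sqrt{1+y\lambda({\bf k})}$ with the dimensionless combination $y=t\overline{s}^{2}/\omega_{0}$ used later in the paper:

```python
import math

t, sbar, omega0 = 0.7, 0.9, 1.3       # arbitrary illustrative values
y = t * sbar ** 2 / omega0            # dimensionless parameter y = t*sbar^2/omega0

def lam(k):                           # lambda(k) = sum_s cos(k_s)
    return sum(math.cos(ks) for ks in k)

def omega_x(k):                       # pole of the x-transverse Green's function
    Dk = 0.25 * t * sbar ** 2 * lam(k)
    Lk = omega0 + 2.0 * Dk
    return math.sqrt(Lk ** 2 - (2.0 * Dk) ** 2)

k = (0.3, 1.1, 2.0)                   # a sample wavevector in 3d
print(abs(omega_x(k) - omega0 * math.sqrt(1.0 + y * lam(k))) < 1e-9)  # True
```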
2 Paramagnetic State From the modes above and their bosonic character, an expression for the paramagnetic internal energy at finite temperatures is easily obtained[1, 15, 19]: $$U=\varepsilon_{0}+\sum_{\mathbf{k}}\left(\omega_{0}n(\omega_{0})+\omega_{\mathbf{k}}n(\omega_{\mathbf{k}})+\omega_{\mathbf{k}}^{\prime}n(\omega_{\mathbf{k}}^{\prime})\right)$$ where $\varepsilon_{0}=N\left(-\frac{3}{4}J\overline{s}^{2}+\mu\overline{s}^{2}-\mu\right)+\sum_{\mathbf{k}}(\omega_{\mathbf{k}}+\omega_{\mathbf{k}}^{\prime}-\Lambda_{\mathbf{k}}-\Lambda_{\mathbf{k}}^{\prime})/2$ is the paramagnetic ground state energy, $n(\omega)=\frac{1}{2}\left(\coth\frac{\beta\omega}{2}-1\right)$ is the Bose factor, $\beta=1/k_{B}T$, $k_{B}$ is Boltzmann’s constant and $T$ the temperature. After some straightforward algebra[1, 19] using Eq. (2), the paramagnetic free energy reads $$F=\varepsilon_{0}-\frac{1}{\beta}\sum_{\mathbf{k}}\ln[1+n(\omega_{\mathbf{k}})]-\frac{1}{\beta}\sum_{\mathbf{k}}\ln[1+n(\omega_{\mathbf{k}}^{\prime})]-\frac{N}{\beta}\ln[1+n(\omega_{0})].$$ (4) To obtain $\overline{s}^{2}$ and $\mu$ we minimize the free energy via the saddle-point equations $$2(2-\overline{s}^{2})=\frac{1}{2N}\sum_{\mathbf{k}}\left(\frac{\Lambda_{\mathbf{k}}}{\omega_{\mathbf{k}}}\coth\frac{\beta\omega_{\mathbf{k}}}{2}+\frac{\Lambda_{\mathbf{k}}^{\prime}}{\omega_{\mathbf{k}}^{\prime}}\coth\frac{\beta\omega_{\mathbf{k}}^{\prime}}{2}\right)+f(\omega_{0}),$$ $$\frac{2J}{t}\left(\frac{3}{4}-\frac{\mu}{J}\right)=\frac{1}{2N}\sum_{\mathbf{k}}\left(\frac{\omega_{0}}{\omega_{\mathbf{k}}}\lambda(\mathbf{k})\coth\frac{\beta\omega_{\mathbf{k}}}{2}+\frac{\omega_{0}}{\omega_{\mathbf{k}}^{\prime}}\lambda(\mathbf{k})(1-\delta)\coth\frac{\beta\omega_{\mathbf{k}}^{\prime}}{2}\right),$$ (5) where $f(\omega_{0})=\frac{N}{2}\left(\coth\frac{\beta\omega_{0}}{2}-1\right)$.
2.1 Numerical results at $T=0$ We first study the case $T=0$, i.e., without thermal excitations. At zero temperature the self-consistent saddle-point equations above simplify to $$4(2-\overline{s}^{2})=I_{1}(y)+I_{2}(y)+I_{3}(y)+I_{4}(y),\qquad\frac{4Jy}{t}\left(\frac{3}{4}-\frac{\mu}{J}\right)=I_{2}(y)-I_{1}(y)+I_{4}(y)-I_{3}(y),$$ (6) with $$I_{1}(y)=\frac{1}{\pi^{d}}\int_{0}^{\pi}\frac{d^{d}k}{\sqrt{1+y\lambda(\mathbf{k})}},\qquad I_{3}(y)=\frac{1}{\pi^{d}}\int_{0}^{\pi}\frac{d^{d}k}{\sqrt{1+y(1-\delta)\lambda(\mathbf{k})}},$$ $$I_{2}(y)=\frac{1}{\pi^{d}}\int_{0}^{\pi}d^{d}k\sqrt{1+y\lambda(\mathbf{k})},\qquad I_{4}(y)=\frac{1}{\pi^{d}}\int_{0}^{\pi}d^{d}k\sqrt{1+y(1-\delta)\lambda(\mathbf{k})},$$ (7) where we have introduced the dimensionless parameter $y=t\overline{s}^{2}/\omega_{0}$. An equation for $y$ can then be obtained: $$y=\frac{2t}{J}\left(1-[I_{1}(y)+I_{3}(y)]/4\right).$$ (8) We now obtain the numerical solutions of the zero-temperature self-consistent equations (6) using Eq. (8). In this case (the paramagnetic phase), the $z$-polarized branch of excitations has the dispersionless value $\omega_{z}(k)=\omega_{0}$, and the other two branches show a dispersion with a minimum at the AF reciprocal vector $Q=(\pi,\pi,\pi)$ in three dimensions ($3d$). The minimum value of the excitations defines $$\Delta^{x}=\omega_{0}\sqrt{1-yd},\qquad\Delta^{y}=\omega_{0}\sqrt{1-yd(1-\delta)}.$$ (9) The spin gap energies $\Delta^{x}$ and $\Delta^{y}$ define the energy scale of the Kondo singlet phase, for $0\leq\delta\leq 1$ and $\delta<0$ respectively. For $\delta=0$, $\Delta^{x}$ and $\Delta^{y}$ are identical and we recover the original spin gap of the KNM[11].
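The self-consistency cycle just described can be sketched numerically. The following is our own illustrative implementation (not the authors’ code): the integrals of Eq. (7) are evaluated on a midpoint grid in two dimensions, and Eq. (8) is solved for $y$ by fixed-point iteration at the arbitrary values $J/t=2.5$, $\delta=0.3$.

```python
import math

N = 120                                     # grid points per dimension
H = math.pi / N
LAM = [math.cos((i + 0.5) * H) + math.cos((j + 0.5) * H)
       for i in range(N) for j in range(N)]  # lambda(k) on a midpoint grid, d = 2

def I_minus(y, eps=1.0):
    """I1 (eps = 1) or I3 (eps = 1 - delta): the inverse-square-root
    integrals of Eq. (7), as plain averages over the grid."""
    return sum(1.0 / math.sqrt(1.0 + y * eps * l) for l in LAM) / len(LAM)

def solve_y(j_over_t, delta, n_iter=60):
    """Fixed-point iteration of Eq. (8): y = (2t/J)(1 - [I1(y)+I3(y)]/4)."""
    y = 0.0
    for _ in range(n_iter):
        y = (2.0 / j_over_t) * (1.0 - (I_minus(y) + I_minus(y, 1.0 - delta)) / 4.0)
    return y

y = solve_y(2.5, 0.3)
print(0.0 < y < 0.5)   # True: the gap of Eq. (9), omega0*sqrt(1-y*d), stays real
```

Since $I_{1},I_{3}\geq 1$ for $y\geq 0$, every iterate stays below $t/J<0.5$ here, so the square roots remain well defined throughout the iteration.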
Although we are interested in the case $0<\delta\leq 1$, we also consider $\delta<0$ for theoretical reasons. This case will only be considered at $T=0$ and will not be sketched in this report. The analysis of the spin gap is important because the vanishing of the gap and the appearance of soft modes define the transition from the disordered Kondo spin liquid to the AF phase at the QCP $(J/t=(J/t)_{c},T=0)$. At this point, it is suitable to clarify that in Figs. (1), (2) and (3) we sketch the spin gap energy as $\Delta/J$ versus $t/J$, following the $\delta=0$ case[11], although throughout this paper we consider the control parameter to be $J/t$. This makes no physical difference, since it only changes the onset of the curves from the left to the right. In the one-dimensional $(1d)$ case, the energy gap falls linearly for small values of $t/J$ and deviates considerably from the linear behavior as $t/J$ gets larger, as shown in Fig. (1). Thus the gap is always nonzero for any $\delta$, supporting the disordered phase characteristic of $1d$ Kondo lattices[11, 23]. The anisotropy dependence of the spin gap in two dimensions ($2d$) is sketched in Fig. (2). For $0<\delta\leq 1$ the effect of anisotropy is still weak and it changes the QCP only slightly, until $\delta=0$, where both soft modes, $\Delta^{x}$ and $\Delta^{y}$, contribute and the QCP undergoes a slight jump. The qualitative behavior is thus the same over this range, and the gap exponent is approximately $\nu z\simeq 1$. It is plotted in the inset of Fig. (2). On the other hand, for $\delta<0$, the QCP is reduced and the Kondo spin liquid phase is limited to a narrower region. This is not shown in Fig. (2). In three dimensions, the effect of anisotropy on the spin gap is similar to that in the $2d$ case. The spin gap follows an exponent $\nu z\simeq 0.5$ for $0\leq\delta\leq 1$ and changes its universality for $\delta<0$, where the spin gap vanishes faster close to the QCP.
As in the $2d$ case, there is a jump at $\delta=0$, which results from all the soft modes coinciding. We conclude that, for any anisotropy in the range $0\leq\delta\leq 1$, there exists a critical value $(t/J)_{c}$ where the spin gap vanishes as $\Delta/J\propto|(t/J)_{c}-t/J|^{\nu z}$, and a QPT to the ordered magnetic phase occurs in $2d$ and $3d$, whereas no transition happens in $1d$. This is similar to the results in Ref. [11] for $\delta=0$, and it points to a universality similar to that of the isotropic Kondo lattices [1, 11, 22]. The relation between the spin gap and the distance to the QCP, sketched in the inset of Fig. (3), shows that when $t/J$ increases from its strong coupling limit, the triplet spin gap at the wave vector $Q=(\pi,\pi,\pi)$ decreases and vanishes at $t/J=(t/J)_{c}$. Since $\Delta/J\propto|(t/J)_{c}-t/J|^{0.5}$ close to the QPT, we can immediately identify the spin gap exponent $\nu z\approx 0.5$ at the QCP of the Kondo lattice, confirming our earlier theoretical results [1]. Finally, for $\delta<0$ a QPT also exists in $d=2,3$, but no phase transition appears in $1d$. 2.2 Analytical results at the quantum critical trajectory Since quantum phase transitions are generally associated with soft modes at the QCP, where the excitation gap vanishes, physical quantities have power-law temperature dependences determined by the quantum critical exponents [24]; one of them is the specific heat $C_{V}$, which we calculate here. This strategy has been intensively explored in the study of heavy-fermion materials along the so-called quantum critical trajectory $J/t=(J/t)_{c}$, $T\rightarrow 0$, fixing the pressure (in our case the control parameter $J/t$) at its critical value for the disappearance of magnetic order [25]. We therefore calculate analytically the anisotropy dependence of the specific heat at $J/t=(J/t)_{c}$, $T\rightarrow 0$ for both cases, $\delta\ll 1$ and $\delta\approx 1$.
All the calculations rely on two essential approximations: $(i)$ the system is at the quantum critical point $J/t=(J/t)_{c}$ and temperatures $T\rightarrow 0$; $(ii)$ the temperature region where the specific heat is evaluated lies below the Kondo temperature ($T_{K}$). We begin by writing $k=Q+q$ and expanding for small $q$: $\lambda(q)=-d+q^{2}/2+O(q^{4})$. This yields the spectrum of transverse spin triplet excitations as $$\omega_{q}\approx\omega_{0}\sqrt{1+y\lambda(q)}=\sqrt{\Delta^{2}+Dq^{2}},\qquad \omega_{q}^{\prime}\approx\omega_{0}\sqrt{1+y\lambda(q)(1-\delta)}=\sqrt{\Delta^{2}+D(1-\delta)q^{2}+\omega_{0}^{2}\delta},$$ (10) where $\Delta=\Delta^{x}$ is the spin gap energy given by Eq. (9) since $0\leq\delta\leq 1$, $D=\omega_{0}^{2}/2d$ is the spin-wave stiffness at $T=0$, and $\omega_{0}$ is the $z$-polarized dispersionless branch of excitations. Setting $\Delta=0$ at the QCP [24] in the excitation spectrum Eq. (2.2), and using $C_{V}=-T\partial^{2}F/\partial T^{2}$ in Eq. (4), we get $$C_{V}=\frac{S_{d}}{4k_{B}T^{2}\pi^{d}}\int_{0}^{\pi}dq\,q^{d-1}(\omega_{q}^{2}+\omega_{q}^{\prime 2})\left(\sinh^{-2}{\frac{\beta\omega_{q}}{2}}+\sinh^{-2}{\frac{\beta\omega_{q}^{\prime}}{2}}\right),$$ (11) where $S_{d}$ is the solid angle. Equation (11) yields the anisotropy dependence of the specific heat at the quantum critical trajectory, as a contribution of the bosons $t_{x}$ and $t_{y}$. Case $0\leq\delta\ll 1$ — Having shown the relationship between the specific heat $C_{V}$ and $\delta$, we now discuss the case $\delta\ll 1$. Making a change of variables in Eq.
(11) we obtain $$C_{V}(\delta\ll 1)=\frac{S_{d}k_{B}Z^{d/2}}{\pi^{d}}\left(\frac{k_{B}T}{\omega_{0}}\right)^{d}\left[\Upsilon_{1}(d)+\frac{\delta}{4}(\Upsilon_{2}(d)-2\Upsilon_{1}(d))\right],$$ (12) where $\Upsilon_{1}(d)=\int_{0}^{\infty}dx\,x^{d+1}\sinh^{-2}(x/2)$, $\Upsilon_{2}(d)=\int_{0}^{\infty}dx\,x^{d+2}\coth(x/2)\sinh^{-2}(x/2)$ and $x=\beta\omega_{0}q/\sqrt{Z}$. In two dimensions we find $\Upsilon_{1}(2)=24\zeta(3)$ and $\Upsilon_{2}(2)=96\zeta(3)$, where $\zeta$ is the Riemann zeta function. In three dimensions, $\Upsilon_{1}(3)=16\pi^{4}/15$ and $\Upsilon_{2}(3)=16\pi^{4}/3$. For $\delta=0$, the excitation spectra given by Eq. (2.2) coincide and we recover the exact value obtained in a previous work for the isotropic KNM [1]. Case $\delta\approx 1$ — Here it is sufficient to consider $\xi=1-\delta\ll 1$, where $\xi$ is a dimensionless parameter that controls the Ising-like anisotropy in this case. Working in analogy with the preceding case, we obtain $$C_{V}(\delta\approx 1)=\frac{S_{d}k_{B}Z^{d/2}}{4\pi^{d}}\left(\frac{k_{B}T}{\omega_{0}}\right)^{d}\Upsilon_{1}(d)(2-\delta),$$ (13) where we have already substituted the expression for $\xi$. The results above show that the specific heat of the AKNM for $\delta\ll 1$ and $\delta\approx 1$ is only renormalized by the anisotropy, so that $C_{V}\propto T^{d}$ at the quantum critical trajectory for $\delta\ll 1$ and $\delta\approx 1$. Notice that this is consistent with the general scaling result $C_{V}\propto T^{d/z}$ with the dynamic exponent taking the value $z=1$ [24]. Since $z=1$, in three dimensions $d_{eff}=d+z=d_{c}=4$, where $d_{c}$ is the upper critical dimension for the magnetic transition [24]. Consequently, the present approach yields the correct description of the quantum critical point of the Kondo lattices for $d\geq 3$.
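The closed forms quoted above for $\Upsilon_{1}$ and $\Upsilon_{2}$ can be verified by direct quadrature. A quick numerical check (the midpoint rule, cutoff, and step size are arbitrary numerical choices on our part):

```python
import math

def upsilon(d, kind=1, n=100000, xmax=60.0):
    """Midpoint quadrature of Upsilon_1(d) (kind=1) or Upsilon_2(d) (kind=2).

    Upsilon_1(d) = int_0^inf x^{d+1} sinh^{-2}(x/2) dx
    Upsilon_2(d) = int_0^inf x^{d+2} coth(x/2) sinh^{-2}(x/2) dx
    """
    h = xmax / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h                           # midpoint avoids the x = 0 endpoint
        f = x ** (d + 1) / math.sinh(x / 2.0) ** 2  # Upsilon_1 integrand
        if kind == 2:
            f *= x / math.tanh(x / 2.0)             # extra factor x * coth(x/2)
        total += f
    return total * h

ZETA3 = 1.2020569031595943  # Riemann zeta(3)
# Upsilon_1(2) ~ 24 zeta(3), Upsilon_2(2) ~ 96 zeta(3),
# Upsilon_1(3) ~ 16 pi^4 / 15, Upsilon_2(3) ~ 16 pi^4 / 3
u12, u22 = upsilon(2, 1), upsilon(2, 2)
u13, u23 = upsilon(3, 1), upsilon(3, 2)
```

The integrands decay like $x^{d+1}e^{-x}$, so the finite cutoff $x_{\max}=60$ introduces negligible truncation error.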
3 Antiferromagnetic Phase The mean-field approach can be extended to the AF phase by assuming condensation in the $x$ component of the spin triplet: $t_{\mathbf{k},x}=\sqrt{N}\bar{t}\,\delta_{\mathbf{k},\mathbf{Q}}+\eta_{\mathbf{k},x}$, where $\bar{t}$ is its mean value in the ground state and $\eta_{\mathbf{k},x}$ represents the fluctuations. Following the same steps as before, the internal energy becomes $$U^{\prime}=\varepsilon_{0}^{\prime}+\sum_{\mathbf{k}}\left(\omega_{0}n(\omega_{0})+\omega_{\mathbf{k}}n(\omega_{\mathbf{k}})+\omega_{\mathbf{k}}^{\prime}n(\omega_{\mathbf{k}}^{\prime})\right),$$ where $\varepsilon_{0}^{\prime}=N\left[-\frac{3}{4}J\overline{s}^{2}+\mu\overline{s}^{2}-\mu+\left(\frac{J}{4}+\mu-\frac{1}{2}tZ\overline{s}^{2}\right)\overline{t}^{2}\right]+\sum_{\mathbf{k}}(\omega_{\mathbf{k}}+\omega_{\mathbf{k}}^{\prime}-\Lambda_{\mathbf{k}}-\Lambda_{\mathbf{k}}^{\prime})/2$ is the AF ground-state energy. The free energy is now $$F^{\prime}=\varepsilon_{0}^{\prime}-\frac{1}{\beta}\sum_{\mathbf{k}}\ln[1+n(\omega_{\mathbf{k}})]-\frac{1}{\beta}\sum_{\mathbf{k}}\ln[1+n(\omega_{\mathbf{k}}^{\prime})]-\frac{N}{\beta}\ln[1+n(\omega_{0})].$$ (14) Minimizing the free energy Eq.
(14), using $(\partial F^{\prime}/\partial\mu,\partial F^{\prime}/\partial\overline{s},\partial F^{\prime}/\partial\bar{t})=(0,0,0)$, we easily get the following saddle-point equations, $$\overline{s}^{2}=1+\frac{J}{Zt}-\frac{f(\omega_{0})}{2}-\frac{1}{4N}\sum_{\mathbf{k}}\left(\sqrt{1+\frac{2\lambda(\mathbf{k})}{Z}}(1+2n(\omega_{\mathbf{k}}))+\sqrt{1+\frac{2\lambda(\mathbf{k})(1-\delta)}{Z}}(1+2n(\omega_{\mathbf{k}}^{\prime}))\right),$$ $$\overline{t}^{2}=1-\frac{J}{Zt}-\frac{f(\omega_{0})}{2}-\frac{1}{4N}\sum_{\mathbf{k}}\left(\frac{1+2n(\omega_{\mathbf{k}})}{\sqrt{1+\frac{2\lambda(\mathbf{k})}{Z}}}+\frac{1+2n(\omega_{\mathbf{k}}^{\prime})}{\sqrt{1+\frac{2\lambda(\mathbf{k})(1-\delta)}{Z}}}\right),\qquad \mu=\frac{1}{2}Zt\overline{s}^{2}-J/4,$$ (15) with the excitation spectra of the $x$-transverse and $y$-transverse spin triplet states now given by $\omega_{\mathbf{k}}=\frac{1}{2}Zt\overline{s}^{2}\sqrt{1+2\lambda(\mathbf{k})/Z}$ and $\omega_{\mathbf{k}}^{\prime}=\frac{1}{2}Zt\overline{s}^{2}\sqrt{1+2\lambda(\mathbf{k})(1-\delta)/Z}$, respectively. In general, the equations for $\overline{s}$ and $\overline{t}$ in Eq. (3) should be solved together, and for $\delta=0$ the results of Ref. [1] are recovered. Here, in the magnetically ordered state, the condensation of triplets (singlets) follows from the RKKY interaction (Kondo effect). At finite temperatures the condensation of singlets occurs at a temperature scale which, to a first approximation, tracks the exchange $J$, while the energy scale below which the triplet excitations condense is given by the critical Néel temperature ($T_{N}$), calculated in the next section.
Thus, the fact that at the mean-field level both $\overline{s}$ and $\overline{t}$ do not vanish may be interpreted as the coexistence of Kondo screening and antiferromagnetism in the ordered phase [1, 11, 22] for all values of the ratio $J/t<(J/t)_{c}$. 4 Critical line in the AKNM Following the discussion above, the critical line giving the finite-temperature instability of the AF phase for $J/t<(J/t)_{c}$ is obtained by setting $\overline{t}=0$. Hence, from Eq. (3) we obtain the boundary of the AF state as $$\frac{|g|}{Z}=\frac{1}{2N}\sum_{\mathbf{k}}\left(\frac{n(\omega_{\mathbf{k}})}{\sqrt{1+\frac{2\lambda(\mathbf{k})}{Z}}}+\frac{n(\omega_{\mathbf{k}}^{\prime})}{\sqrt{1+\frac{2\lambda(\mathbf{k})(1-\delta)}{Z}}}\right)+\frac{f(\omega_{0})}{2},$$ (16) where $g=|(J/t)_{c}-(J/t)|$ measures the distance to the QCP. The latter is given by $(J/t)_{c}=Z[1-\frac{1}{4N}\sum_{\mathbf{k}}(\frac{1}{\sqrt{1+2\lambda(\mathbf{k})/Z}}+\frac{1}{\sqrt{1+2\lambda(\mathbf{k})(1-\delta)/Z}})]$, which separates an antiferromagnetic long-range-ordered phase from a gapped spin liquid phase. Performing the same analysis as in subsection (2.2), expanding the excitation spectrum close to $\mathbf{Q}=(\pi,\pi,\pi)$, Eq. (16) becomes $$\frac{|g|}{Z}=\frac{S_{d}\omega_{0}}{4\pi^{d}}\int_{0}^{\pi}dq\,q^{d-1}\left(\frac{1}{\omega_{q}}\left(\coth{\frac{\beta\omega_{q}}{2}}-1\right)+\frac{1}{\omega_{q}^{\prime}}\left(\coth{\frac{\beta\omega_{q}^{\prime}}{2}}-1\right)\right),$$ (17) where we have used the fact that, for temperatures $k_{B}T\ll\omega_{0}$, $f(\omega_{0})$ goes to zero faster than the first term of Eq. (16). The equation above allows us to obtain the critical line of the AKNM as a function of the anisotropy parameter $\delta$. 4.1 Case $0\leq\delta\ll 1$ We now demonstrate analytically the appearance of a finite Néel temperature line when a small degree of anisotropy $\delta$ in the $y$ spin component is turned on.
Then, solving Eq. (17) for $0\leq\delta\ll 1$, we get $$\frac{|g|}{Z}\Big|_{\delta\ll 1}=\frac{S_{d}Z^{d/2}}{2\pi^{d}}\left(\frac{k_{B}T}{\omega_{0}}\right)^{d-1}\left[\Phi_{1}(d)+\frac{\delta}{8}(\Phi_{2}(d)+2\Phi_{1}(d))\right],$$ (18) where $\Phi_{1}(d)=\int_{0}^{\infty}dx\,x^{d-2}\left(\coth\frac{x}{2}-1\right)$ and $\Phi_{2}(d)=\int_{0}^{\infty}dx\,x^{d+1}\sinh^{-2}(x/2)$. We notice that the integral $\Phi_{1}(d)$ diverges for $d<3$ (near $x=0$ its integrand behaves as $2x^{d-3}$), showing that there is no critical line in two dimensions at finite temperatures [1, 15] for any anisotropy $\delta\ll 1$, in agreement with the Mermin-Wagner theorem [26]. Nevertheless, for $d\geq 3$ the integrals are finite and the equation for the critical line gives $(T_{N})_{\delta\ll 1}\propto|g|^{\phi}$, with $\phi=1/(d-1)$. If we write the equation for the critical line, $f(g,T)=0$, in the form $(J/t)_{c}(T)-(J/t)_{c}(0)+v_{0}T^{1/\psi}=0$, with $v_{0}$ related to the spin-wave interaction, we identify the shift exponent $\psi=z/(d+z-2)$ [27], which, compared with $\phi$, gives the dynamic exponent $z=1$, a Gaussian result, since the critical line only exists for $d>2$. The temperature dependence of the function $f$ arising from the spin-wave interactions can modify the temperature dependence of physical properties, such as the specific heat, at $J/t=(J/t)_{c}$. However, in the limit $T\rightarrow 0$ we can easily see that the purely Gaussian results for the specific heat calculated in section (2.2) are dominant, in accordance with the mean-field treatment used here. For $\delta=0$ we obtain the well-established result for the critical line in the KNM [1], since the energy spectra of the two excitations coincide. For $\delta\approx 1$, following the same steps as before, it is straightforward to show that the dominant behavior is also $T_{N}(\delta\approx 1)\propto|g|^{\phi}$ for $d\geq 3$, and no critical line exists for $d=2$.
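The convergence structure of $\Phi_{1}(d)$ can be probed numerically. Using the expansion $\coth(x/2)-1=2\sum_{n\geq 1}e^{-nx}$ one gets the closed form $\Phi_{1}(3)=2\zeta(2)=\pi^{2}/3$, while for $d=2$ the integrand behaves as $2/x$ near the origin, so the integral grows without bound as the lower cutoff is reduced. A hedged numerical sketch (cutoffs and grid are our illustrative choices):

```python
import math

def phi1(d, eps, n=100000, xmax=60.0):
    """Midpoint quadrature of Phi_1(d) = int x^{d-2} (coth(x/2) - 1) dx on (eps, xmax)."""
    h = (xmax - eps) / n
    total = 0.0
    for i in range(n):
        x = eps + (i + 0.5) * h
        total += x ** (d - 2) * (1.0 / math.tanh(x / 2.0) - 1.0)
    return total * h

# d = 3: the integral converges; as eps -> 0 it approaches 2*zeta(2) = pi^2 / 3
phi1_3d = phi1(3, 0.0)
# d = 2: the integral grows roughly like -2*ln(eps), i.e. it diverges as eps -> 0,
# which is the numerical face of the Mermin-Wagner argument above
growth = phi1(2, 1e-4) - phi1(2, 1e-2)
```

The $d=2$ growth between the two cutoffs is close to $2\ln(10^{-2}/10^{-4})\approx 9.2$, confirming the logarithmic divergence.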
In summary, we have obtained analytically the expression for the Néel line, below which the triplet excitations condense, close to the QCP for $0\leq\delta\ll 1$ and $\delta\approx 1$. We have shown that this line does not exist for $d=2$ for any value of the anisotropy, as expected, whereas for $d\geq 3$ the power dependence of the critical line on $|g|$ in the presence of the anisotropy is the same as in the original KNM. Therefore, the criticality close to the QCP is governed by the same critical exponents as the isotropic $\delta=0$ case calculated before [1]. 5 Conclusions In conclusion, we have examined the phase diagram of the Kondo necklace model in the presence of an Ising-like anisotropy at zero and low temperatures by means of analytical and numerical techniques. At zero temperature we have derived and solved the self-consistent equations in the Kondo spin liquid phase for any value of $\delta$. This allowed us to calculate the anisotropy dependence of the spin gap for $d=1,2,3$. In the $1d$ case there is no indication of a critical value of $t/J$ at which the gap would vanish, for any value of the anisotropy $\delta$. For $d=2,3$ we found that anisotropy in the range $0<\delta\leq 1$ shifts the QCP slightly, until $\delta=0$, where the excitation spectra coincide. In this range $0\leq\delta\leq 1$ the spin gap exponent is approximately the same, while for $\delta<0$ a jump-like change occurs and the transition belongs to another universality class. In particular, in three dimensions the triplet spin gap for anisotropy $0\leq\delta\leq 1$, close to the wave vector $Q=(\pi,\pi,\pi)$, decreases and vanishes at $t/J=(t/J)_{c}$ with spin gap exponent $\nu z\approx 0.5$, consistent with the dynamic exponent $z=1$ and correlation-length exponent $\nu=1/2$, a result in agreement with the mean-field or Gaussian character of the approximations we have used to deal with the bond-operator Hamiltonian.
On the other hand, at low but finite temperatures, we find that in general the dependence of the critical line on $|g|$ for the AKNM, in the presence of the anisotropy, is the same as in the original KNM. This implies that the critical exponents controlling the transition close to the QCP, for nonzero $\delta$, are the same as those of the isotropic case. We have also obtained the thermodynamic behavior of the specific heat along the quantum critical trajectory $J/t=(J/t)_{c}$, $T\rightarrow 0$. It has a power-law temperature dependence, $C_{V}\propto T^{d}$, a result consistent with scaling theory with the dynamic exponent $z=1$. Therefore, the most essential feature of the Kondo lattices, i.e., the competition between a long-range-ordered state and a disordered state, is clearly retained in the model for $0\leq\delta\leq 1$. The qualitative features regarding the stability of the AF phase are well displayed in the model, and it allows a simple physical interpretation of the phase diagram in anisotropic Kondo lattices. It is left to future work to compare our theoretical results for the AKNM with experimental data, in order to clarify to what extent the estimates of $\delta$ from measured quantities depend on the theoretical tools used. 5.1 Acknowledgments D. Reyes thanks Professor Andre M. C. de Souza for useful computational help. The authors would also like to thank the Brazilian agency CNPq for financial support. References [1] Reyes D and Continentino M A 2007 Phys. Rev. B 76 075114 [2] Larrea J J, Fontes M B, Baggio-Saitovitch E M, Plessel J, Abd-Elmeguid M M, Ferstl J, Geibel C, Pereira A, Jornada A, and Continentino M A 2006 Phys. Rev. B 74 140406(R) [3] Doniach S 1977 Physica B 91 231 [4] Matsushita Y, Gelfand M P and Ishii C 1997 J. Phys. Soc. Jpn. 66 3648 [5] Kotov V N, Sushkov O, Weihong Z, and Oitmaa J 1998 Phys. Rev. Lett. 80 5790 [6] Scalettar R T, Scalapino D J, and Sugar R L 1985 Phys. Rev.
B 31 7316 [7] Jullien R, Fields J N, and Doniach S 1977 Phys. Rev. B 16 4889 [8] Santini P and Sólyom J 1992 Phys. Rev. B 46 7422 [9] Moukouri S, Caron L G, Bourbonnais C, and Hubert L 1995 Phys. Rev. B 51 15920 [10] Otsuka H and Nishino T 1995 Phys. Rev. B 52 15066 [11] Zhang Guang-Ming, Gu Qiang and Yu Lu 2000 Phys. Rev. B 62 69 [12] Langari A and Thalmeier P 2006 Phys. Rev. B 74 024431 [13] Strong S P and Millis A J 1994 Phys. Rev. B 50 9911 [14] Kiselev M N, Aristov D N, and Kikoin K 2005 Phys. Rev. B 71 092404 [15] Reyes D, Continentino M A, Troper A and Saguia A 2005 Physica B 359 714 [16] Continentino M A 1993 Phys. Rev. B 47 11587 [17] Moriya T and Takimoto T 1995 J. Phys. Soc. Jpn. 64 960 [18] Hertz J A 1976 Phys. Rev. B 14 1165 [19] Reyes D and Continentino M A 2007 J. Phys.: Condens. Matter 19 714 [20] Saguia A, Rappoport T G, Boechat B and Continentino M A 2004 Physica A 344 644 [21] Sachdev S and Bhatt R N 1990 Phys. Rev. B 41 9323 [22] Jurecka C and Brenig W 2001 Phys. Rev. B 64 092406 [23] Tsunetsugu H, Sigrist M and Ueda K 1997 Rev. Mod. Phys. 69 809 [24] Continentino M A 2001 Quantum Scaling in Many-Body Systems (Singapore: World Scientific) [25] Stewart G R 2001 Rev. Mod. Phys. 73 797 [26] Mermin N D and Wagner H 1966 Phys. Rev. Lett. 17 1133 [27] Millis A J 1993 Phys. Rev. B 48 7183
A Tale of Planet Formation: From Dust to Planets Beibei Liu 1 Zhejiang Institute of Modern Physics, Department of Physics & Zhejiang University-Purple Mountain Observatory Joint Research Center for Astronomy, Zhejiang University, 38 Zheda Road, Hangzhou 310027, China 12Department of Astronomy and Theoretical Physics, Lund Observatory, Box 43, SE–22100, Sweden 2    Jianghui Ji 3CAS Key Laboratory of Planetary Sciences, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008, China 34CAS Center for Excellence in Comparative Planetology, Hefei 230026, China email: [email protected], [email protected] \vs\noReceived 2020 XXX; accepted 2020 XXX4 Abstract The characterization of exoplanets and of their birth protoplanetary disks has advanced enormously in the last decade. Benefiting from this, our global understanding of planet formation has substantially improved. In this review, we first summarize the state of the art of exoplanet and disk observations. We then present a comprehensive, panoptic view of modern core accretion planet formation scenarios, including dust growth and radial drift, planetesimal formation by the streaming instability, core growth by planetesimal accretion, and pebble accretion. We discuss the key concepts and physical processes in each growth stage and elaborate on the connections between theoretical studies and observational revelations. Finally, we point out the critical questions and future directions of planet formation studies. keywords: planets and satellites: general – planets and satellites: formation – planets and satellites: dynamical evolution and stability – protoplanetary disks \volnopage 2020 Vol. 20 No. XX, 000–000 1 Introduction In this article, we review modern planet formation scenarios in the context of the core accretion paradigm. Since observation and theory are two closely related aspects, we first recap the detection and characterization of exoplanets in Sect.
1.1 and protoplanetary disks in Sect. 1.2. The outline of general planet formation processes is given in Sect. 1.3, classified by the characteristic sizes of the growing planetary bodies. Finally, we introduce the relevant topics that will be covered in the subsequent sections of the paper. 1.1 Exoplanets Half of the $2019$ Nobel Prize in Physics was awarded to Michel Mayor and Didier Queloz, in acknowledgement of their milestone discovery of the first exoplanet orbiting a main-sequence star. This is one of the most influential scientific breakthroughs in astronomy of the past decades. In 1995, these two astronomers detected the exoplanet $51$ Pegasi b around a nearby, sun-like star in the constellation of Pegasus (Mayor & Queloz 1995). Such a discovery was extraordinary and unexpected at that time. It opened an entirely new era in astronomical observations. Since then, the detection of planets beyond our Solar System has developed enormously and grown into a rapidly evolving branch of astronomy. One major exoplanet detection method is called radial velocity (or Doppler spectroscopy). A star and its accompanying planet co-orbit the center of their masses. Observers can see the periodic movement of the star induced by the planet. Due to the Doppler effect, the observed stellar spectral lines are blue-shifted when the star approaches us and red-shifted when the star recedes from us. Therefore, the radial velocity of the star can be acquired by measuring the displacement of the stellar spectral lines. Through this technique, the minimum mass of the planet can be obtained. Since we do not observe the planet directly but infer it from the wobble of the central star, this is an indirect way to obtain the planet's properties. The first exoplanet, $51$ Pegasi b, was discovered by this method. The radial velocity method was also responsible for most exoplanet discoveries in the early planet-hunting era, before the launch of the Kepler satellite.
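The spectroscopic principle can be made concrete with the non-relativistic Doppler relation $v=c\,\Delta\lambda/\lambda$. A minimal sketch; the $50\ \mathrm{m\,s^{-1}}$ semi-amplitude is an illustrative value we assume here (roughly the order of the wobble 51 Peg b induces on its host), not a number taken from this review:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def radial_velocity_from_shift(lambda_obs_nm, lambda_rest_nm):
    """Non-relativistic Doppler: v = c * (lambda_obs - lambda_rest) / lambda_rest.
    Positive v means the star recedes from us (red-shift)."""
    return C_LIGHT * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

def line_shift_nm(v_ms, lambda_rest_nm):
    """Inverse relation: line displacement produced by radial velocity v."""
    return v_ms / C_LIGHT * lambda_rest_nm

# Assumed illustrative numbers: a 50 m/s stellar wobble shifts the
# H-alpha line at 656.28 nm by only ~1e-4 nm (a part in ~6 million)
dlam = line_shift_nm(50.0, 656.28)
```

The tiny fractional shift ($v/c\sim 2\times 10^{-7}$) is why the method demands very high resolution, well-calibrated spectrographs.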
Another leading exoplanet detection method is called transit, which monitors the time variation of a star's brightness to probe the existence of planet(s). When a planet transits in front of its parent star, the surface of the star is partially blocked by the planet and hence the observed stellar flux drops accordingly. This periodic decrement of the stellar flux reflects the size ratio between the planet and the star. Therefore, this method can uniquely determine the radius of the planet. Compared to radial velocity, which requires high-resolution spectroscopic measurements, transit is a photometric method and is thus more efficient in detecting planets. Combining the above two methods, we can obtain both the masses and radii of planets, and therefore deduce their bulk densities and chemical compositions. The Kepler satellite is recognized as the most successful planet-hunting mission to date; it utilized the transit method from space to maximize the detection ability and efficiency (Borucki et al. 2010). The key to the success of the Kepler telescope is that it had both a large field of view and extremely high photometric precision. More than $2600$ confirmed exoplanets and $4000$ planet candidates were detected by Kepler during its nine-year operational lifetime ($2009$-$2018$). Thanks to the vastly increased number of exoplanets detected by Kepler, the analysis of planet properties from a statistical perspective has become feasible for the first time (Lissauer et al. 2011; Batalha et al. 2013; Burke et al. 2014). The observed planets are incredibly diverse in terms of masses, sizes, compositions and orbital properties. As illustrated in Figure 1, the confirmed exoplanets span several orders of magnitude in their masses and orbital periods (data adopted from http://exoplanet.eu/). Based on the above two properties, exoplanets can be classified into the following types: hot Jupiters (Mayor et al.
1997), cold Jupiters (Zhu & Wu 2018), warm (hot) Neptunes (Dong et al. 2018), super-Earths (Borucki et al. 2011) and low-mass rocky planets. Figure 1 also marks one particular type of planet with orbital period less than $1$ day, called ultra-short-period (USP) planets (Sanchis-Ojeda et al. 2014; Winn et al. 2018). The super-Earths strictly refer to the planets with $1.25{<}R_{\rm p}{<}2\ R_{\oplus}$ in Figure 1 (Borucki et al. 2011). A more general definition of super-Earths is also frequently used in the literature, referring to planets with radii between those of Earth and Neptune ($1{<}R_{\rm p}{<}4\ R_{\oplus}$ or $1{<}M_{\rm p}{\lesssim}10\ M_{\oplus}$, Seager et al. 2007; Valencia et al. 2007). In this definition, the super-Earths cover the range from terrestrial-like, rocky-dominated planets to sub-Neptune planets with non-negligible hydrogen envelopes. We use the latter definition of super-Earths in the following discussion of this review. Together with hot and cold Jupiters, these three major types of planets are currently the most well characterized and extensively studied samples in the literature. We summarize below the key observational findings and the underlying physical interpretations for these three types of exoplanets (also see the reviews of Zhou et al. 2012 and Winn & Fabrycky 2015). • Small planets are more common than large planets. When taking the observational bias into account, for solar-type stars the occurrence rates of hot Jupiters and cold Jupiters are $1\%$ and $5{-}10\%$, whereas the occurrence rate of super-Earths is $30\%$ (Cumming et al. 2008; Howard et al. 2010; Mayor et al. 2011; Wright et al. 2012; Dong & Zhu 2013; Petigura et al. 2013; Zhu et al. 2018; Fernandes et al. 2019). Furthermore, planets are so ubiquitous that they greatly outnumber their host stars (Mulders et al. 2018; Zhu et al. 2018). • The occurrence rate of giant planets exhibits strong dependences on both stellar mass (Johnson et al. 2007, 2010; Jones et al.
2016) and metallicity (Santos et al. 2004; Fischer & Valenti 2005; Sousa et al. 2011). The occurrence rate of super-Earths seems to depend much more weakly on stellar metallicity (Sousa et al. 2008; Buchhave et al. 2012, 2014; Wang & Fischer 2015; Schlaufman 2015; Zhu et al. 2016; Zhu 2019). Nevertheless, super-Earths are even more abundant around M dwarfs than around sun-like stars (Howard et al. 2012; Bonfils et al. 2013; Dressing & Charbonneau 2015; Mulders et al. 2015; Yang et al. 2020). • Hot Jupiters are nearly circular, while the mean eccentricity of cold Jupiters is ${\sim}0.25$ (Marcy et al. 2005). The obliquity, defined as the angle between the spin axis of the host star and the orbital angular momentum axis of the planet, can be measured through the Rossiter-McLaughlin effect (Winn 2010). Many hot Jupiters show high obliquities, sometimes even polar or retrograde orbits (Triaud et al. 2010; Albrecht et al. 2012). Theoretically, hot Jupiters were proposed to grow at more distant disk locations and then migrate inward to their present-day orbits (Lin et al. 1996). Planet-disk interaction leads giant planets onto circular and coplanar orbits (Lin & Papaloizou 1993; Artymowicz 1993; Ward 1997; Nelson et al. 2000), while the high eccentricities and inclinations of giant planets can originate from Kozai-Lidov cycles (Kozai 1962; Lidov 1962) induced by a distant companion (Wu & Murray 2003; Fabrycky & Tremaine 2007; Naoz et al. 2011, 2013; Dong et al. 2014; Anderson et al. 2016) or from planet-planet scattering (Rasio & Ford 1996; Chatterjee et al. 2008; Jurić & Tremaine 2008; Ford & Rasio 2008; Dawson & Murray-Clay 2013). The latter two dynamical processes could result in the observed high obliquities. On the other hand, such misalignments could also arise from re-orientation of the host star's spin through internal waves (Rogers et al.
2012; Lai 2012) or from protoplanetary disks tilted through binary-disk interactions (Lai 2014; Matsakos & Königl 2017; Zanazzi & Lai 2018). • Based on the Kepler data, multi-transit systems have relatively low eccentricities and inclinations, while single-transit systems exhibit much higher eccentricities and inclinations (Tremaine & Dong 2012; Johansen et al. 2012; Fabrycky et al. 2014; Xie et al. 2016; Zhu et al. 2018). One hypothesis is that these single-transit planets come from multiple systems. Their orbits are further excited/disrupted by long-term planet-planet interactions or by outer companions, causing them to appear as "singles" in transit surveys (Pu & Wu 2015; Lai & Pu 2017; Mustill et al. 2017). • Hot Jupiters/Neptunes seldom have nearby companions out to a few AU (Steffen et al. 2012; Dong et al. 2018), consistent with the Kozai-Lidov cycle and planet-planet scattering scenarios (Mustill et al. 2015). On the contrary, nearly half of the warm Jupiters co-exist with low-mass planets (Huang et al. 2016). The cold Jupiters also seem to be commonly accompanied by close-in super-Earths (Zhu & Wu 2018; Bryan et al. 2019). Besides, $25\%{-}30\%$ of the systems with a cold Jupiter are found to host additional giant planets (Wright et al. 2009; Wittenmyer et al. 2020). • The period ratios of adjacent planet pairs neither show strong pile-ups at mean motion resonances (MMRs) nor are they uniformly distributed. These planets exhibit an asymmetric distribution around major resonances, such as the $2$:$1$ and $3$:$2$ MMRs (Figure 6 of Winn & Fabrycky 2015). Different scenarios have been proposed to explain these features, including tidal damping (Lithwick & Wu 2012; Batygin & Morbidelli 2013; Lee et al. 2013; Delisle & Laskar 2014; Xie 2014), retreat of the inner magnetospheric cavity (Liu et al.
2017; Liu & Ormel 2017), resonant overstability (Goldreich & Schlichting 2014), interaction with planetesimals (Chatterjee & Ford 2015), stochastic migration in highly turbulent disks (Rein 2012; Batygin & Adams 2017) or in shock-generated inviscid disks (McNally et al. 2019; Yu et al. 2010), mass growth of the planets (Petrovich et al. 2013; Wang & Ji 2017), and dynamical instability of tightly packed planetary chains (Izidoro et al. 2017, 2019; Ogihara et al. 2015, 2018). • The occurrence rate of close-in super-Earths has a bimodal radius distribution, with a factor-of-two drop at $R_{\rm p}{\sim}1.5{-}2R_{\oplus}$ (Fulton et al. 2017; Fulton & Petigura 2018; Van Eylen et al. 2018). This so-called planetary radius valley implies a composition transition from rocky planets without $\rm H$/$\rm He$ gaseous envelopes to planets with envelopes of a few percent in mass (Lopez & Fortney 2013; Owen & Wu 2013). The above radius gap can be explained by gas mass loss due to stellar photoevaporation (Owen & Wu 2017; Jin & Mordasini 2018) or core-powered heating (Ginzburg et al. 2018; Gupta & Schlichting 2019). Besides, giant impacts may also contribute to this compositional diversity by stripping the planets' primordial atmospheres through disruptive collisions (Liu et al. 2015; Inamdar & Schlichting 2016). The composition of these super-Earths is inferred to be rock-dominated, ruling out low-density, water-world planets (Owen & Wu 2017). Although this interpretation should be taken with caution, the water-deficit outcome may be caused by short-lived radionuclides dehydrating planetesimals during their early accretion phase (Lichtenberg et al. 2019), by thermal effects in planetary interiors during the long-term evolution phase (Vazan et al. 2018), or by planets experiencing the runaway greenhouse effect and losing substantial surface water through photo-dissociation (Luger & Barnes 2015; Tian & Ida 2015).
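The compositional inferences above rest on bulk densities obtained by combining a transit radius with a radial-velocity mass, $\rho=3M/(4\pi R^{3})$. A minimal sketch with illustrative, assumed numbers (the $5\,M_{\oplus}$, $2.5\,R_{\oplus}$ sub-Neptune is a made-up example, not a catalogued planet):

```python
import math

M_EARTH = 5.972e27   # Earth mass, g
R_EARTH = 6.371e8    # Earth radius, cm

def bulk_density(mass_mearth, radius_rearth):
    """Mean density in g/cm^3 from mass (Earth masses) and radius (Earth radii)."""
    m = mass_mearth * M_EARTH
    r = radius_rearth * R_EARTH
    return 3.0 * m / (4.0 * math.pi * r ** 3)

rho_earth = bulk_density(1.0, 1.0)    # ~5.5 g/cm^3: consistent with a rocky bulk
rho_subnep = bulk_density(5.0, 2.5)   # ~1.8 g/cm^3: low density hints at an H/He envelope
```

Densities on either side of the radius valley differ by a factor of a few, which is how the rocky versus envelope-bearing populations are distinguished observationally.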
The above results summarize the current demographic and orbital properties of exoplanets (for the physics of exoplanet atmospheres, we recommend the recent review of Zhang 2020). Figure 2 shows the launched and planned space missions for exoplanet detection and characterization from NASA (the National Aeronautics and Space Administration), ESA (the European Space Agency) and CNSA/CAS (the China National Space Administration/Chinese Academy of Sciences). For instance, the successor of the Kepler mission, TESS (Transiting Exoplanet Survey Satellite), launched in $2018$, aims at discovering short-period planets around nearby stars (Ricker et al. 2015; Huang et al. 2018a). Compared to Kepler, the advantage of TESS is that its target stars are easier for ground-based and space-based follow-up characterization observations. Meanwhile, three Chinese space missions have been initiated and approved for the detection of exoplanets in the coming decades. The Chinese Space Station Telescope (CSST), scheduled for launch in $2024$, will survey mature Jupiter-like planets, classical Neptunes and super-Earths around solar-type stars using the high-contrast imaging technique, and is expected to discover tens of exoplanetary candidates and brown dwarfs. The Closeby Habitable Exoplanet Survey (CHES) mission aims at searching for terrestrial planets in the habitable zones of solar-type stars within $10$ pc by using space astrometry. CHES will observe its target stars with a high astrometric precision at the microarcsecond level from the Sun-Earth L$2$ point. The mission is expected to discover at least $50$ Earth-like planets or super-Earths around $100$ FGK stars with well-determined masses and orbital parameters. Miyin, on the other hand, is designed to detect habitable exoplanets around nearby stars with interferometry.
To directly image these exoplanets and assess their habitability, the mission will launch spacecraft carrying groups of telescopes working at mid-infrared wavelengths, which ensures an extremely high spatial resolution of $0.01$ arcsecond. 1.2 Protoplanetary disk observation Planets form in protoplanetary disks surrounding their infant stars. Since the birth and growth of planets are tightly related to their forming environment, studying the physical and chemical conditions of protoplanetary disks is essential for understanding the planet formation processes. Observations of young protoplanetary disks have been conducted extensively in the literature. Here we briefly introduce the up-to-date observations of disk substructures and provide several hints for the existence of emerging planets. The discussions on the disk solid masses and dust sizes will be presented in Sect. 2.2. Thanks to the unprecedentedly high sensitivity and angular resolution of ALMA (Atacama Large Millimeter/submillimeter Array), we now have the capability to reveal disk structures at a spectacular level of detail. The spatial resolution of the resolved disks in nearby star-forming regions is approximately $3{-}5$ AU. Axisymmetric rings, gaps, inner cavities and spirals are commonly observed among these disks over a wide range of ages and masses of stellar hosts (Andrews et al. 2018; Huang et al. 2018b; Long et al. 2018). The widely observed disk substructures are in disagreement with the traditional picture of a smooth disk profile. For instance, Figure 3a shows the HL Tau disk with a series of concentric bright rings separated by faint gaps in the dust continuum emission (ALMA Partnership et al. 2015). There are different explanations for the formation of these ring-like substructures. These features can be explained by grain growth (Zhang et al. 2015) or dust sintering (Okuzumi et al.
2016) at the condensation fronts of major volatile species, zonal flows in magnetized disks (Flock et al. 2015), a combined effect of the above mechanisms (Hu et al. 2019), or secular gravitational instability (Takahashi & Inutsuka 2014). Apart from those interpretations, the most commonly accepted scenario is that these substructures are induced by gap-opening planets (Pinilla et al. 2012b; Dipierro et al. 2015; Dong et al. 2015c; Jin et al. 2016; Fedele et al. 2017; Liu et al. 2018; Zhang et al. 2018; Liu et al. 2019d; Eriksson et al. 2020). One important note is that, if these ring-like features are indeed of planetary origin, the young age of HL Tau ($t{<}1$ Myr) implies that planet formation may be faster than previously thought. In addition, this early formation hypothesis may also be supported by Harsono et al. (2018), who analyzed the radial distributions of disk dust and gas around $\rm TMC1A$ and suggested that millimetre-sized grains have already formed around such a young Class I object within $10^{5}$ yr. Apart from the distinctive features of the dust component, precise gas velocity maps can also be measured from the CO isotope emissions. The kinematic deviations of gas velocities from the Keplerian flow, together with dust gaps detected at the same disk locations, strongly indicate embedded, gap-opening planets (Teague et al. 2018; Pinte et al. 2018, 2020). The spiral structures revealed in disks by scattered-light images have been proposed to have different origins as well (Muto et al. 2012; Grady et al. 2012; Stolker et al. 2016; Benisty et al. 2016; Muro-Arena et al. 2020). For instance, these patterns can be explained by density waves excited by planets (Zhu et al. 2015; Dong et al. 2015b; Fung & Dong 2015; Bae & Zhu 2018). In the early phase when the disk is massive and self-gravitating, the gravitational instability can also induce large-scale spiral arms (Lodato & Rice 2005; Dong et al. 2015a).
In addition, spirals might also be caused by the shadow from a warped disk (Montesinos et al. 2016), or by vortices triggered by the Rossby wave instability (Li et al. 2000; Huang et al. 2019). The above disk substructures and kinematics can be treated as indirect indications of planets. More straightforward evidence of embedded planets and ongoing planet formation is presented as follows. The first clue comes from the radial velocity measurements of young stars, where hot Jupiter candidates are reported around CI Tau (Johns-Krull et al. 2016) and V$830$ Tau (Donati et al. 2016). 333Donati et al. (2020) pointed out that the RV modulations of CI Tau may also be attributed to stellar activity. Furthermore, Plavchan et al. (2020) discovered a Neptune-size planet co-existing with a debris disk around the nearby M dwarf star $\rm AU\ Mic$ through transit surveys. All the above stars are in their pre-main-sequence phase, with ages of approximately $10$ Myr. In addition, two embedded planets have been detected in PDS $70$’s protoplanetary disk using SPHERE (the Spectro-Polarimetric High-contrast Exoplanet REsearch instrument) on ESO’s VLT (the European Southern Observatory’s Very Large Telescope, Keppler et al. 2018; Müller et al. 2018). Figure 3b shows a synthetic image of the PDS $70$ system, where two planets reside inside the gap of their protoplanetary disk. This is the first time that young planets have been directly imaged in their birth environment. Further analyses with submillimeter continuum and resolved H$\alpha$ line emissions indicated the presence of a circumplanetary disk (Isella et al. 2019) as well as ongoing protoplanet gas accretion (Haffert et al. 2019). All the above findings, combined with the previous results from the disk morphologies/kinematics, provide valuable constraints on how, when and where planets can form. 1.3 Overview of planet formation The study of planet formation is a highly multi-scale and multi-physics subject.
The size of the planetary body increases by more than $17$ orders of magnitude, from (sub)micron-sized dust grains to super-Earths/gas giant planets of ${>}10^{4}$ km. This results in different physical mechanisms operating at different length scales and in different growth stages. We categorize the planetary bodies into four characteristic sizes: $\mu$m-sized dust grains, millimeter/centimeter (mm/cm)-sized pebbles, $100$-km-sized planetesimals, and larger-than-$1000$-km-sized protoplanets444 Protoplanets are sometimes also termed protoplanetary embryos in the literature. We do not conceptually distinguish these two terms and use them interchangeably./planets. The final planets are either rocky-dominated terrestrial planets/super-Earths, or gas-dominated giant planets. Chronologically, planet formation can be divided into the following three stages: from dust to pebbles (Section 2), from pebbles to planetesimals (Section 3), and from planetesimals to protoplanets and planets (Sections 4, 5 and 6). Figure 4 is a sketch of planet formation with the characteristic body sizes and dominant physical processes. Small dust grains coagulate into larger particles in the beginning. Direct sticking nevertheless stalls for pebbles of roughly mm/cm size (Güttler et al. 2010). Other mechanisms are needed to continue the growth toward larger bodies. One leading mechanism is the streaming instability (Youdin & Goodman 2005), which clusters pebbles into dense filaments that directly collapse into planetesimals through their collective self-gravity. The subsequent growth of planetesimals proceeds by accreting surrounding planetesimals and/or pebbles that drift inward from the outer part of the disk. When the core mass reaches a critical value (${\sim}10\ M_{\oplus}$, Pollack et al. 1996) and there is still ample disk gas left, the protoplanets can accrete surrounding gas rapidly to form a massive gas giant planet on a timescale much shorter than the disk lifetime.
Otherwise, protoplanets only modestly accrete gas and form low-mass terrestrial planets or super-Earths. Besides mass growth, planetary bodies also exchange angular momentum with the disk gas, which induces orbital migration. For instance, solid particles and small planetesimals mainly feel aerodynamic gas drag, and their orbital decay is termed radial drift (Adachi et al. 1976; Weidenschilling 1977). Meanwhile, large planetesimals/planets interact gravitationally with the disk gas, and the corresponding movement is termed planet migration (Goldreich & Tremaine 1979, 1980; Lin & Papaloizou 1986). We discuss how planetary bodies grow in each stage in the following sections. Our focus is on how state-of-the-art planet formation models connect with the observational frontiers. Due to the length of the paper, we do not go deep into the topics of planet migration and gas accretion in the gas-rich disk phase, or the long-term dynamical evolution of planetary systems in the gas-free phase. The key relevant questions that will be discussed in this paper are listed as follows: • What are the challenges in dust coagulation? What is the characteristic size of the particles that $\mu$m-sized dust grains can directly grow into (Section 2)? • How do planetesimals form by the streaming instability? When and where are planetesimals likely to form (Section 3)? • What are the differences between planetesimal accretion (Section 4) and pebble accretion (Section 5)? • Which is the dominant accretion channel for planetesimal/protoplanet growth (Section 6)? 2 From dust to pebbles In this section we review the first stage of planet formation: the growth and radial drift of dust grains. We focus on the comparison between state-of-the-art theoretical and laboratory studies and the latest disk observations.
2.1 Theoretical and laboratory studies 2.1.1 Radial drift of solid particles Since the gas in protoplanetary disks is pressure-supported, it rotates around the central star at a velocity $v_{\rm\phi,g}=(1-\eta)v_{\rm K}$, where $\phi$ refers to the azimuthal direction in a cylindrical coordinate, $v_{\rm K}{\equiv}\sqrt{GM_{\star}/r}$ is the Keplerian velocity at the radial disk distance $r$, $G$ is the gravitational constant and $M_{\star}$ is the mass of the central star. The headwind prefactor that measures the disk pressure gradient is given by (Nakagawa et al. 1986) $$\eta=-\frac{c_{\rm s}^{2}}{2v_{\rm K}^{2}}\frac{\partial\ln P_{\rm g}}{\partial\ln r}=\frac{1.5-0.5k_{\rm T}-k_{\Sigma}}{2}\left(\frac{H_{\rm g}}{r}\right)^{2},$$ (1) where $c_{\rm s}$ is the gas sound speed, $P_{\rm g}$ is the gas pressure, $H_{\rm g}$ is the gas disk scale height, $h_{\rm g}{=}H_{\rm g}/r$ is the disk aspect ratio, and $k_{\Sigma}$ and $k_{\rm T}$ are the power-law gradients of the gas surface density and temperature, respectively. The relative velocity between $v_{\rm\phi,g}$ and $v_{\rm K}$ is often referred to as the headwind velocity $\eta v_{\rm K}$. In most regions of the protoplanetary disk, the pressure gradient is negative, and the gas orbits at a sub-Keplerian velocity. All notations used in this paper are listed in Table 1. A variety of disk models are used for studying young protoplanetary disks, of which the Minimum Mass Solar Nebula (MMSN) model is most commonly adopted. The MMSN model represents the minimum amount of solids necessary to build the Solar System planets (Hayashi 1981), and is given by $$\Sigma_{\rm g}=1700\left(\frac{r}{1\ \rm AU}\right)^{-3/2}{\rm\ g\ cm^{-2}},\ T_{\rm g}=280\left(\frac{r}{1\ \rm AU}\right)^{-1/2}{\rm\ K},\ h_{\rm g}=3.3\times 10^{-2}\left(\frac{r}{1\ \rm AU}\right)^{1/4}.$$ (2) Therefore, at $1$ AU orbital distance $\eta{=}1.8\times 10^{-3}$ and the headwind velocity $\eta v_{\rm K}{\simeq}50\rm\ m\ s^{-1}$.
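As a quick numerical check of Eqs. (1) and (2), the sketch below evaluates the headwind prefactor for the MMSN around a solar-mass star; the function names and the power-law exponents $k_{\rm T}=-0.5$, $k_{\Sigma}=-1.5$ (read off from Eq. 2) are our own choices, not notation from the text.

```python
import math

# A numerical check of Eqs. (1)-(2); the MMSN exponents k_T = -0.5 and
# k_Sigma = -1.5 are read off from Eq. (2), and the function names are ours.
G = 6.674e-8       # gravitational constant [cgs]
M_sun = 1.989e33   # solar mass [g]
AU = 1.496e13      # astronomical unit [cm]

def eta_headwind(r_au, k_T=-0.5, k_Sigma=-1.5):
    """Headwind prefactor, Eq. (1): (1.5 - 0.5*k_T - k_Sigma)/2 * h_g^2."""
    h_g = 3.3e-2 * r_au**0.25              # MMSN aspect ratio, Eq. (2)
    return 0.5 * (1.5 - 0.5 * k_T - k_Sigma) * h_g**2

def v_kepler(r_au):
    """Keplerian velocity around a solar-mass star [cm/s]."""
    return math.sqrt(G * M_sun / (r_au * AU))

eta = eta_headwind(1.0)
print(f"eta(1 AU)        = {eta:.2e}")                            # ~1.8e-3
print(f"headwind at 1 AU = {eta * v_kepler(1.0) / 100:.0f} m/s")  # ~50 m/s
```

The recovered values reproduce the numbers quoted in the text for $1$ AU.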
Here we choose the MMSN as the canonical disk model. Other disk surface density models include the minimum mass extrasolar nebula (MMEN) model, based on mass extrapolation from the short-period exoplanets (Chiang & Laughlin 2013), models obtained from the emission of dust disks at (sub)millimeter wavelengths (Andrews et al. 2009), and models inferred from the disk accretion rate and viscous theory (Lynden-Bell & Pringle 1974; Hartmann et al. 1998). The disk heating source and dust opacity determine the gas temperature profile. The dominant heating mechanisms include viscous dissipation (Ruden & Lin 1986; Garaud & Lin 2007) and stellar irradiation (Chiang & Goldreich 1997; Bell et al. 1997; Dullemond et al. 2001; Dullemond & Dominik 2004). Unlike the gas, a solid particle does not feel the pressure gradient force and tends to move at the Keplerian velocity. The solid particle experiences a hydrodynamic gas drag when its velocity deviates from that of the gas (Whipple 1972; Weidenschilling 1977): $$\vec{F_{\rm drag}}=\begin{cases}{\displaystyle-\frac{4\pi}{3}\rho_{\rm g}R^{2}v_{\rm th}}\vec{\Delta v}\hfill\hskip 28.452756pt{\rm[Epstein\ drag\ law]},\\ {\displaystyle-\frac{1}{2}C_{\rm D}\pi R^{2}\rho_{\rm g}\Delta v\vec{\Delta v}}\hfill\hskip 28.452756pt{\rm[Stokes\ drag\ law]},\end{cases}$$ (3) where $R$ is the radius of a spherical particle, $\rho_{\rm g}$ is the gas density, $v_{\rm th}{=}\sqrt{8/\pi}c_{\rm s}$ is the mean thermal velocity of the gas, $\vec{\Delta v}{=}\vec{v}-\vec{v_{\rm g}}$ is the relative velocity between the particle and gas, and $C_{\rm D}$ is given by $$C_{\rm D}=\begin{cases}{\displaystyle{24R_{\rm e}^{-1}}}\hfill\hskip 28.452756pt\ R_{\rm e}<1,\\ {\displaystyle{24R_{\rm e}^{-0.6}}}\hfill\hskip 28.452756pt\ 1\leq R_{\rm e}\leq 800,\\ {\displaystyle{0.5}}\hfill\hskip 28.452756pt\ R_{\rm e}>800.\end{cases}$$ (4) The drag coefficient $C_{\rm D}$ depends on the particle’s Reynolds number $R_{\rm e}{=}2R\Delta v/\nu_{\rm mol}$, where $\nu_{\rm mol}{=}\lambda_{\rm mfp}v_{\rm th}/2$ is the kinematic molecular viscosity, $\lambda_{\rm mfp}{=}m_{\rm mol}/\sigma_{\rm mol}\rho_{\rm g}$ is the gas mean free path, $m_{\rm mol}{=}\mu m_{\rm H}$ and $\sigma_{\rm mol}{=}2{\times}10^{-15}\ \rm cm^{2}$ are the mass and collisional cross section of the gas molecule, and $\mu{=}2.33$ and $m_{\rm H}{=}1.67\times 10^{-24}\ \rm g$ are the gas mean molecular weight and the hydrogen atom mass. The gas drag acceleration can also be written as $\vec{a_{\rm drag}}{=}-(\vec{v}-\vec{v_{\rm g}})/t_{\rm stop}$, where $t_{\rm stop}$ is the stopping time of the particle, which quantifies how fast the particle adjusts its velocity toward that of the surrounding gas. The stopping time of the particle varies in different regimes. For instance, $$t_{\rm stop}=\begin{cases}{\displaystyle{\frac{R\rho_{\bullet}}{v_{\rm th}\rho_{\rm g}}}}\hfill\hskip 42.679134pt{\rm when}\ R<9/4\lambda_{\rm mfp}\hskip 8.535827pt{\rm[Epstein\ regime]},\\ {\displaystyle{\frac{4R}{9\lambda_{\rm mfp}}\frac{R\rho_{\bullet}}{v_{\rm th}\rho_{\rm g}}}}\hfill\hskip 28.452756pt\ {\rm when}\ R\geq 9/4\lambda_{\rm mfp}\hskip 8.535827pt{\rm[Stokes\ regime]},\end{cases}$$ (5) where $\rho_{\bullet}$ is the internal density of the solid particle. We note that the above expression in the Stokes regime holds when $R_{\rm e}{\lesssim}1$. As the particle’s size increases, $F_{\rm drag}$ eventually becomes quadratic in $\Delta v$, and the stopping time becomes inversely proportional to $\Delta v$. The stopping time is also widely expressed in the dimensionless form $\tau_{\rm s}{=}t_{\rm stop}\Omega_{\rm K}$, where $\tau_{\rm s}$ is termed the Stokes number and $\Omega_{\rm K}{=}v_{\rm K}/r$ is the Keplerian angular frequency. Generally, pebbles are considered to be mm/cm-sized small rocks. However, from a hydrodynamical perspective, pebbles refer specifically to solid particles with Stokes numbers approximately in the range $10^{-3}$ to $1$. As will be demonstrated in Sect.
5, particles with such Stokes numbers are marginally coupled to the disk gas, and can be efficiently accreted by larger protoplanetary bodies, such as planetesimals/planets. The radial and azimuthal velocities of the solid particle with respect to the Keplerian motion are given by Nakagawa et al. (1986), $$\begin{cases}{\displaystyle v_{\rm r}=-\frac{2\tau_{\rm s}}{1+\tau_{\rm s}^{2}}\eta v_{\rm K}+\frac{1}{1+\tau_{\rm s}^{2}}v_{\rm r,g}}\\ {\displaystyle v_{\phi}=-\frac{1}{1+\tau_{\rm s}^{2}}\eta v_{\rm K}+\frac{\tau_{\rm s}}{2(1+\tau_{\rm s}^{2})}v_{\rm r,g}},\\ \end{cases}$$ (6) where $v_{\rm r,g}$ in the second term on the right-hand side is the gas radial velocity due to disk accretion, which is much lower than the headwind velocity $\eta v_{\rm K}$ in the first term. From Eq. 6, the radial velocity of the solid particle peaks at $\tau_{\rm s}{=}1$. For the MMSN, such particles are roughly meter-sized at $1$ AU and cm-sized at $10$ AU. They are strongly affected by the gas and drift toward the central star within approximately $100$ orbital timescales. For pebble-sized particles of $\tau_{\rm s}{\approx}10^{-2}{-}1$ and in the protoplanetary disk regions of $r{\lesssim}{50}$ AU, the radial drift timescale is shorter than the gas disk lifetime (${\sim}3$ Myr, Haisch et al. 2001). Therefore, due to the rapid inward drift, the survival of these high Stokes number particles in protoplanetary disks is one of the long-standing conundrums in planet formation (Adachi et al. 1976; Weidenschilling 1977). 2.1.2 Dust coagulation The primordial solids in protoplanetary disks are the dust grains originating from the interstellar medium (ISM). These solid particles follow a size distribution $n(R){\propto}R^{-3.5}$, with sizes ranging from nanometers to (sub)microns (Mathis et al. 1977). The microphysics of grain growth is not controlled by gravity, but relies on electromagnetic interactions, such as the van der Waals force.
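The drag and drift relations above (Eqs. 2, 5 and 6) can be sketched numerically. A minimal example for the MMSN midplane follows; all function and variable names are our own, and the internal solid density of $1.5\rm\ g\ cm^{-3}$ is an assumed value, not one given in the text.

```python
import math

# A sketch evaluating Eqs. (2), (5) and (6) at the MMSN midplane.
# rho_solid = 1.5 g/cm^3 is an assumed internal density.
G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13   # cgs
m_mol = 2.33 * 1.67e-24                       # mu * m_H [g]
sigma_mol = 2e-15                             # collisional cross section [cm^2]

def stokes_number(R, r_au, rho_solid=1.5):
    """Dimensionless stopping time tau_s = t_stop * Omega_K, Eq. (5)."""
    Omega_K = math.sqrt(G * M_sun / (r_au * AU)**3)
    H_g = 3.3e-2 * r_au**0.25 * r_au * AU          # gas scale height [cm]
    v_th = math.sqrt(8 / math.pi) * H_g * Omega_K  # v_th = sqrt(8/pi) c_s
    rho_g = 1700 * r_au**-1.5 / (math.sqrt(2 * math.pi) * H_g)
    lam = m_mol / (sigma_mol * rho_g)              # gas mean free path [cm]
    t_stop = R * rho_solid / (v_th * rho_g)        # Epstein regime
    if R >= 9 * lam / 4:                           # Stokes regime (Re < 1)
        t_stop *= 4 * R / (9 * lam)
    return t_stop * Omega_K

def v_drift(tau_s, eta_vk):
    """Radial drift speed, Eq. (6), neglecting the small v_r,g term."""
    return -2 * tau_s / (1 + tau_s**2) * eta_vk

print(f"mm grain at 1 AU: tau_s = {stokes_number(0.1, 1.0):.1e}")
print(f"m  body at 1 AU : tau_s = {stokes_number(100.0, 1.0):.1f}")
# |v_r| peaks at tau_s = 1; with eta*v_K ~ 50 m/s the drift time from
# 1 AU is of order 10^2 yr, i.e. ~100 orbits, as stated in the text.
t_drift_yr = 1.496e11 / abs(v_drift(1.0, 50.0)) / 3.156e7
print(f"drift time at 1 AU for tau_s = 1: {t_drift_yr:.0f} yr")
```

The Stokes-regime branch applies only for $R_{\rm e}\lesssim 1$, consistent with the caveat after Eq. (5).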
Such inter-particle attractive forces bring small dust grains together into large aggregates through pairwise collisions. The collision outcome depends on the impact velocity between dust particles. In the early phase of gentle, low-velocity collisions, (sub)micron-sized dust grains stick together to form large aggregates with porous structures. This is referred to as the “hit-and-stick” regime. As the growth proceeds, the collision velocity increases with the size of the particles. In this intermediate velocity regime, the aggregates are restructured and compacted (Dominik & Tielens 1997; Blum & Wurm 2000). The above process increases the mass-to-area ratio, and thus the Stokes number, of the particles, resulting in more energetic collisions. Further mass growth is terminated by either bouncing or fragmentation due to high-velocity, catastrophic impacts (Güttler et al. 2010). As a result, particles only grow up to millimeter-to-centimeter-sized pebbles under nominal protoplanetary disk conditions (Zsom et al. 2010; Birnstiel et al. 2012). The sticking and growth patterns of the dust aggregates depend on their material properties. In the inner protoplanetary disk regions within a few AU, silicates are the main constituent of dust grains. During collisions, these silicate aggregates bounce off or even fragment completely at a threshold velocity of approximately $1\rm\ m\ s^{-1}$ (Blum & Wurm 2008). Meanwhile, in the regions outside the water-ice line, the dust grains are dominated by water-ice. The icy or ice-coated aggregates are more porous than silicate aggregates, and thus bouncing is less evident for them (Wada et al. 2011). In addition, sticking still occurs at collision velocities of $10\rm\ m\ s^{-1}$ for the icy aggregates. Due to a higher surface energy and a lower elastic modulus, these icy aggregates are stickier than silicate ones (Gundlach & Blum 2015).
Nevertheless, based on recent laboratory experiments, Musiolik & Wurm (2019) found that the surface energy of icy aggregates is comparable to that of silicates when the disk temperature is lower than $180$ K. If this is true, it implies that the actual difference in the growth patterns of the above two types of aggregates might be less pronounced than anticipated in the literature (also see Gundlach et al. 2018 and Steinpilz et al. 2019). Theoretical studies of global dust coagulation and radial transport have shown that (sub)micron-sized dust grains succeed in growing to mm/cm-sized pebbles (Ormel et al. 2007; Brauer et al. 2008; Zsom et al. 2010; Birnstiel et al. 2010, 2012; Krijt et al. 2016). The size of pebbles is either regulated by the radial drift in the outer disk region of $r{\gtrsim}10$ AU, or limited by bouncing/fragmentation in the inner disk region of $r{<}10$ AU (e.g., Fig. 3 of Testi et al. 2014). Nevertheless, further mass increase by incremental growth becomes problematic, due to the above-mentioned bouncing and fragmentation barriers. 2.2 Observational studies 2.2.1 Evidence for dust radial drift The most straightforward evidence for the radial drift of dust comes from the size comparison between the dust and gas components of the disks. The sizes of the dust and gas disks can be separately inferred from the millimeter dust continuum emission and the molecular line emission of $\rm CO$ isotopes (Dutrey et al. 1998; Hughes et al. 2008). By these comparisons, dust disks are generally found to be much smaller than gas disks, indicating that the radial drift of millimeter dust particles has already taken place at the corresponding ages of the systems (Andrews et al. 2012; Ansdell et al. 2018; Jin et al. 2019; Trapman et al. 2020). The other evidence comes from the commonly observed substructures in protoplanetary disks, such as cavities, rings and gaps.
These features resolve the drift timescale problem; otherwise, pebbles would be depleted in the disk regions of ${r\lesssim}50$ AU within a few Myr. The difference in the spatial distributions of grains with different sizes is usually used to test for the presence of pressure bumps, and is a strong indicator of the mobility of dust grains. Disks with large inner cavities/holes of a few tens of AU are called ‘transitional disks’ (Calvet et al. 2005; Espaillat et al. 2014; Owen 2016). In these disks, different spatial distributions are observed in the emission of the millimeter continuum, infrared scattered light and/or molecular lines (Dong et al. 2012; van der Marel et al. 2013, 2015). One should note that the first probes the mm-sized pebbles, while the latter two trace $\mu$m grains and gas, respectively. The above phenomena can be explained by the dust filtration effect (Rice et al. 2006; Zhu et al. 2012; Pinilla et al. 2012a), whereby small grains are tightly coupled to the gas flow, while large pebbles drift toward and halt at the local pressure maxima, resulting in mm-sized dust cavities larger than the gas/small-grain cavities. Similarly, such variation between the gas and dust components is also resolved in ring-shaped substructures (Isella et al. 2016). Furthermore, the drift of pebbles also leads to a radial variation of the chemical composition of the disk gas. For instance, in the protoplanetary disk the $\rm CO$ molecule condenses into solids outside of the $\rm CO$-ice line while it sublimates into vapor inside. Since some fraction of $\rm C$ is in refractory materials, the disk gas interior to the $\rm CO$-ice line is expected to have a $\rm C/H$ lower than the stellar value when pebbles are assumed to be static.
However, when these pebbles continuously drift inward and cross the $\rm CO$-ice line, a substantial amount of $\rm C$ is delivered interior to the ice line and sublimated into vapor, which naturally increases $\rm C/H$ interior to the $\rm CO$-ice line (Krijt et al. 2018). Comparing the $\rm C/H$ in the stellar photosphere and in the disk gas, Zhang et al. (2020) first reported an elevated $\rm C/H$ interior to the $\rm CO$-ice line in the HD $163296$ disk, which is $1{-}2$ times higher than the stellar value. This $\rm C/H$ enrichment is in line with the large-scale radial drift of icy dust particles. 2.2.2 Constraints on pebble size Opacity index Let us first look at what we can learn from the dust continuum emission of young protoplanetary disks at (sub)millimeter and radio wavelengths. The observed intensity is $I_{\nu}{=}I_{\nu 0}[1-\exp(-\tau_{\nu})]$, where the subscript $0$ refers to the value at the disk midplane, $\tau_{\nu}{=}\kappa_{\nu}\Sigma_{\rm d}$ is the optical depth from the disk midplane to the surface layer, $\kappa_{\nu}$ is the disk opacity at the corresponding frequency, and $\Sigma_{\rm d}$ and $T_{\rm d}$ are the dust surface density and temperature, respectively. When the disk is optically thin ($\tau{\ll}1$) at the observed wavelengths, $I_{\nu}\approx\tau I_{\nu 0}\approx\kappa_{\nu}\Sigma_{\rm d}I_{\nu 0}$. Consequently, the observed integrated flux is $F_{\nu}\propto\kappa_{\nu}M_{\rm d}B_{\nu}(T_{\rm d})$, where $M_{\rm d}$ is the dust mass. At millimeter wavelengths, the Planck function $B_{\nu}$ is expected to approximately follow the Rayleigh-Jeans law, $B_{\nu}\approx 2k_{\rm B}T_{\rm d}\nu^{2}/c^{2}$, where $k_{\rm B}$ is the Boltzmann constant and $c$ is the light speed. Therefore, $F_{\nu}\propto\kappa_{\nu}\nu^{2}M_{\rm d}T_{\rm d}$. This means that, if $\kappa_{\nu}$ and $T_{\rm d}$ are known, the dust mass can be estimated from the observed flux. In addition, the spectral index of the dust opacity can be obtained from disk observations.
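The flux-to-mass conversion above can be sketched as follows. The inversion $M_{\rm d}=F_\nu d^2/(\kappa_\nu B_\nu(T_{\rm d}))$ follows from the optically thin proportionality in the text, while the flux, distance, opacity and temperature values below are purely illustrative assumptions, not numbers taken from the text.

```python
import math

# Optically thin dust-mass estimate implied by F_nu ∝ kappa_nu M_d B_nu(T_d):
# M_d = F_nu d^2 / (kappa_nu B_nu(T_d)). The flux, distance, opacity and
# temperature below are illustrative assumptions.
k_B, h_P, c = 1.381e-16, 6.626e-27, 2.998e10   # Boltzmann, Planck, c [cgs]

def planck(nu, T):
    """Planck function B_nu [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2 * h_P * nu**3 / c**2 / (math.exp(h_P * nu / (k_B * T)) - 1)

def dust_mass(F_mJy, d_pc, nu_GHz, kappa=2.3, T_d=20.0):
    """Dust mass [g] from an optically thin millimeter flux."""
    F = F_mJy * 1e-26          # mJy -> erg s^-1 cm^-2 Hz^-1
    d = d_pc * 3.086e18        # pc -> cm
    return F * d**2 / (kappa * planck(nu_GHz * 1e9, T_d))

M_earth = 5.97e27
# e.g. a 100 mJy source at 140 pc observed at 230 GHz (1.3 mm):
print(f"M_d ~ {dust_mass(100.0, 140.0, 230.0) / M_earth:.0f} Earth masses")
```

Using the full Planck function rather than the Rayleigh-Jeans limit keeps the estimate valid even where $h\nu/k_{\rm B}T_{\rm d}$ is not small.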
Assuming that the opacity has a power-law dependence on frequency, $\kappa_{\nu}\propto\nu^{\beta}$, and that $F_{\nu}\propto\nu^{\alpha}$, we have $F_{\nu}\propto\kappa_{\nu}\nu^{2}\propto\nu^{2+\beta}$. Since the spectral index $\alpha$ is measured from the spectral energy distribution, the dust opacity index can be calculated accordingly through $\beta=\alpha-2$. The commonly accepted evidence for grain growth is based on the spectral index measurements at millimeter wavelengths (Draine 2006). The interpretation goes as follows. Based on Mie theory calculations, the maximum grain size matters for the dust opacity. When the maximum grain size is smaller than the observed wavelength (Rayleigh regime), $\kappa_{\nu}$ is independent of grain size and $\beta$ remains high. When the maximum grain size is larger than the observed wavelength (geometric regime), $\kappa_{\nu}$ decreases with grain size and $\beta$ drops to a lower value close to $0{-}1$ (Fig. 3 of Ricci et al. 2010). As a result, grains with a maximum size ${\gtrsim}1$ mm naturally result in an opacity index of less than unity at millimeter wavelengths. In other words, the value of $\beta$ can in principle reveal the size of the largest grains in disks. In a realistic situation, $\beta$ is also affected by the size distribution, composition and porosity of the dust aggregates. Nonetheless, the uncertainties induced by these dependences are generally smaller than that due to the maximum grain size (Fig. 4 of Testi et al. 2014). It is worth mentioning that the opacity spectral index observed in the ISM is $\beta_{\rm ISM}\sim 1.7$, while the inferred spectral indices of most protoplanetary disks have $\beta_{\rm disk}\lesssim 1$, much smaller than the typical ISM value. Studies that combine multi-wavelength observations with detailed modelling suggest the ubiquitous presence of grain growth in disks of various ages and around stars of different masses (Natta et al. 2004; Ricci et al.
2010, 2014; Miotello et al. 2014; Pinilla et al. 2017). Furthermore, the maximum grain size also correlates with the disk radial distance: generally, centimeter-sized particles reside in the inner disk regions and millimeter-sized grains are present further out (Pérez et al. 2012; Tazzari et al. 2016). Noticeably, for disks with substructures, the spectral index also varies across the rings and gaps, with the lowest values in the rings and the highest values in the gaps, indicating further grain growth in the high-density ring regions (Huang et al. 2018b; Li et al. 2019c; Long et al. 2020). The above spectral index interpretation is based on two underlying assumptions: the dust emission is optically thin, and the opacity is dominated by absorption rather than scattering at the observed wavelengths. These two assumptions may also be intrinsically correlated. For instance, if scattering is the main source of the opacity instead of absorption, the observed intensity would be $I_{\nu}{\approx}\sqrt{1-w_{\nu}}\tau_{\nu}I_{\nu 0}$ (Zhu et al. 2019), where $w_{\nu}$ is the single-scattering albedo. The above formula reduces to the previously mentioned $I_{\nu}\approx\tau I_{\nu 0}$ when the scattering is negligible compared to the absorption ($w_{\nu}{\to}0$). This means that scattering causes disks to look cooler than they actually are (e.g., Fig. 9 of Birnstiel et al. 2018). In other words, the disk mass and the optical depth are likely to be underestimated when scattering is ignored in literature studies (Zhu et al. 2019; Liu 2019; Ballering & Eisner 2019). If the disks are indeed very massive and optically thick even at millimeter wavelengths, then the above approach for the grain size estimation is no longer valid. Recently, Carrasco-González et al. (2019) considered both scattering and absorption in the dust opacity and made no underlying assumption on the optical depth in their study of the HL Tau disk.
They still found that the grains have grown to millimeter size. Similar treatments with careful assumptions should be applied to other disks as well, for more realistic estimates of the grain size. Polarization Several studies have attempted to explain the polarized emission of disks by dust scattering (Kataoka et al. 2016; Yang et al. 2016), although other interpretations, such as grain alignment with the disk magnetic field, could still be relevant. If dust scattering is indeed the dominant mechanism, the maximum grain size can also be constrained by the observed polarization degree. For the same system, the HL Tau disk, Kataoka et al. (2017) reported a maximum grain size of $100\rm\ \mu m$, one order of magnitude smaller than the size estimated by Carrasco-González et al. (2019). Although these polarimetric measurements have only been applied to a small number of disks, the inferred sizes are considerably lower than those obtained from opacity index measurements (Hull et al. 2018; Bacciotti et al. 2018; Ohashi et al. 2020). The above polarization analysis overall supports grain growth. Nevertheless, it is still unclear whether the discrepancy in the grain size estimates between these two interpretations is due to the existence of multi-species dust grains, or due to the limitations and degeneracies of the methodologies themselves. Future studies are warranted to make further claims. Meteorites in the Solar System There is also evidence of grain growth in our Solar System. For instance, the Calcium-Aluminium-rich Inclusions (CAIs) are sub-mm to cm-sized grains identified in the most primitive meteorites. These refractory inclusions are thought to be the earliest solids condensed from the young nebula that formed the Solar System. In cosmochemistry, $\rm Pb$–$\rm Pb$ isotopic dating shows that CAIs formed $4.567$ Gyr ago (Amelin et al. 2010; Connelly et al. 2012). The size of CAIs supports the hypothesis of coagulation-driven growth of condensates.
In addition, chondrules are igneous-textured spherules dominant in chondrites. The typical size of chondrules is $0.1$ to $1$ mm. Although there are still discrepancies between $\rm Pb$-$\rm Pb$ ages and $\rm Al$-$\rm Mg$ ages (see Kruijer et al. 2020), chronological studies indicate that a small fraction of chondrules might have formed as early as the CAIs, while the majority formed $2{-}4$ Myr after the formation of CAIs (Amelin et al. 2002; Kleine et al. 2009; Villeneuve et al. 2009; Connelly et al. 2012; Pape et al. 2019). The presence of chondrules and CAIs in meteorites supports grain growth in the Solar System. To conclude, the current disk observations, in line with theoretical/laboratory studies, demonstrate that the first step of planet formation, from dust to pebbles, is robust and ubiquitous during the protoplanetary disk evolutionary stage. 3 From pebbles to planetesimals As stated in Sect. 2, direct dust coagulation fails to produce aggregates much larger than pebble size (but see the fluffy growth of Okuzumi et al. 2012, Kataoka et al. 2013, Homma et al. 2019). Instead of incremental growth, planetesimals are proposed to form through various concentration mechanisms (see Johansen et al. 2014 for a review). In this section, we emphasize one powerful mechanism, termed the streaming instability, which overcomes the above growth barrier by clustering dense pebble filaments and collapsing them into planetesimals. The concept of the streaming instability mechanism and the disk conditions under which it operates are presented in Sect. 3.1 and Sect. 3.2, respectively. The observational evidence that supports planetesimal formation by the streaming instability is discussed in Sect. 3.3. 3.1 Streaming instability The streaming instability is applicable to a wide size range of solid particles, but operates most efficiently for particles of $\tau_{\rm s}{\simeq}0.1{-}1$.
We conventionally use the word ‘pebbles’ hereafter to refer to the solid particles involved in the streaming instability mechanism. Broadly speaking, the streaming instability includes two processes, namely the concentration of pebbles into dense clumping filaments, and the gravitational collapse of pebble filaments into planetesimals. Snapshots of the above processes and the spatial distribution of solids in a streaming instability simulation are illustrated in Figure 5. We give a qualitative explanation here. First, inward-drifting pebbles get concentrated radially due to the solid-to-gas back reaction (Youdin & Goodman 2005). Pebbles feel the gas drag force and lose angular momentum; conversely, the surrounding gas gains angular momentum from the pebbles through the back reaction and is accelerated. The strength of this reaction force is determined by the volume density ratio between pebbles and gas ($\rho_{\rm peb}/\rho_{\rm g}$). The back reaction is usually neglected, since the pebble density is much lower than the gas density under nominal protoplanetary disk conditions. However, when the pebble density is comparable to the gas density, the back reaction force becomes non-negligible. In this situation, the pebble density perturbation grows and concentrates pebbles effectively. As the gas is accelerated towards the Keplerian velocity, the relative velocity between gas and pebbles decreases. These pebbles therefore feel a weaker gas drag and drift inward more slowly. Fast-drifting pebbles from the outer part of the disk thereby catch up with these slower-drifting pebbles and form denser filaments. Because this is a positive feedback, once an initial radial concentration of solids is achieved ($\rho_{\rm peb}{\simeq}\rho_{\rm g}$), further clumping is self-amplified, and eventually the pebble density grows rapidly in a non-linear manner.
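The slow-down of pebble drift with increasing local dust-to-gas ratio can be sketched with the standard Nakagawa, Sekiya & Hayashi (1986) equilibrium two-fluid drift solution. This is a minimal illustration, not a formula from this text; the headwind speed of $30$ m/s is an assumed, typical value.

```python
def radial_drift(tau_s, eps, v_hw=30.0):
    """Equilibrium inward drift speed of pebbles (m/s) from the
    Nakagawa, Sekiya & Hayashi (1986) two-fluid solution.
    tau_s: dimensionless stopping time; eps: dust-to-gas density ratio;
    v_hw: sub-Keplerian headwind speed eta*v_K (assumed ~30 m/s)."""
    return 2.0 * tau_s / ((1.0 + eps)**2 + tau_s**2) * v_hw

# Drift slows as pebbles pile up -- the feedback behind clumping:
for eps in (0.01, 1.0, 3.0):
    print(f"eps = {eps:4.2f}: v_drift = {radial_drift(0.1, eps):.2f} m/s")
```

Running this for $\tau_{\rm s}=0.1$ shows the inward drift speed dropping steadily as $\rho_{\rm peb}/\rho_{\rm g}$ rises, which is why fast-drifting outer pebbles pile onto slower inner ones.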
Youdin & Goodman (2005) proposed the concept of the streaming instability and provided the analytical linear instability solution. The robustness of the pebble clumping was later confirmed in numerical simulations (Youdin & Johansen 2007; Johansen & Youdin 2007). Second, the effect of self-gravity becomes dominant when the pebble density exceeds the Roche density ($\rho_{\rm R}{=}9\Omega_{\rm K}^{2}/4\pi G$). In this circumstance, the gravitational force overcomes the tidal shear, and the pebble filaments gravitationally collapse into $100$-km-sized planetesimals (Gerbig et al. (2020) further suggested that the collapse criterion requires the gravitational force to overcome turbulent diffusion on small scales, which also regulates the sizes of the forming planetesimals). The original idea of the gravitational instability was proposed by Goldreich & Ward (1973). These authors considered only the sedimentation of dust into a very thin midplane layer in order to exceed the Roche density. However, such a vertical solid concentration generates the Kelvin-Helmholtz instability and develops vertical velocity stirring, which prevents further solid enrichment (Weidenschilling 1980; Cuzzi et al. 1993). By contrast, the streaming instability induces a radial concentration of solid particles. The gravitational collapse of dense particle filaments by the above streaming effect was numerically verified by Johansen et al. (2007, 2009). We remark on a few things here. Strictly speaking, only the first step is relevant to the concept of the instability – the streaming motion between solids and gas. Unlike other concentration mechanisms, the streaming effect does not require any underlying disk turbulence. The second step does not inherently depend on the particular mechanism that initially clusters the pebbles. It broadly represents a pebble collapse process that forms planetesimals aided by self-gravity.
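To get a feel for the Roche criterion $\rho_{\rm R}=9\Omega_{\rm K}^{2}/4\pi G$, one can evaluate it at $1$ AU and compare it with an MMSN-like midplane gas density; the gas density value used below ($1.4\times10^{-6}\ \rm kg\,m^{-3}$) is an illustrative assumption, not a number from this text.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
au = 1.496e11          # astronomical unit, m

# Keplerian frequency at 1 AU around a solar-mass star (~2e-7 s^-1)
omega_k = math.sqrt(G * M_sun / au**3)

# Roche density: threshold for self-gravity to beat tidal shear
rho_roche = 9.0 * omega_k**2 / (4.0 * math.pi * G)

# Assumed MMSN-like midplane gas density at 1 AU
rho_gas = 1.4e-6       # kg m^-3
print(f"rho_Roche = {rho_roche:.2e} kg/m^3, "
      f"~{rho_roche / rho_gas:.0f}x the midplane gas density")
```

The Roche density comes out a few hundred times the assumed midplane gas density, illustrating why a strong local pebble enhancement, not mere settling, is needed before collapse.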
These two steps have commonly been investigated together in the literature and are frequently recognized as a unified mechanism termed the streaming instability. We also point out that this pebble clumping effect triggers turbulence, even if the background gas is originally laminar. As numerically investigated by Li et al. (2018), the density fluctuations induced by the streaming instability are not sufficient to halt and concentrate pebbles. Pebble clumping still arises from the previously mentioned streaming motion between pebbles and gas. It is worth noting that most of these streaming instability simulations are conducted in a cubic box centred on a local, co-rotating coordinate frame with fixed Keplerian frequency and radial orbital distance (also called shearing-box simulations, see Figure 5). The length scale of the box is much smaller than the orbital distance, and therefore the motion of particles in the box is linearized with the Keplerian shear. In most streaming instability simulations, the gas is evolved on an Eulerian grid, and the solid particles are treated as Lagrangian superparticles, each representing a swarm of actual pebbles. Such a particle-fluid hybrid approach substantially reduces the computational cost of investigating the non-linear pebble clumping and planetesimal formation processes. The masses of planetesimals formed in streaming instability simulations follow a top-heavy distribution, which can be roughly fitted by a power law plus an exponential decay for the intermediate and high mass branch (Johansen et al. 2015; Schäfer et al. 2017; Abod et al. 2019). A turnover mass may exist in the lower mass branch (Li et al. 2019b). The planetesimals have a characteristic size of ${\sim}100$ km when they form at the asteroid belt region (Johansen et al. 2015). The characteristic mass/size increases with the disk metallicity, the mass of the central star and radial distance (Johansen et al. 2012, 2015; Simon et al.
2016), modestly increases with the pressure gradient (Abod et al. 2019), and appears to be independent of Stokes number (Simon et al. 2017). Based on the extrapolation of literature streaming instability simulations, Liu et al. (2020) derived the characteristic mass of planetesimals as $$M_{\rm pl}=5\times 10^{-5}\left(\frac{Z}{0.02}\right)^{1/2}\left(\frac{\gamma}{\pi^{-1}}\right)^{3/2}\left(\frac{h_{\rm g}}{0.05}\right)^{3}\left(\frac{M_{\star}}{1\ M_{\odot}}\right)\ M_{\oplus},$$ (7) where $Z$ is the local disk metallicity and $\gamma{=}4\pi G\rho_{\rm g}/\Omega_{\rm K}^{2}$ is a self-gravity parameter related to the gas density, stellar mass and radial distance (note that $\gamma$ can be related to the Toomre parameter by $Q_{\rm T}{=}\sqrt{8/\pi}/\gamma$; thus $\gamma{=}0.034$ is equivalent to $Q_{\rm T}{=}47$). Adopting the MMSN model, we obtain $\gamma{=}0.034$ and the resultant planetesimal is $10^{-6}\ M_{\oplus}$ in mass ($100$ km in radius) at $r{=}2.5$ AU around a solar-mass star. Based on Eq. 7, we expect that smaller planetesimals form at shorter orbital distances and around lower-mass stars. Most streaming instability studies simply considered a laminar background gas, even though protoplanetary disks are expected to be turbulent in nature. How the streaming instability operates under realistic turbulent conditions is not fully understood. First of all, disk turbulence induces stochastic motion and density fluctuations of the gas. Overdense pressure bumps can be produced, in which particles of $\tau_{\rm s}{\sim}1$ are efficiently trapped. This type of solid concentration, which facilitates planetesimal formation, is obtained in disks where the source of turbulence is either the magnetorotational instability (MRI, Johansen et al. 2007), or the vertical shear instability (VSI, Schäfer et al. 2020), driven by the dependence of the angular velocity on the vertical distance in the disk (Nelson et al.
2013; Stoll & Kley 2014; Lin & Youdin 2015; Flock et al. 2017). When considering the non-ideal magnetohydrodynamical (MHD) effects, Yang et al. (2018) found that dust diffusion is weak in the radial direction and that strong clumping can still occur in the dead zone, where the MRI is inactive because of the low ionization fraction (Gammie 1996). On the other hand, turbulent diffusivity acts to suppress particle sedimentation and concentration, and therefore the formation of planetesimals can be slowed or even quenched as the disk turbulence increases (Gole et al. 2020). These numerical studies have only explored a very narrow range of parameter space ($\tau_{\rm s}$ and $\alpha_{\rm t}$) with various disk turbulence mechanisms. Although several recent theoretical studies have attempted to quantify the role of turbulence in the streaming instability and planetesimal formation (Umurhan et al. 2020; Chen & Lin 2020), it is still premature to reach definitive conclusions.

3.2 How, where and when the streaming instability occurs

In order to trigger the streaming instability, the volume density of solids needs to be enhanced to a level comparable to that of the gas, $\rho_{\rm peb}{\simeq}\rho_{\rm g}$ (Youdin & Goodman 2005). The onset criterion can also be expressed in terms of the surface density ratio, i.e., the metallicity $Z{=}\Sigma_{\rm peb}/\Sigma_{\rm g}$. The dust-to-gas mass ratio is measured to be $0.01$ in the ISM (Bohlin et al. 1978), while the canonical value of the solar metallicity is $0.014$ (Asplund et al. 2009). Numerical studies reported that a super-solar metallicity (${\gtrsim}2\%{-}5\%$) is required for triggering the streaming instability (Johansen et al. 2009; Carrera et al. 2015; Yang et al. 2017). Such a threshold metallicity also depends on the disk and pebble properties. The streaming instability is more easily triggered when the disk metallicity is higher (Johansen et al.
2009), the strength of the disk pressure gradient is lower (Bai & Stone 2010), and/or the Stokes number of pebbles is closer to unity (Carrera et al. 2015). Nonetheless, unless some other mechanism operates in the first place to enhance the pebble density, a disk of solar metallicity can hardly form planetesimals by the streaming instability. The question is then how the solid density can be enriched to satisfy this condition. We list several scenarios that propose pebble enrichment at particular disk locations. For instance, the formation site can be the water-ice line (Ros & Johansen 2013; Ida & Guillot 2016; Schoonenberg & Ormel 2017; Dra̧żkowska & Alibert 2017; Hyodo et al. 2019). This is because the water-ice in pebbles sublimates into vapor when these pebbles drift inward across the water-ice line ($T_{\rm g}{\simeq}170$ K). First, pebbles are locally piled up by a “traffic jam” effect, since the fast-drifting icy pebbles from the outer disk catch up with the slowly drifting silicate grains in the inner disk. Second, the released water vapor diffuses back to the outside of the ice line and condenses onto the continuously inward-drifting icy pebbles. This diffusion and re-condensation process also enhances the local solid density (Stevenson & Lunine 1988; Cuzzi & Zahnle 2004). The former mechanism generates “dry” planetesimals slightly interior to the ice line, while the latter produces “wet” planetesimals with a substantial water fraction slightly exterior to the ice line. Similar processes could also be expected at the ice lines of other volatile-rich species, such as $\rm CO$ and $\rm NH_{3}$. Besides these ice lines, other possible pebble trapping sites are the edge of the dead zone (Dra̧żkowska et al. 2013; Chatterjee & Tan 2014; Hu et al. 2016; Miranda et al. 2017), vortices generated by hydrodynamical instabilities (Surville et al. 2016; Huang et al. 2018c), and the spiral arms in self-gravitating disks (Gibbons et al. 2012).
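As a rough check on where the water-ice line ($T_{\rm g}\simeq 170$ K) sits, one can invert an assumed MMSN-like temperature profile $T=280\,(a/{\rm AU})^{-1/2}$ K; both the profile and its normalization are illustrative assumptions, not values from this text.

```python
def ice_line_radius(t_ice=170.0, t0=280.0):
    """Orbital distance (AU) of a sublimation front, assuming an
    MMSN-like temperature profile T = t0 * (a/AU)**(-1/2)."""
    return (t0 / t_ice)**2

# Water-ice line under the assumed profile
print(f"a_ice ~ {ice_line_radius():.1f} AU")
```

Under these assumptions the water-ice line lands near $2.7$ AU, i.e., in the asteroid-belt region, consistent with the "dry inside, wet outside" picture described above.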
Apart from the above mentioned mechanisms that relate to local disk properties, there are other ways of increasing the disk metallicity. For instance, Dra̧żkowska et al. (2016) showed that this enrichment can occur in the inner sub-AU disk region as a result of global dust growth and radial drift. In addition, pebble trapping is thought to be a natural consequence of giant planet formation. A massive planet opens a gap (Lin & Papaloizou 1986) and produces a local pressure maximum in its vicinity (Lambrechts et al. 2014). Pebbles drift more and more slowly and get concentrated on their way towards this local pressure maximum. When these pebbles reach the threshold metallicity at or close to the gap edge, the streaming instability is triggered to form planetesimals (Eriksson et al. 2020). Moreover, the solid enrichment can be achieved in the late disk dispersal phase when stellar photoevaporation dominates. In this case pebbles have already decoupled from the gas and sedimented to the disk midplane. The photoevaporating wind blows gas away from the disk surface, and therefore the solid-to-gas ratio increases globally (Carrera et al. 2017). To summarize, planetesimal formation can either occur locally at particular disk locations, such as ice lines and pressure bumps, or over a wide range of disk regions when stellar photoevaporation globally depletes the disk gas. Where and when the planetesimals form depends crucially on the detailed disk conditions and pebble concentration mechanisms. For instance, the formation location can broadly range from the innermost sub-AU disk region (dead zone edge) to the outer part of the disk, extending to a few tens/hundreds of AU (spiral arms in self-gravitating disks). The formation time is even more difficult to quantify; formation might occur at an early phase when the disk is still self-gravitating ($t{\lesssim}0.1{-}1$ Myr), or at a late phase when the gas is significantly dispersed ($t{>}3$ Myr).
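The characteristic planetesimal mass of Eq. 7 is straightforward to evaluate numerically; note that $(\gamma/\pi^{-1})^{3/2}=(\gamma\pi)^{3/2}$. The aspect-ratio prescription $h_{\rm g}\simeq 0.033\,(a/{\rm AU})^{1/4}$ used below is an assumed MMSN-like scaling, not a value given in this text.

```python
import math

def planetesimal_mass(Z=0.02, gamma=0.034, h_g=0.05, m_star=1.0):
    """Characteristic planetesimal mass in Earth masses, Eq. 7
    (Liu et al. 2020); gamma = 4*pi*G*rho_g/Omega_K^2, m_star in
    solar masses.  Note (gamma/pi^-1)^(3/2) = (gamma*pi)^(3/2)."""
    return (5e-5 * (Z / 0.02)**0.5 * (gamma * math.pi)**1.5
            * (h_g / 0.05)**3 * m_star)

# Assumed MMSN-like aspect ratio at 2.5 AU: h_g ~ 0.033 (a/AU)^(1/4)
h_g = 0.033 * 2.5**0.25
m_pl = planetesimal_mass(Z=0.02, gamma=0.034, h_g=h_g)
print(f"M_pl ~ {m_pl:.1e} M_Earth")
```

With $\gamma=0.034$ at $2.5$ AU this reproduces the ${\sim}10^{-6}\ M_{\oplus}$ (${\sim}100$ km) planetesimal quoted after Eq. 7.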
The other time constraint comes from meteorite chronology in our Solar System (see Kruijer et al. 2020 for a review). What we learn is that the parent bodies of meteorites apparently did not all form at once. They are more likely to have formed successively, or to have undergone multiple formation phases, over the entire disk lifetime. For instance, the parent bodies of iron meteorites formed within the first Myr (Kruijer et al. 2014), whereas those of chondritic meteorites formed slightly later, ${\sim}2{-}4$ Myr after CAI formation (Villeneuve et al. 2009; Sugiura & Fujiya 2014; Doyle et al. 2015).

3.3 Evidence for the streaming instability

One tentative argument that supports the streaming instability mechanism arises from the optical depth measurements of the rings in young protoplanetary disks from the DSHARP survey. The dusty rings seen in various systems all have similar optical depths of the order of unity (Dullemond et al. 2018). These observed values can be interpreted as ongoing planetesimal formation regulated by the streaming instability (Stammler et al. 2019). In their explanation, drifting pebbles are concentrated in the ring where the streaming instability can be triggered. The streaming instability converts the pebbles into planetesimals when $\rho_{\rm peb}{>}\rho_{\rm g}$, while it is quenched when $\rho_{\rm peb}{<}\rho_{\rm g}$. Thus, such a regulated process removes the excess pebbles into planetesimals, maintaining the midplane dust-to-gas ratio at order unity. Since the optical depth correlates with the dust-to-gas ratio, this interpretation naturally explains the peculiar optical depths in the observed rings (the large-scale dust clumping at the edges of the rings is also resolved in hydrodynamic simulations; Huang et al. 2020). Evidence for the streaming instability can also be found among the minor bodies in our Solar System. Morbidelli et al.
(2009) conducted collisional coagulation simulations and found that, in order to reproduce the current size distribution of main-belt asteroids, the primordial planetesimals should be ${\gtrsim}100$ km in size. Rather than incremental growth, this characteristic size and the slopes of the size distributions of main-belt asteroids and Kuiper belt objects are more consistent with planetesimals obtained from streaming instability simulations (Johansen et al. 2015; Simon et al. 2016). The most appealing evidence is from the Kuiper belt objects. Recently, a contact binary named Arrokoth (previously known as Ultima Thule or $2014\ \rm MU_{69}$) was imaged by the New Horizons spacecraft during its flyby. Arrokoth, resembling other cold classical Kuiper belt objects, is thought to have well preserved its pristine properties since formation. It consists of two equal-sized, compositionally homogeneous lobes with a narrow contact neck (Stern et al. 2019; Grundy et al. 2020; Zhao 2020). Such a peculiar shape with little distortion, and the close alignment of the two lobes, strongly indicate that this type of object originated from gentle, low-speed mergers of planetesimals within a gravitationally collapsing clump of pebbles (McKinnon et al. 2020). The prevalence of equal-sized binaries found in the Kuiper belt supports their formation by the gravitational collapse mechanism (Nesvorný et al. 2010; Robinson et al. 2020). Most of these binaries are in the cold classical Kuiper belt; they have low heliocentric orbital inclinations and eccentricities and thus remain more primordial than other populations. Furthermore, these binaries are observed to have similar colors, even though the color distribution of the binary population has a large intrinsic scatter (Benecchi et al. 2009; Marsset et al. 2020). This is also expected from gravitational collapse, since the binary components form from the same reservoir of solids in the pebble clumps.
In addition, based on the obliquity measurements of trans-Neptunian binaries, Grundy et al. (2019) found that prograde binaries are more common than retrograde ones among tight binaries ($22/26$). Such a binary orientation distribution is consistent with the predictions of streaming instability simulations (Nesvorný et al. 2019). In contrast, the above properties are difficult to fulfill when the binaries form by sequential coagulation and capture (Goldreich et al. 2002). In sum, the streaming instability has emerged as the leading, widely accepted mechanism of planetesimal formation. The success of the streaming instability is not only because the robustness of the mechanism itself has been verified by numerous theoretical/numerical works, but also because many key features of the planetesimal populations generated by the streaming instability are consistent with current observations, both within and beyond the Solar System. The streaming instability succeeds in bridging the gap between pebbles and planetesimals. The growth of planetesimals after formation will be discussed in the subsequent sections.

4 Planetesimal accretion

We review the planet formation process from planetesimals to planets from Sect. 4 to Sect. 6. In this section we focus on planetesimal accretion. The accretion cross section and accretion rates in different regimes are described in Sects. 4.1 and 4.2, the underlying physical processes are discussed in Sect. 4.3, and the key features and applications are summarized in Sects. 4.4 and 4.5.

4.1 Accretion cross section

The Hill radius of a planetary body orbiting a central star is defined as $$R_{\rm H}=\left(\frac{M_{\rm p}}{3M_{\star}}\right)^{1/3}a,$$ (8) where $M_{\rm p}$ and $a$ are the mass and semimajor axis of the body. The Hill velocity is $v_{\rm H}{=}R_{\rm H}\Omega_{\rm K}$. Within the Hill sphere, the planetary body’s gravitational force is more important than that of the star.
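Eq. 8 translates directly into code; the Earth-Sun case below is a standard sanity check of the implementation.

```python
def hill_radius(m_p, m_star, a):
    """Hill radius (same units as a), Eq. 8: R_H = (M_p/3M_star)^(1/3) a."""
    return (m_p / (3.0 * m_star))**(1.0 / 3.0) * a

def hill_velocity(m_p, m_star, a, omega_k):
    """Hill velocity v_H = R_H * Omega_K."""
    return hill_radius(m_p, m_star, a) * omega_k

# Earth around the Sun: M_p/M_star = 3e-6, a = 1 AU
r_h = hill_radius(3e-6, 1.0, 1.0)
print(f"R_H(Earth) = {r_h:.3f} AU")   # (1e-6)^(1/3) = 0.01 AU
```

For the Earth, $(10^{-6})^{1/3}=0.01$, so the Hill sphere extends to about $0.01$ AU.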
We consider planetesimal accretion in the case of a few massive protoplanetary embryos embedded in a swarm of less massive planetesimals. Hereafter we call these two populations the large and small bodies, and their masses are expressed as $M$ and $m$, respectively. Only the gravitational force operates during their encounters. The collisional (accretion) cross section of the large body can be expressed as $$\sigma=\pi R_{\rm M}^{2}\left(1+\frac{v_{\rm esc}^{2}}{{\delta v}^{2}}\right)=\pi R_{\rm M}^{2}\left(1+\frac{2GM}{R_{\rm M}{\delta v}^{2}}\right),$$ (9) where $R_{\rm M}$ is the physical radius of the large body, and $v_{\rm esc}{=}\sqrt{2GM/R_{\rm M}}$ and $\delta v$ are the escape velocity of the large body and the relative velocity between the large and small bodies. The collision is in the gravitational focusing regime when $\delta v{<}v_{\rm esc}$, while it is in the geometric regime when $\delta v{\geq}v_{\rm esc}$. The gravitational focusing factor is given by $f_{\rm gf}{=}1+{v_{\rm esc}}^{2}/{\delta v}^{2}$, representing the enhancement of the collisional cross section compared to the physical cross section ($\pi R_{\rm M}^{2}$). One should note that Equation 9 is valid in the two-body approximation, i.e., in the dispersion regime when $\delta v{\geq}v_{\rm H}$. However, when $\delta v{<}v_{\rm H}$, the Keplerian shear dominates the relative velocity, and the three-body interaction, including the gravitational force of the central star, becomes important (Ida & Nakazawa 1989; Lissauer 1993).

4.2 Growth modes: runaway vs. oligarchic

The mass growth rate of a large body accreting small bodies is given by $n_{\rm m}m\sigma\delta v$, where $n_{\rm m}{=}\Sigma_{\rm m}/(mH)$ is the number density of the small bodies, with $\Sigma_{\rm m}$ and $H$ their surface density and vertical scale height.
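The gravitational focusing factor and the cross section of Eq. 9 can be sketched as a couple of one-line functions:

```python
import math

def focusing_factor(delta_v, v_esc):
    """Gravitational focusing factor f_gf = 1 + (v_esc/delta_v)^2."""
    return 1.0 + (v_esc / delta_v)**2

def cross_section(r_m, delta_v, v_esc):
    """Collisional cross section of Eq. 9 (same length units as r_m)."""
    return math.pi * r_m**2 * focusing_factor(delta_v, v_esc)

# A dynamically cold swarm with delta_v = v_esc/10:
print(focusing_factor(0.1, 1.0))   # 101.0
```

A swarm with $\delta v=v_{\rm esc}/10$ thus has its collisional cross section boosted by a factor of $101$ over the geometric value.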
The planetesimal accretion rate of the large body can be written as $$\dot{M}_{\rm PlA}=\Sigma_{\rm m}\Omega_{\rm K}\pi R_{\rm M}^{2}\left(1+\frac{2GM}{R_{\rm M}\delta v^{2}}\right)=\begin{cases}{\displaystyle\frac{2\pi GM\Sigma_{\rm m}\Omega_{\rm K}R_{\rm M}}{{\delta v}^{2}}\propto M^{4/3}}&\mbox{when}\ \delta v\ll v_{\rm esc},\\ {\displaystyle\pi\Sigma_{\rm m}\Omega_{\rm K}R_{\rm M}^{2}\propto M^{2/3}}&\mbox{when}\ \delta v\gtrsim v_{\rm esc},\end{cases}$$ (10) where $M{\propto}R_{\rm M}^{3}$ and $H{\simeq}2\delta v_{\rm z}/\Omega_{\rm K}{\simeq}\delta v/\Omega_{\rm K}$ for an isotropic planetesimal velocity distribution. The growth timescale is given by $t_{\rm grow}=M/(dM/dt)$, which scales as $M^{-1/3}$ and $M^{1/3}$ in the former and latter cases, respectively. In the early stage of planetesimal accretion (the former case of Eq. 10), $\delta v$ is mainly excited by the mutual interactions among small planetesimals and therefore remains low compared to $v_{\rm esc}$. Since the growth becomes faster as $M$ increases, this phase is called runaway growth (Safronov 1972; Greenberg et al. 1978; Wetherill & Stewart 1989; Ida & Makino 1993; Kokubo & Ida 1996). In this circumstance, the mass ratio between the large and small bodies increases with time. Nevertheless, the small planetesimals still dominate the total mass of the whole population. The above runaway phase cannot last forever. When the velocities of the planetesimals are significantly stirred up by the large bodies (the latter case of Eq. 10), the accretion cross sections of the large bodies are strongly reduced compared to the runaway case. The growth of the big bodies becomes slower as $M$ increases. The accretion gradually turns into a self-regulated, oligarchic growth (Ida & Makino 1993; Kokubo & Ida 1998), characterized by a decreasing mass ratio among adjacent massive bodies.
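The two mass scalings of Eq. 10 can be verified numerically. The sketch below works in arbitrary code units with all constant prefactors dropped, using only $R_{\rm M}\propto M^{1/3}$ and $v_{\rm esc}^{2}\propto M/R_{\rm M}$:

```python
def mdot(mass, delta_v):
    """Accretion rate of Eq. 10 in arbitrary code units.
    R_M ~ M^(1/3) for fixed internal density; v_esc^2 ~ M/R_M
    (all dimensional prefactors dropped)."""
    r_m = mass**(1.0 / 3.0)
    v_esc2 = mass / r_m
    return r_m**2 * (1.0 + v_esc2 / delta_v**2)

m1, m2 = 1.0, 8.0
# Focusing-dominated regime (delta_v << v_esc): ratio -> 8^(4/3) = 16
ratio_focus = mdot(m2, delta_v=1e-3) / mdot(m1, delta_v=1e-3)
# Geometric regime (delta_v >> v_esc): ratio -> 8^(2/3) = 4
ratio_geo = mdot(m2, delta_v=1e3) / mdot(m1, delta_v=1e3)
print(f"focusing: {ratio_focus:.2f}, geometric: {ratio_geo:.2f}")
```

An 8-fold mass increase boosts the accretion rate by $8^{4/3}=16$ in the focusing-dominated branch but only by $8^{2/3}=4$ in the geometric branch, which is why growth accelerates with $M$ in the former case and decelerates in the latter.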
We note that in this regime the massive bodies still grow faster than the small planetesimals. After the runaway and oligarchic phases, the system evolves into a bi-modal population of large protoplanets and small planetesimals (Kokubo & Ida 2000; Thommes et al. 2003; Rafikov 2003; Ida & Lin 2004a; Ormel et al. 2010). These protoplanets have a typical orbital separation of $10$ mutual Hill radii (Kokubo & Ida 1998). In the late stage, when no disk gas is left, the protoplanets gradually accrete all residual planetesimals. The random velocities of the protoplanets are fully excited to their escape velocities, and their collisional cross sections reduce to their physical surface areas. In this case, the growth eventually becomes slow, and the system is chaotic in nature (Agnor et al. 1999; Chambers & Wetherill 1998; Chambers 2001; Raymond et al. 2004; Kenyon & Bromley 2006; Zhou et al. 2007).

4.3 Relevant physical processes

The relative velocity $\delta v$ is the most important factor that sets the planetesimal growth regimes. It evolves through a combination of four processes: heating from viscous stirring, cooling from gas drag, dynamical friction and inelastic collisions. Here we briefly discuss them. The detailed derivations can be found in Goldreich et al. (2004). Viscous stirring is a dynamical effect in which the velocity dispersion of planetesimals increases through two-body gravitational encounters. During this process, the system gets dynamically excited. The eccentricities and inclinations of the planetesimals relax into Rayleigh distributions. The growth of protoplanets slows down as the random velocities of the planetesimals increase. The timescale for viscous stirring of small planetesimals by the large bodies is given by (Ida & Makino 1993) $$t_{\rm vs}=\frac{v}{dv/dt}=\frac{{\delta v}^{3}}{4\pi G^{2}n_{\rm M}M^{2}\ln\Lambda},$$ (11) where $\ln\Lambda\simeq 3$ is the Coulomb factor and $n_{\rm M}$ is the number density of large bodies.
As can be seen from Eq. 11, the effect of viscous stirring increases with the mass of the large body. On the other hand, the random velocities of small planetesimals can also be damped by the gas friction force. The eccentricity damping timescale is given by Adachi et al. (1976), $$t_{\rm e,gas}=\frac{m}{C_{\rm D}\pi R_{\rm m}^{2}\rho_{\rm g}\delta v/2},$$ (12) where $C_{\rm D}=0.5$ is the gas drag coefficient for planetesimal-size objects in Eq. 4 and $R_{\rm m}$ is the radius of the small planetesimal. Equating $t_{\rm vs}=t_{\rm e,gas}$ and solving for the equilibrium eccentricity of the small planetesimals, one obtains $$e_{\rm m}=\left(\frac{8\ln\Lambda\,mM\Sigma_{\rm M}a}{C_{\rm D}\rho_{\rm g}R_{\rm m}^{2}M_{\star}^{2}}\right)^{1/5}.$$ (13) Inserting this value into Eq. 10, the growth timescale in the oligarchic regime can be expressed as $$t_{\rm og}=\frac{e_{\rm m}^{2}a^{2}}{2\pi G\Sigma_{\rm m}R_{\rm M}}=\frac{a^{2}}{2\pi G\Sigma_{\rm m}R_{\rm M}}\left(\frac{8\ln\Lambda\,mM\Sigma_{\rm M}a}{C_{\rm D}\rho_{\rm g}R_{\rm m}^{2}M_{\star}^{2}}\right)^{2/5}.$$ (14) On the other hand, Ormel et al. (2010) conducted Monte Carlo simulations of planetesimal growth and found that the runaway growth timescale can be expressed as $$t_{\rm rg}=C_{\rm rg}\frac{R_{\rm m}\rho_{\bullet}}{\Omega_{\rm K}\Sigma_{\rm m}},$$ (15) where $C_{\rm rg}{\sim}10$ is a numerical prefactor. We note that $R_{\rm m}$ and $\Sigma_{\rm m}$ in Eq. 15 are the initial size and surface density of the planetesimals, so the growth timescale depends only on the initial configuration. This is an important feature of the runaway growth: the planetesimals spend most of the time doubling their initial masses. Another important damping mechanism is dynamical friction. It refers to the process of equipartition of the random energy between large and small bodies.
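Eq. 15 can be evaluated with illustrative numbers for the inner disk; the planetesimal radius, internal density and solid surface density used below are assumed values, not taken from this text.

```python
import math

G, M_sun, au, yr = 6.674e-11, 1.989e30, 1.496e11, 3.156e7

def t_runaway(r_m, rho_bullet, sigma_m, a, c_rg=10.0):
    """Runaway-growth timescale of Eq. 15 (Ormel et al. 2010), seconds.
    r_m: initial planetesimal radius [m]; rho_bullet: internal density
    [kg/m^3]; sigma_m: planetesimal surface density [kg/m^2];
    a: orbital distance [m] around a solar-mass star."""
    omega_k = math.sqrt(G * M_sun / a**3)
    return c_rg * r_m * rho_bullet / (omega_k * sigma_m)

# Assumed: 100-km planetesimals of 2000 kg/m^3 and an MMSN-like solid
# surface density of ~70 kg/m^2 at 1 AU
t = t_runaway(1e5, 2000.0, 70.0, au)
print(f"t_rg ~ {t / yr / 1e6:.1f} Myr")
```

With these assumed numbers the runaway timescale comes out at a few Myr, comparable to typical disk lifetimes, which illustrates why the initial planetesimal size and surface density matter so much.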
The consequence of dynamical friction is that the random velocity of a body is inversely proportional to the square root of its mass ($\delta v\propto M^{-1/2}$), so that small (large) bodies have high (low) random velocities through their mutual gravitational interactions. Besides, inelastic collisions also damp the random velocity. When two bodies collide with each other, they conserve the total angular momentum, but some fraction of the kinetic energy is converted into internal heat. Now let us recall when the transition between runaway growth and oligarchic growth occurs. Strictly speaking, this transition is not determined by $\delta v/v_{\rm esc}$ but relies on the relative growth rate of the massive bodies, $(dM/dt)/M\propto M^{\gamma}$. The growth is in the runaway phase when $\gamma{>}0$, while it is in the oligarchic phase when $\gamma{<}0$. Ormel et al. (2010) proposed a physical criterion for this transition. At the beginning of accretion, the mass growth is faster than the excitation of random velocities, corresponding to $t_{\rm rg}<t_{\rm vs}$. The transition occurs when $t_{\rm rg}\sim t_{\rm vs}$. After that, the stirring is faster than the growth ($t_{\rm rg}>t_{\rm vs}$), and the accretion enters the self-regulated oligarchic regime.

4.4 Features

We highlight a few important features of planetesimal accretion. First, runaway accretion does not necessarily mean that the growth is rapid. The concept of “runaway” refers to a relative growth rate, $d(M_{1}/M_{2})/dt>0$ when $M_{1}{>}M_{2}$. The absolute rate can be high or low, depending on the planetesimal surface density $\Sigma_{\rm m}$, the Keplerian orbital frequency $\Omega_{\rm K}$, and the gravitational focusing factor, which depends on ${\delta v}$. Second, the size of planetesimals matters for the growth. As introduced in Sect. 3, when planetesimals are assumed to form by the streaming instability, they are born large (e.g., $100$ km in size).
These planetesimals are well decoupled from the gas and their orbits remain in situ during the disk lifetime. In this circumstance, the massive protoplanets only accrete nearby planetesimals within their feeding zones (${\sim}10$ mutual Hill radii). The final masses of the protoplanets are only related to local disk properties (e.g., the planetesimal surface density $\Sigma_{\rm m}$). However, if the initial size of the planetesimals is a kilometer or smaller, the above accretion paradigm changes. Planetesimals of smaller sizes undergo non-negligible radial drift during the disk lifetime. The accreted material is no longer limited to local planetesimals in the vicinity of the protoplanets. Therefore, the planets can finally reach higher masses. On the other hand, for smaller planetesimals, the random velocities remain lower due to the weaker viscous stirring and stronger gas damping (Eqs. 11 and 12). As a result, the growth from ${<}1$-km-sized boulders is faster compared to the case of $100$-km-sized planetesimals (Coleman & Nelson 2016). Planet growth has several advantages when the initial size of the planetesimals is small. However, the major issue is whether the disk can form planetesimals with a dominant size of ${\lesssim}1$ km. Third, planetesimal accretion also depends on the disk location. As shown in Eq. 10, when the planetesimals are further out, the accretion is slower due to the longer orbital time. In addition, encounters tend to result in ejections rather than collisions when the orbital distance is larger. The outcome of such encounters can be quantified as (Goldreich et al.
2004; Ida & Lin 2004a) $$\Phi^{2}=\frac{v_{\rm esc}^{2}}{2v_{\rm K}^{2}}=\left(\frac{M}{M_{\star}}\right)\left(\frac{a}{R}\right),$$ (16) where $\Phi$ is the ratio between the escape velocity of the primary body ($v_{\rm esc}$) and the escape velocity of the system ($\sqrt{2}v_{\rm K}$). When $\Phi{\gg}1$, a two-body encounter results in one of the bodies being ejected. On the other hand, the two bodies are more likely to collide when $\Phi{<}1$. To summarize, planetesimal accretion is faster when the planetesimal surface density is higher, the planetesimals are closer in, and/or the initial planetesimals are smaller.

4.5 Applications

Solar System

The formation timescales of terrestrial planets can be measured by using the radioactive decay of short-lived isotopes, among which the Hafnium-Tungsten ($\rm Hf{-}W$) system is widely adopted for radiometric dating. This is not only due to its suitable radioactive decay half-life of $9$ Myr, but also relates to the chemical properties of these two elements: $\rm Hf$ is lithophile (“rock loving”) and $\rm W$ is siderophile (“iron loving”). When a protoplanet becomes massive enough to segregate, $\rm W$ preferentially settles into the metal core and $\rm Hf$ remains in the silicate mantle. For instance, if the core forms early (early mantle-core segregation), the measured $\rm W$ abundance in the mantle would be high. On the other hand, if the core forms late, $\rm W$ would be high in the core, leaving a $\rm W$-deficient mantle. Therefore, measuring the $\rm Hf{/}W$ ratio in the current Earth mantle can be used to constrain its core formation and differentiation. Such isotope analyses indicate that the core formation of the Earth occurred $30{-}100$ Myr after the Solar System formation (Kleine et al. 2002; Yin et al. 2002; Jacobsen 2005; Touboul et al. 2007; Kleine et al. 2009; Rudge et al. 2010). Similarly, based on Martian meteorites, the formation time of Mars is inferred to be within a few Myr (Kleine et al. 2004; Foley et al.
2005; Dauphas & Pourmand 2011), comparable to the lifetime of the proto-solar nebula. The final accretion stage of the Solar System terrestrial planets, also called the late giant-impact stage, is typically modelled with a numerical N-body approach (see Izidoro & Raymond (2018) for methods and numerical tools). Before this stage, protoplanetary embryos of lunar to Martian masses have already formed in the terrestrial planet forming region by accreting planetesimals (see Sect. 4.2). After the dispersal of the disk gas, the random velocities of the embryos are stirred without efficient damping, and planet-planet collisions occur frequently. By this time, the giant planets have already grown to their current masses. The influence of the gas giants (in particular Jupiter and Saturn) is crucial in sculpting the architecture and water delivery of the asteroid belt and the inner terrestrial planets (Raymond & Izidoro 2017; Zheng et al. 2017). Numerous numerical simulations have attempted to reproduce the terrestrial planets, in terms of both their dynamical properties and geochemical accretion timescales (Chambers 2001; Raymond et al. 2004, 2006, 2009; O’Brien et al. 2006; Thommes et al. 2008; Morishima et al. 2010; Jacobson et al. 2014). These models generally assumed giant planets on their current orbits, either nearly circular or slightly eccentric. However, such models still have shortfalls. For instance, a Mars analog is difficult to reproduce in these simulations unless the planetesimal disk is truncated at around $1$ AU (Hansen 2009). Going one step further, Walsh et al. (2011) proposed that such a truncation can be physically caused by the inward-then-outward migration of Jupiter and Saturn through planet-disk interactions, which is known as the Grand Tack model. The key ingredient of this model, the migration of the two giant planets, is consistent with our current understanding of disk migration theory (Kley & Nelson 2012; Baruteau et al.
2014) and has been validated by hydrodynamic simulations (Masset & Snellgrove 2001; Morbidelli & Crida 2007; Pierens & Nelson 2008a; Zhang & Zhou 2010). The appealing point of the Grand Tack model is that it satisfactorily reproduces the key observational features of the inner Solar System, such as the orbital and mass distributions of the terrestrial planets, the mass depletion and chemical compositions of planetesimals in the asteroid belt, and the water content of the Earth (O’Brien et al. 2014). It is also worth mentioning that most current N-body simulations only consider perfect mergers when two bodies collide. However, the random velocities of planetesimals are excited as the masses of the protoplanets increase. The collisional outcome depends sensitively on the masses and velocities of the colliding bodies, and can be catastrophic disruption, grinding, or fragmentation (Agnor & Asphaug 2004; Leinhardt & Stewart 2012; Genda et al. 2012; Liu et al. 2015). Dedicated N-body simulations including realistic collision recipes showed that the final masses and number of surviving planets are comparable to the case where only perfect mergers are considered, but differences remain in planet spins, eccentricities, and core mass fractions (Kokubo & Genda 2010; Chambers 2013). exoplanetary system The observed exoplanets are more diverse than the Solar System planets. One important question is how different planet populations, such as gas giants and super-Earths, form. The growth of giant planets first requires the assembly of massive cores (${\sim}10\ M_{\oplus}$) that can initiate rapid gas accretion (Pollack et al. 1996; Ikoma et al. 2000; Hubickyj et al. 2005; Movshovitz et al. 2010) before the dispersal of the disk gas. For a nominal disk model, the planetesimal isolation mass at a few AU (the formation zone of Jupiter and Saturn) is generally lower than this critical mass (Ida & Lin 2004a).
Therefore, giant planet formation was thought to be challenging unless the disks are extremely massive. Protoplanets approaching such isolation masses can undergo substantial orbital migration (it is worth pointing out that although the migration theory was first proposed in the late $1970$s (Lin & Papaloizou 1979; Goldreich & Tremaine 1979, 1980), it was not considered in the classical formation models of the Solar System planets (Lissauer 1993); disk migration theory attracted wide attention only after close-in exoplanets were discovered (Lin et al. 1996)). These protoplanets migrate towards and get trapped into mean motion resonances at zero-torque locations (also called trapping locations; Lyra et al. 2010; Horn et al. 2012; Kretke & Lin 2012). Depending on the number of planets and the gas disk mass, the resonant configurations can be disrupted by rapid migration, resulting in frequent planet-planet collisions even in the gas-rich disk phase (Hellary & Nelson 2011; Pierens et al. 2013; Zhang et al. 2014; Liu et al. 2015). In this circumstance, massive cores can be attained early enough to subsequently grow into gas giants. This interpretation implies that the cores of gas giants only form early, when the disk is massive, e.g., when the disk accretion rate is higher than ${\sim}10^{-7}M_{\odot}\ \rm yr^{-1}$. Comparing this with the typical observed disk accretion rates and the $\dot{M}_{\rm g}{-}M_{\star}$ dependence (Hartmann et al. 1998; Natta et al. 2006; Manara et al. 2016) provides an explanation of why only a minor fraction of stars harbour gas giant planets (Liu et al. 2015) and why the gas giant fraction increases with the mass of the central star (Liu et al. 2016). On the other hand, if planet-planet collisions do not occur early enough, the planets retain their low masses and evolve into super-Earth systems with compact, (near-)resonant configurations (Terquem & Papaloizou 2007; Ogihara & Ida 2009; Cossou et al. 2014; Ogihara et al.
2015; Izidoro et al. 2017). Numerous works have also explored the detailed resonant trapping and stability of multiple super-Earth systems, both numerically (Wang et al. 2012; Wang & Ji 2014; Sun et al. 2017; Pan et al. 2020) and analytically (Quillen 2011; Hadden & Lithwick 2018; Petit et al. 2020). Looking back to the giant planets of the Solar System, and based on the results of the Juno mission, the core of Jupiter is inferred to be diluted, with an extended layer of heavy elements (Wahl et al. 2017; Helled & Stevenson 2017). The growth of proto-Jupiter has a strong influence on nearby planetesimals and protoplanets (Zhou & Lin 2007; Ida et al. 2013). Such a diluted internal structure is likely to result from giant impacts among protoplanets during their assembly phase (Liu et al. 2019c). Another possible explanation of super-Earths is that they form late and in situ, when the disk gas has dissipated significantly. In that case, gas damping is weak and the super-Earth cores can grow quickly through mergers of protoplanets (Lee & Chiang 2016). This could also explain why the observed massive super-Earths (${\sim}10\ M_{\oplus}$) do not undergo runaway gas accretion and become gas giants (Lee et al. 2014). The exact compositions and orbital properties of the formed super-Earths are determined by the combined effects of gas damping and solid accretion (Dawson et al. 2016), and by potential giant planets further out (Ji et al. 2011; Jin & Ji 2011). On the other hand, recent hydrodynamic studies found that, because gas is efficiently recycled between the planetary envelopes and the surrounding disk, rapid gas accretion onto the super-Earths can actually be protracted (Ormel et al. 2015; Cimerman et al. 2017; Lambrechts & Lega 2017; Kuwahara et al. 2019). This also provides an interpretation of the ubiquitous presence of super-Earths but not gas giant planets.
The population synthesis model is an ideal tool to explore the influence of key physical processes on planet formation and evolution. In this approach, different physical processes are simplified into specialized recipes and combined into a unified deterministic model. By Monte Carlo sampling the initial conditions over appropriate distributions, synthetic planetary populations can be generated and compared to the observed exoplanet sample in a statistical manner (see Benz et al. (2014) for a review). Ida & Lin (2004a) first used such a calculation to investigate planet formation around sun-like stars, and further extended their study to systems around stars of various masses and metallicities (Ida & Lin 2004b, 2005). The predicted correlation between gas giant planets and their stellar hosts showed good agreement with RV measurements (Fischer & Valenti 2005; Johnson et al. 2007). Sophisticated population synthesis models based on planetesimal accretion have been further developed to make testable observational comparisons and to study how forming planets relate to the initial disk and stellar properties (Mordasini et al. 2009, 2012a, 2012b; Ida et al. 2013; Jin et al. 2014; Coleman & Nelson 2014, 2016; Alibert & Benz 2017; Mulders et al. 2019; Miguel et al. 2020). stellar binary system The previously mentioned studies focus on planet formation around single stars. However, nearly half of the sun-like stars are in binaries (Duquennoy & Mayor 1991; Raghavan et al. 2010), and this fraction is even higher for higher-mass stars (Kouwenhoven et al. 2007). Thus, studying how planets form in stellar binary systems is of crucial importance. Up to now, more than $150$ exoplanets have been discovered in stellar binary systems, on both S-type orbits (satellite-like orbits around one of the stars) and P-type orbits (planet-like orbits around both stars). P-type planets are also referred to as circumbinary planets.
Planets are less common in binary systems than around single hosts (Wang et al. 2014a, b). S-type planets have not yet been found in binaries with periods less than $1000$ days. Close binary companions play a destructive role in forming S-type planets (Thebault & Haghighipour 2015). First, the protoplanetary disk would be tidally truncated by the secondary companion, reducing the disk mass and lifetime (Artymowicz & Lubow 1994; Miranda & Lai 2015). Second, the secular perturbations induced by the companion excite high relative velocities among planetesimals, leading to catastrophic collisions (Heppenheimer 1978; Thébault et al. 2006, but also see Xie et al. 2010b). Third, hydrodynamic simulations showed that the dynamics of the disk gas in a binary system is much more complicated than the static, axisymmetric gas typically assumed in theoretical analyses (Paardekooper et al. 2008; Kley et al. 2008; Marzari et al. 2009; Müller & Kley 2012). All the above effects are detrimental to the growth of planetesimals. A noteworthy point is that disruptive collisions among planetesimals would produce a reservoir of dust debris, and the sweep-up of this dust debris can nevertheless boost the further mass growth of the surviving, leftover planetesimals (Paardekooper & Leinhardt 2010; Xie et al. 2010a). Although in-situ formation of S-type planets in close binaries seems to be very challenging, it is comparatively easy to form P-type planets. Disk-driven migration needs to be considered when the planets grow massive (Pierens & Nelson 2008b; Kley & Haghighipour 2015). The subsequent evolution of such planetary systems includes mean motion resonance capture, excitation of eccentricities as the gas disk dissipates, and planet-planet scattering. Gong & Ji (2018) suggested that S-type planets can form through planet-planet scattering from P-type planets and then be tidally captured in various binary configurations.
A smaller eccentricity or a lower mass ratio of the binary leads to a higher capture probability, up to $10\%$, and produces S-type planets on retrograde orbits, consistent with the result of the two unequal-mass planet ejection model (Gong & Ji 2017). Another important topic is the orbital configuration of circumbinary planets (P-type orbits). More than $20$ circumbinary planets have been detected so far (Schwarz et al. 2016). For those discovered by the Kepler mission, the planets are inferred to be inclined by less than a few degrees relative to the binary plane (Kostov et al. 2014). However, this coplanarity might be caused by an observational bias: planets orbiting in the binary plane are easier to detect. On the other hand, circumbinary disks with high inclinations have subsequently been discovered (Kennedy et al. 2012; Brinch et al. 2016; Czekala et al. 2019), even on polar orbits (Kennedy et al. 2019). Such misaligned disks may well indicate the existence of high-inclination circumbinary planets. In addition, since most stars originate in star clusters and stellar associations (Lada & Lada 2003), planetary orbits can be influenced by perturbations from stellar fly-bys. This idea has been explored mainly for planetary systems around sun-like single stellar hosts, and the corresponding studies generally found that close encounters with passing stars tend to be destructive for planetary systems, reducing their multiplicities and leaving the surviving planets on more eccentric and inclined orbits (Spurzem et al. 2009; Malmberg et al. 2011; Pfalzner 2013; Liu et al. 2013; Hao et al. 2013; Li & Adams 2015; Zheng et al. 2015; Cai et al. 2017; Li et al. 2019a). Similarly, the cluster environment can also significantly impact planets in binary systems. Ma et al. (2020) showed that stellar fly-bys can affect the inclination distribution of circumbinary planets.
For instance, for close binary systems formed in open clusters with separations of ${>}1$ AU, a few fly-bys can already drive the planets onto highly inclined orbits. 5 Pebble accretion In this section, we recapitulate pebble accretion, including the physical mechanism (Sect. 5.1), the accretion rates and efficiencies in different regimes (Sect. 5.2), and its properties (Sect. 5.3) and applications (Sect. 5.4). 5.1 Onset and terminal condition In contrast to planetesimal accretion, pebble accretion refers to the process in which pebble-sized (${\sim}$ mm-cm) solid particles are accreted by planetary bodies (Ormel & Klahr 2010; Lambrechts & Johansen 2012). Since these small pebbles are strongly influenced by the surrounding disk gas, both gas drag and gravitational forces play decisive roles during a pebble-planet encounter (see also the reviews of Johansen & Lambrechts 2017 and Ormel 2017). Here we provide a physical picture of pebble accretion based on an order-of-magnitude timescale analysis, and discuss how and under which conditions pebble accretion commences. During a pebble-planet encounter, the operation of pebble accretion requires the following two conditions: 1. The time for the pebble to settle onto the planet $t_{\rm set}$ is shorter than the pebble-planet encounter time $t_{\rm enc}$; otherwise, the pebble cannot be accreted onto the planet. 2. The stopping time of the pebble $t_{\rm stop}$ is shorter than the pebble-planet encounter time $t_{\rm enc}$, which means that gas drag matters during the pebble-planet interaction; otherwise, the pebble-planet interaction is similar to a planetesimal-planet interaction, where only the gravitational force plays a role. The pebble-planet encounter is illustrated in Figure 6.
When the pebble interacts closely with the planet, the gas drag acceleration $a_{\rm drag}{=}(v_{\rm peb}-v_{\rm g})/t_{\rm stop}$ adjusts quasi-statically to balance the gravitational acceleration $a_{\rm g}{=}GM_{\rm p}/b^{2}$, where $b$ is the impact parameter of the encounter and $t_{\rm stop}$ is the stopping time of the pebble, which quantifies how fast the motion of the pebble adapts to the gas due to the friction force. The pebble reaches a terminal velocity at which it settles onto the planet, $v_{\rm set}\simeq GM_{\rm p}t_{\rm stop}/b^{2}$. The settling time and encounter time are given by $t_{\rm set}{=}b/v_{\rm set}$ and $t_{\rm enc}{=}b/\Delta v$ (see Figure 6), where $\Delta v$ is the unperturbed relative velocity between the planet and the pebble. Equating these two timescales gives the largest pebble accretion radius in the settling regime (Ormel & Klahr 2010; Lambrechts & Johansen 2012; Ida et al. 2016) $$b_{\rm set}\simeq\sqrt{\frac{GM_{\rm p}t_{\rm stop}}{\Delta v}}.$$ (17) As can be seen from the above equation, $b_{\rm set}$ becomes smaller as $t_{\rm stop}$ becomes shorter. Physically, this is because smaller pebbles with shorter $t_{\rm stop}$ are more tightly coupled to the gas; their trajectories become strongly affected by the planet only when they settle deep enough towards the planet, and the accretion radius is therefore smaller. Equivalently, criterion (1) can also be derived from the concept of gravitational deflection (Lambrechts & Johansen 2012): pebble accretion occurs when the gravitational deflection time $t_{\rm g}{=}\Delta v/a_{\rm g}=\Delta v/(GM_{\rm p}/b^{2})$ is shorter than the pebble stopping time $t_{\rm stop}$. Equation 17 can be obtained by equating $t_{\rm g}$ with $t_{\rm stop}$. Criterion (2) breaks down when the unperturbed encounter velocity $\Delta v$ is so high that $t_{\rm stop}{\geq}t_{\rm enc}$. In this limit, gas drag is too weak to influence the orbit of the pebble during the short encounter.
By equating the above two timescales ($t_{\rm stop}$ and $t_{\rm enc}$), we obtain a threshold velocity $v_{\ast}{=}\sqrt[3]{M_{\rm p}/(M_{\star}\tau_{\rm s})}\,v_{\rm K}$. When $\Delta v\ll v_{\ast}$, pebble accretion is in the settling regime (Ormel & Klahr 2010), equivalent to the strongly-coupled regime of Lambrechts & Johansen (2012). When $\Delta v\gtrsim v_{\ast}$, the accretion enters the inefficient ballistic regime (weakly-coupled regime), where gas drag is unimportant. An exponential decay function is used to fit this transition (Ormel & Klahr 2010; Liu & Ormel 2018), $$f_{\rm set}=\exp\left[-0.5\left(\frac{\Delta v}{v_{\ast}}\right)^{2}\right],$$ (18) and the accretion radius is expressed as $b_{\rm PA}=b_{\rm set}f_{\rm set}$. To distinguish it from planetesimal accretion, pebble accretion conventionally refers to the settling regime, where growth is efficiently assisted by the gas drag ($f_{\rm set}\cong 1$). However, when the mass of the planet/planetesimal is low or $\tau_{\rm s}$ is high, the settling condition is not always satisfied and the accretion can be inefficient. We can obtain the critical mass for the onset of efficient pebble accretion ($f_{\rm set}\cong 1$) by equating $v_{\ast}$ with $\Delta v$ (adopted to be the headwind velocity $\eta v_{\rm K}$, see Sect. 5.2), which gives $$M_{\rm onset}=\tau_{\rm s}\eta^{3}M_{\star}=2.5\times 10^{-4}\left(\frac{\tau_{\rm s}}{0.1}\right)\left(\frac{\eta}{2\times 10^{-3}}\right)^{3}\left(\frac{M_{\star}}{M_{\odot}}\right)\ M_{\oplus},$$ (19) corresponding to a planetesimal of roughly $500$ km in radius (this expression is consistent with Eq. 25 of Visser & Ormel 2016). Since the typical planetary bodies we consider are much more massive than $M_{\rm onset}$, for simplicity, pebble accretion hereafter refers only to accretion in the settling regime ($b_{\rm PA}=b_{\rm set}$). The above analysis holds when the feedback of the planet onto the disk gas is negligible.
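As a sketch under the fiducial assumptions used above (a solar-mass star, an orbit at $1$ AU, $\tau_{\rm s}=0.1$, $\eta=2\times 10^{-3}$), Eqs. 17-19 can be evaluated numerically. The function and variable names below are illustrative, not from any published code:

```python
import math

G = 6.674e-8          # gravitational constant, cgs
M_SUN = 1.989e33      # g
M_EARTH = 5.972e27    # g
AU = 1.496e13         # cm

def m_onset(tau_s, eta, m_star=M_SUN):
    """Eq. 19: critical mass for the onset of efficient pebble accretion."""
    return tau_s * eta**3 * m_star

def b_set(m_p, t_stop, dv):
    """Eq. 17: largest accretion radius in the settling regime."""
    return math.sqrt(G * m_p * t_stop / dv)

def f_set(dv, v_ast):
    """Eq. 18: exponential fit of the settling-to-ballistic transition."""
    return math.exp(-0.5 * (dv / v_ast) ** 2)

# Fiducial values at a = 1 AU around a solar-mass star
a = AU
v_k = math.sqrt(G * M_SUN / a)                      # Keplerian velocity
omega = v_k / a                                     # orbital frequency
tau_s, eta = 0.1, 2e-3
m_p = 1e-2 * M_EARTH                                # low-mass protoplanet
t_stop = tau_s / omega
dv = eta * v_k                                      # headwind encounter velocity
v_ast = (m_p / (M_SUN * tau_s)) ** (1 / 3) * v_k    # threshold velocity

print(f"M_onset = {m_onset(tau_s, eta) / M_EARTH:.1e} M_earth")  # ~2.7e-4
print(f"b_set   = {b_set(m_p, t_stop, dv) / a:.1e} AU")
print(f"f_set   = {f_set(dv, v_ast):.2f}")  # close to 1: settling regime
```

The recovered $M_{\rm onset}\approx 2.7\times 10^{-4}\ M_{\oplus}$ matches the normalization of Eq. 19, and $f_{\rm set}$ close to unity confirms that a $10^{-2}\ M_{\oplus}$ body accretes in the settling regime for these parameters.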
However, the perturbation from the planet increases with its growing mass. When the planet is massive enough to open a gap and reverse the local gas pressure gradient, the inward-drifting pebbles stop at the local pressure maximum. In this case, pebble accretion cannot proceed, quenching the growth of the core mass. This process isolates the planet from the pebble flux, and the corresponding planet mass is called the pebble isolation mass (Lambrechts et al. 2014; Bitsch et al. 2018; Ataiee et al. 2018). For instance, Bitsch et al. (2018) obtained a fitting formula for the pebble isolation mass from hydrodynamical simulations, which reads $$\begin{split}\displaystyle M_{\rm iso}=&\displaystyle 25\left(\frac{h_{\rm g}}{0.05}\right)^{3}\left(\frac{M_{\star}}{M_{\odot}}\right)\left[0.34\left(\frac{-3}{{\log}\alpha_{\rm t}}\right)^{4}+0.66\right]\\ &\displaystyle\left[1-\frac{\partial{\ln}P_{\rm g}/\partial{\ln}r+2.5}{6}\right]M_{\oplus},\end{split}$$ (20) where $\alpha_{\rm t}$ is the turbulent viscosity coefficient (Shakura & Sunyaev 1973). The termination of pebble accretion at this mass corresponds to $10{-}20\ M_{\oplus}$ super-Earth planets around solar-mass stars and Earth-mass planets around $0.1M_{\odot}$ late M-dwarf stars. 5.2 Accretion rate and efficiency As shown in Eq. 17, the accretion radius depends on the relative velocity between the pebble and the planet, $\Delta v$. For a planet on a circular orbit, $\Delta v$ is the sum of the headwind velocity $\eta v_{\rm K}$ and the Keplerian shear velocity $\Omega_{\rm K}b_{\rm set}$. The accretion is in the headwind regime (Bondi regime) when $\Delta v$ is dominated by $\eta v_{\rm K}$, and in the shear regime (Hill regime) when $\Delta v$ is dominated by the Keplerian shear.
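The fitting formula of Eq. 20 is straightforward to evaluate. In the sketch below, the aspect-ratio values are illustrative assumptions (not taken from the cited simulations), chosen to recover the mass ranges quoted above:

```python
import math

def pebble_isolation_mass(h_g, m_star_msun=1.0, alpha_t=1e-3, dlnp_dlnr=-2.5):
    """Eq. 20 (fit of Bitsch et al. 2018); returns M_iso in Earth masses."""
    turb = 0.34 * (-3.0 / math.log10(alpha_t)) ** 4 + 0.66
    grad = 1.0 - (dlnp_dlnr + 2.5) / 6.0
    return 25.0 * (h_g / 0.05) ** 3 * m_star_msun * turb * grad

print(pebble_isolation_mass(0.05))        # 25 M_earth for the reference values
print(pebble_isolation_mass(0.04))        # ~13 M_earth: a typical super-Earth
print(pebble_isolation_mass(0.03, 0.1))   # sub-Earth mass for a late M dwarf
```

The strong $h_{\rm g}^{3}$ dependence is what makes the isolation mass drop from super-Earth values around solar-mass stars to roughly Earth mass around $0.1M_{\odot}$ stars with thinner disks.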
The transition planet mass between the headwind and shear regimes can be calculated by equating the above two velocities, $$M_{\rm hw/sh}=\eta^{3}M_{\star}/\tau_{\rm s}\simeq 0.025\left(\frac{\eta}{2\times 10^{-3}}\right)^{3}\left(\frac{\tau_{\rm s}}{0.1}\right)^{-1}\left(\frac{M_{\star}}{M_{\odot}}\right)\ M_{\oplus}.$$ (21) When a planet is on an eccentric and inclined orbit, $\Delta v$ receives additional contributions from the epicyclic motion ($ev_{\rm K}$ and $iv_{\rm K}$) of the planet relative to its Keplerian velocity (Johansen et al. 2015; Liu & Ormel 2018; Ormel & Liu 2018). The pebble mass accretion rate can be expressed as (Lambrechts & Johansen 2014; Morbidelli et al. 2015) $$\dot{M}_{\rm PA}=\begin{cases}{\displaystyle 2b_{\rm set}\Delta v\Sigma_{\rm peb}\simeq 2\sqrt{{GM_{\rm p}t_{\rm stop}\Delta v}}\Sigma_{\rm peb}}\hfill\hskip 28.452756pt\mbox{[2D]},\\ {\displaystyle b_{\rm set}^{2}\Delta v\rho_{\rm peb}\simeq\frac{GM_{\rm p}t_{\rm stop}\Sigma_{\rm peb}}{\sqrt{2\pi}H_{\rm peb}}}\hfill\hskip 28.452756pt\mbox{[3D]},\end{cases}$$ (22) where $\Sigma_{\rm peb}{=}\sqrt{2\pi}H_{\rm peb}\rho_{\rm peb}$. The scale height of the pebble disk is given by Youdin & Lithwick (2007), $$H_{\rm peb}=\sqrt{\frac{\delta_{\rm d}}{\delta_{\rm d}+\tau_{\rm s}}}H_{\rm g},$$ (23) where $\delta_{\rm d}$ represents the efficiency of gas diffusivity, which approximates the turbulent viscosity coefficient $\alpha_{\rm t}$ when the disk turbulence is driven by the MRI (Johansen & Klahr 2005; Zhu et al. 2015). (It is worth noting that non-ideal MHD effects, such as ambipolar diffusion and the Hall effect, also play important roles in redistributing the angular momentum of the disk gas; these processes depend crucially on the disk chemistry and on the geometry and strength of the magnetic field (Bai & Stone 2013; Bai 2015, 2016; Gressel et al. 2015).)
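Eqs. 21 and 23 can be checked numerically with a minimal sketch; the function names are ours, and the solar-to-Earth mass ratio is approximated:

```python
import math

M_SUN_IN_EARTH = 3.33e5   # M_sun / M_earth (approximate)

def m_transition(eta, tau_s, m_star_msun=1.0):
    """Eq. 21: headwind-to-shear transition mass, in Earth masses."""
    return eta**3 / tau_s * m_star_msun * M_SUN_IN_EARTH

def pebble_scale_height(delta_d, tau_s, h_gas):
    """Eq. 23: pebble scale height set by turbulent diffusion vs. settling."""
    return math.sqrt(delta_d / (delta_d + tau_s)) * h_gas

print(f"{m_transition(2e-3, 0.1):.3f} M_earth")  # ~0.027, matching Eq. 21
# Stronger settling (larger tau_s) produces a thinner pebble layer:
for tau_s in (0.01, 0.1, 1.0):
    print(tau_s, pebble_scale_height(1e-3, tau_s, h_gas=0.05))
```

The transition mass for the fiducial parameters reproduces the $\simeq 0.025\ M_{\oplus}$ normalization of Eq. 21, and the loop illustrates the monotonic thinning of the pebble layer with increasing Stokes number.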
In order to distinguish the dominant accretion regime, one can numerically compare $\dot{M}_{\rm PA,2D}$ and $\dot{M}_{\rm PA,3D}$ to see which is higher for the given disk and planet parameters. From a physical perspective, whether the accretion is $2$D or $3$D is determined by the ratio between the pebble accretion radius $b_{\rm set}$ and the pebble scale height $H_{\rm peb}$ (Morbidelli et al. 2015): when the pebble accretion radius is larger than the pebble scale height, the accretion is in the $2$D regime; otherwise, it is in the $3$D regime. The disk pebble flux that crosses the orbit of the planet is $\dot{M}_{\rm peb}=2\pi r\Sigma_{\rm peb}v_{\rm r}$. The pebble accretion efficiency is defined as the probability that a pebble is accreted by the planet, $\varepsilon_{\rm PA}=\dot{M}_{\rm PA}/\dot{M}_{\rm peb}$ (Guillot et al. 2014; Lambrechts & Johansen 2014). When the radial velocity of the gas is neglected in $v_{\rm r}$ (Eq. 6), the efficiency can be written as $$\varepsilon_{\rm PA}=\begin{cases}{\displaystyle 4\times 10^{-3}\left(\frac{M_{\rm p}}{10^{-2}\ M_{\oplus}}\right)^{1/2}\left(\frac{\tau_{\rm s}}{0.1}\right)^{-1/2}\left(\frac{\eta}{2\times 10^{-3}}\right)^{-1/2}\left(\frac{M_{\star}}{M_{\odot}}\right)^{-1/2}}\hfill\hskip 14.226378pt\mbox{[2D \ headwind]},\\ {\displaystyle 4\times 10^{-3}\left(\frac{M_{\rm p}}{10^{-2}\ M_{\oplus}}\right)^{2/3}\left(\frac{\tau_{\rm s}}{0.1}\right)^{-1/2}\left(\frac{\eta}{2\times 10^{-3}}\right)^{-1}\left(\frac{M_{\star}}{M_{\odot}}\right)^{-2/3}}\hfill\hskip 14.226378pt\mbox{[2D \ shear]},\\ {\displaystyle 2\times 10^{-2}\left(\frac{M_{\rm p}}{10^{-2}\ M_{\oplus}}\right)\left(\frac{h_{\rm peb}}{3.3\times 10^{-3}}\right)^{-1}\left(\frac{\eta}{2\times 10^{-3}}\right)^{-1}\left(\frac{M_{\star}}{M_{\odot}}\right)^{-1}}\hfill\hskip 28.452756pt\mbox{[3D]},\end{cases}$$ (24) where $h_{\rm peb}{=}H_{\rm peb}/r$ is the pebble disk aspect ratio.
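The normalized branches of Eq. 24 can be encoded directly; planet masses are in units of $M_{\oplus}$ and stellar masses in $M_{\odot}$, and the helper names are illustrative:

```python
def eps_2d_headwind(m_p, tau_s=0.1, eta=2e-3, m_star=1.0):
    """Eq. 24, 2D headwind branch; m_p in Earth masses, m_star in solar masses."""
    return (4e-3 * (m_p / 1e-2) ** 0.5 * (tau_s / 0.1) ** -0.5
            * (eta / 2e-3) ** -0.5 * m_star ** -0.5)

def eps_2d_shear(m_p, tau_s=0.1, eta=2e-3, m_star=1.0):
    """Eq. 24, 2D shear branch."""
    return (4e-3 * (m_p / 1e-2) ** (2 / 3) * (tau_s / 0.1) ** -0.5
            * (eta / 2e-3) ** -1 * m_star ** (-2 / 3))

def eps_3d(m_p, h_peb=3.3e-3, eta=2e-3, m_star=1.0):
    """Eq. 24, 3D branch; linear in the planet mass."""
    return (2e-2 * (m_p / 1e-2) * (h_peb / 3.3e-3) ** -1
            * (eta / 2e-3) ** -1 / m_star)

# Reference values: 4e-3 (2D) and 2e-2 (3D) at M_p = 1e-2 M_earth
print(eps_2d_headwind(1e-2), eps_2d_shear(1e-2), eps_3d(1e-2))
```

The different mass exponents of the three branches ($1/2$, $2/3$, and $1$) are what set the growth-mode behaviour discussed in Sect. 5.3.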
The full expression of $\varepsilon_{\rm PA}$, including the eccentricity and inclination dependences, can be found in Liu & Ormel (2018) and Ormel & Liu (2018). The pebble accretion efficiency is a crucial quantity in planet formation, since it measures how efficiently the disk pebble mass is converted into planet mass. The pebble accretion timescale is therefore given by $$t_{\rm PA}=\frac{M_{\rm p}}{\dot{M}_{\rm peb}\varepsilon_{\rm PA}}\simeq\begin{cases}{\displaystyle 2.5\times 10^{4}\left(\frac{M_{\rm p}}{10^{-2}\ M_{\oplus}}\right)^{1/2}\left(\frac{\tau_{\rm s}}{0.1}\right)^{1/2}\left(\frac{\eta}{1.8\times 10^{-3}}\right)^{1/2}\left(\frac{\dot{M}_{\rm peb}}{10^{-4}\ M_{\oplus}\ \rm yr^{-1}}\right)^{-1}\left(\frac{M_{\star}}{M_{\odot}}\right)^{1/2}\rm\ yr}\hfill\hskip 14.226378pt\mbox{[2D \ headwind]},\\ {\displaystyle 2.5\times 10^{4}\left(\frac{M_{\rm p}}{10^{-2}\ M_{\oplus}}\right)^{1/3}\left(\frac{\tau_{\rm s}}{0.1}\right)^{1/2}\left(\frac{\eta}{2\times 10^{-3}}\right)\left(\frac{\dot{M}_{\rm peb}}{10^{-4}\ M_{\oplus}\ \rm yr^{-1}}\right)^{-1}\left(\frac{M_{\star}}{M_{\odot}}\right)^{2/3}\rm\ yr}\hfill\hskip 14.226378pt\mbox{[2D \ shear]},\\ {\displaystyle 5\times 10^{4}\left(\frac{h_{\rm peb}}{3.3\times 10^{-3}}\right)\left(\frac{\eta}{1.8\times 10^{-3}}\right)\left(\frac{\dot{M}_{\rm peb}}{10^{-4}\ M_{\oplus}\ \rm yr^{-1}}\right)^{-1}\left(\frac{M_{\star}}{M_{\odot}}\right)\rm\ yr}\hfill\hskip 28.452756pt\mbox{[3D]},\end{cases}$$ (25) We can see that $t_{\rm PA}$ is proportional to $M_{\rm p}^{1/2}$ or $M_{\rm p}^{1/3}$ in the $2$D headwind or shear accretion regime, while $t_{\rm PA}$ is independent of the planet mass in the $3$D regime. 5.3 Features The first important feature is that pebble accretion is not a runaway process. From Eq. 22 we can see that $(dM/dt)/M\propto M^{0}$ in the $3$D regime, and $(dM/dt)/M\propto M^{-1/2}$ or $M^{-1/3}$ in the $2$D headwind or shear regime. This means that the mass ratios among the growing bodies approach order unity as the accretion proceeds, which is characteristic of orderly growth. Nevertheless, even though it is not in a runaway mode, pebble accretion can still be very fast, owing to the large accretion cross section and the high flux of pebbles continuously fed from the outer part of the disk (see discussions in Sect. 6). The efficiency of pebble accretion is determined by the Stokes number of the pebbles. When pebble accretion is in the $3$D regime, the efficiency increases with the Stokes number. This is because pebbles with a higher Stokes number settle into a thinner vertical layer, increasing the number density of pebbles available for accretion.
When the accretion is in the $2$D regime, the efficiency instead decreases with $\tau_{\rm s}$: pebbles with a higher Stokes number drift faster and have a lower probability of being accreted by the planet (see Fig. 4 of Ormel & Liu (2018)). The above dependences hold when the pebbles are marginally coupled to the disk gas ($10^{-3}\lesssim\tau_{\rm s}\lesssim 1$). When $\tau_{\rm s}$ is much larger than order unity, gas drag is negligible and the pebbles behave aerodynamically more like planetesimals; in that circumstance, the actual accretion rate drops substantially (Ormel & Klahr 2010). Conversely, when $\tau_{\rm s}\lesssim 10^{-3}$, the pebbles are tightly coupled to the gas flow, and the accretion rate is also very low in this geometric regime (Guillot et al. 2014). Therefore, the preferred Stokes number for pebble accretion ranges from $10^{-3}$ to $1$, corresponding to particles of $0.3$ mm to $30$ cm in size at $5$ AU in the MMSN model. Pebble accretion is suppressed when the disk is highly turbulent (Morbidelli et al. 2015; Ormel & Liu 2018; Rosenthal et al. 2018), for two reasons. First, pebbles are stirred up vertically by turbulent diffusion. In a strongly turbulent disk, the vertically extended distribution of pebbles means that a smaller fraction of them is accreted, and the corresponding efficiency is lower. This is similar to the effect of smaller pebbles in the $3$D regime discussed above. Second, the random velocity of a pebble also correlates with the disk turbulence as $\sqrt{\alpha_{\rm t}\tau_{\rm s}}c_{\rm s}$ (Ormel & Cuzzi 2007), and this turbulence-induced motion adds to the impact velocity between the pebble and the planet. The settling condition fails when the turbulent velocity is very high (Sect. 5.1), and the accretion is then also significantly suppressed. Ormel & Liu (2018) obtained a pebble accretion efficiency formula by incorporating the stochastic turbulent velocity into the equation of motion of the pebble.
Their results are in agreement with the pebble accretion rates measured from the more realistic MHD simulations of Xu et al. (2017), as well as from the vertical shear instability hydrodynamic simulations of Picogna et al. (2018). We note that the trajectories of pebbles and the accretion features could deviate from the paradigm described above, in which only a steady-state shear gas flow is considered (Sect. 5.1). There are two additional effects. First, when pebbles settle and are accreted onto the planet, their potential energy is converted into frictional heat, which raises the temperature of the surrounding gas. The deep gas layer close to the planet becomes more convective due to this accretion-driven heating, which may affect the pebble accretion. Taking into account both adiabatic and convective models of pebble accretion in hydrodynamic simulations, Popovas et al. (2018) found that even though active mass mixing among different layers is indeed observed due to the vigorous gas motion, the net pebble accretion is not strongly affected, except for the smallest particles that are tightly coupled to the gas. Second, when pebbles fall into the region close to the planet where the temperature exceeds their evaporation temperature, the pebbles get vaporized, resulting in an enrichment of the planetary envelope rather than direct accretion onto the core (Alibert & Benz 2017; Brouwers et al. 2018; Valletta & Helled 2019). This enrichment increases the mean molecular weight of the envelope gas, resulting in a thinner, opaque envelope (Venturini et al. 2016). Meanwhile, hydrodynamic simulations indicated that the envelope gas of low-mass protoplanets is not in a steady state but is replenished by the surrounding disk gas (Ormel et al. 2015; Fung & Dong 2015; Cimerman et al. 2017). It is unclear to what extent the envelope enrichment process would be affected by this gas recycling.
The ablation of accreting pebbles with realistic radiative transfer plus gas replenishment models, and how these affect the core mass growth and subsequent gas accretion, is an active research topic subject to future investigations.

5.4 Applications

solar system. The pebble accretion scenario has been used to explain the formation of the Solar System. Based on the fact that icy pebbles drift across the water-ice line and sublimate into small silicate pebbles, Morbidelli et al. (2015) inferred that the growth of protoplanets proceeds in a slow $3$D accretion regime interior to the ice line and a fast $2$D accretion regime exterior to it. This results in low-mass progenitors of terrestrial planets in the inner disk regions and massive cores of giant planets in the outer disk regions, in agreement with the architecture of the Solar System. Levison et al. (2015) found that before efficient pebble accretion commences, an early phase of velocity stirring of protoplanets is required. Protoplanets then evolve into a bi-modal mass distribution. Due to dynamical friction, the massive protoplanets have low velocity dispersions. After this phase, only these massive protoplanets can undergo rapid pebble accretion and grow into giant planet cores. The above self-excitation process is essential to explain why only a few giant planet cores formed in the Solar System; without such a pre-stirring phase, hundreds of Earth-mass objects would form instead (Kretke & Levison 2014). Johansen et al. (2015) proposed the formation of asteroids and Kuiper belt objects by the accretion of chondrules, the millimeter-sized spherules commonly found in primitive meteorites. Furthermore, the formation of the Galilean satellites has also recently been proposed to be aided by pebble accretion (Shibaike et al. 2019; Ronnet & Johansen 2020).

exoplanetary systems. The pebble accretion scenario is also widely adopted to explain the formation of super-Earths and gas giants in exoplanetary systems.
Lambrechts & Johansen (2014) constructed a dust growth and pebble drift model to investigate the formation of giant planet cores by pebble accretion. Bitsch et al. (2015) studied the influence of disk radial distance and evolutionary phase on the growth and migration of a single protoplanet, and Johansen et al. (2019) focused on the conditions under which growth can overcome migration. Numerous recent studies have incorporated pebble accretion models into N-body codes to investigate the impact of pebble accretion on final planetary architectures (Matsumura et al. 2017; Lambrechts et al. 2019; Bitsch et al. 2019; Izidoro et al. 2019; Liu et al. 2019b; Schoonenberg et al. 2019; Coleman et al. 2019; Wimarsson et al. 2020; Ogihara et al. 2020). For instance, Lambrechts et al. (2019) showed that the final type of planetary system (terrestrial planets or super-Earths) is crucially determined by the pebble flux, or equivalently, the total mass of the pebble reservoir in the protoplanetary disk.

systems around low-mass stars. Studies of pebble accretion are not limited to systems around solar-mass stars, but have also been generalized to stellar hosts of different masses. Ormel (2017) proposed a formation scenario for TRAPPIST-$1$ (Gillon et al. 2016, 2017) and other compact systems around very low-mass M-dwarf stars. In this scenario, protoplanets form by the streaming instability at the water-ice line. These protoplanets subsequently migrate inward and accrete pebbles to reach their final masses. Ormel (2017) suggested that the fact that all TRAPPIST-$1$ planets are roughly Earth-mass could be an indication of the planet mass being regulated by pebble isolation. The follow-up numerical simulations by Schoonenberg et al. (2019) verified that these forming planets have ${\approx}10\%$ water mass fractions, consistent with bulk density measurements and interior modelling of the TRAPPIST-$1$ planets (Grimm et al. 2018; Unterborn et al. 2017; Dorn et al. 2018). Liu et al.
(2019a) investigated pebble-driven planet formation around stars with masses from $0.08\ M_{\odot}$ to $1\ M_{\odot}$. Figure 7 illustrates the planet populations from observations and from the population synthesis model of Liu et al. (2019a). It should be noted that the observed sample is drawn from different surveys and is uncorrected for any selection bias. The figure nevertheless illustrates important features. First, a paucity of giant planets, but not of super-Earths, is found around stars with masses below ${\simeq}0.1{-}0.2\ M_{\odot}$ (Figure 7a). Since larger planets are easier to detect than smaller planets around stellar hosts of the same mass, the above planet desert is physical. Second, there seems to be a linear mass trend between the low-mass rocky-dominated planets (reflecting their core masses) and their stellar hosts for systems around stars less massive than $0.3\ M_{\odot}$. Again, observational bias cannot be the cause of this pattern, as it would lead to the opposite trend: more massive planets are easier to detect around smaller stars. For systems around stars more massive than $0.3\ M_{\odot}$, we cannot directly infer the core masses, since the observed sample is dominated by gas-rich giant planets. On the other hand, the above linear $M_{\rm c}{-}M_{\star}$ correlation exhibited in late-M dwarf systems is in good agreement with the inter-system uniformity reported for the super-Earth planets detected by the Kepler mission around more massive early M-dwarfs and FGK stars (Pascucci et al. 2018; Wu 2019). The validity of this positive correlation also needs to be tested, in an unbiased and complementary manner, by current and future RV programs for a wider mass range of stellar hosts, such as HARPS (High Accuracy Radial Velocity Planet Searcher, Mayor et al. 2003, 2011; Udry et al. 2019), ESPRESSO (Echelle Spectrograph for Rocky Exoplanet and Stable Spectroscopic Observations, Pepe et al. 2010), PFS (Planet Finder Spectrograph, Crane et al.
2010; Feng et al. 2019), and CARMENES (Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Echelle Spectrographs, Quirrenbach et al. 2016). Importantly, Liu et al. (2019a) proposed that the characteristic mass (core mass) of super-Earths is set by the pebble isolation mass, which increases linearly with the mass of the stellar host. The pebble isolation mass can be written as (Liu et al. 2019a) $$M_{\rm iso}=25\left(\frac{M_{\star}}{1\ M_{\odot}}\right)\left(\frac{h_{\rm g}}{0.05}\right)^{3}\ M_{\oplus}=25\left(\frac{M_{\star}}{1\ M_{\odot}}\right)^{4/3}\left(\frac{\dot{M}_{\rm g\odot}}{6\times 10^{-8}\ M_{\odot}\ \rm yr^{-1}}\right)^{2/3}\ M_{\oplus}.$$ (26) In the above equation, we neglect the dependencies of $M_{\rm iso}$ on $\alpha_{\rm t}$ and $\eta$; $\dot{M}_{\rm g\odot}$ is the fiducial disk accretion rate around a solar-mass star, and $h_{\rm g}$ is derived from the viscously heated disk model by assuming that the disk accretion rate scales with the square of the stellar mass. Figure 7b shows the resultant planets generated from the population synthesis model, and the solid line refers to the pebble isolation mass (Eq. 26). We clearly see that the masses of super-Earths reaching $M_{\rm iso}$ increase with their stellar masses: approximately Earth-mass terrestrial planets form around stars of $0.1\ M_{\odot}$ and $10{-}20\ M_{\oplus}$ planets around solar-mass stars. We also note that $M_{\rm iso}$ decreases as the disk evolves and $h_{\rm g}$ as well as $\dot{M}_{\rm g}$ decline. For instance, planets around solar-mass stars reach an $M_{\rm iso}$ of $7\ M_{\oplus}$ when $\dot{M}_{\rm g\odot}{=}10^{-8}\ M_{\odot}\ \rm yr^{-1}$. Since $M_{\rm iso}$ is so low around late M-dwarfs, massive gas giants are unlikely to form in such systems through the pebble accretion channel. Liu et al. (2020) further applied this approach to even lower-mass central hosts and found that the above linear mass scaling holds for planets around brown dwarfs.
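Eq. 26 can be evaluated directly. A minimal sketch follows; the function name and defaults are ours, with the fiducial rate $\dot M_{\rm g\odot}=6\times10^{-8}\,M_{\odot}\,{\rm yr}^{-1}$ taken from the equation itself:

```python
def pebble_isolation_mass(m_star, mdot_gas=6e-8):
    """Pebble isolation mass in Earth masses following Eq. 26,
    neglecting the alpha_t and eta dependencies.
    m_star: stellar mass in solar masses.
    mdot_gas: disk accretion rate in M_sun/yr."""
    return 25.0 * m_star ** (4.0 / 3.0) * (mdot_gas / 6e-8) ** (2.0 / 3.0)

# Reproduces the numbers quoted in the text:
print(pebble_isolation_mass(1.0))        # 25 M_Earth at the fiducial rate
print(pebble_isolation_mass(0.1))        # ~1 M_Earth around a 0.1 M_sun star
print(pebble_isolation_mass(1.0, 1e-8))  # ~7 M_Earth in an evolved disk
```

The last call illustrates the point made above: as $\dot{M}_{\rm g}$ declines during disk evolution, the isolation mass around a solar-mass star drops from $25$ to ${\sim}7\ M_{\oplus}$.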
6 Comparison between pebble accretion and planetesimal accretion

We compare the efficiency of pebble accretion and planetesimal accretion in Sect. 6.1. The stellar mass and radial distance dependences are discussed in Sect. 6.2. We highlight the importance of incorporating both accretion mechanisms for planet formation in Sect. 6.3.

6.1 Why pebble accretion is more efficient than planetesimal accretion

Pebble accretion has gained attention because it is a more efficient growth process than planetesimal accretion. This comparison can be demonstrated from the following two aspects: the accretion cross section and the total mass of feeding materials. For planetesimal accretion, the gravitational focusing factor reaches its maximum value when $\delta v=v_{\rm H}$ (see Sect. 4.1). Therefore, the ratio between the accretion radius and the physical radius is expressed as $$b_{\rm PlA}/R=\sqrt{1+\left(\frac{v_{\rm esc}}{R_{\rm H}\Omega_{\rm K}}\right)^{2}}\simeq 2\left(\frac{M_{\rm p}}{M_{\star}}\right)^{1/6}\left(\frac{a}{R}\right)^{1/2}\simeq 35\left(\frac{a}{1\rm\ AU}\right)^{1/2}\left(\frac{\rho_{\bullet}}{5\ \rm g\ cm^{-3}}\right)^{1/6}\left(\frac{M_{\star}}{M_{\odot}}\right)^{-1/6}.$$ (27) In the shear-dominated pebble accretion regime ($\Delta v\simeq b_{\rm peb}\Omega_{\rm K}$), the accretion radius from Eq. 17 reads $$b_{\rm PA}\simeq\sqrt{\frac{GM_{\rm p}t_{\rm stop}}{\Delta v}}\simeq\tau_{\rm s}^{1/3}R_{\rm H}.$$ (28) The pebble accretion radius can maximally approach the planet's Hill radius when $\tau_{\rm s}$ reaches the order of unity. The enhancement factor is given by $$b_{\rm PA}/R\simeq\tau_{\rm s}^{1/3}\left(\frac{R_{\rm H}}{R}\right)\sim 230\tau_{\rm s}^{1/3}\left(\frac{a}{1\rm\ AU}\right)\left(\frac{\rho_{\bullet}}{5\ \rm g\ cm^{-3}}\right)^{1/3}.$$ (29) From Eqs.
27 and 29, we can see that pebble accretion generally has a larger accretion cross section than planetesimal accretion. This feature can be seen in Figure 8, which illustrates the trajectories of planetesimals (left) and pebbles (right) during their interactions with an Earth-mass planet. Pebbles lose angular momentum more efficiently and sediment toward the planet through a combination of gas drag and gravitational force. The above difference is more pronounced when the Stokes number of the pebbles is closer to unity, the planet is further out, and/or the stellar host is more massive. In addition, since planetesimals (${\sim}100$ km) are weakly affected by gas, their orbits are relatively fixed. The planetary bodies only accrete their local planetesimals. The maximum mass that the planet can reach (the planetesimal isolation mass) is given by $M_{\rm pl,iso}{=}2\pi a\Delta a\Sigma_{\rm pl}$, where the width of the feeding zone is $\Delta a{\sim}10R_{\rm H}$. For the MMSN, the planet grows at most to a Mars mass in the inner terrestrial planet region and to a few Earth masses beyond the water-ice line (e.g., Ida & Lin 2004a). On the contrary, pebble accretion is not limited by the local pebble density. The majority of the dust resides in the outer part of the disk. Pebbles, which grow from dust grains, drift inward from the outer region of the disk. Due to the mobility of pebbles, there is no concept of a feeding zone for pebble accretion. The feeding materials correspond to all pebbles that are able to bypass the orbit of the planet. The planets stop their core growth only when they reach the pebble isolation mass, which has a similar scaling to the gap-opening mass (Eq. 20). Therefore, in contrast to planetesimal accretion, the final core mass of a planet grown by pebble accretion mainly depends on the mass of the central star and the aspect ratio of the disk, and not on the local density of solids.
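The cross-section comparison in Eqs. 27 and 29 can be sketched numerically. The prefactors 35 and 230 are the fiducial values at 1 AU quoted in the equations; the $\rho_{\bullet}^{1/6}$ and $\rho_{\bullet}^{1/3}$ scalings follow from $R\propto(M_{\rm p}/\rho_{\bullet})^{1/3}$, and the function names and default arguments are our own:

```python
def b_over_r_planetesimal(a_au, rho=5.0, m_star=1.0):
    """Accretion-to-physical radius ratio for planetesimal accretion
    at maximum gravitational focusing (the Eq. 27 scaling).
    a_au: orbital distance in AU; rho: internal density in g/cm^3;
    m_star: stellar mass in solar masses."""
    return 35.0 * a_au ** 0.5 * (rho / 5.0) ** (1.0 / 6.0) * m_star ** (-1.0 / 6.0)

def b_over_r_pebble(a_au, tau_s, rho=5.0):
    """Enhancement factor for shear-regime pebble accretion
    (the Eq. 29 scaling), valid for tau_s up to order unity."""
    return 230.0 * tau_s ** (1.0 / 3.0) * a_au * (rho / 5.0) ** (1.0 / 3.0)

# At 5 AU, even tau_s = 0.1 pebbles see a far larger accretion radius
# than planetesimals; the cross-section advantage goes as the square:
b_pla = b_over_r_planetesimal(5.0)  # ~78
b_peb = b_over_r_pebble(5.0, 0.1)   # ~530
print(b_peb / b_pla, (b_peb / b_pla) ** 2)
```

The example makes the Figure 8 contrast quantitative: the pebble accretion radius exceeds the planetesimal one by a factor of a few, i.e. by over an order of magnitude in cross section, and the gap widens with distance and Stokes number.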
6.2 Stellar mass and radial distance dependence

Here we qualitatively discuss how these two accretion scenarios depend on the mass of the stellar host and the orbital distance. First, we focus on the stellar mass dependence. Based on Eq. 15, the growth timescale for planetesimal accretion is given by $t_{\rm PlA}\propto\Sigma_{\rm pl}^{-1}M_{\star}^{-1/2}$. From Eq. 25, we have $t_{\rm PA}\propto\dot{M}_{\rm peb}^{-1}M_{\star}^{1/2}$ in the $2$D headwind regime. Since observations indicate lower-mass disks around less massive stars (Natta et al. 2006; Andrews et al. 2013; Pascucci et al. 2016), the planetesimal surface density and pebble flux are expected to decrease with decreasing mass of the stellar host. As a result, both pebble accretion and planetesimal accretion tend to be slower as $M_{\star}$ decreases. Furthermore, assuming that the planetesimal surface density and pebble flux have the same stellar mass dependence, we find that the ratio of these two timescales, $t_{\rm PA}/t_{\rm PlA}\propto M_{\star}^{1/2}\times M_{\star}^{1/2}=M_{\star}$ (the common solid-reservoir scaling cancels), increases with $M_{\star}$. This in turn indicates that planetesimal accretion slows down faster than pebble accretion as $M_{\star}$ decreases. In other words, pebble accretion is even more pronounced compared to planetesimal accretion for planet growth around lower-mass stars than around higher-mass stars. The above conclusion also holds when adopting $t_{\rm PA}$ for the $2$D shear regime (Eq. 25) or $t_{\rm PlA}$ for the expression of oligarchic growth (Eq. 14). The radial distance dependence is explained as follows. Based on Eq. 15, the planetesimal accretion timescale is written as $t_{\rm PlA}\propto\Sigma_{\rm pl}^{-1}r^{3/2}$. Since the disk surface density additionally decreases with $r$, $t_{\rm PlA}$ is expected to be a strongly increasing function of $r$. Planetesimals take much longer to grow their masses at more distant disk locations. Moreover, as discussed in Sect.
4.4, planet-planet encounters result in scatterings/ejections rather than collisions at large orbital distances. Therefore, growth by planetesimal accretion is strongly suppressed or even quenched at large orbital distances. On the other hand, for pebble accretion, $t_{\rm PA}\propto\eta$ in the $2$D shear regime. Although growth by pebble accretion also becomes slower at more distant disk locations, the radial distance dependence is weaker than for planetesimal accretion. Therefore, pebble accretion is more appealing for the formation of distant massive planets. To summarize, when comparing these two accretion scenarios, we find that pebble accretion becomes more attractive than planetesimal accretion when the stellar host is less massive and/or the accretion occurs at a larger orbital distance.

6.3 A hybrid accretion of pebbles and planetesimals

Despite the above distinctive differences, we want to stress that pebble accretion and planetesimal accretion are not two isolated, mutually exclusive growth channels. They are instead likely to be connected and complementary for planet growth. On the one hand, the two mechanisms can operate concurrently and jointly contribute to the mass increase. On the other hand, they also compete at certain levels, since both pebbles and planetesimals are basic components of the solids in protoplanetary disks (Schoonenberg et al. 2019). For instance, the streaming instability converts pebbles into planetesimals once the triggering condition is satisfied. After that, these forming planetesimals accrete surrounding planetesimals as well as pebbles that continuously drift in from the outer region of the disk. The subsequent mass growth is a combination of accreting pebbles and planetesimals (Liu et al. 2019b; Schoonenberg et al. 2019). We can consider two types of extreme situations. In one circumstance, the streaming instability is extensively triggered and the majority of the solids are in the form of planetesimals.
The subsequent growth is then naturally dominated by planetesimal accretion. In the other circumstance, when the streaming instability is only modestly triggered, the dominant solid mass is still in pebbles. Therefore, pebble accretion is the central mechanism for the growth of the cores. As can be expected, a more general pattern is that pebble and planetesimal accretion co-operate to feed planet growth. In addition, the above two mechanisms may occur at different evolutionary stages (Alibert et al. 2018; Venturini & Helled 2020) and/or in different disk regions (Ormel et al. 2017). For example, when a planet reaches the pebble isolation mass, the inward drifting pebble flux is truncated. Pebble accretion and planetesimal accretion then occur for planets whose orbits are beyond the gap-opening planet. Interior to this massive planet, collisions between planetesimals and protoplanets are the only pathway for core mass growth. This might provide an explanation for the mass dichotomy between the inner low-mass terrestrial planets and the outer massive giant planets in the Solar System (Ormel et al. 2017). All in all, detailed exploration based on the concept of hybrid planetesimal and pebble accretion is an active research area and requires future investigations.

7 Summary and future outlook

In this review, we have recapitulated the current state of exoplanet demographics and disk observations (Section 1). Planet formation theories have been overviewed chronologically, including dust coagulation and radial drift (Section 2), planetesimal formation (Section 3), and subsequent planetesimal growth by planetesimal accretion (Section 4) and pebble accretion (Section 5). Importantly, we have discussed how different planet formation models fit observations in each growth stage. Lastly, we propose some open questions in this field, which are existing topics with disputed interpretations and unsolved puzzles that require future studies.
These questions are summarized as follows:

• What is the characteristic size of solid particles in protoplanetary disks? Are these particles of mm-cm size, as suggested by spectral index observations, or $100\ \mu$m, as inferred from polarization measurements? On the one hand, the answer itself is valuable, since it tests the validity of different model interpretations and their underlying assumptions. On the other hand, the answer is also crucially related to subsequent planet formation processes such as the streaming instability and pebble accretion, which depend essentially on the size of solid particles.

• Current streaming instability and pebble accretion studies are limited to systems around single stars. Since the gas streamlines in protoplanetary disks around binaries deviate significantly from the static, axisymmetric cases around single stellar hosts, it still remains to be seen how these gas-assisted formation mechanisms operate in such a highly perturbed binary environment.

• How can we distinguish the planetesimal accretion and pebble accretion mechanisms from an observational perspective? Namely, what would be the key observational signatures that result from different formation channels (see e.g., Brügger et al. 2020)? Can we really claim which types of systems can be uniquely produced by pebble accretion, or vice versa?

• Recent impact models showed that accreting pebbles as well as small planetesimals may get vaporized on their way to the planet interior due to increased thermal ablation and friction (see Sect. 5.3). These materials enrich the planetary atmospheres rather than directly impacting the solid cores. The resultant core mass is substantially lower than the traditional critical mass that a planet requires to initiate rapid gas accretion (Brouwers & Ormel 2020). How this mass deposition process affects the compositional and thermal structures of envelopes, and further influences the gas accretion, is not well understood.
In addition, small bodies in protoplanetary disks feel a headwind from the disk gas and suffer surface shear stress (Paraskov et al. 2006; Schaffer et al. 2020). The influence of wind erosion during the above two accretion processes is also poorly investigated and deserves future study.

• How early can planets form? There seems to be evidence of large grains existing already in the early Class I disk phase (Harsono et al. 2018). Based on isotope measurements of meteorites, the core of Jupiter, with ${\sim}20\ M_{\oplus}$, is suggested to have formed early, within $0.5{-}1$ Myr (Kruijer et al. 2017). Furthermore, if the rings and gaps in the HL Tau disk are induced by planets, this indicates that planets of sub-Jupiter masses might already form at large orbital separations within $1$ Myr in such a young system. Putting all the clues together, can we speculate that planet formation is more rapid than what we thought before?

• We have already learned much about the planet-related properties from exoplanet demographics: occurrence rate, mass, radius, and the corresponding dependencies on the mass and metallicity of the stellar host. The next step is to understand the planetary-system-related properties, i.e., the architectures. Can different populations of planets co-exist with each other under certain conditions, or does the formation of one type of planet inhibit the growth of the others? What would be the architecture of these systems? Some of these questions have already been pointed out by Zhu & Wu (2018) and Masuda et al. (2020). In order to further address these questions, both follow-up observations and dedicated numerical modelling are needed.

• As raised by Murchikova & Tremaine (2020), is planet formation really an independent and generalized process, in which planets do not know about each other or about the environment where they reside? Or is planet formation universally set by a few key physical processes related to disk and host properties (Liu et al.
2019a), and therefore the resulting planetary systems may contain some degree of intra-system/inter-system similarity (Millholland et al. 2017; Wang 2017; Weiss et al. 2018; Wu 2019)? With a rapidly increasing number of characterized exoplanets and protoplanetary disks, we have greatly improved the statistics of planet formation, in the context of both initial conditions and final products. The theoretical study of planet formation has advanced enormously in the last decade. The validity of modern planet formation scenarios needs to be matched against various observational constraints. Future work is also required to account for the consistency with findings from ongoing/upcoming space and ground-based facilities, such as TESS and ALMA. Acknowledgements. We thank editor Wing-Huen Ip and the anonymous referee for valuable comments and suggestions. We also thank Anders Johansen, Michiel Lambrechts, Joanna Dra̧żkowska, Chris Ormel, Rixin Li, Jiwei Xie, Shangfei Liu, Wei Zhu and Feng Long for proofreading the manuscript and providing useful suggestions. B.L. greatly appreciates the valuable discussions with Gijs Mulders, Daniel Harsono, Joanna Dra̧żkowska and Chris Ormel as a PPVII chapter proposal team. In addition, B.L. wishes to express the deepest gratitude to Adam Showman (1968-2020) for his inspiration and guidance in B.L.'s early academic career. Adam was an amazing scientist and a tremendous mentor. His scientific contribution as well as his great personality will keep influencing and being remembered by the community. Lastly, B.L. feels especially grateful to his girlfriend, Jing Yang, for her spiritual support during the writing period. In these tough Covid-19 epidemic times, many things that seemed extraordinary have now become ordinary. The work would not have been done successfully without her dedicated encouragement. B.L.
is supported by the European Research Council (ERC Consolidator Grant 724687-PLANETESYS), the Swedish Walter Gyllenberg Foundation, and start-up grant of Bairen program from Zhejiang University. J.J. is supported by the B-type Strategic Priority Program of the Chinese Academy of Sciences (Grant No. XDB41000000), the National Natural Science Foundation of China (Grant Nos. 11773081), CAS Interdisciplinary Innovation Team, Foundation of Minor Planets of the Purple Mountain Observatory. References Abod et al. (2019) Abod, C. P., Simon, J. B., Li, R., et al. 2019, ApJ, 883, 192 Adachi et al. (1976) Adachi, I., Hayashi, C., & Nakazawa, K. 1976, Progress of Theoretical Physics, 56, 1756 Agnor & Asphaug (2004) Agnor, C., & Asphaug, E. 2004, ApJ, 613, L157 Agnor et al. (1999) Agnor, C. B., Canup, R. M., & Levison, H. F. 1999, Icarus, 142, 219 Albrecht et al. (2012) Albrecht, S., Winn, J. N., Johnson, J. A., et al. 2012, ApJ, 757, 18 Alibert & Benz (2017) Alibert, Y., & Benz, W. 2017, A&A, 598, L5 Alibert et al. (2018) Alibert, Y., Venturini, J., Helled, R., et al. 2018, Nature Astronomy, 2, 873 ALMA Partnership et al. (2015) ALMA Partnership, Brogan, C. L., Pérez, L. M., et al. 2015, ApJ, 808, L3 Amelin et al. (2010) Amelin, Y., Kaltenbach, A., Iizuka, T., et al. 2010, Earth and Planetary Science Letters, 300, 343 Amelin et al. (2002) Amelin, Y., Krot, A. N., Hutcheon, I. D., & Ulyanov, A. A. 2002, Science, 297, 1678 Anderson et al. (2016) Anderson, K. R., Storch, N. I., & Lai, D. 2016, MNRAS, 456, 3671 Andrews et al. (2013) Andrews, S. M., Rosenfeld, K. A., Kraus, A. L., & Wilner, D. J. 2013, ApJ, 771, 129 Andrews et al. (2009) Andrews, S. M., Wilner, D. J., Hughes, A. M., Qi, C., & Dullemond, C. P. 2009, ApJ, 700, 1502 Andrews et al. (2012) Andrews, S. M., Wilner, D. J., Hughes, A. M., et al. 2012, ApJ, 744, 162 Andrews et al. (2018) Andrews, S. M., Huang, J., Pérez, L. M., et al. 2018, ApJ, 869, L41 Ansdell et al. (2018) Ansdell, M., Williams, J. P., Trapman, L., et al. 
2018, ApJ, 859, 21 Artymowicz (1993) Artymowicz, P. 1993, ApJ, 419, 166 Artymowicz & Lubow (1994) Artymowicz, P., & Lubow, S. H. 1994, ApJ, 421, 651 Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481 Ataiee et al. (2018) Ataiee, S., Baruteau, C., Alibert, Y., & Benz, W. 2018, A&A, 615, A110 Bacciotti et al. (2018) Bacciotti, F., Girart, J. M., Padovani, M., et al. 2018, ApJ, 865, L12 Bae & Zhu (2018) Bae, J., & Zhu, Z. 2018, ApJ, 859, 119 Bai (2015) Bai, X.-N. 2015, ApJ, 798, 84 Bai (2016) Bai, X.-N. 2016, ApJ, 821, 80 Bai & Stone (2010) Bai, X.-N., & Stone, J. M. 2010, ApJ, 722, 1437 Bai & Stone (2013) Bai, X.-N., & Stone, J. M. 2013, ApJ, 769, 76 Ballering & Eisner (2019) Ballering, N. P., & Eisner, J. A. 2019, AJ, 157, 144 Baruteau et al. (2014) Baruteau, C., Crida, A., Paardekooper, S. J., et al. 2014, in Protostars and Planets VI, ed. H. Beuther, R. S. Klessen, C. P. Dullemond, & T. Henning, 667 Batalha et al. (2013) Batalha, N. M., Rowe, J. F., Bryson, S. T., et al. 2013, ApJS, 204, 24 Batygin & Adams (2017) Batygin, K., & Adams, F. C. 2017, AJ, 153, 120 Batygin & Morbidelli (2013) Batygin, K., & Morbidelli, A. 2013, AJ, 145, 1 Bell et al. (1997) Bell, K. R., Cassen, P. M., Klahr, H. H., & Henning, T. 1997, ApJ, 486, 372 Benecchi et al. (2009) Benecchi, S. D., Noll, K. S., Grundy, W. M., et al. 2009, Icarus, 200, 292 Benisty et al. (2016) Benisty, M., Stolker, T., Pohl, A., et al. 2016, A&A, 597, A42 Benz et al. (2014) Benz, W., Ida, S., Alibert, Y., Lin, D., & Mordasini, C. 2014, in Protostars and Planets VI, ed. H. Beuther, R. S. Klessen, C. P. Dullemond, & T. Henning, 691 Birnstiel et al. (2010) Birnstiel, T., Dullemond, C. P., & Brauer, F. 2010, A&A, 513, A79 Birnstiel et al. (2012) Birnstiel, T., Klahr, H., & Ercolano, B. 2012, A&A, 539, A148 Birnstiel et al. (2018) Birnstiel, T., Dullemond, C. P., Zhu, Z., et al. 2018, ApJ, 869, L45 Bitsch et al. (2019) Bitsch, B., Izidoro, A., Johansen, A., et al. 
2019, A&A, 623, A88 Bitsch et al. (2015) Bitsch, B., Johansen, A., Lambrechts, M., & Morbidelli, A. 2015, A&A, 575, A28 Bitsch et al. (2018) Bitsch, B., Morbidelli, A., Johansen, A., et al. 2018, A&A, 612, A30 Blum & Wurm (2000) Blum, J., & Wurm, G. 2000, Icarus, 143, 138 Blum & Wurm (2008) Blum, J., & Wurm, G. 2008, ARA&A, 46, 21 Bohlin et al. (1978) Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, ApJ, 224, 132 Bonfils et al. (2013) Bonfils, X., Delfosse, X., Udry, S., et al. 2013, A&A, 549, A109 Borucki et al. (2010) Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977 Borucki et al. (2011) Borucki, W. J., Koch, D. G., Basri, G., et al. 2011, ApJ, 736, 19 Brauer et al. (2008) Brauer, F., Dullemond, C. P., & Henning, T. 2008, A&A, 480, 859 Brinch et al. (2016) Brinch, C., Jørgensen, J. K., Hogerheijde, M. R., Nelson, R. P., & Gressel, O. 2016, ApJ, 830, L16 Brouwers & Ormel (2020) Brouwers, M. G., & Ormel, C. W. 2020, A&A, 634, A15 Brouwers et al. (2018) Brouwers, M. G., Vazan, A., & Ormel, C. W. 2018, A&A, 611, A65 Brügger et al. (2020) Brügger, N., Burn, R., Coleman, G., Alibert, Y., & Benz, W. 2020, arXiv e-prints, arXiv:2006.04121 Bryan et al. (2019) Bryan, M. L., Knutson, H. A., Lee, E. J., et al. 2019, AJ, 157, 52 Buchhave et al. (2012) Buchhave, L. A., Latham, D. W., Johansen, A., et al. 2012, Nature, 486, 375 Buchhave et al. (2014) Buchhave, L. A., Bizzarro, M., Latham, D. W., et al. 2014, Nature, 509, 593 Burke et al. (2014) Burke, C. J., Bryson, S. T., Mullally, F., et al. 2014, ApJS, 210, 19 Cai et al. (2017) Cai, M. X., Kouwenhoven, M. B. N., Portegies Zwart, S. F., & Spurzem, R. 2017, MNRAS, 470, 4337 Calvet et al. (2005) Calvet, N., D’Alessio, P., Watson, D. M., et al. 2005, ApJ, 630, L185 Carrasco-González et al. (2019) Carrasco-González, C., Sierra, A., Flock, M., et al. 2019, ApJ, 883, 71 Carrera et al. (2017) Carrera, D., Gorti, U., Johansen, A., & Davies, M. B. 2017, ApJ, 839, 16 Carrera et al. 
2010, ApJ, 712, 198 Zanazzi & Lai (2018) Zanazzi, J. J., & Lai, D. 2018, MNRAS, 478, 835 Zhang & Zhou (2010) Zhang, H., & Zhou, J.-L. 2010, ApJ, 714, 532 Zhang et al. (2015) Zhang, K., Blake, G. A., & Bergin, E. A. 2015, ApJ, 806, L7 Zhang et al. (2020) Zhang, K., Bosman, A. D., & Bergin, E. A. 2020, ApJ, 891, L16 Zhang et al. (2018) Zhang, S., Zhu, Z., Huang, J., et al. 2018, ApJ, 869, L47 Zhang (2020) Zhang, X. 2020, \raa, 20, 99 Zhang et al. (2014) Zhang, X., Liu, B., Lin, D. N. C., & Li, H. 2014, ApJ, 797, 20 Zhao (2020) Zhao, Y. et al. 2020, Nat. Astron., in press Zheng et al. (2015) Zheng, X., Kouwenhoven, M. B. N., & Wang, L. 2015, MNRAS, 453, 2759 Zheng et al. (2017) Zheng, X., Lin, D. N. C., & Kouwenhoven, M. B. N. 2017, ApJ, 836, 207 Zhou & Lin (2007) Zhou, J.-L., & Lin, D. N. C. 2007, ApJ, 666, 447 Zhou et al. (2007) Zhou, J.-L., Lin, D. N. C., & Sun, Y.-S. 2007, ApJ, 666, 423 Zhou et al. (2012) Zhou, J.-L., Xie, J.-W., Liu, H.-G., Zhang, H., & Sun, Y.-S. 2012, Research in Astronomy and Astrophysics, 12, 1081 Zhu (2019) Zhu, W. 2019, ApJ, 873, 8 Zhu et al. (2018) Zhu, W., Petrovich, C., Wu, Y., Dong, S., & Xie, J. 2018, ApJ, 860, 101 Zhu et al. (2016) Zhu, W., Wang, J., & Huang, C. 2016, ApJ, 832, 196 Zhu & Wu (2018) Zhu, W., & Wu, Y. 2018, AJ, 156, 92 Zhu et al. (2015) Zhu, Z., Dong, R., Stone, J. M., & Rafikov, R. R. 2015, ApJ, 813, 88 Zhu et al. (2012) Zhu, Z., Nelson, R. P., Dong, R., Espaillat, C., & Hartmann, L. 2012, ApJ, 755, 6 Zhu et al. (2015) Zhu, Z., Stone, J. M., & Bai, X.-N. 2015, ApJ, 801, 81 Zhu et al. (2019) Zhu, Z., Zhang, S., Jiang, Y.-F., et al. 2019, ApJ, 877, L18 Zsom et al. (2010) Zsom, A., Ormel, C. W., Güttler, C., Blum, J., & Dullemond, C. P. 2010, A&A, 513, A57
Lorentz invariance violation and simultaneous emission of electromagnetic and gravitational waves ${}^{1,3}$E. Passos [email protected]    ${}^{1}$M. A. Anacleto [email protected]    ${}^{1,2}$F. A. Brito [email protected]    ${}^{4}$O. Holanda [email protected]    ${}^{1}$G. B. Souza [email protected]    ${}^{3}$C. A. D. Zarro [email protected] ${}^{1}$Departamento de Física, Universidade Federal de Campina Grande, Caixa Postal 10071, 58429-900, Campina Grande, Paraíba, Brazil. ${}^{2}$Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, João Pessoa, Paraíba, Brazil. ${}^{3}$Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 21945, Rio de Janeiro, Rio de Janeiro, Brazil. ${}^{4}$Departamento de Física Teórica, Instituto de Física, Universidade do Estado do Rio de Janeiro, Rua São Francisco Xavier 524, 20550-013, Maracanã, Rio de Janeiro, Brazil Abstract In this work, we compute phenomenological bounds for higher-derivative extensions of electrodynamics and massive gravity, assuming that an astrophysical process can generate gravitational and electromagnetic waves simultaneously. Following the Myers-Pospelov approach, we construct Lorentz-invariance-violating (LIV) higher-order derivative terms for electrodynamics and for massive gravitational waves. We compute the corrected equations of motion of these models, their dispersion relations, and the corresponding propagation velocities. The LIV parameters for the gravitational and electromagnetic sectors, $\xi_{g}$ and $\xi_{\gamma}$ respectively, are obtained for three different approaches: luminal photons, time delay of flight, and the difference between the graviton and photon velocities. These LIV parameters depend on the mass scales at which the LIV terms become relevant, $M$ for the electromagnetic sector and $M_{1}$ for the gravitational one. 
We obtain, using the values for $M$ and $M_{1}$ found in the literature, that $\xi_{g}\sim 10^{-2}$, which is expected to be phenomenologically relevant, and $\xi_{\gamma}\sim 10^{3}$, which is not suitable for an effective LIV theory. However, we show that $\xi_{\gamma}$ can be interesting from a phenomenological point of view if $M\gg M_{1}$. Finally, the difference between the velocities of the photon and the graviton was calculated, and our result, $v_{\gamma}-v_{g}\lesssim 0.49\times 10^{-19}$, is compatible with the results already presented in the literature. pacs: XX.XX, YY.YY I Introduction Lorentz-invariance-violating (LIV) theories have been extensively studied in high-energy systems. The main focus is to develop an effective probe of the phenomenological limits of Lorentz invariance as a direct consequence of Planck-scale physics, such as the fuzzy nature of spacetime suggested by quantum gravity theories. In this context, the possible effects related to LIV are energy- and helicity-dependent photon propagation velocities. Energy-dependent bounds on the LIV parameters can be inferred by measuring the arrival times of photons of different energies emitted almost simultaneously from distant objects Camelia01 . To measure such bounds, ultra-high-energy phenomena are required, such as a gamma-ray burst (GRB) Laurent:2011he ; Ahmadi:2012we ; Krawczynski:2013uga ; Abdo or a flare of an active galactic nucleus Aharonian ; Albert:2007qk . The LIV parameters can also be constrained by measuring how the polarization direction of an x-ray beam of cosmological origin changes as a function of energy Gambini01 . Such observations have been used as astrophysical laboratories to verify the possible occurrence of LIV in nature Gleiser ; Camal01 ; Matt ; Macc . 
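The size of the effect behind such time-of-flight tests is easy to estimate. The Python sketch below uses illustrative numbers of our own choosing (a source at roughly 1 Gpc and a 10 GeV photon with a linearly suppressed, $n=3$ correction), not values taken from this paper:

```python
# Order-of-magnitude sketch of the photon time-of-flight test described above.
# All numbers are illustrative assumptions, not values from this paper.
M_Pl = 1.22e28     # Planck mass scale, eV
E = 1.0e10         # photon energy, eV (10 GeV)
D_over_c = 1.0e17  # light-travel time for a source at roughly 1 Gpc, s
xi = 1.0           # O(1) dimensionless LIV parameter

# linear (n = 3) correction, v ~ 1 - xi*E/M_Pl, accumulates the delay
delta_t = xi * (E / M_Pl) * D_over_c
print(delta_t)     # ~0.1 s, well within GRB timing resolution
```

Delays of this size are resolvable against millisecond-scale GRB variability, which is why such observations can bound Planck-suppressed LIV parameters.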
One approach to LIV effective theories was initially proposed by Myers and Pospelov, who break Lorentz symmetry by introducing dimension-five operators along with a nondynamical four-vector $n_{\mu}$ interacting with scalar, fermion, and photon fields MP . If we restrict our attention to the photon sector, there is a single dimension-five contribution, which gives a correction of order $\xi_{\gamma}p^{3}/M_{\rm Pl}$. The extension to dimension-$n$ operators satisfies all the Myers-Pospelov criteria: $(\rm i)$ quadratic in the fields, $(\rm ii)$ one more derivative than the usual terms, $(\rm iii)$ gauge invariant, $(\rm iv)$ Lorentz invariant, except for the presence of an external four-vector $n_{\mu}$, $(\rm v)$ not reducible to lower-dimensional operators by the equations of motion, and $(\rm vi)$ not reducible to a total derivative. In this set-up, one finds that for dimension-$n$ operators the correction is given by $\xi_{\gamma}p^{n}/(M_{\rm Pl})^{n-2}$. The detection of gravitational waves reported by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo Collaborations GW01 ; GW02 opens a new window in observational cosmology and astrophysics. In astrophysics in particular, the hitherto detected gravitational waves come from the merger of two black holes. However, it is expected that gravitational waves emitted during the merger of other compact astrophysical objects, such as two neutron stars or a black hole and a neutron star, will also be measured. The merger of two such compact objects is expected to be a very complex phenomenon, probably involving electromagnetic-wave or neutrino emission; hence, to obtain new insights into the merger process, one can simultaneously observe the emission of gravitational waves and of electromagnetic waves or neutrinos. 
The observation in the event GW150914 of a gravitational wave GW01 , together with a short gamma-ray burst detected in the same event by the Fermi Gamma-Ray Space Telescope Fermi01 , has been used to obtain constraints on LIV parameters Blas ; Ellins ; Gia ; Vincenzo (see also Will ). This issue started an intense debate in the literature, with some authors arguing that this electromagnetic counterpart is not possible BH01 and others showing its plausibility BH02 ; Morsony:2016upv . The null results of searches for simultaneous GRB emission from the other detected gravitational-wave event Smartt:2016oeu ; Yoshida:2016ddu , GW151226 GW02 , apparently suggest that such simultaneous emission is unlikely; however, no firm conclusion is possible, as there are only two detected gravitational-wave events and the physical processes involved in electromagnetic-wave and neutrino emission are not entirely understood. A transient GRB signal above $50$ keV, arriving 0.40 s after the detection of GW150914, was reported in Ref. Fermi01 . This observational fact and a bound on the graviton mass, $m_{g}\leq 10^{-22}$ eV, were used in Refs. Blas ; Ellins ; Gia ; Vincenzo to obtain bounds constraining the difference between the light and graviton speeds and the energy scale at which the LIV effects appear. Our goal is to extend these references by introducing higher-order derivative operators which explicitly break Lorentz symmetry. The main purpose of this work is to consider the electromagnetic and gravitational dispersion relations produced by the presence of the higher-derivative operators in the effective actions. We aim to find new phenomenological constraints on LIV by using simultaneous measurements of the gamma-ray burst (GRB) and gravitational wave (GW) produced by the same source, i.e., assuming that both signals emerge from the same black-hole merger. The outline of this paper is as follows. In Sec. 
II, we demonstrate that both the electromagnetic and linearized gravitational higher-derivative extensions appear as terms of a power series associated with CPT-odd effective actions. In Sec. III, we use a dimension-five operator as a modification of the Maxwell Lagrangian. The associated dispersion relations are obtained. In Sec. IV, we use a dimension-five operator to modify the massive Fierz-Pauli Lagrangian. The associated dispersion relations are obtained for the massive and massless cases. In Sec. V, we discuss some phenomenological constraints. In Sec. VI, we present our conclusions. II The higher derivative LIV extensions II.1 The electromagnetic sector We consider the CPT-odd pure-photon action proposed by Carroll, Field, and Jackiw (CFJ) CFJ and through it obtain higher-derivative contributions as a power series. The CFJ action is written as $$\displaystyle S_{\gamma}$$ $$\displaystyle=$$ $$\displaystyle-\frac{M\xi_{\gamma}}{2}\int d^{4}x\,f^{\mu\lambda\nu}F_{\lambda\mu}A_{\nu},\;\;f^{\mu\lambda\nu}\equiv\varepsilon^{\alpha\mu\lambda\nu}n_{\alpha}$$ (1) $$\displaystyle=$$ $$\displaystyle\frac{M\xi_{\gamma}}{2}\int d^{4}x\,A_{\mu}\Pi^{\mu\nu}A_{\nu}$$ where $\xi_{\gamma}$ is a dimensionless parameter, $M$ is the mass scale at which LIV effects emerge, and $\Pi^{\mu\nu}=f^{\mu\lambda\nu}\partial_{\lambda}$ is the electromagnetic LIV operator, which enjoys the following properties: $\partial_{\mu}\Pi^{\mu\nu}=0,\;\;n_{\mu}\Pi^{\mu\nu}=0,$ $\Pi_{\mu\nu}\Pi^{\nu\beta}=-\big{[}\delta^{\beta}_{\mu}\big{(}(n\cdot\partial)^{2}-n^{2}\partial^{2}\big{)}-n^{\beta}\big{(}\partial_{\mu}(n\cdot\partial)-n_{\mu}\partial^{2}\big{)}-\partial^{\beta}\big{(}n_{\mu}(n\cdot\partial)-n^{2}\partial_{\mu}\big{)}\big{]}$ and $\Pi_{\mu\nu}\Pi^{\mu\nu}=2\big{(}(n\cdot\partial)^{2}-n^{2}\partial^{2}\big{)}$. Notice that the above effective action is gauge invariant (up to a surface term) under gauge transformations $\delta A_{\mu}=\partial_{\mu}\Lambda$. 
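The quoted properties of $\Pi^{\mu\nu}$ can be checked numerically by trading $\partial_{\mu}$ for an arbitrary vector $p_{\mu}$. The numpy sketch below is our own cross-check, with assumed conventions $\eta={\rm diag}(-1,+1,+1,+1)$ and $\varepsilon^{0123}=+1$:

```python
import itertools
import numpy as np

def levi_civita():
    """Contravariant Levi-Civita symbol with eps^{0123} = +1."""
    eps = np.zeros((4, 4, 4, 4))
    for perm in itertools.permutations(range(4)):
        inv = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
        eps[perm] = (-1) ** inv
    return eps

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat metric eta_{mu nu}
eps_up = levi_civita()

rng = np.random.default_rng(1)
n_up = rng.normal(size=4)            # background vector n^mu
p_up = rng.normal(size=4)            # stand-in for the derivative d_mu
n_dn, p_dn = g @ n_up, g @ p_up

# Pi^{mu nu} = eps^{alpha mu lambda nu} n_alpha p_lambda
Pi_up = np.einsum('amln,a,l->mn', eps_up, n_dn, p_dn)
Pi_dn = g @ Pi_up @ g                # both indices lowered

# transversality: d_mu Pi^{mu nu} = 0 and n_mu Pi^{mu nu} = 0
assert np.allclose(p_dn @ Pi_up, 0.0)
assert np.allclose(n_dn @ Pi_up, 0.0)

# full contraction: Pi_{mu nu} Pi^{mu nu} = 2((n.p)^2 - n^2 p^2)
lhs = np.einsum('mn,mn->', Pi_dn, Pi_up)
rhs = 2.0 * ((n_dn @ p_up)**2 - (n_dn @ n_up) * (p_dn @ p_up))
assert np.isclose(lhs, rhs)
```

Since $\Pi^{\mu\nu}$ is built from the totally antisymmetric symbol, the transversality relations follow immediately, and the contraction identity reduces to the standard Lorentzian Levi-Civita contraction formula.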
We now extend the CFJ action by replacing the quantity $\xi_{\gamma}M\Pi^{\mu\nu}$ with the power series $$\displaystyle\sum_{l=1,3,...}\frac{\xi_{\gamma_{l}}}{(M)^{l-2}}(\Pi^{\mu\nu})^{l}$$ $$\displaystyle=$$ $$\displaystyle\xi_{\gamma_{1}}M\Pi^{\mu\nu}+\frac{\xi_{\gamma_{3}}}{M}\,\Pi^{\mu\alpha}\Pi_{\alpha\beta}\Pi^{\beta\nu}+...$$ (2) $$\displaystyle=$$ $$\displaystyle\xi_{\gamma_{1}}M\Pi^{\mu\nu}+\frac{\xi_{\gamma_{3}}}{M}\Pi^{\mu\nu}\hat{D}+...$$ where $\hat{D}=(n\cdot\partial)^{2}-n^{2}\partial^{2}$ is a LIV derivative operator. Inserting the series (2) into the action (1), we get $$\displaystyle S_{\gamma}\to\hat{S}_{\gamma}=-\frac{1}{2}\int d^{4}x\,\Big{[}M\xi_{\gamma_{1}}f^{\mu\lambda\nu}F_{\lambda\mu}A_{\nu}+\frac{\xi_{\gamma_{3}}}{M}f^{\mu\lambda\nu}\,\hat{D}F_{\lambda\mu}A_{\nu}+...\Big{]}$$ (3) We can rewrite Eq. (3) in the form $$\displaystyle S_{\gamma}\to\hat{S}_{\gamma}=-\frac{1}{2}\int d^{4}x\,\Big{[}M\xi_{\gamma_{1}}f^{\mu\lambda\nu}F_{\lambda\mu}A_{\nu}+\frac{\xi_{\gamma_{3}}}{M}f^{\mu\lambda\rho}n^{\nu}n^{\sigma}(\partial_{\lambda}F_{\mu\nu})F_{\rho\sigma}-\frac{\xi_{\gamma_{3}}}{M}f^{\mu\lambda\rho}n^{2}\eta^{\nu\sigma}(\partial_{\lambda}F_{\mu\nu})F_{\rho\sigma}+...\Big{]}.$$ (4) Notice that we obtain dimension-five operators as extra terms of the power-series expansion in Eq. (3) (or Eq. (4)), which lead to cubic modifications of the dispersion relations of electromagnetic waves. These contributions obey the main Myers-Pospelov criteria MP . II.2 The gravitational sector In analogy to the electromagnetic case above, we consider the following LIV extension of the Fierz-Pauli action proposed in Ref. 
GravPassos : $$\displaystyle S_{g}$$ $$\displaystyle=$$ $$\displaystyle-\frac{(M_{1})^{3}\xi_{g}}{2}\int d^{4}x\,f^{\mu\lambda\nu}h_{\rho\mu}\partial_{\lambda}h^{\rho}_{\,\nu},\;\;f^{\mu\lambda\nu}\equiv\varepsilon^{\alpha\mu\lambda\nu}n_{\alpha}$$ (5) where $\xi_{g}$ is a dimensionless parameter and $M_{1}$ is the mass scale at which LIV effects in the gravitational sector become pronounced. Here $h_{\mu\nu}$ is a second-rank symmetric tensor characterizing weak metric fluctuations ($h_{\mu\nu}=g_{\mu\nu}-\eta_{\mu\nu}$, where $g_{\mu\nu}$ is the metric tensor of the curved space, $\eta_{\mu\nu}={\rm diag}(-1,+1,+1,+1)$ is the metric tensor of the flat space, and $h=\eta^{\mu\nu}h_{\mu\nu}$ is the trace of $h_{\mu\nu}$ Hinterbichler:2011tt ). Notice that under gauge transformations $\delta h_{\mu\nu}=\partial_{\mu}\xi_{\nu}+\partial_{\nu}\xi_{\mu}$, for a spacetime-dependent gauge parameter $\xi_{\mu}(x)$, the action (5) implies the following variation: $\delta{\cal L}_{g}\sim f^{\mu\lambda\nu}\xi_{\mu}\partial_{\lambda}\partial_{\rho}h^{\rho}_{\,\nu}$, which does not vanish in general, so that the action $S_{g}$ is not gauge invariant. This issue can be investigated using the Stuckelberg formalism for a massive spin-two field Hinterbichler:2011tt . The presence of explicit LIV terms in the gravity sector leads to an apparent inconsistency, as diffeomorphism invariance is broken and $\nabla_{\mu}T^{\mu\nu}\neq 0$. This inconsistency can be resolved if the breaking of Lorentz invariance occurs spontaneously. These results were shown in Ref. Kostelecky:2003fs . However, in Ref. Bluhm:2014oua it was shown that the former result of Ref. Kostelecky:2003fs was too strong and that the presence of explicit LIV terms is also permitted as long as certain conditions are satisfied. It was also shown that massive gravity satisfies such conditions. Following the last section, we rewrite Eq. (5) in terms of a power series, as in the electromagnetic case. 
Again we replace the LIV operator by the following power series $$\displaystyle\sum_{l=1,3,...}\frac{\xi_{g_{l}}}{(M_{1})^{l-2}}(\Pi^{\mu\nu})^{l}$$ $$\displaystyle=$$ $$\displaystyle\xi_{g_{1}}M_{1}\Pi^{\mu\nu}+\frac{\xi_{g_{3}}}{M_{1}}\Pi^{\mu\nu}\hat{D}+...$$ (6) such that $$\displaystyle S_{g}\to\hat{S}_{g}=-\frac{(M_{1})^{2}}{2}\int d^{4}x\,\Big{[}M_{1}\xi_{g_{1}}f^{\mu\lambda\nu}h_{\rho\mu}\partial_{\lambda}h^{\rho}_{\,\nu}+\frac{\xi_{g_{3}}}{M_{1}}f^{\mu\lambda\nu}h_{\rho\mu}\partial_{\lambda}\hat{D}\,h^{\rho}_{\,\nu}+...\Big{]}.$$ (7) Therefore, we also obtain higher-dimensional operators as extra terms of the power-series expansion, which lead to cubic modifications of the dispersion relations of gravitational waves. Despite the restriction on gauge invariance, the extra contribution in Eq. (7) satisfies all the Myers-Pospelov criteria for constructing LIV higher-derivative operators. III The extended electrodynamics In this section we consider the second and third terms of Eq. (4) as modifications of classical electrodynamics (the electromagnetic Maxwell-Myers-Pospelov model). In this case we analyze the dispersion relation of electromagnetic waves. III.1 The model Let us now derive the dynamics associated with the following Lagrangian $$\displaystyle{\cal L}_{\gamma}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{\xi_{\gamma}}{M}f^{\mu\lambda\rho}n^{\nu}n^{\sigma}(\partial_{\lambda}F_{\mu\nu})F_{\rho\sigma}+\frac{\xi_{\gamma}}{M}f^{\mu\lambda\rho}n^{2}\eta^{\nu\sigma}(\partial_{\lambda}F_{\mu\nu})F_{\rho\sigma}$$ (8) where $\xi_{\gamma_{3}}\equiv\xi_{\gamma}$. Using the axial gauge $n^{\mu}A_{\mu}=0$, the equation of motion reads $$\displaystyle\big{(}\partial^{2}\eta^{\mu\nu}-\frac{2\xi_{\gamma}}{M}f^{\mu\lambda\nu}\partial_{\lambda}\,\hat{D}\big{)}A_{\nu}=0.$$ (9) After some straightforward algebra we find that the free continuous spectrum associated with the equation of motion, Eq. 
(9), is governed by the following covariant dispersion relation: $$\displaystyle(k_{\gamma}^{2})^{2}-(2\xi_{\gamma}/M)^{2}\big{(}(n\cdot k_{\gamma})^{2}-n^{2}k_{\gamma}^{2}\big{)}^{3}=0$$ (10) which was also derived in Ref. Reyes . III.2 Modified propagation of electromagnetic waves The solutions of the above dispersion relation for the isotropic configuration, $i.e.$ when $n_{\mu}\equiv(1,\vec{0})$ is chosen to be purely time-like, are investigated in Ref. Reyes . From this isotropic configuration we generalize the dispersion relations associated with dimension-$n$ operators: $$\displaystyle E_{\gamma}^{2}-k_{\gamma}^{2}-2\lambda\xi^{(n)}_{\gamma}\frac{k_{\gamma}^{n}}{M^{n-2}}=0,\;\;\;\;\;k_{\gamma}\equiv|\vec{k}_{\gamma}|$$ (11) with the two polarizations $\lambda=\pm 1$. For $n=3$ we recover the cubic modifications reported in MP , and for $n=4,5,...$ we find new expressions due to the increase of the dimension of the LIV operator. Solving Eq. (11) for the energy, we obtain the frequency solutions $$\displaystyle E_{\gamma}=k_{\gamma}\sqrt{1+2\lambda\xi^{(n)}_{\gamma}\big{(}k_{\gamma}/M\big{)}^{n-2}}.$$ (12) The dispersion relation (12) leads to a modified speed of light for a photon with momentum $k_{\gamma}$: $$\displaystyle v_{\gamma}\equiv\frac{\partial E_{\gamma}}{\partial k_{\gamma}}=\frac{1+n\lambda\xi^{(n)}_{\gamma}\big{(}k_{\gamma}/M\big{)}^{n-2}}{\sqrt{1+2\lambda\xi^{(n)}_{\gamma}\big{(}k_{\gamma}/M\big{)}^{n-2}}}.$$ (13) This dispersion relation leads to rotations of the polarization of linearly polarized photons during their propagation (see, $e.g.$, Refs. Matt and Passos:2016bbc ). We now carry out an expansion of the above expression for $(k_{\gamma})^{n-2}\ll 1/\big{(}2\xi_{\gamma}(M)^{2-n}\big{)}$. 
This leads to $$\displaystyle v_{\gamma}\approx 1+\lambda(n-1)\xi^{(n)}_{\gamma}\bigg{(}\frac{k_{\gamma}}{M}\bigg{)}^{n-2}.$$ (14) Notice that for $n\geq 3$ the speed $v_{\gamma^{(\lambda=-)}}$ can exceed the speed of light, introducing problems of causality (see also Ref. Reyes ). IV The extended linearized gravity IV.1 The Model In this case we consider the second contribution in Eq. (7) as a modification of the dynamics of the massive Fierz-Pauli action. Here we also analyze the dispersion relation of the gravitational waves. Consider the following Lagrangian (for brevity, $\xi_{g_{3}}\equiv\xi_{g}$): $$\displaystyle{\cal L}_{g}$$ $$\displaystyle=$$ $$\displaystyle\frac{(M_{1})^{2}}{2}\Big{[}\frac{1}{2}\partial_{\lambda}h_{\mu\nu}\partial^{\lambda}h^{\mu\nu}+\partial_{\mu}h_{\nu\lambda}\partial^{\nu}h^{\mu\lambda}-\partial_{\mu}h^{\mu\nu}\partial_{\nu}h+\frac{1}{2}\partial_{\lambda}h\partial^{\lambda}h+$$ (15) $$\displaystyle\frac{1}{2}m^{2}_{g}\big{(}h_{\mu\nu}h^{\mu\nu}-h^{2}\big{)}-\frac{\xi_{g}}{M_{1}}f^{\mu\lambda\nu}h_{\rho\mu}\partial_{\lambda}\hat{D}\,h^{\rho}_{\,\nu}\Big{]}.$$ We take this Lagrangian since it can describe a massive spin-two LIV theory. Notice that there is no gauge symmetry due to the mass term, and the $-1$ coefficient in $\big{(}h_{\mu\nu}h^{\mu\nu}-h^{2}\big{)}$ is dubbed the Fierz-Pauli tuning. In this paper, we are interested in the phenomenological aspects associated with Eq. (15). Notice that our model, Eq. (15), has two potentially harmful terms, namely the graviton mass, $m_{g}$, and a fixed background field, $n^{\mu}$, which can give rise to inconsistencies between geometry and dynamics Kostelecky:2003fs ; Bluhm:2014oua . We now discuss how these inconsistencies can be evaded. For the massive gravity terms without the background field, we can proceed in two ways: first, it was shown in Ref. 
Bluhm:2014oua that massive gravity automatically avoids these inconsistencies. Second, one can suppose that the massive field is not fundamental, $i.e.$ it only exists after some condensation process occurs. It can be shown that this mass term, induced by a condensate, can appear without offending the original gauge symmetry JT , in conformity with the Elitzur theorem Elitzur:1975im , which states that a local gauge symmetry cannot be spontaneously broken. For the background-field terms, we can again proceed in two ways. First, this background field can appear after a spontaneous breaking of Lorentz symmetry, which evades the negative result of Ref. Kostelecky:2003fs . Second, we have omitted the kinetic terms for the background field; these terms can trigger the spontaneous violation of Lorentz symmetry. The full Lagrangian then has no inconsistency, as $\nabla_{\mu}T^{\mu\nu}=0$; for an explicitly worked example, see Ref. Rougemont . The equations of motion from (15) are given as $$\displaystyle\Box h^{\mu\nu}-\partial_{\lambda}\partial^{\mu}h^{\lambda\nu}-\partial_{\lambda}\partial^{\nu}h^{\lambda\mu}+\eta^{\mu\nu}\partial_{\lambda}\partial_{\sigma}h^{\lambda\sigma}+\partial^{\mu}\partial^{\nu}h$$ $$\displaystyle-\eta^{\mu\nu}\Box h+m_{g}^{2}\big{(}h^{\mu\nu}-\eta^{\mu\nu}h\big{)}-\frac{2\xi_{g}}{M_{1}}f^{\mu\lambda\beta}\partial_{\lambda}\hat{D}\,\eta^{\alpha\nu}h_{\alpha\beta}=0.$$ (16) Assuming that $m_{g}\neq 0$, after applying $\partial_{\mu}$ to Eq. (16) one obtains $\partial_{\mu}h^{\mu\nu}=\partial^{\nu}h$, and plugging this back into the above equations of motion, we get $$\displaystyle\Box h^{\mu\nu}-\partial^{\mu}\partial^{\nu}h+m_{g}^{2}(h^{\mu\nu}-\eta^{\mu\nu}h)-\frac{2\xi_{g}}{M_{1}}f^{\mu\lambda\beta}\partial_{\lambda}\hat{D}\,\eta^{\alpha\nu}h_{\alpha\beta}=0.$$ (17) Taking the trace of this equation, we find $h=0$, which in turn implies that $\partial_{\mu}h^{\mu\nu}=0$. Provided that $\partial_{\mu}h^{\mu\nu}=0$ and $h=0$, Eq. 
(17) reads $$\displaystyle\Big{[}\big{(}\Box+m_{g}^{2}\big{)}\eta^{\alpha\mu}\eta^{\beta\nu}-\frac{2\xi_{g}}{M_{1}}f^{\mu\lambda\beta}\partial_{\lambda}\hat{D}\,\eta^{\alpha\nu}\Big{]}h_{\alpha\beta}=0.$$ (18) After some straightforward algebra we find that the free continuous spectrum associated with Eq. (18) is given by the following dispersion relation $$\displaystyle\big{(}k_{g}^{2}-m^{2}_{g}\big{)}^{2}-\big{(}2\xi_{g}/M_{1}\big{)}^{2}\big{(}(n\cdot k_{g})^{2}-n^{2}k_{g}^{2}\big{)}^{3}=0.$$ (19) Notice that for $m_{g}=0$, Eq. (19) is equivalent to Eq. (10), i.e., to the electromagnetic case given, $e.g.$, in Ref. Reyes . IV.2 Modified propagation of gravitational waves We now study the solutions of the dispersion relation given by Eq. (19) in the isotropic configuration, that is, for $n_{\mu}=(1,\vec{0})$ chosen to be purely time-like, for dimension-$n$ operators. Thus, we have $$\displaystyle E_{g}^{2}-k_{g}^{2}-m_{g}^{2}-2\lambda\xi^{(n)}_{g}\frac{k_{g}^{n}}{M_{1}^{n-2}}=0,\;\;\;\;\;k_{g}\equiv|\vec{k}_{g}|$$ (20) with the two polarizations $\lambda=\pm 1$. Solving Eq. (20) for $E_{g}$, we find the frequency solutions $$\displaystyle E_{g}=\sqrt{k^{2}_{g}\big{(}1+2\lambda\xi^{(n)}_{g}\big{(}k_{g}/M_{1}\big{)}^{n-2}\big{)}+m^{2}_{g}}.$$ (21) Notice also that these solutions correctly reproduce the usual ones in the limit $\xi_{g}\to 0$, given in Ref. Hinterbichler:2011tt . 
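As a sanity check on expansions of this type, the step from the exact group velocity to its first-order form, Eq. (14) (and likewise Eq. (25) below), can be verified symbolically. A small sympy sketch, with $x$ as our shorthand for $\xi^{(n)}(k/M)^{n-2}$:

```python
import sympy as sp

x, lam, n = sp.symbols('x lam n')
k, xi, M = sp.symbols('k xi M', positive=True)

# structure of Eq. (13) (and Eq. (22) at m_g = 0), with x = xi*(k/M)^(n-2)
v = (1 + n * lam * x) / sp.sqrt(1 + 2 * lam * x)

# first order in the small LIV correction x reproduces Eqs. (14)/(25)
v_lin = sp.series(v, x, 0, 2).removeO()
assert sp.simplify(v_lin - (1 + (n - 1) * lam * x)) == 0

# spot-check that v really is dE/dk for E = k*sqrt(1 + 2*lam*xi*(k/M)^(n-2))
E = k * sp.sqrt(1 + 2 * lam * xi * (k / M)**(n - 2))
diff_expr = sp.diff(E, k) - v.subs(x, xi * (k / M)**(n - 2))
num = diff_expr.subs({M: 1, xi: sp.Rational(1, 100), lam: -1, n: 5,
                      k: sp.Rational(7, 10)})
assert abs(sp.N(num)) < 1e-12
```

The same algebra underlies the graviton case; there the extra $(m_g/k_g)^2$ term is kept only to the leading order quoted in Eq. (24).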
We assume here that the graviton velocity, $v_{g}$, is given by the group velocity determined from the dispersion relation (21), that is, $$\displaystyle v_{g}\equiv\frac{\partial E_{g}}{\partial k_{g}}=\frac{1+n\lambda\xi^{(n)}_{g}\big{(}k_{g}/M_{1}\big{)}^{n-2}}{\sqrt{1+2\lambda\xi^{(n)}_{g}\big{(}k_{g}/M_{1}\big{)}^{n-2}+(m_{g}/k_{g})^{2}}}$$ (22) Now expanding for large momenta, $k_{g}^{2}\gg m_{g}^{2}$, but keeping $(k_{g})^{n-2}\ll 1/\big{(}2\xi_{g}(M_{1})^{2-n}\big{)}$ as before, we find $$\displaystyle v_{g}\approx 1+\lambda(n-1)\xi^{(n)}_{g}\bigg{(}\frac{k_{g}}{M_{1}}\bigg{)}^{n-2}-\;\frac{1}{2}\bigg{(}1+\lambda n\xi^{(n)}_{g}\bigg{(}\frac{k_{g}}{M_{1}}\bigg{)}^{n-2}\bigg{)}\bigg{(}\frac{m_{g}}{k_{g}}\bigg{)}^{2}.$$ (23) In the limit $M_{1}\gg m_{g}$ (for $n\geq 3$), Eq. (23) takes the form $$\displaystyle v_{g}\approx 1-\frac{m_{g}^{2}}{2k_{g}^{2}}+\lambda(n-1)\xi^{(n)}_{g}\bigg{(}\frac{k_{g}}{M_{1}}\bigg{)}^{n-2}.$$ (24) Considering $v_{g^{(-)}}$, notice that if $\xi^{(n)}_{g}>0$ and $m_{g}^{2}/k_{g}^{2}>|\xi^{(n)}_{g}|\big{(}k_{g}/M_{1}\big{)}^{n-2}$, then the graviton travels slower than the speed of light. On the other hand, if $\xi^{(n)}_{g}<0$ and $m_{g}^{2}/k_{g}^{2}<|\xi^{(n)}_{g}|\big{(}k_{g}/M_{1}\big{)}^{n-2}$, then the graviton would propagate faster than the speed of light. For massless gravitons $(m_{g}=0)$ we have $$\displaystyle v_{g}\approx 1+\lambda(n-1)\xi^{(n)}_{g}\bigg{(}\frac{k_{g}}{M_{1}}\bigg{)}^{n-2},$$ (25) which is similar to Eq. (14), i.e., the group velocity of photons. In the absence of LIV, $i.e.$ for $\xi_{g}=0$, we get $$\displaystyle v_{g}\approx 1-\frac{m_{g}^{2}}{2k_{g}^{2}}$$ (26) as in the usual case Ellins . V Phenomenological Aspects In the following we use Eq. (14) for photons and Eq. (24) for massive gravitons to impose upper bounds on the LIV parameters $\xi_{g}$ and $\xi_{\gamma}$. 
To do this, we use the Fermi Gamma-Ray Burst Monitor (GBM)-LIGO observations associated with a transient source, based on the following measured arrival-time delay between the gamma-ray burst and the gravitational wave: $\Delta t\sim 0.40\,{\rm s}$ Fermi01 . Recently, a LIV gravity sector was introduced to investigate its effects on gravitational waves and on the behavior of gravity at short-range scales Bailey ; however, that work does not consider simultaneous LIV in the electromagnetic and gravity sectors. V.1 Approach one: luminal photons In this approach we consider, following Ref. Ellins , that $\Delta v_{g^{(-)}}=\Delta v_{\gamma^{(-)}}\big{|}_{\xi_{\gamma}=0}$ (the luminal-photon case), so that $$\displaystyle\frac{\xi_{g}^{(n)}}{(M_{1})^{n-2}}=\frac{1}{2(n-1)}\frac{m_{g}^{2}}{(k_{g})^{n}}$$ (27) where we have considered $k_{g}\sim 100{\;\rm Hz}\sim 4.13\times 10^{-13}\,{\rm eV}$ and $m_{g}\lesssim 10^{-22}{\,\rm eV}$ as the energy and the mass of the graviton estimated by LIGO GW01 . By inserting these values into Eq. (27), one finds $$\displaystyle\xi_{g}^{(n)}\lesssim\frac{(10)^{13n-44}\,}{2(n-1)\,(4.13)^{n}}\bigg{(}\frac{M_{1}}{\rm{eV}}\bigg{)}^{n-2}.$$ (28) The above expression imposes an upper bound on the $\xi_{g}$ LIV parameter. Notice that for $n=3$, we get $$\displaystyle\xi_{g}^{(n=3)}\lesssim(3.54\,M_{1})\times 10^{-8}(\rm eV)^{-1}$$ (29) which corresponds to a mass-scale-dependent bound. In particular, if we use $M_{1}\sim 10^{5}$ eV Ellins , we obtain that this upper bound is $\xi_{g}\sim 10^{-2}$, which can be relevant phenomenologically. 
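Equations (27)-(29) involve only elementary arithmetic, which can be reproduced directly; the short check below uses the LIGO-inspired numbers quoted above (the final rounding to $\xi_{g}\sim 10^{-2}$ is at the order-of-magnitude level):

```python
# Numbers quoted in the text, all expressed in eV
k_g = 4.13e-13   # graviton energy corresponding to ~100 Hz
m_g = 1.0e-22    # LIGO bound on the graviton mass
n = 3

# Eq. (27): xi_g^(n) / M1^(n-2) = m_g^2 / (2*(n-1)*k_g^n)
xi_over_M1 = m_g**2 / (2 * (n - 1) * k_g**n)
print(xi_over_M1)        # ~3.55e-8 (eV)^-1, reproducing Eq. (29)

M1 = 1.0e5               # eV, the scale suggested in Ref. Ellins
print(xi_over_M1 * M1)   # a few times 10^-3, i.e. xi_g ~ 10^-2 at order of magnitude
```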
V.2 Approach two: time delay of flight The difference $\Delta t=\Delta t_{g}-\Delta t_{\gamma}$ between the propagation times of the gravitational and electromagnetic waves is given by Vincenzo $$\displaystyle\Delta t=\Delta t_{a}-(1+z)\Delta t_{e}$$ (30) where $\Delta t_{a}$ (a measured quantity) is the arrival delay observed at the Earth and $\Delta t_{e}$ (an unknown quantity) is the emission delay at the source with redshift $z$. Here we assume that $\Delta t_{e}=0$ (the simultaneous emission of gravitational and electromagnetic waves) to derive constraints on LIV from the velocities of the gravitons and photons. Thus, for a spatially flat Universe, $\Omega_{k}=0$, we have for luminal gravitons $$\displaystyle\Delta t_{a}=\Delta t_{g}-\Delta t_{\gamma}=H_{0}^{-1}\int_{0}^{z}\bigg{(}\frac{1}{v_{g}\big{|}_{\xi_{g}=0}}-\frac{1}{v_{\gamma^{(-)}}}\bigg{)}\frac{dz^{\prime}}{\sqrt{\Omega_{m}(1+z^{\prime})^{3}+\Omega_{\Lambda}}}$$ (31) where $H_{0}=67.8\,{\rm km\,s^{-1}\,Mpc^{-1}}$ is the Hubble constant ($H_{0}^{-1}=4.55\times 10^{17}{\rm\,s}$), with $\Omega_{m}$ and $\Omega_{\Lambda}$ being the matter and dark-energy density parameters, respectively. 
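The redshift integral appearing in Eq. (31) is elementary to evaluate numerically; a short sketch (our own check, trapezoidal rule) with the parameters used in the text, $\Omega_{m}=0.31$, $\Omega_{\Lambda}=0.69$, $z=0.09$:

```python
import numpy as np

H0_inv = 4.55e17                 # Hubble time, s
Om, OL, z_src = 0.31, 0.69, 0.09

z = np.linspace(0.0, z_src, 100001)
f = 1.0 / np.sqrt(Om * (1.0 + z)**3 + OL)
I = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))   # trapezoidal rule

print(I)             # ~0.088 (dimensionless)
print(H0_inv * I)    # ~4.0e16 s: propagation time of a luminal signal from z = 0.09
```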
Inserting Eqs. (14) and (25) into Eq. (31), we find $$\displaystyle\Delta t_{a}=(n-1)H_{0}^{-1}\bigg{(}\frac{m_{g}^{2}}{2k_{g}^{2}}-\;\xi_{\gamma}^{(n)}\Big{(}\frac{k_{\gamma}}{M}\Big{)}^{n-2}\bigg{)}\int_{0}^{z}\frac{dz^{\prime}}{\sqrt{\Omega_{m}(1+z^{\prime})^{3}+\Omega_{\Lambda}}}.$$ (32) Thus, from Eq. (32) we get $$\displaystyle\frac{\xi^{(n)}_{\gamma}}{(M)^{n-2}}=-(k_{\gamma})^{2-n}\Bigg{[}\frac{m_{g}^{2}}{2k_{g}^{2}}-\frac{\Delta t_{a}}{(n-1)H_{0}^{-1}}\bigg{(}\int_{0}^{z}\frac{dz^{\prime}}{\sqrt{\Omega_{m}(1+z^{\prime})^{3}+\Omega_{\Lambda}}}\bigg{)}^{-1}\Bigg{]}$$ (33) Now, solving the integral for $\Omega_{m}=0.31$, $\Omega_{\Lambda}=0.69$ at a redshift $z=0.09$, and using the previously assumed values $\Delta t_{a}=0.40$ s, $k_{g}\sim 4.13\times 10^{-13}$ eV, and $k_{\gamma}>50\,{\rm keV}$, the latter being the photon energy of the transient source measured by GBM Fermi01 , we obtain $$\displaystyle\xi^{(n)}_{\gamma}\lesssim\frac{\big{(}(0.58)n-1.57\big{)}\times 10^{-(4n+11)}}{(n-1)(5.0)^{n-2}}\bigg{(}\frac{M}{\rm{eV}}\bigg{)}^{n-2}$$ (34) Notice that for $n=3$, we have $$\displaystyle\xi^{(n=3)}_{\gamma}\lesssim(1.70\,M)\times 10^{-25}(\rm{eV})^{-1}$$ (35) which also corresponds to a mass-scale-dependent bound. In particular, inserting $M\sim 10^{28}$ eV, as suggested in Refs. Albert:2007qk ; Ellins , into Eq. (35), one gets $\xi^{(n=3)}_{\gamma}\sim 10^{3}$. Hence, for this very large value of $M$, our result is not suitable for any realistic phenomenology. One then concludes that either the value of $M$ has to be modified (see also Ref. Passos:2016bbc , where another energy scale, namely the Hořava-Lifshitz one, is introduced to give more realistic bounds) or this term cannot be present in the description of a LIV effective theory. To complete our phenomenological analysis, let us compare Eq. (28) with Eq. (34). 
As a consequence, we find the following relationship: $$\displaystyle\frac{\xi^{(n)}_{g}}{\xi^{(n)}_{\gamma}}=\frac{(5.0)^{n-2}\times 10^{17n-33}}{\big{(}(0.58)n-1.57\big{)}(4.13)^{n}}\bigg{(}\frac{M_{1}}{M}\bigg{)}^{n-2}.$$ (36) Therefore, for $n=3$, we find $$\displaystyle\frac{\xi^{(n=3)}_{g}}{\xi^{(n=3)}_{\gamma}}=4.17\times 10^{17}\bigg{(}\frac{M_{1}}{M}\bigg{)}.$$ (37) Notice that if $M\gg M_{1}$, the above quantity may lead to a realistic constraint. V.3 Approach three: difference between the light and graviton velocities In this approach we use the above results for the $\xi_{g}$ and $\xi_{\gamma}$ parameters to constrain the difference between the velocities of the electromagnetic and gravitational waves. First, we insert Eq. (28) into Eq. (24) to find (for $\lambda=-1$) $$\displaystyle v_{g}=1-\frac{m_{g}^{2}}{2k_{g}^{2}}-\frac{10^{(13n-44)}}{2(4.13)^{n}}\bigg{(}\frac{k_{g}}{\rm eV}\bigg{)}^{n-2}.$$ (38) Using the previous values for $m_{g}$ and $k_{g}$, we obtain $$\displaystyle v_{g}\lesssim 1-0.59\times 10^{-19}.$$ (39) Now, inserting Eq. (34) into Eq. (14), we find (also at $\lambda=-1$) $$\displaystyle v_{\gamma}=1-\frac{\big{(}(0.58)n-1.57\big{)}\times 10^{-(4n+11)}}{(5.0)^{n-2}}\bigg{(}\frac{k_{\gamma}}{\rm{eV}}\bigg{)}^{n-2},$$ (40) and, using the previous value for $k_{\gamma}$, we have for $n=3$ $$\displaystyle v_{\gamma}\lesssim 1-1.07\times 10^{-19}.$$ (41) Therefore, the difference between the velocities given by Eqs. (39) and (41) is $$\displaystyle v_{\gamma}-v_{g}\lesssim 0.49\times 10^{-19},$$ (42) which is approximately the bound found in Ellins (see also Gia ). VI Conclusions In this work, we analyze the LIV effects from electromagnetic and gravitational higher-derivative operators using the Myers-Pospelov approach to obtain LIV effective theories. First, we extend the electromagnetic and massive gravitational actions to include LIV higher-order derivative terms. 
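The numerical coefficients quoted in Eqs. (37) and (42) can be reproduced with a few lines of arithmetic; a minimal sketch using only the numbers given above:

```python
n = 3

# prefactor of Eq. (36) evaluated at n = 3
ratio = 5.0**(n - 2) * 10.0**(17*n - 33) / ((0.58*n - 1.57) * 4.13**n)
print(f"{ratio:.3e}")   # ≈ 4.17e17, the coefficient appearing in Eq. (37)

# deviations of the graviton and photon velocities from 1, Eqs. (39) and (41)
dv_g, dv_gamma = 0.59e-19, 1.07e-19
diff = dv_gamma - dv_g
print(f"{diff:.2e}")    # ≈ 4.8e-20, close to the 0.49e-19 of Eq. (42)
```

The small residual difference (0.48 vs. 0.49) comes from the rounding of the velocity deviations quoted in Eqs. (39) and (41).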
Then we compute the equations of motion, the dispersion relations for these sectors, and the photon and graviton velocities. Assuming that the same process that generated the detected gravitational waves also emits electromagnetic waves, themselves detected by other means, bounds on the LIV parameters for the electromagnetic, $\xi_{\gamma}$, and massive gravitational, $\xi_{g}$, sectors are obtained for three approaches, namely, luminal photons, time delay of flight, and the difference between photon and graviton velocities. For the first two approaches, there is a dependence of $\xi_{g}$ and $\xi_{\gamma}$ on the respective mass scales $M_{1}$ and $M$ at which the LIV effects become relevant. Using the value for $M_{1}$ obtained in Ref. Ellins , we obtain $\xi_{g}\sim 10^{-2}$, which is expected to be phenomenologically relevant. For the time-delay-of-flight approach and the value of $M$ given in Refs. Albert:2007qk ; Ellins , it is found that $\xi_{\gamma}\sim 10^{3}$, which cannot represent any realistic LIV scenario; however, the ratio between $\xi_{g}$ and $\xi_{\gamma}$, Eq. (37), can be made phenomenologically relevant if $M\gg M_{1}$, which is satisfied even if we consider that $M$ has to be changed to make $\xi_{\gamma}\sim 1$ in Eq. (35). Finally, for the difference of photon and graviton velocities, we found bounds compatible with the ones already obtained in the literature. Acknowledgements. We would like to thank CNPq for partial financial support. C. A. D. Z. is thankful for the kindness and hospitality of the Physics Department at Federal University of Campina Grande, where part of this work was carried out. The authors would like to thank Quentin Bailey for some useful comments on Sec. II and Sec. IV. References (1) (2) G. Amelino-Camelia, J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos, and S. Sarkar, Nature (London) 393, 763 (1998). (3) P. Laurent, D. Gotz, P. Binetruy, S. Covino and A. Fernandez-Soto, Phys. Rev. 
D 83, 121301 (2011) doi:10.1103/PhysRevD.83.121301 [arXiv:1106.1068 [astro-ph.HE]]. (4) F. Ahmadi, J. Khodagholizadeh and H. R. Sepangi, Astrophys. Space Sci.  342, 487 (2012) [arXiv:1411.1986 [gr-qc]]. (5) H. Krawczynski et al., arXiv:1303.7158 [astro-ph.HE]. (6) A. A. Abdo, M. Ackermann, M. Ajello et al., Nature (London) 462, 331 (2009); V. Vasileiou et al. Phys. Rev. D 87, 122001 (2013). (7) F. Aharonian, A. G. Akhperjanian, U. Barres de Almeida et al. Phys. Rev. Lett. 101, 402 (2008). (8) J. Albert et al. [MAGIC and Other Contributors Collaborations], Phys. Lett. B 668, 253 (2008) doi:10.1016/j.physletb.2008.08.053 [arXiv:0708.2889 [astro-ph]]. (9) R. Gambini and J. Pullin, Phys. Rev. D 59, 124021 (1999). (10) R. J. Gleiser and C. N. Kozameh, Phys. Rev. D 64, 083007 (2001). (11) G. Amelino-Camelia, New J. Phys. 6, 188 (2004). (12) D. Mattingly, Living Rev. Rel. 8, 5 (2005). (13) L. Maccione and S. Liberati, JCAP 0808, 027 (2008). (14) R. C. Myers and M. Pospelov, Phys. Rev. Lett. 90, 211601 (2003). (15) B. P. Abbott et al., Phys. Rev. Lett. 116, no. 6, 061102 (2016). (16) B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], Phys. Rev. Lett.  116, no. 24, 241103 (2016) [arXiv:1606.04855 [gr-qc]]. (17) V. Connaughton et al., Astrophys. J.  826, no. 1, L6 (2016) [arXiv:1602.03920 [astro-ph.HE]]. (18) D. Blas, M. M. Ivanov, I. Sawicki and S. Sibiryakov, Pisma Zh. Eksp. Teor. Fiz.  103, no. 10, 708 (2016) [JETP Lett.  103, no. 10, 624 (2016)] [arXiv:1602.04188 [gr-qc]]. (19) J. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Mod. Phys. Lett. A 31, no. 26, 1675001 (2016) [arXiv:1602.04764 [gr-qc]]. (20) G. Calcagni, arXiv:1603.03046 [gr-qc] (2016); M. Arzano and G. Calcagni, Phys. Rev. D 93, no. 12, 124065 (2016) Addendum: [Phys. Rev. D 94, no. 4, 049907 (2016)] [arXiv:1604.00541 [gr-qc]]. (21) V. Branchina and M. De Domenico, arXiv:1604.0853 [gr-qc] (2016). (22) S. Mirshekari, N. Yunes, and C. M. Will, Phys. Rev. D 85, 024041 (2012). (23) M. 
Lyutikov, arXiv:1602.07352 [astro-ph] (2016). (24) A. Loeb, Astrophys. J.  819, no. 2, L21 (2016) [arXiv:1602.04735 [astro-ph.HE]]. (25) B. J. Morsony, J. C. Workman and D. M. Ryan, Astrophys. J.  825, no. 2, L24 (2016) [arXiv:1602.05529 [astro-ph.HE]]. (26) S. J. Smartt et al., Astrophys. J.  827, no. 2, L40 (2016) [arXiv:1606.04795 [astro-ph.HE]]. (27) M. Yoshida et al., arXiv:1611.01588 [astro-ph.HE]. (28) S. M. Carroll, G. B. Field and R. Jackiw, Phys. Rev. D 41, 1231 (1990). (29) A. Ferrari, M. Gomes, J. R. Nascimento, E. Passos, A. Y. Petrov, and A. J. da Silva, Phys. Lett. B 652, 174 (2007). (30) K. Hinterbichler, Rev. Mod. Phys.  84, 671 (2012) [arXiv:1105.3735 [hep-th]]. (31) V. A. Kostelecky, Phys. Rev. D 69, 105009 (2004) [hep-th/0312310]. (32) R. Bluhm, Phys. Rev. D 91, no. 6, 065034 (2015) [arXiv:1401.4515 [gr-qc]]. (33) C. M. Reyes, Phys. Rev. D 82, 125036 (2010). (34) E. Passos, E. M. C. Abreu, M. A. Anacleto, F. A. Brito, C. Wotzasek and C. A. D. Zarro, Phys. Rev. D 93, no. 8, 085022 (2016) [arXiv:1603.01558 [hep-th]]. (35) L. S. Grigorio, M. S. Guimaraes, R. Rougemont, C. Wotzasek and C. A. D. Zarro, Phys. Rev. D 86, 027705 (2012) [arXiv:1202.3798 [hep-th]]; M. S. Guimaraes, R. Rougemont, C. Wotzasek and C. A. D. Zarro, Phys. Lett. B 723, 422 (2013) [arXiv:1209.3073 [hep-th]]; L. S. Grigorio, M. S. Guimaraes, R. Rougemont, C. Wotzasek and C. A. D. Zarro, Phys. Rev. D 88, 065009 (2013) [arXiv:1307.1035 [hep-th]]; J. R. Nascimento, A. Y. Petrov, C. Wotzasek and C. A. D. Zarro, Phys. Rev. D 89, no. 6, 065030 (2014) [arXiv:1403.0786 [hep-th]]. (36) S. Elitzur, Phys. Rev. D 12, 3978 (1975). (37) R. Rougemont, J. Noronha, C. A. D. Zarro, C. Wotzasek, M. S. Guimaraes and D. R. Granado, JHEP 1507, 070 (2015) [arXiv:1505.02442 [hep-th]]. (38) Q. G. Bailey, A. Kostelecký and R. Xu, Phys. Rev. D 91, no. 2, 022006 (2015) [arXiv:1410.6162 [gr-qc]]; V. A. Kostelecký and M. Mewes, Phys. Lett. B 757, 510 (2016) [arXiv:1602.04782 [gr-qc]]; A. Kostelecky and M. 
Mewes, arXiv:1611.10313 [gr-qc];
Effect of clustering on the orientational properties of a fluid of hard right isosceles triangles Yuri Martínez-Ratón [email protected] Grupo Interdisciplinar de Sistemas Complejos (GISC), Departamento de Matemáticas, Escuela Politécnica Superior, Universidad Carlos III de Madrid, Avenida de la Universidad 30, E-28911, Leganés, Madrid, Spain    Enrique Velasco [email protected] Departamento de Física Teórica de la Materia Condensada, Instituto de Física de la Materia Condensada (IFIMAC) and Instituto de Ciencia de Materiales Nicolás Cabrera, Universidad Autónoma de Madrid, E-28049, Madrid, Spain Abstract Recent studies have shown the fluid of hard right triangles to possess fourfold and quasi-eightfold (octatic) orientational symmetries. However, the standard density-functional theory for two-dimensional anisotropic fluids, based on two-body correlations, and an extension to incorporate three-body correlations, fail to describe these symmetries. To explain the origin of octatic symmetry we postulate strong particle clustering as a crucial ingredient. We use Scaled Particle Theory to analyze four binary mixtures of hard right triangles and squares, three of them being extreme models for a one-component fluid, where right triangles can exist as monomeric entities together with triangular dimers, square dimers or square tetramers. Phase diagrams exhibit a rich phenomenology, with demixing and three-phase coexistences. More important, under some circumstances the orientational distribution function of triangles has equally high peaks at relative particle angles $0,\ \pi/2,$ and $\pi$, signalling fourfold, tetratic order, but also secondary peaks located at $\pi/4$ and $3\pi/4$, a feature of eightfold, octatic order. Also, we extend the binary mixture model to a quaternary mixture consisting of four types of clusters: monomers, triangular and square dimers, and square tetramers. 
This mixture is analyzed using Scaled Particle Theory under the restriction of fixed cluster fractions. Apart from the obvious tetratic phase promoted by tetramers, we found that, for certain cluster compositions, the total orientational distribution function of monomers can exhibit quasi-eightfold (octatic) symmetry. The study gives evidence of the importance of clustering in explaining the peculiar orientational properties of liquid-crystal phases in some two-dimensional fluids. I Introduction The experimental and theoretical study of two-dimensional (2D) fluids of hard anisotropic particles has enjoyed an upsurge in recent years, mainly motivated by the development of novel experimental techniques such as lithographic particle fabrication Zhao1 ; Zhao2 ; Zhao3 ; Zhao4 . Using these techniques, micro-prisms of any cross-sectional shape can be fabricated, and suspensions of these particles can be adsorbed on surfaces, giving rise to effectively two-dimensional fluids of diffusing Brownian particles Zhao1 ; Zhao2 ; Zhao3 ; Zhao4 . The fluid phase behavior can be explored by varying particle volume fraction, and in many cases a plethora of exotic liquid-crystal and crystalline phases results. These phases possess symmetries that strongly depend on particle shape. Some of these phases were predicted theoretically and later confirmed by Monte Carlo (MC) simulations, and different theoretical models have been developed to explain the rich phase behavior of these 2D hard-core fluids and its particle shape dependence Schlaken ; Frenkel ; Donev ; Dijkstra ; MR1 ; MR2 ; MR3 ; MR4 ; Dani ; Escobedo ; Glotzer1 ; Cinacchi . Of particular interest are the triatic (TR) and tetratic (T) phases found in fluids of hard equilateral triangles Zhao4 ; Dijkstra ; MR3 and squares (and also in rectangles of small aspect ratios) MR2 ; Escobedo , with particle axes pointing along six or four equivalent directors, respectively. 
The T phase can be viewed as the 2D analog of the biaxial nematic phase, recently found to be stable in colloidal suspensions of board-like particles Vroege ; Vanakaras and whose stability can be enhanced by size polydispersity Roij ; Patti ; Patti2 . Vertically vibrated granular monolayers are being studied as experimental models of real 2D fluids in thermal equilibrium. Under specific experimental conditions, monolayers of squares experiments and cylinders Dani2 ; MR4b ; MR5 exhibit the presence of T and also smectic liquid-crystal textures in the steady-state configurations. The excitation of topological defects in the orientational and positional director fields of these fluidized granular monolayers, when confined into circular cavities, has been observed and studied MR5 . These results point to the importance of hard-core entropic interactions in the stability of these dissipative textures, which turn out to be very similar to those obtained in equilibrium experiments on monolayers of colloidal spherocylinders confined in cavities of different shapes Wittmann1 ; Wittmann2 . This connection opens up the possibility that vibrated granular monolayers may be considered as valid experimental models to probe the interplay between symmetry and order in 2D fluids. The most successful theoretical tool used in the study of liquid-crystal phase behavior of hard-body fluids is Density Functional Theory (DFT) hard-body . The main advantage of DFT is that it allows one to obtain, via functional minimization, the equilibrium angular distribution function $h(\phi)$ of a 2D oriented fluid, i.e. the probability density of particle axes oriented at an angle $\phi$ with respect to one of the equivalent directors. As shown in Ref. MR3 , the Scaled-Particle Theory (SPT) version of DFT (which includes only two-body correlations) predicts that the uniaxial nematic (N) phase is the only stable oriented phase of a fluid of hard right triangles, i.e. 
no exotic liquid-crystal phases exist in this fluid. A bifurcation analysis, corroborated by rigorous functional minimization close to the bifurcation, confirmed this result, and the incorporation of three-particle correlations did not modify this scenario nosotros . By contrast, MC simulations of the same fluid showed the presence of the T phase (obtained by expanding a perfect T-crystal), along with an additional exotic oriented fluid phase with eightfold symmetry, the octatic (O) phase, obtained by compressing the isotropic (I) fluid. Even though all evidence suggests T to be the true stable phase, the fluid is prone to developing strong O correlations as density is increased from the I fluid. Note that the T and O phases are highly-symmetric phases having fourfold and eightfold symmetries, i.e. their angular distribution functions have the property $h(\phi)=h(\phi+2\pi/n)$, with $n=4$ and 8, respectively Dijkstra ; nosotros . The failure of the standard Onsager theory and its variations, all based on two-body correlations, to predict the phase behaviour of hard-particle models, at least at a qualitative level, is quite unusual in the history of liquid crystals. In Ref. nosotros , we advanced a reason why the standard two-body theory, and also the three-body-extended theory, cannot predict the highly-symmetric T and O phases, namely the crucial contribution of fourth-, and probably even higher-order, correlations in this system. Given the difficulty of improving the standard theories by incorporating such high-order correlations, in this work we explore different ideas in an effort to understand the problem from different perspectives. On the one hand we focus on a system that should promote orientational correlations with O symmetry: a binary mixture of hard right triangles and squares. The excluded area between these two particles shows local minima at relative angles $\phi=\pi/4$ and $3\pi/4$, which could drive a stable O phase. 
As will be seen, this property of the excluded area is not enough to promote bulk O ordering, and the N and T phases are the only oriented phases that get stabilized in the phase diagrams for the four different mixtures analyzed. Despite this, we found that the orientational distribution function of triangles, for certain values of mixture composition, exhibits small secondary peaks located at the relative angles above. On the other hand, we study the effect of particle clustering on the orientational properties of hard right triangles. Clustering is an extreme consequence of high-order particle correlations and could be a complementary point of view to extract useful information on the fluid behavior. We formulate a theory for clustering with the help of a toy model for particle self-assembly, where monomers are just "free" right triangles. These monomers are assumed to self-assemble into different triangular and square-shaped clusters, the latter coming in two varieties: dimers and tetramers. An effectively quaternary mixture results from these considerations, which is analyzed using the SPT version of DFT. We numerically minimize the functional for particular compositions and, from the equilibrium angular distribution functions of the four species, a monomer distribution function $h_{\rm m}(\phi)$ can be predicted. It is then shown that, for certain cluster compositions, this function has quasi-eightfold symmetry, i.e. four peaks of similar heights at $\phi=k\pi/4$ ($k=0,\cdots,3$) in the interval $[0,\pi]$. This result demonstrates the relevance of clustering to explain the presence of O orientational symmetry in a fluid of right triangles. Aside from the theoretical calculations, we have also performed MC simulations of the real fluid of right triangles to confirm the presence of clustering. 
It has been pointed out Glotzer2 that entropic interactions between anisotropic particles in dense fluids can in some sense be regarded as chemical bonds, that in turn may promote particle self-assembling. In our simulations we define a criterion to identify different clusters: triangular, square and rhomboidal dimers, and also square tetramers. Cluster fractions are analyzed as a function of packing fraction in MC compression runs starting from the I fluid, and also from expansion runs from the T crystalline phase. We show that the T phase is enriched in square dimers and tetramers, with a small proportion of the remaining clusters. By contrast, all clusters have similar fractions in the O phase. Instead of the fixed cluster compositions assumed in our toy model, a more sophisticated model to describe strong clustering effects in hard particle fluids should certainly predict cluster compositions at equilibrium in a consistent fashion. Chemical equilibrium between different clusters, a mass conservation law, and a larger variety of clusters (such as clusters with rhomboidal shape), along with effective internal energies of clusters are important ingredients that the new model should take into account. This line of research we leave for future developments. The article is organized as follows. In Sec. II we study four different binary mixtures of hard squares and hard right triangles and calculate their phase diagrams and the orientational properties of the different species. The effect of clustering on the stability of the liquid-crystal phase with eightfold symmetry is analyzed in Sec. III. MC simulations and results for cluster fractions are shown in Section IV. Finally some conclusions are drawn in Sec. V. II Binary mixtures of right triangles and squares In this section we study the phase behavior and orientational properties of binary mixtures of squares (species 1) and right triangles (species 2). 
The motivation is that the cross excluded area ${\cal K}_{12}^{(2)}(\phi)$ (apart from being symmetric with respect to $\phi=\pi/2$) exhibits four local minima in the interval $[0,\pi)$, located at relative angles $\phi=k\pi/4$ ($k=0,\cdots,3$); see Fig. 1, where the relative angle $\phi$ is defined and the excluded area is shown for the particular case of squares and triangles with the same side lengths, $l_{1}=l_{2}$ (solid curve in the figure). We can see, however, that the gain in scaled excluded area at these relative angles is rather modest. For comparison, the same figure shows the excluded area between like species. In these cases the gain in excluded area at the local minima is much higher. Despite this, the presence of four local minima in ${\cal K}_{12}^{(2)}(\phi)$ could in turn promote the stability of the O phase in the binary mixture. Even though the standard DFT does not predict the stability of the O phase in one-component fluids nosotros , the mixing of right triangles with other particles such that the cross excluded area presents eight local minima in the interval $[0,2\pi)$ could generate an O phase. One example of such a particle is the square. We remind the reader that simulations, DFT and experimental studies on vibrated monolayers of hard squares Frenkel ; MR1 ; experiments have shown that the one-component fluid exhibits I-T and T-crystal second-order phase transitions. To analyze this mixture we used a DFT based on the SPT-second virial theory, generalized to binary mixtures. 
The proposed expression for the excess free-energy per particle in reduced thermal units is $$\displaystyle\varphi_{\rm exc}[\{h_{i}\}]=-\log(1-\eta)+\frac{\rho}{2(1-\eta)}$$ $$\displaystyle\times\sum_{i,j}x_{i}x_{j}\left({\cal K}^{(2)}_{ij,0}-a_{i}-a_{j}+\frac{1}{2}\sum_{n\geq 1}^{N}{\cal K}^{(2)}_{ij,n}h_{n}^{(i)}h_{n}^{(j)}\right).$$ (1) Here a truncated Fourier expansion of the orientational distribution functions, $$\displaystyle h_{i}(\phi)=\frac{1}{2\pi}\left(1+\sum_{n\geq 1}^{N}h_{n}^{(i)}\cos(2n\phi)\right),$$ (2) is used to calculate the double angular average with respect to $h_{i}(\phi)$ and $h_{j}(\phi^{\prime})$ in the SPT expression nosotros0 ${\cal K}_{ij}^{(2)}(\phi-\phi^{\prime})-a_{i}-a_{j}$, giving the term between parentheses in Eqn. (1). The total packing fraction is defined as $\displaystyle\eta=\rho\left<a\right>$, i.e. the product of the total number density $\rho$ and the average area $\displaystyle\left<a\right>\equiv\sum_{i}x_{i}a_{i}$, given by a sum over species of the products of molar fractions $x_{i}$ and particle areas $a_{i}=l_{i}^{2}/i$ ($i=1,2$). Here $l_{i}$ is the side length of species $i$ (for triangles, the length of the two equal short sides; see Fig. 1). 
The coefficients ${\cal K}^{(2)}_{ij,n}$ can be computed analytically from the expressions $$\displaystyle{\cal K}^{(2)}_{ij,n}-(a_{i}+a_{j})\delta_{n0}$$ $$\displaystyle=-\frac{4l_{i}l_{j}\left[1+\delta_{i2}\delta_{j2}+(-1)^{n}+(\delta_{i2}+\delta_{j2})\sqrt{2}\cos\left(\frac{n\pi}{2}\right)\right]}{2^{\delta_{i2}+\delta_{j2}}(4n^{2}-1)\pi}.$$ (3) The ideal part of the free-energy for the mixture is $$\displaystyle\varphi_{\rm id}(\{h_{i}\})=\log\eta-1+\sum_{i}x_{i}\left[\log x_{i}+\int_{0}^{2\pi}h_{i}(\phi)\log h_{i}(\phi)\,d\phi\right].$$ (4) The total free-energy per particle is $\varphi(\{h_{i}\})=\varphi_{\rm id}(\{h_{i}\})+\varphi_{\rm exc}(\{h_{i}\})$, and the Gibbs free-energy functional per particle $g$ can be obtained from $$\displaystyle g(\{h_{i}\})=\varphi(\{h_{i}\})+\frac{\beta p}{\rho}.$$ (5) The latter expression is very useful for the calculation of the phase diagrams of binary mixtures, in particular when the fluid demixes into two coexisting phases. The procedure is: (i) fix the pressure to some constant value $p(x,\rho)=\rho^{2}\partial\varphi/\partial\rho=p_{0}$; (ii) using this constraint, calculate the total density $\rho(x;p_{0})$ as a function of the molar fraction of squares $x\equiv x_{1}$, and insert back into the Gibbs free-energy to obtain the function $g(x,p_{0})$. Note that in the above procedure all Fourier amplitudes $\{h_{n}^{(i)}\}$ have to be obtained through the equilibrium condition $\partial\varphi/\partial h_{n}^{(i)}=0$. From the double-tangent construction of the function $g(x,p_{0})$, which guarantees the equality of chemical potentials of the species at the coexisting phases, we find the coexisting values $x^{(a)}$ and $x^{(b)}$, and from these the coexisting densities $\rho^{(a)}$ and $\rho^{(b)}$. Changing the pressure $p_{0}$ and repeating the above procedure we can construct that part of the phase diagram in the pressure-composition plane where demixing is present. 
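As a consistency check of Eq. (3), its $n=0$ coefficients can be compared with the classical isotropically averaged excluded area of two convex bodies in 2D, $a_{i}+a_{j}+P_{i}P_{j}/(2\pi)$, with $P_{i}$ the particle perimeters. This comparison is ours, not made explicitly in the text; a sketch for $l_{1}=l_{2}=1$:

```python
import math

pi = math.pi
l1 = l2 = 1.0
area = {1: l1**2, 2: l2**2/2}                  # square, right isosceles triangle
perim = {1: 4*l1, 2: (2 + math.sqrt(2))*l2}    # perimeters

def K(i, j, n):
    """Fourier coefficients K^(2)_{ij,n} of Eq. (3); species 1 = square, 2 = triangle."""
    d_i, d_j = int(i == 2), int(j == 2)
    li, lj = (l1, l2)[i - 1], (l1, l2)[j - 1]
    bracket = (1 + d_i*d_j + (-1)**n
               + (d_i + d_j)*math.sqrt(2)*math.cos(n*pi/2))
    val = -4*li*lj*bracket/(2**(d_i + d_j)*(4*n**2 - 1)*pi)
    if n == 0:
        val += area[i] + area[j]
    return val

# compare the n = 0 coefficients with a_i + a_j + P_i P_j/(2 pi)
for i in (1, 2):
    for j in (1, 2):
        classical = area[i] + area[j] + perim[i]*perim[j]/(2*pi)
        print(i, j, abs(K(i, j, 0) - classical) < 1e-12)   # all True
```

The agreement for all species pairs confirms that Eq. (3) reduces to the familiar orientationally averaged excluded area in the isotropic limit.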
In the case of second-order phase transitions a bifurcation analysis is required (see MR3 for the case of mixtures of triangles using the SPT formalism). The packing fraction at bifurcation (spinodal curves) turns out to be a simple generalization of the corresponding expression for the one-component fluid nosotros : $$\displaystyle\eta_{n}=\frac{1}{1-\sum_{i}x_{i}{\cal K}^{(2)}_{ii,n}/\langle a\rangle}.$$ (6) The orientational order of squares and triangles is measured using the set of order parameters $$\displaystyle Q_{2n}^{(i)}=\int_{0}^{2\pi}d\phi\,h_{i}(\phi)\cos(2n\phi)=\frac{h_{n}^{(i)}}{2}.$$ (7) These parameters account for N ($n=1$), T ($n=2$), TR ($n=3$) and O ($n=4$) ordering. II.1 Bifurcation curves The spinodal curves $\eta_{n}(x)$ for $n=1,\cdots,4$ from Eqn. (6) are plotted in Fig. 2 for a binary mixture of particles with $l_{1}=l_{2}$. The I-N ($n=1$) and I-T ($n=2$) curves departing from $x=0$ (one-component triangle fluid) and $x=1$ (one-component square fluid) are monotonically increasing functions of $x_{i}$ ($i=1$ for I-N and $i=2$ for I-T, respectively), and intersect at $x^{*}\simeq 0.376$. This in turn means that mixing stabilizes the I phase, which can be easily explained by the different (two- vs. four-fold) symmetries of the liquid-crystal phases of hard-triangle and hard-square fluids, respectively. The shaded area in the figure indicates the region of I-phase stability against orientational order. As we will shortly see, the point $(x^{*},\eta^{*})$ is always located inside the demixing gap. It is interesting to note from Fig. 2 that the effect of mixing leads to the lowering of the packing-fraction difference between the I-O and I-N or I-T bifurcations. This indicates that the O phase of the mixture becomes "less" unstable with respect to the N or T phases. However, this mixing effect is not sufficient to stabilize it. II.2 Phase diagrams Phase diagrams have been calculated for four different binary mixtures, Figs. 3 (a)-(d). 
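The quoted intersection value $x^{*}\simeq 0.376$ can be reproduced by combining Eqs. (3) and (6) for $l_{1}=l_{2}$; a minimal Python sketch (the helper names are ours):

```python
import math

pi = math.pi
l = 1.0
a = [l**2, l**2/2]   # areas of squares (species 1) and right triangles (species 2)

def K(i, j, n):
    """Fourier coefficients of the excluded area, Eq. (3), for n >= 1."""
    d_i, d_j = int(i == 2), int(j == 2)
    bracket = (1 + d_i*d_j + (-1)**n
               + (d_i + d_j)*math.sqrt(2)*math.cos(n*pi/2))
    return -4*l*l*bracket/(2**(d_i + d_j)*(4*n**2 - 1)*pi)

def eta(x, n):
    """Spinodal packing fraction of Eq. (6); x = molar fraction of squares."""
    amean = x*a[0] + (1 - x)*a[1]
    return 1.0/(1.0 - (x*K(1, 1, n) + (1 - x)*K(2, 2, n))/amean)

# the I-N (n = 1) spinodal increases with x while the I-T (n = 2) one decreases;
# locate their crossing by bisection
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    if eta(mid, 1) < eta(mid, 2):
        lo = mid
    else:
        hi = mid

print(round(0.5*(lo + hi), 3))   # ≈ 0.376, as quoted in the text
```

Solving the crossing condition analytically for these coefficients gives $x^{*}=(3+4\sqrt{2})/23\simeq 0.3764$, consistent with the value read off Fig. 2.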
Defining the length ratio $\kappa_{l}\equiv l_{1}/l_{2}$, the four mixtures have $\kappa_{l}=1/2$, $1/\sqrt{2}$, $1$, and $\sqrt{2}$. The area ratio $\kappa_{a}\equiv a_{1}/a_{2}$ for the mixtures is $1/2$, $1$, $2$, and 4, respectively. I-N and I-T second-order transition curves depart from the $x=0$ and $x=1$ axes, respectively. Both curves end in corresponding tricritical points. For pressures above these tricritical points the corresponding transitions become of first order. In the case of the I-T curve the transition corresponds to strong demixing, with a strong fractionation of the two species. Both phase transitions are bounded above by a triple I-N-T coexistence (dotted horizontal lines in panels (a)-(c) of Fig. 3), and for higher pressures demixing takes place between a N phase, rich in triangles, and a T phase, rich in squares. It is interesting to note that the lowest tricritical point is always that of the I-T spinodal curve. This is the effect of the large decrement in the total averaged excluded area, $\displaystyle\sum_{i,j}x_{i}x_{j}\langle\langle{\cal K}_{ij}^{(2)}\rangle\rangle_{h}$, when T (instead of N) orientational ordering is induced by squares. It is clear from Fig. 3(b) that mixtures of species with approximately the same areas (values of $\kappa_{a}$ in the neighborhood of unity) also exhibit strong demixing. This can be explained by two facts, which we elaborate in the following. (i) The different (two- vs. four-fold) symmetries of the N and T phases of the hard-triangle and hard-square fluids, respectively. Triangles oriented into a T configuration generate a high free-energy cost due to the particular form of the triangle-triangle excluded area. From Fig. 
1 we can see how the equipartition of particle orientations into the discrete set of angles $\{0,\pi/2,\pi\}$ (perfect T ordering) generates an averaged triangle-triangle scaled excluded area $$\displaystyle\langle\langle{\cal A}(\phi)\rangle\rangle\equiv\frac{1}{2a}\langle\langle{\cal K}_{22}^{(2)}(\phi)\rangle\rangle_{h}-1$$ $$\displaystyle=\frac{1}{4}\left[{\cal A}(0)+2{\cal A}(\pi/2)+{\cal A}(\pi)\right]=\frac{7}{4},$$ (8) larger than that corresponding to equipartition into the angles $\{0,\pi\}$ (perfect N ordering), equal to $\displaystyle{\frac{1}{2}\left[{\cal A}(0)+{\cal A}(\pi)\right]=\frac{3}{2}}$. Also, a fluid of hard squares cannot exhibit an N phase due to the symmetry of the particles, which gives an excluded area invariant under $\pi/2$-rotations. Therefore, at high enough pressures, phase separation into two phases, each having the orientational order promoted by the most populated species, guarantees a much lower free-energy at equilibrium. (ii) The decrease in excluded area promoted by orientational order is smaller for triangle-square pairs than for triangle-triangle or square-square pairs (Fig. 1), which obviously favors the demixed state. A final comment on the phase diagrams is that the coexistence region of the first-order I-N transition shrinks dramatically with the ratio $\kappa_{l}$, eventually disappearing for $\kappa_{l}=\sqrt{2}$ (see panel (d) of Fig. 3). This is the most likely scenario, although it cannot be settled with total certainty, as the minimization in Fourier space cannot be achieved successfully for pressure values close to the intersection between the I-N second-order transition curve and the N binodal of the N-T demixing. Our numerical minimization scheme does not give reliable results at these pressure values, even for a number of Fourier coefficients $h_{n}^{(i)}$ equal to 100, due to the incorrect numerical representation of $h_{i}(\phi)$. 
A pressure difference, measured from the I-T tricritical point, of $\beta\Delta pl_{1}^{2}\simeq 160$ is the highest pressure for which we could perform accurate numerical minimizations. If this scenario were correct, the second-order I-N transition would end as a critical end-point at the N binodal of the N-T demixing transition. These phase diagrams resemble those recently obtained from MC simulations of mixtures of hard disks and squares Escobedo2 , where the packing fraction for the I-hexatic transition (counterpart of the present I-T transition) increases with the molar fraction of squares (disks). Both transition curves merge in a mosaic region where a micro-segregated phase with mixed hexatic and T symmetries becomes stable. This region is in turn bounded above by a triangular-solid-T demixing which, at high enough pressures, becomes a solid-solid demixing between a triangular solid, rich in disks, and a square crystal, rich in squares. Here we do not consider nonuniform phases in the triangle-square mixture, so we cannot rule out that, for high enough pressures, the demixed phases found are in fact crystalline (instead of liquid crystalline). We now study the orientational properties of the mixture with asymmetry $\kappa_{l}=1/\sqrt{2}$ ($\kappa_{a}=1$). The fluid pressure was fixed to a value $\beta pl_{2}^{2}=360$, i.e. above the I-N-T triple point (see panel (b) of Fig. 3). The total free-energy per particle was minimized with respect to all Fourier amplitudes $\{h_{n}^{(i)}\}$, and the equilibrium orientational distribution functions $\{h_{i}(\phi)\}$ and order parameters $Q_{2n}^{(i)}$ were obtained as a function of mixture composition $x$. Results for the latter are shown in Fig. 4. Note that, for a wide range of compositions, the curves $Q_{2n}^{(i)}(x)$ represent order parameters of a mixture that is unstable with respect to the demixing transition shown in Fig. 3(b). 
However, it is illustrative to look at the behavior of the orientational ordering of triangles and squares as a function of composition over the whole interval $[0,1]$. Close to the one-component limits $x=0$ or $x=1$, the N order parameter of triangles, $Q_{2}^{(2)}$, or the T order parameter of squares, $Q_{4}^{(1)}$, is highest, indicating strong N or T orientational ordering. In the neighborhood of $x=0$ squares follow the orientation of the more abundant triangular species by orienting their axes into a T configuration, but with rather low orientational ordering. However, as Fig. 5(a) indicates, the function $h_{1}(\phi)$ has (aside from the three main peaks located at $\{0,\pi/2,\pi\}$, typical of the T symmetry) two additional small peaks at $\pi/4$ and $3\pi/4$; this is again an indication that square-triangle interactions are behind the emergence of orientational correlations with eightfold symmetry. This in turn affects the difference between the order parameters $Q_{4}^{(1)}$ and $Q_{8}^{(1)}$, which in any case is rather small, as can be seen from Fig. 4(a). For triangles, we see from Fig. 4(b) that $Q_{2}^{(2)}(x)$ decreases with $x$ and becomes zero for $x\geq x^{*}\simeq 0.65$, while $Q_{4}^{(2)}$ is always different from zero, exhibits a local minimum at $x^{*}$ and then increases monotonically. This means that for $x^{*}\leq x\leq 1$ triangles adopt the same T-orientational symmetry as squares. It is interesting to note that, for these compositions, the values of the T ($Q_{4}^{(2)}$) and O ($Q_{8}^{(2)}$) order parameters of triangles are very similar, which again points to the existence of square-triangle correlations with eightfold symmetry. This feature can be directly seen in Fig. 5(b), where the functions $h_{i}(\phi)$ are plotted for a stable mixture with $x=0.93$. 
Note the strong T ordering of both squares and triangles, indicated by the presence of sharp peaks at $\{0,\pi/2,\pi\}$, but also the presence of small satellite peaks in $h_{2}(\phi)$ at $\{\pi/4,3\pi/4\}$, a clear signature of the O-orientational correlations promoted by square-triangle interactions. Despite the presence of these correlations, we should bear in mind that triangles are clearly oriented in a T-configuration, with the symmetry $h_{2}(\phi)=h_{2}(\phi+\pi/2)$. The exact O symmetry, $h_{2}(\phi)=h_{2}(\phi+\pi/4)$, is never observed for any value of the mixture asymmetry, pressure or composition. Fig. 5(c) shows the functions $h_{i}(\phi)$ for a mixture with a reference value of $x=0.63665$ (at which the curves $Q_{2}^{(2)}$ and $Q_{4}^{(2)}$ cross each other). Note that this mixture is unstable with respect to phase separation. In this case squares clearly orient in a T configuration, but triangles have a rather low uniaxial N ordering, with significant secondary peaks at $\{\pi/4,\pi/2,3\pi/4\}$, again a signature of eightfold square-triangle orientational correlations. From all these results we can conclude that, despite the existence of square-triangle interactions, which promote the presence of small secondary peaks in $h_{2}(\phi)$ signalling O-type ordering, the effect is not sufficient to stabilize the O phase. III The clustering effect As shown in our previous paper nosotros, a DFT based on the second- or third-virial coefficients is not capable of accounting for the O liquid-crystal symmetry in a one-component fluid of hard right triangles. MC simulations Dijkstra; nosotros, however, indicate that these correlations are present. Clearly, any attempt to formulate a theory for the fluid of hard right triangles should consider at least four-body particle correlations. Needless to say, this is an exceedingly complicated task.
Because of these high-order correlations, we may expect particles to be prone to arrange into more or less short-lived clusters containing a small number of particles (though an anomalously high one compared to ‘normal’ fluids). In the case of right triangles it is not difficult to think of the geometries of the most stable particle arrangements (see below). It is also reasonable to expect that these ‘clusters’ may dominate the fluid structure and govern the bulk orientational properties of the fluid. With this idea in mind, a step forward in our attempts to construct an alternative model for O correlations is based on considering these clusters as special entities, connecting to the idea of self-assembly of particles (taken as monomer units). This idea leads to an extreme model, where monomers form ‘superparticles’, which in turn may orient in such a way as to give rise to eightfold symmetry in the final monomer orientational distribution function. Based on previous MC simulations Dijkstra; nosotros and on additional MC simulations to be presented below, we have identified what can be regarded as ‘important’ local particle configurations in the fluid at high packing fractions. A total of four such configurations have been chosen because of their high probability along the MC chains generated in the simulations. A sketch of these four important configurations is shown in Fig. 6, where they are drawn in different colors. Hereafter these configurations will be called ‘clusters’, and we now proceed to define their structure and shapes. First we define ‘big-square’ clusters. These appear close to the T-K transition and may be regarded as tetramers, made out of four right triangles (monomers) with their short equally-sized sides (of length $l$) almost in contact, and with their right-angled vertexes also in close proximity. This configuration gives a big square with side equal to the triangle hypotenuse ($\sqrt{2}l$).
Four-body correlations obviously induce the formation of these structures. We also identify another type of cluster of square symmetry but with smaller size, obtained by joining two triangular monomers by their hypotenuse, creating a small square dimer of side $l$. Obviously the presence of these clusters in the fluid is very likely, because this configuration guarantees the absolute minimum of the triangle-triangle excluded area (when the relative angle between particle axes is $\pi$); note, however, that tetramers do not result from merging two of these clusters. Next, if two triangular monomers are joined by their equally-sized sides with the right-angled vertexes in contact, they form a large right-angled triangular cluster with equal sides of length $\sqrt{2}l$. Triangular clusters need to be considered in the analysis since tetramers can be formed by merging two of these. The last species to consider is, obviously, the free triangular monomer (the building block of all the larger clusters). We should note that a rhomboidal dimer (see Fig. 6) can also be formed by two triangles with their small sides almost in contact and their right-angled and acute-angled vertexes in close proximity. Even though these clusters are present in some cases (see later), they will be discarded from the model in order to make it computationally manageable. Also, other possible clusters of different shapes or larger sizes will not be taken into account. Again we refer to Fig. 6 for the definition of the four clusters, and also to Table 1, where their shapes, sizes and areas are summarized. In the following we present a simple extension of the SPT model for the quaternary mixture that results from a consideration of the four clusters defined above as distinct species. $l_{i}^{(k)}$ will denote the length of the species, with $i$ indicating the geometry ($i=1$ for squares and $i=2$ for triangles), and $k$ the cluster size ($k=1$ for small and $k=2$ for big clusters).
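As a quick consistency check of the geometries just defined, the cluster areas follow directly from the monomer shape. This is our own bookkeeping sketch with the short side set to $l=1$; Table 1 of the text is the authoritative source.

```python
import math

# Cluster areas implied by the geometry described above (l = short side
# of the right-triangle monomer). Our own bookkeeping sketch; see Table 1
# of the text for the definitive values.
l = 1.0
a_monomer = 0.5 * l**2                       # right triangle, legs l
a_sq_dimer = l**2                            # two monomers joined by the hypotenuse
a_tri_dimer = 0.5 * (math.sqrt(2.0) * l)**2  # big right triangle, legs sqrt(2) l
a_sq_tetramer = (math.sqrt(2.0) * l)**2      # big square, side = hypotenuse

# Each cluster area equals (number of monomers) x (monomer area):
assert a_sq_dimer == 2 * a_monomer
assert abs(a_tri_dimer - 2 * a_monomer) < 1e-12
assert abs(a_sq_tetramer - 4 * a_monomer) < 1e-12
print(a_monomer, a_sq_dimer, a_tri_dimer, a_sq_tetramer)
```

The assertions confirm that the clusters tile exactly, with no overlap or gap, out of their constituent monomers.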
The excess free-energy per particle of the mixture can be obtained by substituting the product $l_{i}l_{j}$ by $l_{i}^{(k)}l_{j}^{(m)}$, and the particle areas $a_{i}$ by $a_{i}^{(k)}$, into Eqn. (3), thus obtaining the generalized coefficients ${\cal K}_{ij,n}^{(2,km)}$, which are then used in (1). Note that the sums over $ij$ in the latter equation, and also in the ideal free-energy (4), should run over four ($ijkm$) and two ($ik$) indexes, respectively. Obviously we need to extend the same labelling to the molar fractions and to the orientational distribution functions: $x_{i}^{(k)}$ and $h_{i}^{(k)}(\phi)$, respectively. The total packing fraction is then $\displaystyle\eta=\rho\sum_{ik}x_{i}^{(k)}a_{i}^{(k)}$, and the orientational order parameters are written as $$\displaystyle Q_{2n}^{(ik)}=\int_{0}^{2\pi}d\phi\,h_{i}^{(k)}(\phi)\cos(2n\phi)\quad(n=1,\cdots,4).$$ (9) Now we specify how the orientational ordering of monomers is calculated from that of clusters. Cluster numbers in the mixture are given by $n_{i}^{(k)}=x_{i}^{(k)}N_{\rm c}$ ($i,k=1,2$), with $N_{\rm c}$ the total number of clusters. The total number of monomers distributed among all the different clusters can be calculated as $N_{\rm m}=\left(2x_{1}^{(1)}+4x_{1}^{(2)}+x_{2}^{(1)}+2x_{2}^{(2)}\right)N_{\rm c}$. First let us consider the case of square dimers. The main axes of the triangular monomers point parallel and antiparallel to one of the square axes, i.e. the one perpendicular to the square diagonal coinciding with the monomer hypotenuse (see Fig. 8). Note that if we select the square axis to be parallel to the other diagonal, these angles are $\pm\pi/2$. However, as the squares only have I or T liquid-crystal phases, which are orientationally invariant with respect to $\pi/2$ rotations, the above definitions are identical, i.e. they do not affect the final result for the orientational ordering of monomers.
Thus the contribution of the $n_{1}^{(1)}$ square dimers to the global orientational ordering of monomers is given by the function $$\displaystyle h_{\rm m}^{(11)}(\phi)=\frac{n_{1}^{(1)}}{N_{\rm m}}\left(h_{1}^{(1)}(\phi)+h_{1}^{(1)}(\phi+\pi)\right)=\frac{x_{1}^{(1)}\left(h_{1}^{(1)}(\phi)+h_{1}^{(1)}(\phi+\pi)\right)}{2x_{1}^{(1)}+4x_{1}^{(2)}+x_{2}^{(1)}+2x_{2}^{(2)}}.$$ (10) For square tetramers, as Fig. 8 shows, the axes of the triangular monomers are at angles $\{\pi/4,-\pi/4,3\pi/4,-3\pi/4\}$ with respect to one of the square diagonals. Thus the contribution of the $n_{1}^{(2)}$ square tetramers is $$\displaystyle h_{\rm m}^{(12)}(\phi)=\frac{x_{1}^{(2)}\sum_{k=\pm 1}\left(h_{1}^{(2)}(\phi+k\pi/4)+h_{1}^{(2)}(\phi+3k\pi/4)\right)}{2x_{1}^{(1)}+4x_{1}^{(2)}+x_{2}^{(1)}+2x_{2}^{(2)}}.$$ (11) The $n_{2}^{(1)}$ free triangular monomers give a contribution of $$\displaystyle h_{\rm m}^{(21)}(\phi)=\frac{x_{2}^{(1)}h_{2}^{(1)}(\phi)}{2x_{1}^{(1)}+4x_{1}^{(2)}+x_{2}^{(1)}+2x_{2}^{(2)}}.$$ (12) Finally, for triangular dimers, it is easy to see that the two monomer axes point at angles $\{3\pi/4,-3\pi/4\}$ with respect to the main axis of the dimer (see Fig. 8).
The contribution of the $n_{2}^{(2)}$ triangular dimers is then $$\displaystyle h_{\rm m}^{(22)}(\phi)=\frac{x_{2}^{(2)}\left(h_{2}^{(2)}(\phi-3\pi/4)+h_{2}^{(2)}(\phi+3\pi/4)\right)}{2x_{1}^{(1)}+4x_{1}^{(2)}+x_{2}^{(1)}+2x_{2}^{(2)}}.$$ (13) The total orientational distribution function of monomers is just the sum of the different contributions obtained above, $$\displaystyle h_{\rm m}(\phi)=\sum_{i,j=1,2}h_{\rm m}^{(ij)}(\phi),$$ (14) and from this we can calculate the order parameters of monomers as $$\displaystyle Q_{2n}^{(\rm m)}=\int_{0}^{2\pi}d\phi\,h_{\rm m}(\phi)\cos(2n\phi).$$ (15) We have performed a minimization of the total free-energy per particle with respect to all the Fourier amplitudes $\{h_{n}^{(ik)}\}$ (note the labelling extension $i,k=1,2$) of the quaternary mixture. We did not search for possible demixing scenarios, because we are only interested in the effect of clustering on the orientational ordering of monomers. Fig. 7(a) shows the equilibrium orientational distribution functions $\{h_{i}^{(j)}(\phi)\}$ for a scaled pressure fixed to $p^{*}\equiv\beta p\left(l_{2}^{(1)}\right)^{2}=220$ and a set of molar fractions with values $x_{1}^{(1)}=0.4$, $x_{1}^{(2)}=0.15$, $x_{2}^{(1)}=0.35$ and $x_{2}^{(2)}=0.1$, fulfilling the equality $x_{1}^{(1)}+x_{2}^{(2)}=x_{1}^{(2)}+x_{2}^{(1)}=0.5$. Clearly the system exhibits T ordering in all the species, with square clusters being more ordered, and with the presence of secondary peaks (located at $\phi=\pi/4$ and $\phi=3\pi/4$) in the distribution functions of triangular clusters. As explained above, this is due to square-triangle interactions. The monomer distribution function $h_{\rm m}(\phi)$, calculated from Eqn. (14), is shown in panel (b). It has a quasi-eightfold symmetry, with peaks located at $k\pi/4$ ($k=1,\cdots,4$). Note however that the perfect symmetry $h_{\rm m}(\phi)=h_{\rm m}(\phi+\pi/4)$ is not exactly fulfilled: small differences in the height of the peaks are clearly visible.
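The assembly of $h_{\rm m}(\phi)$ from Eqns (10)-(15) can be sketched numerically. In the code below the cluster distributions are illustrative stand-ins of our own choosing (tetratic-like for square clusters, nematic-like for triangular ones), not the equilibrium solutions of the text; only the molar fractions are those of Fig. 7.

```python
import numpy as np

# Numerical sketch of Eqns (10)-(15): build the monomer distribution
# h_m(phi) from cluster distributions and cluster molar fractions.
# The cluster distributions used here are illustrative assumptions.

phi = np.linspace(0.0, 2.0 * np.pi, 8000, endpoint=False)
dphi = phi[1] - phi[0]

def integrate(f):
    """Integral over [0, 2*pi] of a periodic function on the uniform grid."""
    return float(np.sum(f) * dphi)

def shift(h, s):
    """h(phi + s), using periodicity (s must be a multiple of dphi here)."""
    return np.roll(h, -int(round(s / dphi)))

def normalized(h):
    return h / integrate(h)

# Illustrative cluster distributions h_i^{(k)}(phi):
h11 = normalized(1.0 + 0.8 * np.cos(4 * phi))  # square dimers, T-like
h12 = normalized(1.0 + 0.9 * np.cos(4 * phi))  # square tetramers, T-like
h21 = normalized(1.0 + 0.5 * np.cos(2 * phi))  # free monomers, N-like
h22 = normalized(1.0 + 0.6 * np.cos(2 * phi))  # triangular dimers, N-like

# Molar fractions used in Fig. 7 of the text.
x11, x12, x21, x22 = 0.40, 0.15, 0.35, 0.10
D = 2 * x11 + 4 * x12 + x21 + 2 * x22          # monomers per cluster (N_m / N_c)

h_m = (
    x11 * (h11 + shift(h11, np.pi))                                       # Eq. (10)
    + x12 * sum(shift(h12, s) for s in
                (np.pi / 4, -np.pi / 4, 3 * np.pi / 4, -3 * np.pi / 4))   # tetramers
    + x21 * h21                                                           # Eq. (12)
    + x22 * (shift(h22, -3 * np.pi / 4) + shift(h22, 3 * np.pi / 4))      # Eq. (13)
) / D                                                                     # Eq. (14)

# Monomer order parameters, Eq. (15):
Q = {2 * n: integrate(h_m * np.cos(2 * n * phi)) for n in (1, 2, 4)}
print("normalization:", integrate(h_m))
print("Q2, Q4, Q8:", Q[2], Q[4], Q[8])
```

One can check that $h_{\rm m}$ integrates to unity by construction, and that the $\pi/4$-rotated tetramer contribution partially cancels the T harmonic of the dimers, the same competition that drives the sign change of $Q_{4}^{(\rm m)}$ discussed below.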
To put this result in perspective we remark that, in an MC simulation, small differences like these would naturally be attributed to the effect of limited angular sampling in the histogram of $h(\phi)$. In any case the function $h_{\rm m}(\phi)$ plotted in Fig. 7(b) shows a high O-type ordering, which is a direct consequence of the strong particle clustering. Next, order parameters $Q_{2n}^{(ij)}$ obtained using Eqn. (15) are shown in Fig. 9 as a function of monomer composition $x_{2}^{(1)}\in[0,0.45]$, following a path in molar fractions $x_{1}^{(1)}=x_{2}^{(1)}+0.05$, $x_{1}^{(2)}=0.5-x_{2}^{(1)}$ and $x_{2}^{(2)}=0.45-x_{2}^{(1)}$ (these values fulfill the constraints $x_{1}^{(1)}+x_{2}^{(2)}=x_{1}^{(2)}+x_{2}^{(1)}=0.5$: the sum of the compositions of big triangles and small squares is equal to the sum of the compositions of the other two species). Panel (a) shows that square tetramers have a larger T ordering than square dimers. This order decreases with $x_{2}^{(1)}$ as a consequence of the fact that $x_{1}^{(2)}$ (the fraction of big squares) also decreases along the selected path. In turn, triangular dimers and monomers exhibit N ordering up to $x_{2}^{(1)}\simeq 0.3$, beyond which it vanishes, because $x_{2}^{(2)}$ (the fraction of big triangles) decreases with $x_{2}^{(1)}$ along the same path; see panel (b). Beyond this value, the triangular species follow the T ordering of the square species. It is interesting to note that the O order parameter of the triangular species, $Q_{8}^{(2j)}$ ($j=1,2$), becomes larger than the T order parameter, $Q_{4}^{(2j)}$, indicating the presence of satellite peaks in $h_{2}^{(j)}(\phi)$ at $\pi/4$ and $3\pi/4$. Panel (c) shows the monomer order parameters $Q_{2n}^{(\rm m)}$ as a function of $x_{2}^{(1)}$, calculated from Eqn. (15). The N ordering of monomers is relatively low, as can be seen from the negligible value of $Q_{2}^{(\rm m)}$, which becomes zero beyond $x_{2}^{(1)}=0.3$.
We also see that, close to $x_{2}^{(1)*}\simeq 0.33$, the T order parameter, $Q_{4}^{(\rm m)}$, becomes zero, while the O ordering, measured through $Q_{8}^{(\rm m)}$, is relatively high in the neighborhood of $x_{2}^{(1)*}$. Therefore there exists an interval in $x_{2}^{(1)}$ around $x_{2}^{(1)*}$ where the orientational distribution function of monomers $h_{\rm m}(\phi)$ is similar to that shown in Fig. 7(b), i.e. it shows a quasi-eightfold symmetry. Note that, for $x_{2}^{(1)}<x_{2}^{(1)*}$, the T director of square clusters coincides with that of the preferential alignment of square dimers, while for $x_{2}^{(1)}>x_{2}^{(1)*}$ it changes to that of square tetramers (rotated by $\pi/4$ with respect to the former). This is the reason why the order parameter $Q_{4}^{(\rm m)}$ exhibits a change of sign at $x_{2}^{(1)*}$. Again we can conclude from these results that the O ordering is strongly enhanced by the presence of particle clustering: when monomers are mainly distributed into square dimers and tetramers, there exists an interval in the composition of free monomers for which the monomer distribution function exhibits quasi-eightfold symmetry. Fig. 10 shows the evolution of the order parameters of all species [panels (a) and (b)] and of the monomers [panel (c)] as a function of reduced pressure $p^{*}$ for the same fixed set of compositions as in Fig. 7: $x_{1}^{(1)}=0.4$, $x_{1}^{(2)}=0.15$, $x_{2}^{(1)}=0.35$, and $x_{2}^{(2)}=0.1$, where all species exhibit T ordering. The quasi-O ordering, measured by $Q_{8}^{(\rm m)}$, increases from zero at the same pressure at which square dimers and tetramers exhibit a second-order I-T transition, $\beta pl_{2}^{2}\approx 100$. For higher pressures the O order parameter of monomers, $Q_{8}^{(\rm m)}$, is significantly larger than the T order parameter $Q_{4}^{(\rm m)}$.
The angular distribution function of monomers with quasi-eightfold symmetry can also be obtained for a ternary mixture of free monomers and dimers of triangular or square symmetry, i.e. for vanishingly small tetramer composition. This is shown in Fig. 11(a), where all cluster orientational distribution functions, $h_{i}^{(j)}(\phi)$, are shown for a quaternary mixture with $x_{1}^{(1)}=0.45$, $x_{1}^{(2)}=0.01$, $x_{2}^{(1)}=0.1$ and $x_{2}^{(2)}=0.44$ and reduced pressure $p^{*}=300$. For comparison, the monomer function $h_{\rm m}(\phi)$ is also shown (see inset). This time free monomers and triangular dimers clearly orient in an N-like configuration. As pointed out before, the axes of monomers in triangular dimers are oriented with respect to the dimer axis with relative angles of $\pm 3\pi/4$, while monomers in square dimers have their axes parallel or antiparallel to the dimer axis. Taking into account these relative orientations and the fractions of the different clusters ($0.45$ and $0.44$ for square and triangular dimers, respectively, and the rather small values of $0.1$ and $0.01$ for free monomers and tetramers, respectively), the quasi-eightfold symmetry of $h_{\rm m}(\phi)$ is readily explained. We expect that the inclusion of the fourth virial coefficient in the theory would cause the orientational distribution function of triangles to exhibit strong O correlations, since configurations where two or four triangles form triangular or square dimers, and also square tetramers, might be entropically favored. The above set of values $\{x_{i}^{(j)}\}$ is just a particular case of the path obtained by varying the free-monomer composition $x_{2}^{(1)}$ inside the interval $[0,0.98]$, together with the constraints $x_{1}^{(1)}=0.5-x_{2}^{(1)}/2$, $x_{1}^{(2)}=0.01$, $x_{2}^{(2)}=0.49-x_{2}^{(1)}/2$ (keeping the small tetramer composition fixed).
The evolution of the order parameters of monomers, $Q_{n}^{(\rm m)}$, with respect to $x_{2}^{(1)}$ along this path and for the same pressure $p^{*}=300$ is shown in Fig. 11(b). A wide region exists, close to the $x_{2}^{(1)}=0$ axis, where the O order parameter $Q_{8}^{(\rm m)}$ is highest, which corresponds to strong eightfold orientational correlations between monomers. In the opposite region, close to the $x_{2}^{(1)}=1$ axis, monomers are oriented in an N-like configuration ($Q_{2}^{(\rm m)}$ being the highest parameter). Obviously this is a direct consequence of the small population of triangular and square clusters with respect to free monomers. IV Monte Carlo simulations To understand the relevance of clusters in the configurations of hard right triangles, and to give some support to the assumptions underlying the models presented in the previous sections, we have performed NVT-MC simulations of a system of 576 particles in a square box with periodic boundary conditions, using $2\times 10^{5}$ MC steps for equilibration and $3\times 10^{5}$ MC steps for averaging. Different expansion and compression runs were performed, starting from different initial configurations, to explore different liquid-crystalline phases. For more details on the simulations we refer to our previous work nosotros. As shown in Ref. nosotros, the high-density fluid of hard right triangles seems to be very prone to staying in specific configurations, which can be controlled by an adequate choice of symmetry for the initial configuration. This would mean that there are dense regions in phase space from which it is difficult to escape, probably due to unlikely local rearrangements of particle orientations. Thermodynamically we could think of these configurations as corresponding to metastable phases.
A reasonable procedure to identify the true stable phase at each density would be to perform free-energy calculations using thermodynamic integration or applying the coupling method Frenkel1. In this section, however, we are not interested in thermodynamic stability (which would require extensive simulations); rather, we use this feature of the hard right-triangle fluid to probe for particle clustering in fluids of different bulk symmetries. Also, since we are using the MC technique, we are not probing the stability in time of clusters as separate entities, but simply the occurrence of a set of particular configurations and their relative importance along the MC chains. While the definition of clusters in a one-component fluid of hard particles may be clearly specified, the criterion used in a simulation to associate a particular local configuration of particles with a given cluster type is somewhat arbitrary. Here we have focused on the clusters defined in Section III and used to explore the consequences of the mixture model since, as explained previously, these are the most natural configurations of the system. We anticipate that these configurations are indeed very frequent, depending on the total fluid density. Again we refer to Fig. 6 and Table 1 for the definition of three types of clusters: square tetramers, square dimers, and triangular dimers. In the simulations we have also monitored rhomboidal dimers (defined in Section III), since they may be present at not too high densities. To define a dimer with a particular symmetry (either square, triangular or rhomboidal), we calculate the distance between the barycenters of two neighbouring triangles as well as their relative angle. Each perfect dimer (particles in contact with the correct relative orientation) has specific values for these two variables (relative distance and relative angle). We define a dimer of a given type whenever these variables depart by less than 15% from their ideal values.
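A minimal sketch of this classification criterion is given below. It is our own illustration: the ideal reference values are geometric assumptions for a monomer of short side $l=1$ (e.g. a perfect square dimer has barycenter separation $\sqrt{2}l/3$ and antiparallel axes), not values taken from the text.

```python
import math

# Hedged sketch of the 15% dimer criterion: a pair of neighbouring triangles
# is labelled a dimer of a given type when both the barycenter distance and
# the relative axis angle lie within 15% of the ideal particles-in-contact
# values. The ideal values below are our own geometric estimates (l = 1).

IDEALS = {
    # type: (ideal barycenter distance, ideal relative angle)
    "square dimer": (math.sqrt(2.0) / 3.0, math.pi),  # joined by hypotenuse, antiparallel
    "triangular dimer": (2.0 / 3.0, math.pi / 2.0),   # illustrative values only
}

def angle_distance(a, b):
    """Smallest separation between two angles on the circle."""
    d = abs(a - b) % (2.0 * math.pi)
    return min(d, 2.0 * math.pi - d)

def classify_pair(r, ang, tol=0.15):
    """Return the dimer type of a pair, or None if no criterion is met."""
    for name, (r0, ang0) in IDEALS.items():
        if abs(r - r0) <= tol * r0 and angle_distance(ang, ang0) <= tol * ang0:
            return name
    return None

# A pair close to a perfect square dimer is accepted...
print(classify_pair(0.48, 0.98 * math.pi))   # -> "square dimer"
# ...while a clearly separated pair is not.
print(classify_pair(0.80, math.pi))          # -> None
```

Applying the tolerance multiplicatively to both variables mirrors the "depart by less than 15% from their ideal values" rule stated above; any real analysis would of course tune the reference values to the actual cluster geometries.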
This criterion is totally arbitrary and, in fact, may lead to situations where some pairs of particles are not considered to form a dimer, whereas visually one would clearly identify the pair as a dimer. Also, a given pair of neighboring particles may be considered a dimer only intermittently along the MC chain. Finally, the fact that the same tolerance is used to define all types of dimers may introduce a bias in the relative fractions of dimers. Again, the analysis is qualitative and not aimed at extracting definite numbers for quantities that are otherwise ill-defined. Finally, square tetramers are defined as the association of two triangular dimers, applying a more relaxed criterion on distance and angle (20%) with respect to the ideal values for a square tetramer with all four particles in contact. This is because the positions and angles of the two particles of a triangular dimer will already depart from the ideal values. Fig. 12 shows the fractions of the four species of clusters as a function of packing fraction. To calculate the fraction of a given species, the average number of clusters of that type along the simulation is divided by the average number of clusters of all species, including "isolated" triangles (monomers). Several expansion and compression runs were performed, following the results presented in Ref. nosotros. In Fig. 12 the different clusters are indicated by different symbols, as shown in the legend. Filled symbols correspond to compression runs, while open symbols are from expansion runs. In the isotropic phase, our cluster criterion identifies a small fraction of the two types of dimers and rhomboids. They correspond to local arrangements that do not persist along the MC chain. As density is increased (compression run, filled symbols), cluster fractions also increase. In the liquid-crystal region an order-parameter analysis (see Ref.
nosotros) identifies this phase as tetratic or octatic, since $Q_{4}$ and $Q_{8}$ adopt comparable and relatively high values. The strong O ordering comes mainly from square dimers, which also force neighboring monomers to adopt orientations that foster this type of ordering. Note that rhombic clusters are also present, but no tetramers can be identified. An expansion run from a perfect crystal of tetramers (a perfect tetratic crystal) at $\eta=0.98$ is also shown in Fig. 12. Initially only tetramers are present, but as density is decreased these clusters break into triangular dimers. At the end of the crystal phase the latter clearly dominate, but at the same time a fraction of square dimers is created. These are probably formed by "free" monomers that have been detached from neighboring tetramers. In essence, the existence of these clusters and the evolution of their fractions with density are perfectly compatible with a crystal phase showing thermal fluctuations. As expected, the fraction of rhombic clusters is essentially negligible, as particles would have to rotate locally by $90^{\circ}$, which is very difficult at high packing fraction. Melting of the crystal is associated with a change in the variation of the fractions with density. Tetramers have disappeared, while the fractions of square and triangular clusters tend to be similar. The fluid becomes ordered in a T phase, dominated by these two types of clusters. As density is further decreased this phase changes to the isotropic phase. Note that in the T phase no rhombic clusters are excited. However, the fraction of these clusters increases suddenly, and the equilibrium value of this fraction in the isotropic phase is reproduced by the expansion run from the T phase as soon as the phase transition is crossed. A final compression run was performed from the T phase.
As expected, the fluid cannot crystallize, and the fraction of square clusters does not match the one obtained from the expansion run, even though triangular clusters do have almost identical fractions. In summary, the MC simulations show that the particle clusters introduced in Sections II and III do occur in the fluid of hard right triangles, in high proportion and to varying degrees according to the global orientational order of the system. Therefore, a model based on the equilibrium statistics of these clusters as separate entities may be a fruitful way to understand the essential orientational properties of the hard right-triangle fluid. V Conclusions In this paper we have addressed the origin of the liquid-crystal phase of hard right triangles. Compression Monte Carlo simulations indicate the existence of an exotic liquid-crystalline phase exhibiting tetratic order and strong octatic correlations, which the standard and extended SPT versions of DFT are unable to describe. As a step forward, and in view of the apparently important clustering tendencies of right triangles into square and triangular clusters, we have implemented an SPT (second-virial) approach to analyze the phase behavior of four different binary mixtures of right triangles and squares, and calculated their respective phase diagrams. The length asymmetries of the species are $\kappa_{\rm l}=1/2$, $1/\sqrt{2}$, 1 and $\sqrt{2}$, where the second case corresponds to species having equal particle areas ($\kappa_{\rm l}=1/\sqrt{2}$). All mixtures (including the equal-area one) exhibit strong I-T and N-T demixing, a region of first-order I-N transition, and the presence of an I-N-T triple point. The demixing scenarios follow directly from the different liquid-crystal symmetries exhibited by the one-component fluids of hard triangles and hard squares, i.e. N and T symmetries, respectively.
Triangles in the mixture can adopt a T ordering which follows the natural ordering of the more populated square species, but with an orientational distribution function, $h_{2}(\phi)$, exhibiting two relatively small satellite peaks located at $\{\pi/4,3\pi/4\}$, which points to the importance of square-triangle interactions in the existence of orientational correlations with eightfold symmetry. However, the distribution function has a clear T character (with T and O order parameters having similar values). We also provided results showing that, under certain conditions, particle clustering can give rise to situations where the O order parameter, $Q_{8}$, is much higher than the N ($Q_{2}$) or T ($Q_{4}$) order parameters. Our results are based on the implementation of a toy model consisting of a quaternary mixture whose species are triangular monomers, triangular and square dimers, and finally tetramers, all assumed to form in the real one-component fluid by monomer self-assembly. Evidence from MC simulations for the prevalence of these clusters in the equilibrium configurations of this fluid was presented in Section IV. We then used the SPT approach to estimate the free-energy of the mixture and, via minimization, calculated the equilibrium orientational distribution functions of clusters. From them the corresponding function for monomers can be derived. The question of the possible demixing scenarios, which certainly do exist in these mixtures, was not addressed. The focus was put on the monomer distribution function $h_{\rm m}(\phi)$ which, for certain sets of cluster compositions, exhibits quasi-eightfold symmetry, with four peaks of similar, although not identical, height in the interval $[0,\pi]$.
This demonstrates that the elusive eightfold ordering seen in the simulations, but not in the standard two- and three-body versions of DFT, can originate from the prevalence of particle clustering and its effect on the global orientational properties of the fluid. Further studies focusing on the dynamics of these processes may reveal whether the clustering idea is just a convenient artifact to partition configurational space or a real situation with clearly separated time scales associated with cluster kinetics, internal cluster dynamics and the fluid dynamics of clusters as separate entities. In any case, our results indicate that it is reasonable to appeal to the clustering effect to explain why the O phase, observed in simulations, cannot be stabilized by the usual implementations of DFT. Even if the idea of the fluid as a collection of clusters turns out to be useful, the present assumption that cluster compositions can be fixed in advance should be improved by considering these compositions as an output of the model. Monomer aggregation and evaporation, and cluster formation and fragmentation, may be described, in line with theories for the kinetics of clustering, by a set of chemical reactions with certain reaction-constant ratios at equilibrium. These ratios can be obtained from the difference between the chemical potentials of the species involved in the reactions. A more realistic model would consider the fluid as a polydisperse mixture of clusters or superparticles of different sizes and shapes, each cluster having a particular association energy, as in models of associated fluids. We believe these ideas deserve some exploration in the future. Acknowledgements. Financial support under grant FIS2017-86007-C3-1-P from Ministerio de Economía, Industria y Competitividad (MINECO) of Spain, and PGC2018-096606-B-I00 from Agencia Estatal de Investigación-Ministerio de Ciencia e Innovación of Spain, is acknowledged. References (1) K. Zhao, C. Harrison, D.
Huse, W. B. Russel, and P. M. Chaikin, Phys. Rev. E 76, 040401(R) (2007). (2) K. Zhao, R. Bruinsma, and T. G. Mason, Proc. Natl. Acad. Sci. USA 108, 2684 (2011). (3) K. Zhao, R. Bruinsma, and T. G. Mason, Nat. Commun. 3, 801 (2012). (4) Z. Hou, Y. Zong, Z. Sun, F. Ye, T. G. Mason, and K. Zhao, Nat. Commun. 11, 2064 (2020). (5) H. Schlacken, H.-J. Mogel, and P. Schiller, Mol. Phys. 93, 777 (1998). (6) K. W. Wojciechowski and D. Frenkel, Comp. Met. Sci. Tech. 10, 235 (2004). (7) A. Donev, J. Burton, F. H. Stillinger, and S. Torquato, Phys. Rev. B 73, 054109 (2006). (8) A. P. Gantapara, W. Qi, and M. Dijkstra, Soft Matter 11, 8684 (2015). (9) Y. Martínez-Ratón, E. Velasco, and L. Mederos, J. Chem. Phys. 122, 064903 (2005). (10) Y. Martínez-Ratón, E. Velasco, and L. Mederos, J. Chem. Phys. 125, 014501 (2006). (11) Y. Martínez-Ratón, A. Díaz-De Armas and E. Velasco, Phys. Rev. E 97, 052703 (2018). (12) Y. Martínez-Ratón and E. Velasco, Phys. Rev. E 102, 052128 (2020). (13) T. Geigenfeind and D. de las Heras, J. Chem. Phys. 150, 184906 (2019). (14) C. Avendaño and F. A. Escobedo, Soft Matter 8, 4675 (2012). (15) J. A. Millan, M. Engel, and S. C. Glotzer, Phys. Rev. X 7, 021001 (2017). (16) J. P. Ramírez González and G. Cinacchi, Phys. Rev. E 104, 054604 (2021). (17) E. van den Pol, A. Petukhov, D. Thies-Weesie, D. Byelov and G. Vroege, Phys. Rev. Lett. 92, 145505 (2004). (18) S. D. Peroukidis and A. G. Vanakaras, Soft Matter 9, 7419 (2013). (19) S. Belli, A. Patti, M. Dijkstra and R. van Roij, Phys. Rev. Lett. 107, 148303 (2011). (20) E. M. Rafael, D. Corbett, A. Cuetos and A. Patti, Soft Matter 16, 5565 (2020). (21) E. M. Rafael, L. Toni, D. Corbett, A. Cuetos and A. Patti, Phys. Fluids 33, 067115 (2021). (22) L. Walsh and N. Menon, J. Stat. Mech. 083302 (2016). (23) T. Müller, D. de las Heras, I. Rehberg and K. Huang, Phys. Rev. E 91, 062207 (2015). (24) M. González-Pinto, F. Borondo, Y. Martínez-Ratón and E. Velasco, Soft Matter 13, 2571 (2017). (25) M.
González-Pinto, J. Renner, D. de las Heras, Y. Martínez-Ratón and E. Velasco, New J. Phys. 21, 033002 (2019). (26) R. Wittmann, L. B. G. Cortes, H. Löwen and D. G. A. L. Aarts, Nat. Commun. 12, 623 (2021). (27) P. A. Monderkamp, R. Wittmann, L. B. G. Cortes, D. G. A. L. Aarts, F. Smallenburg and H. Löwen, Phys. Rev. Lett. 127, 198001 (2021). (28) L. Mederos, E. Velasco and Y. Martínez-Ratón, J. Phys.: Condens. Matter 26, 463101 (2014). (29) Y. Martínez-Ratón and E. Velasco, Phys. Rev. E 104, 054132 (2021). (30) E. S. Harper, G. van Anders, and S. C. Glotzer, PNAS 116, 16703 (2019). (31) Y. Martínez-Ratón, E. Velasco and L. Mederos, Phys. Rev. E 72, 031703 (2005). (32) P. Bolhuis and D. Frenkel, J. Chem. Phys. 106, 666 (1997). (33) B. P. Prajwal and F. A. Escobedo, Phys. Rev. Mater. 5, 024003 (2021).
Weighted Endpoint Estimates for Commutators of Calderón-Zygmund Operators Yiyu Liang, Luong Dang Ky and Dachun Yang (Corresponding author) 2010 Mathematics Subject Classification. Primary 47B47; Secondary 42B20, 42B30, 42B35. Key words and phrases. Calderón-Zygmund operator, commutator, Muckenhoupt weight, ${\rm BMO}$ space, Hardy space. The second author is supported by the Vietnam National Foundation for Science and Technology Development (Grant No. 101.02-2014.31). The third author is supported by the National Natural Science Foundation of China (Grant Nos. 11571039 and 11361020), the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20120003110003) and the Fundamental Research Funds for Central Universities of China (Grant No. 2014KJJCA10). Abstract Let $\delta\in(0,1]$ and $T$ be a $\delta$-Calderón-Zygmund operator. Let $w$ be in the Muckenhoupt class $A_{1+\delta/n}({\mathbb{R}}^{n})$ satisfying $\int_{{\mathbb{R}}^{n}}\frac{w(x)}{1+|x|^{n}}\,dx<\infty$. When $b\in{\rm BMO}(\mathbb{R}^{n})$, it is well known that the commutator $[b,T]$ is not bounded from $H^{1}(\mathbb{R}^{n})$ to $L^{1}(\mathbb{R}^{n})$ if $b$ is not a constant function. In this article, the authors identify a proper subspace $\mathcal{BMO}_{w}({\mathbb{R}}^{n})$ of ${\rm BMO}(\mathbb{R}^{n})$ such that, if $b\in\mathcal{BMO}_{w}({\mathbb{R}}^{n})$, then $[b,T]$ is bounded from the weighted Hardy space $H_{w}^{1}(\mathbb{R}^{n})$ to the weighted Lebesgue space $L_{w}^{1}(\mathbb{R}^{n})$. Conversely, if $b\in{\rm BMO}({\mathbb{R}}^{n})$ and the commutators of the classical Riesz transforms $\{[b,R_{j}]\}_{j=1}^{n}$ are bounded from $H^{1}_{w}({\mathbb{R}}^{n})$ into $L^{1}_{w}({\mathbb{R}}^{n})$, then $b\in\mathcal{BMO}_{w}({\mathbb{R}}^{n})$.
1 Introduction Given a function $b$ locally integrable on $\mathbb{R}^{n}$ and a classical Calderón-Zygmund operator $T$, we consider the linear commutator $[b,T]$ defined by setting, for smooth, compactly supported functions $f$, $$[b,T](f)=bT(f)-T(bf).$$ A classical result of Coifman et al. [4] states that the commutator $[b,T]$ is bounded on $L^{p}(\mathbb{R}^{n})$ for $p\in(1,\infty)$ when $b\in{\rm BMO}(\mathbb{R}^{n})$. Notably, their proof does not rely on a weak type $(1,1)$ estimate for $[b,T]$. Indeed, this operator is more singular than the associated Calderón-Zygmund operator, since it fails, in general, to be of weak type $(1,1)$ when $b$ is in ${\rm BMO}(\mathbb{R}^{n})$. Moreover, Harboure et al. [7, Theorem (3.1)] showed that $[b,T]$ is bounded from $H^{1}(\mathbb{R}^{n})$ to $L^{1}(\mathbb{R}^{n})$ if and only if $b$ equals a constant almost everywhere. Although the commutator $[b,T]$ does not map continuously, in general, $H^{1}(\mathbb{R}^{n})$ into $L^{1}(\mathbb{R}^{n})$, following Pérez [11], one can find a subspace $\mathcal{H}^{1}_{b}(\mathbb{R}^{n})$ of $H^{1}(\mathbb{R}^{n})$ such that $[b,T]$ maps $\mathcal{H}^{1}_{b}(\mathbb{R}^{n})$ continuously into $L^{1}(\mathbb{R}^{n})$. Very recently, Ky [10] found the largest subspace of $H^{1}(\mathbb{R}^{n})$ such that all commutators $[b,T]$ of Calderón-Zygmund operators are bounded from this subspace into $L^{1}(\mathbb{R}^{n})$. More precisely, it was shown in [10] that there exists a bilinear operator $\mathfrak{R}:=\mathfrak{R}_{T}$ mapping $H^{1}(\mathbb{R}^{n})\times{\rm BMO}(\mathbb{R}^{n})$ continuously into $L^{1}(\mathbb{R}^{n})$ such that, for all $(f,b)\in H^{1}(\mathbb{R}^{n})\times{\rm BMO}(\mathbb{R}^{n})$, we have (1.1) $$[b,T](f)=\mathfrak{R}(f,b)+T(\mathfrak{S}(f,b)),$$ where $\mathfrak{S}$ is a bounded bilinear operator from $H^{1}(\mathbb{R}^{n})\times{\rm BMO}(\mathbb{R}^{n})$ into $L^{1}(\mathbb{R}^{n})$ which is independent of $T$.
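To make the definition concrete, the following sketch (a numerical illustration of ours, not taken from the paper) discretizes the Hilbert transform — a prototypical Calderón-Zygmund operator, realized here through its Fourier multiplier $-i\,\mathrm{sgn}(\xi)$ — and forms $[b,H](f)=bH(f)-H(bf)$ on a grid; the grid, the symbol $b$ and the test function $f$ are all assumptions of the example.

```python
import numpy as np

def hilbert_transform(f):
    """Discrete Hilbert transform via the Fourier multiplier -i*sgn(xi)."""
    xi = np.fft.fftfreq(f.size)
    return np.real(np.fft.ifft(-1j * np.sign(xi) * np.fft.fft(f)))

def commutator(b, f):
    """The linear commutator [b, H](f) = b*H(f) - H(b*f) on a grid."""
    return b * hilbert_transform(f) - hilbert_transform(b * f)

x = np.linspace(-10.0, 10.0, 1024, endpoint=False)
f = np.exp(-x**2)        # smooth, rapidly decaying test function
b = np.tanh(x)           # bounded, hence a BMO symbol
print(np.max(np.abs(commutator(np.ones_like(x), f))))  # 0: constant symbols commute
print(np.max(np.abs(commutator(b, f))))                # nonzero for a genuine BMO symbol
```

That the commutator vanishes for constant $b$ and is nonzero otherwise mirrors the role of $b-b_{B}$ in the estimates below.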
The bilinear decomposition (1.1) allows one to give a general overview of all known endpoint estimates; see [10] for the details. For the weighted case, when $b\in{\rm BMO}({\mathbb{R}}^{n})$, Álvarez et al. [1] proved that the commutator $[b,T]$ is bounded on the weighted Lebesgue space $L_{w}^{p}({{{\mathbb{R}}}^{n}})$ with $p\in(1,\infty)$ and $w\in A_{p}({{{\mathbb{R}}}^{n}})$, where $A_{p}({{{\mathbb{R}}}^{n}})$ denotes the class of Muckenhoupt weights. As in the unweighted case, $[b,T]$ may not be bounded from the weighted Hardy space $H_{w}^{1}({{{\mathbb{R}}}^{n}})$ into the weighted Lebesgue space $L_{w}^{1}({{{\mathbb{R}}}^{n}})$ if $b$ is not a constant function. Thus, a natural question is whether there exists a non-trivial subspace of ${\rm BMO}({{{\mathbb{R}}}^{n}})$ such that, when $b$ belongs to this subspace, the commutator $[b,T]$ is bounded from $H_{w}^{1}({{{\mathbb{R}}}^{n}})$ to $L_{w}^{1}({{{\mathbb{R}}}^{n}})$. The purpose of the present paper is to give an answer to the above question. To this end, we first recall the definition of the Muckenhoupt weights. A non-negative measurable function $w$ is said to belong to the class of Muckenhoupt weights $A_{q}({{{\mathbb{R}}}^{n}})$ for $q\in[1,\infty)$, denoted by $w\in A_{q}({{{\mathbb{R}}}^{n}})$, if, when $q\in(1,\infty)$, (1.2) $$[w]_{A_{q}({{{\mathbb{R}}}^{n}})}:=\sup_{B\subset{{{\mathbb{R}}}^{n}}}\frac{1}{|B|}\int_{B}w(x)\,dx\left\{\frac{1}{|B|}\int_{B}[w(y)]^{-q^{\prime}/q}\,dy\right\}^{q/q^{\prime}}<\infty,$$ where $1/q+1/q^{\prime}=1$, or, when $q=1$, (1.3) $$[w]_{A_{1}({{{\mathbb{R}}}^{n}})}:=\sup_{B\subset{{{\mathbb{R}}}^{n}}}\frac{1}{|B|}\int_{B}w(x)\,dx\left(\mathop{\mathrm{ess\,sup}}_{y\in B}[w(y)]^{-1}\right)<\infty.$$ Here the suprema are taken over all balls $B\subset{{{\mathbb{R}}}^{n}}$.
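Condition (1.2) can be probed numerically. The sketch below (our illustration; the weight, exponent and sampling scheme are assumptions, not part of the paper) estimates the $A_{2}(\mathbb{R})$ quantity of the power weight $w(x)=|x|^{1/2}$ over intervals spanning many scales; recall the classical fact that $|x|^{\alpha}\in A_{q}(\mathbb{R})$ precisely when $-1<\alpha<q-1$. For $q=2$ one has $q^{\prime}/q=q/q^{\prime}=1$, so the bracketed factor in (1.2) is simply the average of $w^{-1}$.

```python
import numpy as np

ALPHA = 0.5  # w(x) = |x|**ALPHA lies in A_2(R) since -1 < ALPHA < 2 - 1

def integral_abs_power(alpha, a, b):
    """Closed form of the integral of |x|**alpha over (a, b), alpha > -1;
    valid also when the interval straddles the origin."""
    antideriv = lambda x: np.sign(x) * np.abs(x) ** (alpha + 1.0) / (alpha + 1.0)
    return antideriv(b) - antideriv(a)

def a2_ratio(a, b):
    """The A_2 quantity avg_B(w) * avg_B(1/w) for the interval B = (a, b)."""
    length = b - a
    return (integral_abs_power(ALPHA, a, b) / length) * \
           (integral_abs_power(-ALPHA, a, b) / length)

rng = np.random.default_rng(0)
centers = rng.uniform(-5.0, 5.0, 2000)
radii = 10.0 ** rng.uniform(-6.0, 2.0, 2000)  # interval half-lengths over 8 decades
ratios = [a2_ratio(c - r, c + r) for c, r in zip(centers, radii)]
print(max(ratios))  # stays bounded over all sampled scales, consistent with w in A_2
```

By the Cauchy-Schwarz inequality the ratio is always at least $1$; intervals far from the origin give values near $1$, and intervals bracketing the singularity give the largest values.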
Let $$A_{\infty}({{{\mathbb{R}}}^{n}}):=\bigcup_{q\in[1,\infty)}A_{q}({{{\mathbb{R}}}^{n}}).$$ Let $w\in A_{\infty}({\mathbb{R}}^{n})$ and $q\in(0,\infty]$. If $q\in(0,\infty)$, then we let $L^{q}_{w}({\mathbb{R}}^{n})$ be the space of all measurable functions $f$ such that (1.4) $$\|f\|_{L^{q}_{w}({{{\mathbb{R}}}^{n}})}:=\left\{\int_{{\mathbb{R}}^{n}}|f(x)|^{q}w(x)\,dx\right\}^{1/q}<\infty.$$ When $q=\infty$, $L^{\infty}_{w}({\mathbb{R}}^{n})$ is defined to be the same as $L^{\infty}({\mathbb{R}}^{n})$ and, for any $f\in L^{\infty}_{w}({{{\mathbb{R}}}^{n}})$, let $$\|f\|_{L^{\infty}_{w}({{{\mathbb{R}}}^{n}})}:=\|f\|_{L^{\infty}({{{\mathbb{R}}}^{n}})}.$$ Let $\phi$ be a function in the Schwartz class, $\mathcal{S}({\mathbb{R}}^{n})$, satisfying $\phi(x)=1$ for all $x\in B(0,1)$. The maximal function of a tempered distribution $f\in\mathcal{S}^{\prime}({\mathbb{R}}^{n})$ is defined by (1.5) $${\mathcal{M}_{\phi}}f:=\sup_{t\in(0,\infty)}|f*\phi_{t}|,$$ where $\phi_{t}(\cdot):=\frac{1}{t^{n}}\phi(t^{-1}\cdot)$ for all $t\in(0,\infty)$. Then the weighted Hardy space $H^{1}_{w}({\mathbb{R}}^{n})$ is defined as the space of all tempered distributions $f\in\mathcal{S}^{\prime}({\mathbb{R}}^{n})$ such that $$\|f\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}:=\|{\mathcal{M}_{\phi}}f\|_{L^{1}_{w}({{{\mathbb{R}}}^{n}})}<\infty;$$ see [5]. Notice that $\|\cdot\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}$ defines a norm on $H^{1}_{w}({\mathbb{R}}^{n})$ whose size depends on the choice of $\phi$, but the space $H^{1}_{w}({\mathbb{R}}^{n})$ itself is independent of this choice. Definition 1.1. Let $w\in A_{\infty}({{{\mathbb{R}}}^{n}})$ and $\int_{{\mathbb{R}}^{n}}\frac{w(x)}{1+|x|^{n}}\,dx<\infty$.
A locally integrable function $b$ is said to be in $\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})$ if (1.6) $$\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}:=\sup_{B}\left\{\int_{B^{\complement}}\frac{w(x)}{|x-x_{B}|^{n}}\,dx\,\frac{1}{w(B)}\int_{B}|b(x)-b_{B}|\,dx\right\}<\infty,$$ where the supremum is taken over all balls $B\subset{\mathbb{R}}^{n}$ and $B^{\complement}:={{{\mathbb{R}}}^{n}}\backslash B$. Here and hereafter, $x_{B}$ denotes the center of the ball $B$, $$w(B):=\int_{B}w(x)\,dx\quad\mathrm{and}\quad b_{B}:=\frac{1}{|B|}\int_{B}b(x)\,dx.$$ It should be pointed out that the space $\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})$ was first considered by Bloom [2] when studying the pointwise multipliers of weighted BMO spaces (see also [14]). Recall that a locally integrable function $b$ is said to be in ${\rm BMO}({{{\mathbb{R}}}^{n}})$ if $$\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}:=\sup_{B}\frac{1}{|B|}\int_{B}|b(x)-b_{B}|\,dx<\infty,$$ where the supremum is taken over all balls $B\subset{\mathbb{R}}^{n}$. Remark 1.2. (i) $\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})\subset{\rm BMO}({\mathbb{R}}^{n})$ and the inclusion is continuous (see Proposition 2.1 of Section 2). (ii) It is easy to show that, when $n=1$, $w(x):=|x|^{-1/2}\in A_{1}({\mathbb{R}})$ and $\int_{\mathbb{R}}\frac{w(x)}{1+|x|}\,dx<\infty$. Let $$f(x):=\left\{\begin{array}{ll}|1-x|,&|x|\leq 1,\\ 0,&|x|>1.\end{array}\right.$$ Then $f\in\mathcal{BMO}_{w}({\mathbb{R}})$, which implies that $\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})$ is not a trivial function space. To state our main results, we first recall the definition of Calderón-Zygmund operators.
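The integrability claim in Remark 1.2(ii) can be verified both by hand and numerically: by the substitution $x=t^{2}$, $\int_{\mathbb{R}}\frac{|x|^{-1/2}}{1+|x|}\,dx=2\int_{0}^{\infty}\frac{2\,dt}{1+t^{2}}=2\pi$. The sketch below (our illustration; SciPy is an assumed dependency) confirms this value by quadrature, splitting at the integrable singularity $x=0$.

```python
import numpy as np
from scipy.integrate import quad

def integrand(x):
    """w(x) / (1 + |x|) with w(x) = |x|**(-1/2), as in Remark 1.2(ii)."""
    return np.abs(x) ** -0.5 / (1.0 + np.abs(x))

# the integrand is even: integrate over (0, 1) and (1, inf) and double
near, _ = quad(integrand, 0.0, 1.0)   # integrable |x|^{-1/2} endpoint singularity
far, _ = quad(integrand, 1.0, np.inf)
total = 2.0 * (near + far)
print(total, 2.0 * np.pi)  # both ≈ 6.28319: the integrability condition holds
```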
For $\delta\in(0,1]$, a linear operator $T$ is called a $\delta$-Calderón-Zygmund operator if $T$ is a bounded linear operator on $L^{2}({{{\mathbb{R}}}^{n}})$ and there exist a kernel $K$ on $({{{\mathbb{R}}}^{n}}\times{{{\mathbb{R}}}^{n}})\setminus\{(x,x):\ x\in{{{\mathbb{R}}}^{n}}\}$ and a positive constant $C$ such that, for all $x,\,y,\,z\in{{{\mathbb{R}}}^{n}}$, $$|K(x,y)|\leq\frac{C}{|x-y|^{n}}\quad\mathrm{if}\quad x\neq y,$$ $$|K(x,y)-K(x,z)|+|K(y,x)-K(z,x)|\leq C\frac{|y-z|^{\delta}}{|x-y|^{n+\delta}}\quad\mbox{ if }\quad|x-y|>2|y-z|$$ and, for all $f\in L^{2}({{{\mathbb{R}}}^{n}})$ with compact support and $x\notin\mathrm{supp}\,(f)$, $$Tf(x)=\int_{\mathrm{supp}\,(f)}K(x,y)f(y)\,dy.$$ The main result of this paper is the following theorem. Theorem 1.3. Let $\delta\in(0,1]$, $w\in A_{1+\delta/n}({{{\mathbb{R}}}^{n}})$ with $\int_{{\mathbb{R}}^{n}}\frac{w(x)}{1+|x|^{n}}\,dx<\infty$ and $b\in{\rm BMO}({{{\mathbb{R}}}^{n}})$. Then the following two statements are equivalent: (i) for every $\delta$-Calderón-Zygmund operator $T$, the commutator $[b,T]$ is bounded from $H^{1}_{w}({\mathbb{R}}^{n})$ into $L^{1}_{w}({\mathbb{R}}^{n})$; (ii) $b\in\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})$. Remark 1.4. When $w(x)\equiv 1$ for all $x\in{{{\mathbb{R}}}^{n}}$, we see that $\int_{{{\mathbb{R}}}^{n}}\frac{1}{1+|x|^{n}}\,dx=\infty$ and hence, in this case, $\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})$ can be seen as a zero space in ${\rm BMO}({{{\mathbb{R}}}^{n}})$. In this case, Theorem 1.3 coincides with the result in [7]. The next theorem gives a sufficient condition for the boundedness of $[b,T]$ on $H_{w}^{1}({{{\mathbb{R}}}^{n}})$.
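As a standard concrete instance of the kernel conditions in the definition above (a textbook fact, not taken from this paper), the Hilbert transform on $\mathbb{R}$, which is bounded on $L^{2}(\mathbb{R})$, has kernel $K(x,y)=\frac{1}{\pi(x-y)}$ and satisfies the two kernel estimates with $n=1$ and $\delta=1$:

```latex
|K(x,y)| = \frac{1}{\pi\,|x-y|}, \qquad
|K(x,y)-K(x,z)| = \frac{|y-z|}{\pi\,|x-y|\,|x-z|}
               \le \frac{2\,|y-z|}{\pi\,|x-y|^{2}}
   \quad\text{if}\quad |x-y| > 2|y-z|,
```

since $|x-y|>2|y-z|$ forces $|x-z|\geq|x-y|-|y-z|>\frac{1}{2}|x-y|$; the estimate for $|K(y,x)-K(z,x)|$ is identical because $K$ is antisymmetric.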
Recall that, for $w\in A_{p}({\mathbb{R}}^{n})$ with $p\in(1,\infty)$ and $q\in[p,\infty]$, a measurable function $a$ is called an $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),q)$-atom related to a ball $B\subset{{{\mathbb{R}}}^{n}}$ if (i) $\mathrm{supp}\,a\subset B$, (ii) $\int_{{{\mathbb{R}}}^{n}}a(x)\,dx=0$, (iii) $\|a\|_{L^{q}_{w}({{{\mathbb{R}}}^{n}})}\leq[w(B)]^{1/q-1}$. Recall also that $T^{*}1=0$ means that $\int_{\mathbb{R}^{n}}Ta(x)\,dx=0$ holds true for all $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),q)$-atoms $a$. Theorem 1.5. Let $\delta\in(0,1]$, $T$ be a $\delta$-Calderón-Zygmund operator, $w\in A_{1+\delta/n}({{{\mathbb{R}}}^{n}})$ with $\int_{{\mathbb{R}}^{n}}\frac{w(x)}{1+|x|^{n}}\,dx<\infty$ and $b\in\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})$. If $T^{*}1=0$, then the commutator $[b,T]$ is bounded on $H^{1}_{w}({\mathbb{R}}^{n})$, namely, there exists a positive constant $C$ such that, for all $f\in H^{1}_{w}({\mathbb{R}}^{n})$, $$\|[b,T](f)\|_{H^{1}_{w}({\mathbb{R}}^{n})}\leq C\|f\|_{H^{1}_{w}({\mathbb{R}}^{n})}.$$ Finally, we make some conventions on notation. Throughout the whole article, we denote by $C$ a positive constant which is independent of the main parameters, but which may vary from line to line. The symbol $A\lesssim B$ means that $A\leq CB$. If $A\lesssim B$ and $B\lesssim A$, then we write $A\sim B$. For any measurable subset $E$ of ${{{\mathbb{R}}}^{n}}$, we denote by $E^{\complement}$ the set ${{{\mathbb{R}}}^{n}}\setminus E$ and by $\chi_{E}$ its characteristic function. We also let ${\mathbb{N}}:=\{1,\,2,\,\ldots\}$ and ${\mathbb{Z}}_{+}:={\mathbb{N}}\cup\{0\}$.
2 Proofs of Theorems 1.3 and 1.5 We begin by pointing out that, if $w\in A_{\infty}({{{\mathbb{R}}}^{n}})$, then there exist $p,\,r\in(1,\infty)$ such that $w\in A_{p}({{{\mathbb{R}}}^{n}})\cap RH_{r}({{{\mathbb{R}}}^{n}})$, where $RH_{r}({{{\mathbb{R}}}^{n}})$ denotes the reverse Hölder class of weights $w$ for which there exists a positive constant $C$ such that $$\left(\frac{1}{|B|}\int_{B}[w(x)]^{r}\,dx\right)^{1/r}\leq C\frac{1}{|B|}\int_{B}w(x)\,dx$$ for every ball $B\subset{\mathbb{R}}^{n}$. Moreover, there exist positive constants $C_{1}\leq C_{2}$, depending on $[w]_{A_{\infty}({{{\mathbb{R}}}^{n}})}$, such that, for any measurable sets $E\subset B$, (2.1) $$C_{1}\left(\frac{|E|}{|B|}\right)^{p}\leq\frac{w(E)}{w(B)}\leq C_{2}\left(\frac{|E|}{|B|}\right)^{(r-1)/r}.$$ In order to prove Theorems 1.3 and 1.5, we need the following proposition and several technical lemmas. Proposition 2.1. Let $w\in A_{\infty}({\mathbb{R}}^{n})$. Then there exists a positive constant $C$ such that, for any $f\in\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})$, $$\|f\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}\leq C\|f\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}.$$ Proof. By (2.1), for any ball $B\subset{{{\mathbb{R}}}^{n}}$, we have $$\int_{B^{\complement}}\frac{w(x)}{|x-x_{B}|^{n}}\,dx\,\frac{1}{w(B)}\geq\int_{2B\backslash B}\frac{w(x)}{|x-x_{B}|^{n}}\,dx\,\frac{1}{w(B)}\geq\frac{w(2B\backslash B)}{|2B|}\frac{1}{w(B)}\gtrsim\frac{1}{|B|}.$$ This proves that $\|f\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}\lesssim\|f\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}$, which completes the proof of Proposition 2.1. ∎ Lemma 2.2. Let $f$ be a measurable function such that $\mathrm{supp}\,f\subset B:=B(x_{0},r)$ with $x_{0}\in{{{\mathbb{R}}}^{n}}$ and $r\in(0,\infty)$.
Then there exists a positive constant $C:=C(\phi,n)$, depending only on $\phi$ and $n$, such that, for all $x\notin B$, $$\frac{1}{|x-x_{0}|^{n}}\left|\int_{B(x_{0},r)}f(y)\,dy\right|\leq C{\mathcal{M}_{\phi}}f(x).$$ Proof. For $x\notin B(x_{0},r)$ and any $y\in B(x_{0},r)$, it follows that $$\frac{|x-y|}{2|x-x_{0}|}<\frac{|x-x_{0}|+r}{2|x-x_{0}|}\leq 1,$$ which, together with $\phi\equiv 1$ on $B(0,1)$, further implies that $\phi(\frac{x-y}{2|x-x_{0}|})=1$. Thus, we know that $${\mathcal{M}_{\phi}}f(x)=\sup_{t\in(0,\infty)}|f*\phi_{t}(x)|\geq|f*\phi_{2|x-x_{0}|}(x)|=\frac{1}{2^{n}|x-x_{0}|^{n}}\left|\int_{B(x_{0},r)}f(y)\phi\left(\frac{x-y}{2|x-x_{0}|}\right)\,dy\right|\gtrsim\frac{1}{|x-x_{0}|^{n}}\left|\int_{B(x_{0},r)}f(y)\,dy\right|,$$ which completes the proof of Lemma 2.2. ∎ Lemma 2.3. Let $w\in A_{\infty}({\mathbb{R}}^{n})$ and $q\in[1,\infty)$. Then there exists a positive constant $C$ such that, for any $f\in{\rm BMO}({\mathbb{R}}^{n})$ and any ball $B\subset{\mathbb{R}}^{n}$, $$\left[\frac{1}{w(B)}\int_{B}|f(x)-f_{B}|^{q}w(x)\,dx\right]^{1/q}\leq C\|f\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}.$$ Proof. It follows from the John-Nirenberg inequality that there exist two positive constants $c_{1}$ and $c_{2}$, depending only on $n$, such that, for all $\lambda>0$, $$|\{x\in B:|f(x)-f_{B}|>\lambda\}|\leq c_{1}e^{-c_{2}\lambda/\|f\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}}|B|;$$ see [8].
Therefore, by (2.1), we see that $$\frac{1}{w(B)}\int_{B}|f(x)-f_{B}|^{q}w(x)\,dx=q\int_{0}^{\infty}\lambda^{q-1}\frac{w(\{x\in B:|f(x)-f_{B}|>\lambda\})}{w(B)}\,d\lambda\lesssim\int_{0}^{\infty}\lambda^{q-1}\left[\frac{|\{x\in B:|f(x)-f_{B}|>\lambda\}|}{|B|}\right]^{(r-1)/r}\,d\lambda\lesssim\int_{0}^{\infty}\lambda^{q-1}e^{-c_{2}\frac{r-1}{r}\lambda/\|f\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}}\,d\lambda\lesssim\|f\|^{q}_{{\rm BMO}({{{\mathbb{R}}}^{n}})},$$ which completes the proof of Lemma 2.3. ∎ Lemma 2.4. Let $\delta\in(0,1]$, $q\in(1,1+\delta/n)$ and $w\in A_{q}({{{\mathbb{R}}}^{n}})$. Assume that $T$ is a $\delta$-Calderón-Zygmund operator. Then there exists a positive constant $C$ such that, for any $b\in{\rm BMO}({\mathbb{R}}^{n})$ and any $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),q)$-atom $a$ related to the ball $B\subset{\mathbb{R}}^{n}$, $$\|(b-b_{B})Ta\|_{L^{1}_{w}({{{\mathbb{R}}}^{n}})}\leq C\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}.$$ Proof.
It suffices to show that $${\mathrm{I}}_{1}:=\int_{2B}|[b(x)-b_{B}]Ta(x)|w(x)\,dx\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}$$ and $${\mathrm{I}}_{2}:=\int_{(2B)^{\complement}}|[b(x)-b_{B}]Ta(x)|w(x)\,dx\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}.$$ Indeed, by the boundedness of $T$ from $H^{1}_{w}({{{\mathbb{R}}}^{n}})$ to $L^{1}_{w}({{{\mathbb{R}}}^{n}})$ and from $L^{q}_{w}({{{\mathbb{R}}}^{n}})$ to itself with $q\in(1,1+\delta/n)$ (see [6, Theorem 2.8]), the Hölder inequality and Lemma 2.3, we conclude that $${\mathrm{I}}_{1}\leq|b_{2B}-b_{B}|\|Ta\|_{L^{1}_{w}({{{\mathbb{R}}}^{n}})}+\int_{2B}|[b(x)-b_{2B}]Ta(x)|w(x)\,dx\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}+\left[\int_{2B}|b(x)-b_{2B}|^{q^{\prime}}w(x)\,dx\right]^{1/q^{\prime}}\left[\int_{2B}|Ta(x)|^{q}w(x)\,dx\right]^{1/q}\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}+[w(2B)]^{1/q^{\prime}}\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}\|a\|_{L^{q}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})};$$ here and hereafter, $1/q^{\prime}+1/q=1$.
On the other hand, by the Hölder inequality, (1.3), Lemma 2.3 and (2.1), we know that $${\mathrm{I}}_{2}=\int_{(2B)^{\complement}}|[b(x)-b_{B}]Ta(x)|w(x)\,dx=\int_{(2B)^{\complement}}|b(x)-b_{B}|\left|\int_{B}a(y)[K(x,y)-K(x,x_{0})]\,dy\right|w(x)\,dx\leq\int_{B}|a(y)|\int_{(2B)^{\complement}}|b(x)-b_{B}|\left|K(x,y)-K(x,x_{0})\right|w(x)\,dx\,dy=\int_{B}|a(y)|\sum_{k=1}^{\infty}\int_{2^{k+1}B\setminus 2^{k}B}|b(x)-b_{B}|\left|K(x,y)-K(x,x_{0})\right|w(x)\,dx\,dy\lesssim\int_{B}|a(y)|\,dy\sum_{k=1}^{\infty}\int_{2^{k+1}B\setminus 2^{k}B}\frac{r^{\delta}}{(2^{k}r)^{n+\delta}}|b(x)-b_{B}|w(x)\,dx\lesssim\left[\int_{B}|a(y)|^{q}w(y)\,dy\right]^{1/q}\left[\int_{B}[w(y)]^{-q^{\prime}/q}\,dy\right]^{1/q^{\prime}}\times\sum_{k=1}^{\infty}2^{-k\delta}\frac{1}{|2^{k+1}B|}\int_{2^{k+1}B}\left[|b(x)-b_{2^{k+1}B}|+|b_{2^{k+1}B}-b_{B}|\right]w(x)\,dx\lesssim\frac{|B|}{w(B)}\sum_{k=1}^{\infty}2^{-k\delta}k\frac{w(2^{k+1}B)}{|2^{k+1}B|}\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}\sum_{k=1}^{\infty}k2^{-k[\delta+n-nq]}\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})},$$ since $\delta+n-nq>0$ and $|b_{2^{k+1}B}-b_{B}|\lesssim k\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}$ for all $k\geq 1$. Combining the estimates of ${\mathrm{I}}_{1}$ and ${\mathrm{I}}_{2}$, we then complete the proof of Lemma 2.4. ∎ The following lemma is due to Bownik et al. [3, Theorem 7.2]. Lemma 2.5. Let $w\in A_{1+\delta/n}({\mathbb{R}}^{n})$ and ${\mathcal{X}}$ be a Banach space.
Assume that $T$ is a linear operator defined on the space of finite linear combinations of continuous $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty)$-atoms with the property that $$\sup\left\{\|T(a)\|_{{\mathcal{X}}}:\mbox{$a$ is a continuous $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty)$-atom}\right\}<\infty.$$ Then $T$ admits a unique continuous extension to a bounded linear operator from $H^{1}_{w}({\mathbb{R}}^{n})$ into ${\mathcal{X}}$. Let $w\in A_{1+\delta/n}({\mathbb{R}}^{n})$ and $\varepsilon\in(0,\infty)$. Recall that $m$ is called an $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty,\varepsilon)$-molecule related to the ball $B\subset{\mathbb{R}}^{n}$ if (i) $\int_{{\mathbb{R}}^{n}}m(x)\,dx=0$, (ii) $\|m\|_{L^{\infty}(S_{j})}\leq 2^{-j\varepsilon}[w(S_{j})]^{-1}$ for $j\in{\mathbb{Z}}_{+}$, where $S_{0}:=B$ and $S_{j}:=2^{j+1}B\setminus 2^{j}B$ for $j\in{\mathbb{N}}$. Lemma 2.6. Let $w\in A_{1+\delta/n}({\mathbb{R}}^{n})$ and $\varepsilon>0$. Then there exists a positive constant $C$ such that any $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty,\varepsilon)$-molecule $m$ related to the ball $B$ can be decomposed as $$m=\sum_{j=0}^{\infty}\lambda_{j}a_{j},$$ where $\{a_{j}\}_{j=0}^{\infty}$ are $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty)$-atoms related to the balls $\{2^{j+1}B\}_{j\in{\mathbb{Z}}_{+}}$ and $|\lambda_{j}|\leq C2^{-j\varepsilon}$ for all $j\in{\mathbb{Z}}_{+}$. Proof. The proof of this lemma is standard (see, for example, [12, Theorem 4.7]), the details being omitted. ∎ Now we are ready to give the proofs of Theorems 1.3 and 1.5. Proof of Theorem 1.3. First, we prove that (ii) implies (i). Since $w\in A_{1+\delta/n}({\mathbb{R}}^{n})$, it follows that there exists $q\in(1,1+\delta/n)$ such that $w\in A_{q}({\mathbb{R}}^{n})$.
By Lemma 2.5, it suffices to prove that, for any continuous $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty)$-atom $a$ related to the ball $B=B(x_{0},r)$ with $x_{0}\in{{{\mathbb{R}}}^{n}}$ and $r\in(0,\infty)$, (2.4) $$\|[b,T](a)\|_{L^{1}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}.$$ By Lemma 2.4 and the boundedness of $T$ from $H^{1}_{w}({{{\mathbb{R}}}^{n}})$ to $L^{1}_{w}({{{\mathbb{R}}}^{n}})$, (2.4) is reduced to showing that (2.5) $$\|(b-b_{B})a\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}.$$ To do this, for every $x\in(2B)^{\complement}$ and $y\in B$, we see that $|x-y|\sim|x-x_{0}|$ and $${\mathcal{M}_{\phi}}([b-b_{B}]a)(x)\lesssim\sup_{t\in(0,\infty)}\frac{1}{t^{n}}\int_{B}|b(y)-b_{B}||a(y)|\left|\phi\left(\frac{x-y}{t}\right)\right|\,dy\lesssim\frac{1}{|x-x_{0}|^{n}}\int_{B}|b(y)-b_{B}||a(y)|\,dy.$$ Hence $$\int_{(2B)^{\complement}}{\mathcal{M}_{\phi}}([b-b_{B}]a)(x)w(x)\,dx\lesssim\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}.$$ In addition, by the boundedness of ${\mathcal{M}_{\phi}}$ on $L^{q}_{w}({{{\mathbb{R}}}^{n}})$ with $q\in(1,1+\delta/n)$, Lemma 2.3 and Proposition 2.1, we know that $$\int_{2B}{\mathcal{M}_{\phi}}([b-b_{B}]a)(x)w(x)\,dx\lesssim[w(2B)]^{1/q^{\prime}}\|(b-b_{B})a\|_{L^{q}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\left[\frac{1}{w(B)}\int_{B}|b(x)-b_{B}|^{q}w(x)\,dx\right]^{1/q}\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})},$$ which concludes the proof of (ii) implying (i). We now prove that (i) implies (ii). Let $\{R_{j}\}_{j=1}^{n}$ be the classical Riesz transforms.
Then, by Lemma 2.4, we find that, for any $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty)$-atom $a$ related to the ball $B$ and $j\in\{1,\ldots,n\}$, $$\|R_{j}([b-b_{B}]a)\|_{L^{1}_{w}({{{\mathbb{R}}}^{n}})}\leq\|[b,R_{j}](a)\|_{L^{1}_{w}({{{\mathbb{R}}}^{n}})}+\|(b-b_{B})R_{j}a\|_{L^{1}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\|[b,R_{j}]\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})\to L^{1}_{w}({{{\mathbb{R}}}^{n}})}+\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})};$$ here and hereafter, $$\|[b,R_{j}]\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})\to L^{1}_{w}({{{\mathbb{R}}}^{n}})}:=\sup_{\|f\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}\leq 1}\|[b,R_{j}]f\|_{L^{1}_{w}({{{\mathbb{R}}}^{n}})}.$$ By the Riesz transform characterization of $H_{w}^{1}({{{\mathbb{R}}}^{n}})$ (see [13]), we see that $(b-b_{B})a\in H^{1}_{w}({\mathbb{R}}^{n})$ and, moreover, (2.6) $$\|(b-b_{B})a\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}+\sum_{j=1}^{n}\|[b,R_{j}]\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})\to L^{1}_{w}({{{\mathbb{R}}}^{n}})}.$$ For any ball $B:=B(x_{0},r)\subset{\mathbb{R}}^{n}$ with $x_{0}\in{{{\mathbb{R}}}^{n}}$ and $r\in(0,\infty)$, let $$a:=\frac{1}{2w(B)}(f-f_{B})\chi_{B},$$ where $f:={\mbox{\small\rm sign}}\,(b-b_{B})$. It is easy to see that $a$ is an $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty)$-atom related to the ball $B$.
Moreover, for every $x\notin B$, Lemma 2.2 gives us that $$\frac{1}{|x-x_{0}|^{n}}\frac{1}{2w(B)}\int_{B}|b(y)-b_{B}|\,dy=\frac{1}{|x-x_{0}|^{n}}\left|\int_{B}[b(y)-b_{B}]a(y)\,dy\right|\lesssim{\mathcal{M}_{\phi}}([b-b_{B}]a)(x).$$ This, together with (2.6), allows us to conclude that $b\in\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})$ and, moreover, $$\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}+\sum_{j=1}^{n}\|[b,R_{j}]\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})\to L^{1}_{w}({{{\mathbb{R}}}^{n}})},$$ which completes the proof of Theorem 1.3. ∎ Proof of Theorem 1.5. By Lemma 2.5, it suffices to prove that, for any continuous $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty)$-atom $a$ related to the ball $B$, (2.7) $$\|[b,T](a)\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}.$$ By (2.5) and the boundedness of $T$ on $H^{1}_{w}({{{\mathbb{R}}}^{n}})$ (see [9, Theorem 1.2]), (2.7) is reduced to proving that $$\|(b-b_{B})Ta\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}\lesssim\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}.$$ Since $w\in A_{1+\delta/n}({\mathbb{R}}^{n})$, it follows that there exists $q\in(1,1+\delta/n)$ such that $w\in A_{q}({\mathbb{R}}^{n})$. By this and the fact that $T$ is a $\delta$-Calderón-Zygmund operator, together with a standard argument, we find that $Ta$ is an $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty,\varepsilon)$-molecule related to the ball $B$ with $\varepsilon:=n+\delta-nq>0$. Therefore, by Lemma 2.6, we have $$Ta=\sum_{j=0}^{\infty}\lambda_{j}a_{j},$$ where $\{a_{j}\}_{j=0}^{\infty}$ are $(H_{w}^{1}({{{\mathbb{R}}}^{n}}),\infty)$-atoms related to the balls $\{2^{j+1}B\}_{j=0}^{\infty}$ and $|\lambda_{j}|\lesssim 2^{-j\varepsilon}$ for all $j\in{\mathbb{Z}}_{+}$.
Thus, by (2.5) and Proposition 2.1, we obtain $$\|(b-b_{B})Ta\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}\leq\sum_{j=0}^{\infty}|\lambda_{j}|\left[\|(b-b_{2^{j+1}B})a_{j}\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}+\|(b_{2^{j+1}B}-b_{B})a_{j}\|_{H^{1}_{w}({{{\mathbb{R}}}^{n}})}\right]\lesssim\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})}\sum_{j=0}^{\infty}2^{-j\varepsilon}+\|b\|_{{\rm BMO}({{{\mathbb{R}}}^{n}})}\sum_{j=0}^{\infty}(j+1)2^{-j\varepsilon}\lesssim\|b\|_{\mathcal{BMO}_{w}({{{\mathbb{R}}}^{n}})},$$ which completes the proof of Theorem 1.5. ∎ Acknowledgements. The paper was completed while the second author was visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM); he would like to thank the VIASM for its financial support and hospitality. References [1] J. Álvarez, R. J. Bagby, D. S. Kurtz and C. Pérez, Weighted estimates for commutators of linear operators, Studia Math. 104 (1993), 195-209. [2] S. Bloom, Pointwise multipliers of weighted BMO spaces, Proc. Amer. Math. Soc. 105 (1989), 950-960. [3] M. Bownik, B. Li, D. Yang and Y. Zhou, Weighted anisotropic Hardy spaces and their applications in boundedness of sublinear operators, Indiana Univ. Math. J. 57 (2008), 3065-3100. [4] R. R. Coifman, R. Rochberg and G. Weiss, Factorization theorems for Hardy spaces in several variables, Ann. of Math. (2) 103 (1976), 611-635. [5] J. García-Cuerva, Weighted $H^{p}$ spaces, Dissertationes Math. (Rozprawy Mat.) 162 (1979), 1-63. [6] J. García-Cuerva and K. Kazarian, Calderón-Zygmund operators and unconditional bases of weighted Hardy spaces, Studia Math. 109 (1994), 255-276. [7] E. Harboure, C. Segovia and J. L. Torrea, Boundedness of commutators of fractional and singular integrals for the extreme values of $p$, Illinois J. Math. 41 (1997), 676-700. [8] F. John and L. Nirenberg, On functions of bounded mean oscillation, Comm. Pure Appl.
Math. 14 (1961), 415-426. [9] L. D. Ky, A note on $H^{p}_{w}$-boundedness of Riesz transforms and $\theta$-Calderón-Zygmund operators through molecular characterization, Anal. Theory Appl. 27 (2011), 251-264. [10] L. D. Ky, Bilinear decompositions and commutators of singular integral operators, Trans. Amer. Math. Soc. 365 (2013), 2931-2958. [11] C. Pérez, Endpoint estimates for commutators of singular integral operators, J. Funct. Anal. 128 (1995), 163-185. [12] L. Song and L. Yan, Riesz transforms associated to Schrödinger operators on weighted Hardy spaces, J. Funct. Anal. 259 (2010), 1466-1490. [13] R. L. Wheeden, A boundary value characterization of weighted $H^{1}$, Enseignement Math. (2) 22 (1976), 121-134. [14] K. Yabuta, Pointwise multipliers of weighted BMO spaces, Proc. Amer. Math. Soc. 117 (1993), 737-744. Yiyu Liang Department of Mathematics, Beijing Jiaotong University, Beijing 100044, People’s Republic of China E-mail: [email protected] Luong Dang Ky Department of Mathematics, University of Quy Nhon, 170 An Duong Vuong, Quy Nhon, Binh Dinh, Vietnam E-mail: [email protected] Dachun Yang (Corresponding Author) School of Mathematical Sciences, Beijing Normal University, Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, People’s Republic of China E-mail: [email protected]
Reciprocity and unitarity in scattering from a non-Hermitian complex PT-symmetric potential Zafar Ahmed [email protected] Nuclear Physics Division, Bhabha Atomic Research Centre, Mumbai 400 085, India (December 4, 2020) Abstract In quantum scattering, Hermiticity is necessary for both reciprocity and unitarity. Reciprocity means that both the reflectivity ($R$) and the transmittivity ($T$) are insensitive to the direction of incidence of a wave (particle) at a scatterer from the left or the right. Unitarity means that $R+T=1$. In scattering from non-Hermitian PT-symmetric structures, the (left/right) handedness (non-reciprocity) of the reflectivity is known to be essential, and unitarity has remained elusive so far. Here we present a surprising occurrence of both reciprocity and unitarity in some parametric regimes of scattering from a complex PT-symmetric potential. In special cases, we show that this potential can even become invisible $(R=0,T=1)$, this time remarkably from both the left and the right side. We also find that this potential in a parametric regime enjoys a pseudo-unitarity of the type $T+\sqrt{R_{left}R_{right}}=1$. PACS: 03.65, 11.30.Er, 42.25.Bs In non-relativistic quantum mechanics, Hermiticity is the necessary condition for a Hamiltonian to have a real discrete spectrum and both unitarity and reciprocity in scattering. Reciprocity means that both the reflectivity $(R)$ and the transmittivity $(T)$ are insensitive to the direction of incidence of a wave (particle) at a scatterer from the left or the right. Unitarity in scattering means that $R+T=1$. In various branches of physics, complex optical potentials have long been used to account for the absorption of the incident flux into unknown channels. Consequently, non-Hermiticity is synonymous with absorption or emission of flux. In this kind of scattering, unitarity is broken: the probabilities of reflection ($R$) and transmission ($T$) do not add up to 1, and one instead has $R+T+A=1$, where $A$ is the probability of absorption.
Bender and Boettcher [1,2] conjectured that the eigenspectrum of a non-Hermitian complex potential in a parametric regime was discrete and real. This potential was PT-symmetric [invariant under Parity $(x\rightarrow-x)$ and Time-reversal $(i\rightarrow-i)$]. This potential was not amenable to exact analytic solutions, so special methods were required to prove the reality of its spectrum [3]. Their conjecture initiated a debate, ‘Must a Hamiltonian be Hermitian?’ [2], and it has inspired a large body of investigations leading to the extension of quantum mechanics into the complex domain (see, e.g., [1-21,23-31]). About thirteen years later, the present work addresses the same question, but this time for reciprocity and unitarity in scattering. Surprisingly, this time again the answer is no. For scattering from a complex non-Hermitian potential it has been proved [4] that if the potential is spatially asymmetric, the reflectivity ($R$) shows handedness, $R_{left}\neq R_{right}$, whereas the transmitivity ($T$) remains invariant to the direction of incidence of the particle from the left or right. Complex PT-symmetric potentials, being spatially anti-symmetric, are automatically entitled to this handedness [5-18]. This contrast with the reciprocity of the Hermitian case may have discouraged searches for unitarity in scattering from complex PT-symmetric potentials. Indeed, various works [4-18] normally display non-unitarity in scattering from complex PT-symmetric potentials. There has been very impressive progress in the investigation of scattering from complex PT-symmetric potentials. In some PT-symmetric structures the absence of unitarity has been marked by new pseudo-unitarity conditions such as $T-1=\pm\sqrt{R_{left}R_{right}}$ and ${R_{left}+R_{right}\over 2}-T=1$ (see Eqs. (9) and (17), respectively, in Ref. [9]). 
The concepts of spectral singularity [11-14] and invisibility [15,18] have been well developed both theoretically and experimentally. For a spectral singularity one looks for positive energies where there are very large (infinite) [11] peaks in both $R(k)$ and $T(k)$. The instance where $R_{right}(k)$ or $R_{left}(k)=0$ and $T(k)=1$ is called unidirectional invisibility [15-18] in both complex non-Hermitian and complex PT-symmetric potentials. Notice that this invisibility is direction dependent, either from the left or from the right; this is a consequence of the handedness of the reflectivity in these potentials. Now that the ghost of non-Hermiticity has been busted in the PT-symmetric domain, new features such as the spectral singularity [11-14] and invisibility [15-17] of such potentials are being investigated. Novel optical devices and materials have been engineered to realize wave propagation through a complex PT-symmetric medium [9,15-17,19-21]. In this scenario of scattering from a complex PT-symmetric potential, we present surprising parametric regimes of the well-known complex PT-symmetric Scarf II potential wherein we observe reciprocity, unitarity, and invisibility (from both left and right). We also find that this potential satisfies one of the newly proposed pseudo-unitarity conditions [9]. Scattering from the Scarf II potential is well studied, and the reflection and transmission amplitudes have been worked out [5,22]. The non-Hermitian complex PT-symmetric version of the Scarf II potential has been very useful in the investigation of complex PT-symmetric potentials in various ways [5,7,13,14,23-30]. We write the complex Scarf II potential as $$V(x)=-(B^{2}+A^{2}+A)\mbox{sech}^{2}x+iB(2A+1)\tanh x\,\mbox{sech}\,x$$ (1) which is known to have a real discrete spectrum [5,22-28]. 
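A quick numerical sanity check (a Python sketch with illustrative parameter values) confirms the PT symmetry $V(x)=V^{*}(-x)$ of the potential (1), as well as the property $V(x,B)=V(-x,-B)$ that relates left and right incidence:

```python
import math

def scarf2(x, A, B):
    """Complex PT-symmetric Scarf II potential of Eq. (1)."""
    sech = 1.0 / math.cosh(x)
    return (-(B * B + A * A + A) * sech ** 2
            + 1j * B * (2 * A + 1) * math.tanh(x) * sech)

# PT symmetry: the real part is even in x, the imaginary part is odd,
# so V(x) = V*(-x); also V(x, B) = V(-x, -B), which underlies Eq. (4).
A, B, x = 2.0, 0.7, 1.3            # illustrative values
assert abs(scarf2(x, A, B) - scarf2(-x, A, B).conjugate()) < 1e-12
assert abs(scarf2(x, A, B) - scarf2(-x, A, -B)) < 1e-12
```

Both identities hold for any real $A$, $B$, since $\mbox{sech}\,x$ is even and $\tanh x$ is odd.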
This potential in another parametric form displays a phase transition [24] of real discrete eigenvalues to complex conjugate pairs about a critical value of a parameter, when the PT-symmetry breaks down [1,2]. Let $2\mu=1=\hbar^{2}$ and $k=\sqrt{E}$, where $E$ is the energy. Following [5,22], we can write the transmission amplitude for (1) as [30] $$t_{A,B}(k)=\frac{\Gamma[-A-ik]\Gamma[1+A-ik]\Gamma[\frac{1}{2}+B-ik]\Gamma[\frac{1}{2}-B-ik]}{\Gamma[-ik]\Gamma[1-ik]\Gamma^{2}[\frac{1}{2}-ik]},$$ (2) $$r_{A,B}(k)=t_{A,B}(k)\,i\left[\frac{\cos\pi A\sin\pi B}{\cosh\pi k}+\frac{\sin\pi A\cos\pi B}{\sinh\pi k}\right].$$ (3) The transmitivity is $T(k)=|t(k)|^{2}$ and the reflectivity is $R(k)=|r(k)|^{2}$. We have re-derived (2,3) to find [14] that for (1) $$\displaystyle t_{left}(k)=t_{A,B}(k),\quad r_{left}(k)=r_{A,B}(k)$$ (4) $$\displaystyle\mbox{and}\quad t_{right}(k)=t_{A,-B}(k),\quad r_{right}(k)=r_{A,-B}(k).$$ This point can also be verified easily by noticing that the potential (1) satisfies $V(x,B)=V(-x,-B)$. Making multiple use of the property of Gamma functions, namely $\Gamma(z)\Gamma(1-z)=\pi\,\mbox{cosec}\,\pi z$, we express the transmitivity $T(k)$ as $$T(k)={\sinh^{2}\pi k\cosh^{2}\pi k\over(\sinh^{2}\pi k+\sin^{2}\pi A)(\sinh^{2}\pi k+\cos^{2}\pi B)},\quad A,B\in R.$$ (5) It follows that $T(k)$ can be either normal $(<1)$ or anomalous $(>1)$. For the cases $A=B$, or when $A=n+1/2$ or $B=n$, $n\in I$, the transmitivity is normal. For the cases $A=n$ and $A\neq B$ the transmitivity is anomalous at small energies. Moreover, when the transmitivity (5) is normal, it can readily be checked that the present results for $T(E)$, $R_{left}(E)$ and $R_{right}(E)$ (2-5) satisfy a pseudo-unitarity of the type $$T(E)+\sqrt{R_{left}(E)R_{right}(E)}=1.$$ (6) See Ref. [9] for this condition (6) and other proposals of pseudo-unitarity. Ordinarily, for real values of the parameters $A,B$, the Eqs. 
(2-5) yield the rule [4,8,9,11,15,16] of left/right handedness (non-reciprocity) of $R(k)$, and sometimes the non-unitarity manifests as the pseudo-unitarity condition (6). For other (in)variances see [14]. When $A=-(n+1)-i\alpha$ and $B=i\alpha-(n+1/2)$ with $n\in I^{+}+\{0\},\alpha>0$ in Eqs. (2,3), the recently discussed phenomenon of spectral singularity [11] is observed, wherein at $E=\alpha^{2}$ [14] both $R$ and $T$ become infinite. For other special values of the parameters $A$ and $B$, the following extraordinary features arise from Eqs. (2-5): {1} Reciprocity and unitarity. Case 1: When $A=n+1/2$, $n\in I$, and $B$ is real, from (2) and (3) we get $$R(k)={\cos^{2}\pi B\over\sinh^{2}\pi k+\cos^{2}\pi B},\quad T(k)={\sinh^{2}\pi k\over\sinh^{2}\pi k+\cos^{2}\pi B}.$$ (7) Case 2: When $B\in I$ and $A$ is real, we get $$R(k)={\sin^{2}\pi A\over\sinh^{2}\pi k+\sin^{2}\pi A},\quad T(k)={\sinh^{2}\pi k\over\sinh^{2}\pi k+\sin^{2}\pi A}.$$ (8) In both cases, from Eq. (4) the claimed reciprocity of the reflectivity follows: $R_{left}(k)=R_{right}(k)$. The reflectivity is also symmetric under time reversal: $R(-k)=R(k)$. The claimed unitarity can be checked readily using Eqs. (7,8). $\diamond$ {2} Invisibility with reciprocity: In the above two cases of unitarity, when $A=(n+1/2),B=(m+1/2)$ or $A=n,B=m$ ($n,m\in I$), one can check that two cases of invisibility occur wherein $R(k)=0,T(k)=1$ at any energy. This invisibility of the complex PT-symmetric potential (1) is not unidirectional [15-18]; this time it is from both sides, left and right. $\diamond$ Earlier, such an invisibility was termed bi-directional invisibility and first found in $V(x)=-{1\over(x+ia)^{2}}$ [31] using the methods of supersymmetric quantum mechanics. More recently, bi-directional invisibility has been found in the PT-symmetric Ginocchio potential [32]. 
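The special cases above lend themselves to direct numerical verification. The following Python sketch (illustrative parameter values; the reflectivities are assembled from the bracket in Eq. (3) together with the closed form (5), so no complex Gamma functions are needed) checks reciprocity and unitarity in Case 1, the two-sided invisibility, and the pseudo-unitarity (6) for a normal choice $A=B$:

```python
import math

def transmitivity(k, A, B):
    """T(k) from the closed form (5), valid for real A, B."""
    s = math.sinh(math.pi * k) ** 2
    return (s * math.cosh(math.pi * k) ** 2
            / ((s + math.sin(math.pi * A) ** 2)
               * (s + math.cos(math.pi * B) ** 2)))

def reflectivity(k, A, B):
    """R(k) = |r(k)|^2 via Eqs. (3) and (5); B -> -B gives R_right, Eq. (4)."""
    bracket = (math.cos(math.pi * A) * math.sin(math.pi * B) / math.cosh(math.pi * k)
               + math.sin(math.pi * A) * math.cos(math.pi * B) / math.sinh(math.pi * k))
    return transmitivity(k, A, B) * bracket ** 2

k, B = 0.7, 0.3
# Case 1 (A = n + 1/2): reciprocity and unitarity, Eq. (7)
A = 1.5
assert abs(reflectivity(k, A, B) - reflectivity(k, A, -B)) < 1e-12    # reciprocity
assert abs(reflectivity(k, A, B) + transmitivity(k, A, B) - 1) < 1e-12  # unitarity

# Invisibility from both sides: A = n + 1/2 and B = m + 1/2 give R = 0, T = 1
assert abs(transmitivity(k, 1.5, 0.5) - 1) < 1e-12
assert reflectivity(k, 1.5, 0.5) < 1e-12

# Generic normal parameters (A = B): pseudo-unitarity, Eq. (6)
A = B = 0.2
pseudo = (transmitivity(k, A, B)
          + math.sqrt(reflectivity(k, A, B) * reflectivity(k, A, -B)))
assert abs(pseudo - 1) < 1e-10
```

The pseudo-unitarity check works because $\sqrt{R_{left}R_{right}}=T\,|{\sin^{2}\pi A\cos^{2}\pi B\over\sinh^{2}\pi k}-{\cos^{2}\pi A\sin^{2}\pi B\over\cosh^{2}\pi k}|$, which equals $1-T$ algebraically whenever $T$ is normal.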
Interestingly, both these instances are for potentials of the type $V(x)=V_{0}f(x+ia)$, where a real Hermitian potential has been complexified by an imaginary shift of the $x$ co-ordinate. In these types of potentials, as argued in [5], $R_{left}$ and $R_{right}$ are of the type $e^{\pm 2ka}R^{\prime}$, where $R^{\prime}$ is the reflectivity of the Hermitian potential $V_{0}f(x)$. Consequently, the pseudo-unitarity (6) is satisfied, and the left and right reflectivity zero(s) occur at the same discrete energ(ies). The reflectivity of non-Hermitian complex potentials which are spatially symmetric shows [4] reciprocity along with non-unitarity, $R+T<(>)1$, when the imaginary part of the potential is negative (positive) definite for $x\in(-\infty,\infty)$. For example, for $V(x)=(V_{1}\mp iV_{2})~{}\mbox{sech}^{2}x$, $V_{1},V_{2}\in R$, $V_{2}>0~{}(V_{2}<0)$, the reflectivity will show reciprocity along with non-unitarity, $R+T<(>)1$, respectively. Hence, we know that reciprocity does not imply unitarity. We speculate that unitarity may be sufficient for reciprocity of the reflectivity in scattering from a complex non-Hermitian potential. Notwithstanding the rapid research on PT-symmetric structures these days and the familiarity of the complex PT-symmetric Scarf II potential, to the best of our knowledge the above two paradoxical or exceptional features $\{1,2\}$ are new and have gone unnoticed so far. Observations and proofs of the non-reciprocity of the reflectivity in scattering from complex PT-symmetric potentials abound; however, one ought to look for reciprocity under special parametric conditions hereafter. The question of whether there are various parametric regimes in other PT-symmetric structures yielding reciprocity (of reflectivity), unitarity, spectral singularity, invisibility, and the pseudo-unitarity of the type (6) is open for investigation. In this regard, studying exactly solvable complex potentials becomes even more important. 
The present exposition provides a paradigm shift in thinking in two ways. Firstly, in scattering, Hermiticity and time-reversal symmetry of an interaction are not necessary for unitarity and reciprocity (of reflectivity), respectively. Secondly, the complex PT-symmetric structures are very versatile, having multiple parametric regimes displaying various properties. References 1. C. M. Bender and S. Boettcher, Phys. Rev. Lett. 80 (1998) 5243. 2. C. M. Bender, D. C. Brody and H. F. Jones, Am. J. Phys. 71 (2003) 1095. 3. P. Dorey, C. Dunning and R. Tateo, J. Phys. A: Math. Gen. 34 (2001) 5679. 4. Z. Ahmed, Phys. Rev. A 64 (2001) 042716. 5. G. Levai, F. Cannata and A. Ventura, J. Phys. A: Math. Gen. 34 (2001) 839. 6. R. N. Deb, A. Khare and B. D. Roy, Phys. Lett. A 307 (2003) 215. 7. Z. Ahmed, Phys. Lett. A 324 (2004) 152. 8. F. Cannata, J.-P. Dedonder and A. Ventura, Ann. Phys. (N.Y.) 322 (2007) 397. 9. Li Ge, Y. D. Chong and A. D. Stone, arXiv:1112.5167v1 [physics.optics], 21 Dec 2011 (see also references therein). 10. H. F. Jones, J. Phys. A: Math. Theor. 45 (2012) 135306. 11. A. Mostafazadeh, Phys. Rev. Lett. 102 (2009) 220402. 12. S. Longhi, Phys. Rev. A 81 (2010) 022102. 13. Z. Ahmed, J. Phys. A: Math. Theor. 42 (2009) 472005. 14. Z. Ahmed, J. Phys. A: Math. Theor. 45 (2012) 032004. 15. M. Kulishov, J. M. Laniel, N. Belanger, J. Azana and D. V. Plant, Opt. Express 13 (2005) 3068. 16. Z. Lin, H. Ramezani, T. Eichelkraut, T. Kottos, H. Cao and D. N. Christodoulides, Phys. Rev. Lett. 106 (2011) 213901. 17. S. Longhi, J. Phys. A: Math. Theor. 44 (2011) 485302. 18. A. Mostafazadeh, arXiv:1206.0116v1 [math-ph], 1 June 2012. 19. Z. H. Musslimani, K. G. Makris, R. El-Ganainy and D. N. Christodoulides, Phys. Rev. Lett. 100 (2008) 030402 (see also references therein). 20. A. Guo, G. J. Salamo, D. Duchesne, R. Morandotti, M. Volatier-Ravat, V. Aimez, G. A. Siviloglou and D. N. Christodoulides, Phys. Rev. Lett. 103 (2009) 093902. 21. C. E. Rüter, K. G. Makris, R. El-Ganainy, D. N. 
Christodoulides, M. Segev and D. Kip, Nature Physics 6 (2010) 192. 22. A. Khare, and U.P. Sukhatme, J. Phys. A: Math. Gen. 21 (1988) L501. 23. B. Bagchi, C. Quesne, Phys. Lett A: 273 (2000) 285. 24. Z. Ahmed, Phys. Lett. A 282 (2001)343-348; 287 295. 25. G. Levai and M. Znojil, J. Phys. A: Math. Gen. 35 (2002) 8793. 26. G. Levai, F. Cannata and A. Ventura, J. Phys. A: Math. Gen. 35 (2002) 5041. 27. G. Levai, F. Cannata and A. Ventura, Phys. Lett. A 300 (2002) 271. 28. F. Correa and M.S. Plyushchay, Ann. Phys. (N.Y) 327 (2012) 1761. 29. A. Sinha, Euro. Phys. Lett. 98 (2012) 60005. 30. These Eqs. are the same as given in (Eq. (6) in Ref. [14]), excepting an error for $r_{A,B}$ in [14]. There [14] $\sin\pi B$ and $\cos\pi B$ should have appeared correctly as $\sinh\pi B$ and $\cosh\pi B$, respectively. 31. A. A. Andrianov, M. V. Ioffe, F. Cannata, J.-P. Dedoner, Int. J. Mod. Phys. A 14 (2005) 3068. 32. A. Ghatak, A.N. Joseph, B.P. Mandal, Z. Ahmed, J. Phys. A: Math. Theor. 45 (2012) 465305.
Structural and metal-insulator transitions in ionic liquid-gated Ca${}_{3}$Ru${}_{2}$O${}_{7}$ surface Conor P. Puls    Xinxin Cai    Yuhe Zhang Department of Physics and Materials Research Institute, Pennsylvania State University, University Park, PA 16802, USA    Jin Peng    Zhiqiang Mao Department of Physics, Tulane University, New Orleans, LA 70118, USA    Ying Liu [email protected] Department of Physics and Materials Research Institute, Pennsylvania State University, University Park, PA 16802, USA Key Laboratory of Artificial Structures and Quantum Control (Ministry of Education), Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China (December 6, 2020) Abstract We report the fabrication and measurement of ionic liquid-gated Hall bar devices prepared on thin Ca${}_{3}$Ru${}_{2}$O${}_{7}$ flakes exfoliated from bulk single crystals grown by a floating zone method. Two types of devices were prepared, with electrical transport dominated by the $c$-axis contribution in Type A devices or by the in-plane contribution in Type B devices. Bulk physical phenomena, including a magnetic transition near 56 K, a structural and metal-insulator transition at a slightly lower temperature, as well as the emergence of a highly unusual metallic state as the temperature is further lowered, were found in both types of devices. However, Shubnikov-de Haas oscillations were found in Type A but not in Type B devices, most likely due to enhanced disorder on the flake surface. Finally, ionic liquid gating of a Type B device revealed a shift in the critical temperature of the structural and metal-insulator transitions, suggesting that such transitions can be tuned by the electric field effect. The discovery of odd-parity, spin-triplet superconductivity in Sr${}_{2}$RuO${}_{4}$Maeno et al. (1994) generated much interest in related compounds in the Ruddlesden-Popper (R-P) series of (Ca,Sr)${}_{n+1}$Ru${}_{n}$O${}_{3n+1}$. 
Interestingly, while the strontium ruthenates in the R-P series Sr${}_{n+1}$Ru${}_{n}$O${}_{3n+1}$ (Sr${}_{2}$RuO${}_{4}$ is the n = 1 member of the series) are all metals, the calcium ruthenates are more strongly correlated than their strontium counterparts, featuring metallic as well as insulating behavior accompanied by magnetic, structural, and metal-insulator phase transitions. In particular, the bilayer calcium ruthenate, Ca${}_{3}$Ru${}_{2}$O${}_{7}$, features a band-dependent Mott metal-insulator transition at 56 K, followed by a structural as well as metal-insulator transition at 48 K as the temperature is lowered.Cao et al. (1997); Yoshida et al. (2005); Bao et al. (2008) Furthermore, a bulk spin valve behavior featuring colossal magnetoresistance was discovered,McCall et al. (2003); Singh and Auluck (2006); Ohmichi et al. (2004) and attributed to the existence of strongly spin-dependent resistive states in Ca${}_{3}$Ru${}_{2}$O${}_{7}$ that can be tuned by the application of an in-plane field, leading to a spin reorientation and a large resistance change.Singh and Auluck (2006) Two observations on Ca${}_{3}$Ru${}_{2}$O${}_{7}$ are particularly intriguing. First, despite the co-existence of the structural and metal-insulator transitions at 48 K, which indicates strong coupling among the charge, spin, and lattice degrees of freedom in Ca${}_{3}$Ru${}_{2}$O${}_{7}$,Hotta and Dagotto (2001); Forte et al. (2010) resonant X-ray scattering measurements did not yield any evidence for orbital ordering in Ca${}_{3}$Ru${}_{2}$O${}_{7}$,Bohnenbuck et al. (2008) which raises the question of whether the structural transition is actually electronically driven. Second, a highly unusual metallic state with a very low carrier density was found to emerge below around 8 K. An electronically driven structural transition is a phenomenon of current technological interest in the context of oxide electronics. 
A similar phenomenon was found in vanadium oxide, which features a metal-insulator transition just above room temperature and has been proposed for next-generation field-effect transistor technologies.Stefanovich et al. (2000); Ruzmetov et al. (2010) The emergence of an unusually low carrier density metallic state in an insulating phase, which results in Shubnikov-de Haas oscillations (SdHOs), resembles that observed in underdoped high-$T_{c}$ superconductors,Tokura and Nagaosa (2000) which was attributed to the presence of pre-formed electron pairs. As to the metallic phase found below 8 K, even though its existence was revealed long ago in flux-grown crystals,Cao et al. (1997); Yoshida et al. (2005); Bao et al. (2008) and confirmed in floating zone-grown crystals more recently,Kikugawa et al. (2010) the nature of this phase has rarely been discussed. An electric field effect study of this system will provide insight into these questions. The challenge of studying the electric field effect in Ca${}_{3}$Ru${}_{2}$O${}_{7}$ is two-fold. First, high-quality thin films of Ca${}_{3}$Ru${}_{2}$O${}_{7}$ are difficult to prepare. Furthermore, Ca${}_{3}$Ru${}_{2}$O${}_{7}$ is neither very resistive nor very electronically anisotropic, making the non-surface contribution to the total sample conductance significant in any electric field effect sample. The exfoliation of layered materials into thin single-crystal flakes, inspired by the graphene work,Novoselov et al. (2005) provides a solution to the first problem. However, the issue of the small surface contribution to the total sample conductance is difficult to address. In this regard, making thin flakes and using a very high charge density change will help. Specifically, a significant electric field effect in Ca${}_{3}$Ru${}_{2}$O${}_{7}$ would require that a carrier density change of 10${}^{13}$-10${}^{15}$ cm${}^{-2}$ per bilayer be achieved.Yoshida et al. 
(2007) Ionic liquid gating techniques, capable of inducing up to 10${}^{15}$ cm${}^{-2}$ charge carriersYuan et al. (2009) by the formation of an electrical double layer (EDL) at the sample/liquid interface,Skinner et al. (2010) have previously been developed for studies of insulating transition metal oxides. Superconductivity was discovered in insulating KTaO${}_{3}$ by gating beyond 3 x 10${}^{14}$ cm${}^{-2}$,Ueno et al. (2011) and in YBCO the superconducting critical temperature was pushed to zero by depleting a comparable density.Leng et al. (2011) EDL gating has also confirmed carrier mediation of the ferromagnetic ordering in Ti${}_{0.90}$Co${}_{0.10}$O${}_{2}$.Yamada et al. (2011) Single crystals of Ca${}_{3}$Ru${}_{2}$O${}_{7}$ used in this study were grown by a floating zone method. Flakes of Ca${}_{3}$Ru${}_{2}$O${}_{7}$ were exfoliated via mechanical cleavage from bulk crystals and deposited onto a substrate of 300 nm SiO${}_{2}$ thermally grown on undoped Si. The flakes are typically on the order of 30-50 $\mu$m in lateral length, and between 0.5 and 1 $\mu$m in thickness along the $c$-axis; one flake is shown in Fig. 1a. The thickness was estimated by focusing both the flake and the substrate within the sub-micron depth of field of our optical microscope, and confirmed by atomic force microscope (AFM) measurements. We developed a process to contact only the top surface of the flake by hard-baking a photo-lithographically defined window on the surface of a flake before defining metal contacts. We patterned Ti/Au metal contacts in a Hall bar geometry. A short, low-power oxygen etch cleaned the sample surface sufficiently after processing. A completed device is shown in Fig. 1b. This surface-contacted geometry prepares the device for top-gating with an ionic liquid, shown schematically in Fig. 1c, and is preferred to maximize the surface signal in metallic, though anisotropic, materials. 
We use the ionic liquid N,N-diethyl-N-(2-methoxyethyl)-N-methylammonium bis(trifluoromethylsulphonyl-imide) (DEME-TFSI) as the gate dielectric. Devices were measured within a Physical Property Measurement System (Quantum Design) with a base temperature of 1.8 K and a 9 T superconducting magnet. The gate voltage is applied just above the freezing point of DEME-TFSI at 210 K,Sato et al. (2004) and the sample is cooled with the gate voltage held constant. In Fig. 2, we show the longitudinal resistance $R$ $vs.$ temperature $T$ for two Ca${}_{3}$Ru${}_{2}$O${}_{7}$ flake Hall bar devices. Both devices showed metallic behavior and an essentially linear $R\propto T$ dependence down to an antiferromagnetic ordering transitionCao et al. (1997); Bao et al. (2008) at 54 and 56 K in the two samples, corresponding to the 56 K transition in the bulk and resulting in a sudden drop in sample resistance. Lowering the temperature further, a structural transition and a sharp jump in sample resistance were found near 49 and 51 K, respectively, corresponding to the 48 K transition in the bulk.Yoshida et al. (2005) However, qualitatively different behaviors were found for the two devices at low temperatures. For Sample A, insulating behavior was found below the structural transition, persisting to around 8 K, below which metallic behavior was found. For Sample B, the insulating behavior persisted over a much narrower temperature range than in Sample A, with metallic behavior found below 33 K. The temperature dependence of the sample resistance seen in Sample A is essentially that of the bulk measured along the $c$-axis, $\rho_{c}$, while that seen in Sample B resembles that of the bulk in-plane resistivity, $\rho_{ab}$. Kikugawa et al. (2010); Yoshida et al. (2004); Ohmichi et al. 
(2004) The $c$-axis resistivity of bulk crystals was found to feature a more than factor-of-eight resistance rise below the structural transition, while that of our Sample A rose by roughly a factor of 4, which suggests that the sample resistance measured in Sample A contains contributions from both $ab$-plane and $c$-axis electrical transport. Interestingly, the temperature dependence of the sample resistance of Sample A, which we refer to here as a Type A sample, was found in most devices we prepared. Given that the ratio of the $c$-axis resistivity $\rho_{c}$ to the $ab$-plane resistivity $\rho_{ab}$ is only slightly over a factor of three, this is not unexpected. Devices with behavior resembling that of Sample B, which we refer to as Type B samples, were much harder to come by. The sample resistance of Type B samples consists mostly of the contribution from the flake surface, taking on the behavior of the bulk $\rho_{ab}(T)$. The feature found at 33 K in the bulk $\rho_{ab}$,Yoshida et al. (2004) which was seen in $R$ $vs.$ $T$ for Sample B and in $dR/dT$ for Sample A, marks the onset of a quasi two-dimensional metallic state. The metallic behavior found in $\rho_{c}$ below 8 K, on the other hand, signals an incoherent-coherent transition in the $c$-axis transport and the emergence of a fully three-dimensional metal in Ca${}_{3}$Ru${}_{2}$O${}_{7}$. Interestingly, the 8 K feature in $\rho_{c}$ was observed in floating zone-grown,Kikugawa et al. (2010) but not in self-flux-grown crystals,Yoshida et al. (2004); Ohmichi et al. (2004) which seems to suggest that coherent $c$-axis transport is fragile and sensitive to disorder. The observation of the 8 K feature in our Type A sample therefore attests to the good crystallinity of our flakes, consistent with the temperature dependence of the Hall coefficient, $R_{H}$($T$), shown in Fig. 2c. 
Incidentally, a small deviation from the bulk behavior in $R_{H}$($T$) was found for Sample B at low temperatures, likely reflecting the effect of disorder on the flake surface. The above analysis is supported by our observation of SdHOs in Type A samples, which are absent in Type B samples. We show in Fig. 3a $R$ $vs.$ $H$ at $T$ = 1.8 K in a Type A device. The background-subtracted resistance oscillations, $\Delta R$ $vs.$ $H^{-1}$, are shown in Fig. 3c. Similar behavior was found in a separate Type A device (data not shown). Up to three sets of oscillations were observed in bulk Ca${}_{3}$Ru${}_{2}$O${}_{7}$.Cao et al. (2003); Kikugawa et al. (2010) However, a Fourier transform of $\Delta R$ obtained in our flake devices suggests only a single set of SdHOs with a frequency of 43 T, which was also seen in the bulk, likely due to the low maximum $H$ in the present work. It is known that the frequency of the SdHOs depends on the carrier density. Even though the precise carrier density of the device cannot be obtained from the Hall measurements, because the thickness of the layer affected by the gating is not known, the carriers added to the surface can be estimated based on our control experiment carried out on graphene. A careful comparison of the SdHOs without gating with those obtained when an ionic liquid gating voltage of 3 V was applied, corresponding to a carrier density change larger than 10${}^{13}$ cm${}^{-2}$, shows that the SdHOs remained essentially the same (Figs. 3c and d). This observation suggests that the SdHOs cannot come from the surface of the flake. Incidentally, SdHOs were not observed in Type B samples, which, together with the deviation from the bulk behavior seen in $R_{H}$($T$), indicates clearly that the transport in Type B samples is dominated by a surface layer featuring disorder stronger than that in the interior of the flake. 
The surface dominance in Type B samples could be due to a mechanical separation formed during the exfoliation process, even though there is no direct evidence for it. The surface layer-dominated transport in Type B samples facilitates a measurable response to a gate voltage $V_{G}$ applied across an ionic liquid. In Fig. 4a, we show that with $V_{G}$ = 3 V, the induction of electrons at the flake surface increases the conductivity by up to 20% at the lowest temperatures in the metallic regime, likely due to an added carrier density larger than 10${}^{13}$ cm${}^{-2}$, as mentioned above. In the numerically calculated $dR/dT$ in Fig. 4b, we observe a shift in the peak insulating slope associated with the structural transition in Ca${}_{3}$Ru${}_{2}$O${}_{7}$, from 50 to 53 K. The shift of a structural transition with carrier density confirms that the transition is electronically driven even though orbital ordering is absent. Interestingly, although recent electrical transport studies of bulk Ca${}_{3}$Ru${}_{2}$O${}_{7}$ under pressure have indicated that the structural transition is linked to the long-range antiferromagnetic ordering,Yoshida et al. (2008) the 56 K transition observed in our Type B sample is barely shifted (Fig. 4b). The three-dimensional metallic state that emerges below 8 K is puzzling. The area of the primary Fermi surface $\mathcal{A}$ can be estimated from the period in $H^{-1}$ of our SdHOs using the formula $\Delta H^{-1}=2\pi e/\hbar\mathcal{A}$. A frequency of 43 T gives $\mathcal{A}\approx$ 0.3% of the 1st Brillouin zone, using lattice parameters from Ref. Yoshida et al. (2005), in agreement with bulk measurements of both SdHOs and ARPES.Baumberger et al. (2006) It is intriguing that the onset temperature of this metallic state with a tiny carrier density appears to be unchanged under a 3 V ionic liquid gating. 
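The Onsager estimate quoted here can be reproduced in a few lines of Python; the in-plane lattice parameters below are representative values of the order of those in Ref. Yoshida et al. (2005) and should be treated as assumptions of this sketch:

```python
import math

e    = 1.602176634e-19    # elementary charge (C)
hbar = 1.054571817e-34    # reduced Planck constant (J s)

F = 43.0                  # SdHO frequency (T); the period is Delta(1/H) = 1/F
# Onsager relation: F = (hbar / 2 pi e) * A  =>  A = 2 pi e F / hbar
A_fermi = 2 * math.pi * e * F / hbar          # extremal orbit area (m^-2)

# In-plane lattice parameters of orthorhombic Ca3Ru2O7 (assumed,
# representative of Ref. Yoshida et al. (2005))
a, b = 5.37e-10, 5.54e-10                     # m
A_bz = (2 * math.pi) ** 2 / (a * b)           # in-plane Brillouin-zone area (m^-2)

print(f"Fermi pocket / BZ = {100 * A_fermi / A_bz:.2f} %")   # ~0.3 %
```

The ratio is insensitive to small changes in the assumed lattice parameters, so the ~0.3% pocket size quoted in the text follows directly from the 43 T frequency.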
Together with the fact that the 56 K magnetic transition was barely shifted by the same ionic liquid gating of 3 V, our experiment seems to suggest that the emergence of this metallic phase is magnetic in origin. In conclusion, we have developed a surface-contact technique for devices prepared on exfoliated Ca${}_{3}$Ru${}_{2}$O${}_{7}$ flakes. A comparison of the features seen in these devices with those in floating zone-grown bulk crystals suggests that the transport properties observed in the Type A and Type B samples are dominated by the $c$-axis and in-plane contributions, respectively. Magnetoelectrical transport measurements, including the observation of SdHOs, support the emergence of a highly unusual metallic state featuring small Fermi surface pockets at low temperatures. The demonstration of an electric field effect on the structural transition temperature of the Ca${}_{3}$Ru${}_{2}$O${}_{7}$ surface suggests a new approach to the study of complex transition metal oxides for which thin films are unavailable. We would like to thank M. Sigrist, N. Staley and M. Ulrich for useful discussions. The work at Penn State is supported by DOE under Grant No. DE-FG02-04ER46159. The work at Tulane is supported by NSF under DMR-1205469. The nanofabrication part of this work is supported by the National Science Foundation (NSF) under Grant DMR-0908700 and the Penn State MRI Nanofabrication Lab under NSF Cooperative Agreement 0335765, NNIN with Cornell University. Y. L. also acknowledges support from MOST of China (Grant 2012CB927403) and NSFC (Grant 11274229) for data analysis and manuscript preparation. References Maeno et al. (1994) Y. Maeno, H. Hashimoto, K. Yoshida, S. Nishizaki, T. Fujita, J. G. Bednorz, and F. Lichtenberg, Nature 372, 532 (1994). Cao et al. (1997) G. Cao, S. McCall, J. E. Crow, and R. P. Guertin, Phys. Rev. Lett. 78, 1751 (1997). Yoshida et al. (2005) Y. Yoshida, S.-I. Ikeda, H. Matsuhata, N. 
Shirakawa, C. H. Lee, and S. Katano, Phys. Rev. B 72, 054412 (2005). Bao et al. (2008) W. Bao, Z. Q. Mao, Z. Qu, and J. W. Lynn, Phys. Rev. Lett. 100, 247203 (2008). McCall et al. (2003) S. McCall, G. Cao, and J. E. Crow, Phys. Rev. B 67, 094427 (2003). Singh and Auluck (2006) D. J. Singh and S. Auluck, Phys. Rev. Lett. 96, 097203 (2006). Ohmichi et al. (2004) E. Ohmichi, Y. Yoshida, S.-I. Ikeda, N. Shirakawa, and T. Osada, Phys. Rev. B 70, 104414 (2004). Hotta and Dagotto (2001) T. Hotta and E. Dagotto, Phys. Rev. Lett. 88, 017201 (2001). Forte et al. (2010) F. Forte, M. Cuoco, and C. Noce, Phys. Rev. B 82, 155104 (2010). Bohnenbuck et al. (2008) B. Bohnenbuck, I. Zegkinoglou, J. Strempfer, C. Schussler-Langeheine, C. S. Nelson, Ph. Leininger, H.-H. Wu, E. Schierle, J. C. Lang, G. Srajer, et al., Phys. Rev. B 77, 224412 (2008). Stefanovich et al. (2000) G. Stefanovich, A. Pergament, and D. Stefanovich, J. Phys.: Condens. Matter 12, 8837 (2000). Ruzmetov et al. (2010) D. Ruzmetov, G. Gopalakrishnan, C. Ko, V. Narayanamurti, and S. Ramanathan, J. Appl. Phys. 107, 114516 (2010). Tokura and Nagaosa (2000) Y. Tokura and N. Nagaosa, Science 288, 462 (2000). Kikugawa et al. (2010) N. Kikugawa, A. W. Rost, C. W. Hicks, A. J. Schofield, and A. P. Mackenzie, J. Phys. Soc. Jpn. 79, 024704 (2010). Novoselov et al. (2005) K. S. Novoselov, D. Jiang, F. Schedin, T. J. Booth, V. V. Khotkevich, S. V. Morozov, and A. K. Geim, PNAS 102, 10451 (2005). Yoshida et al. (2007) Y. Yoshida, S.-I. Ikeda, and N. Shirakawa, J. Phys. Soc. Jpn. 76, 085002 (2007). Yuan et al. (2009) H. Yuan, H. Shimotani, A. Tsukazaki, A. Ohtomo, M. Kawasaki, and Y. Iwasa, Adv. Funct. Mater. 19, 1046 (2009). Skinner et al. (2010) B. Skinner, M. S. Loth, and B. I. Shklovskii, Phys. Rev. Lett. 104, 128302 (2010). Ueno et al. (2011) K. Ueno, S. Nakamura, H. Shimotani, H. T. Yuan, N. Kimura, T. Nojima, H. Aoki, Y. Iwasa, and M. Kawasaki, Nat. Nanotechnol. 6, 408 (2011). Leng et al. (2011) X. Leng, J. 
Garcia-Barriocanal, S. Bose, Y. Lee, and A. M. Goldman, Phys. Rev. Lett. 107, 027001 (2011). Yamada et al. (2011) Y. Yamada, K. Ueno, T. Fukumura, H. T. Yuan, H. Shimotani, Y. Iwasa, L. Gu, S. Tsukimoto, Y. Ikuhara, and M. Kawasaki, Science 332, 1065 (2011). Sato et al. (2004) T. Sato, G. Masuda, and K. Takagi, Electrochim. Acta 49, 3603 (2004). Yoshida et al. (2004) Y. Yoshida, I. Nagai, S.-I. Ikeda, N. Shirakawa, M. Kosaka, and N. Mori, Phys. Rev. B 69, 220411(R) (2004). Cao et al. (2003) G. Cao, L. Balicas, Y. Xin, J. E. Crow, and C. S. Nelson, Phys. Rev. B 67, 184405 (2003). Yoshida et al. (2008) Y. Yoshida, M. Hedo, S.-I. Ikeda, N. Shirakawa, and Y. Uwatoko, Physica B 403, 1213 (2008). Baumberger et al. (2006) F. Baumberger, N. J. C. Ingle, N. Kikugawa, M. A. Hossain, W. Meevasana, R. S. Perry, K. M. Shen, D. H. Lu, A. Damascelli, A. Rost, et al., Phys. Rev. Lett. 96, 107601 (2006).
One-Photon and Two-Photon Double-Slit Interferences in Spontaneous and Stimulated Parametric Down-Conversions De-Zhong Cao    Zhuan Li    Yan-Hua Zhai Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875, China    Kaige Wang (corresponding author) [email protected] CCAST (World Laboratory), P. O. Box 8730, Beijing 100080, China Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875, China (November 20, 2020) Abstract We theoretically discuss one-photon and two-photon double-slit interference for spontaneous and stimulated parametric down-conversion. We show that two-photon sub-wavelength interference can exist in general spontaneous parametric down-conversion (SPDC) for both type I and type II crystals. We propose an alternative way to observe sub-wavelength interference through a joint-intensity measurement, which occurs only for a type I crystal at higher SPDC gain. When a signal beam is injected into the crystal, it may create two interference patterns via the two stimulated down-converted beams, showing no sub-wavelength interference effect. pacs: 42.50.Dv, 42.65.Lm, 42.25.Hz, 42.82.Cr I Introduction Young's double-slit interference experiment is one of the most powerful ways to exhibit the nature of the optical field, in both its classical and nonclassical coherence effects. In recent years, considerable interest has been devoted to the study of two-photon double-slit interference in the process of spontaneous parametric down-conversion (SPDC).shih1 -gigi Since the pair of converted beams created by the pump beam in this process is entangled, the two-photon double-slit interference can show peculiar phenomena such as quantum sub-wavelength lithography and ghost interference.
For the former, both the signal and idler beams pass together through a double-slit,fon1 ,fon2 ,sal -ab2 ,shi ,bri while for the latter the double-slit is placed in the path of only one beam.shih1 -bar ,fon3 ,wal ,gigi The original idea of quantum lithography comes from the reduction of the de Broglie wavelength when two massive particles are combined. In an optical system, the sub-wavelength interference occurs for a biphoton state.yama In addition to the two-photon double-slit interference, quantum lithography can be carried out in a Mach-Zehnder interferometer.boto -eda Because this effect can overcome the Rayleigh diffraction limit, it may find prospective application in photo-lithography technology. In most theoretical analyses, the sub-wavelength interference is explained by a two-photon entangled state, which can be obtained in SPDC at very low gain. Nevertheless, the correspondingly low power is an obstacle to practical application, so the exploration of these quantum effects in the macroscopic regime is worthwhile.gigi ,na In this paper, we study one-photon and two-photon double-slit interference in both spontaneous and stimulated parametric down-conversion. We focus on the case in which a double-slit is inserted in the paths of both the signal and idler beams. We find that sub-wavelength lithography can occur at very high SPDC gain with substantial visibility. The discussion covers both type I and type II crystals, which exhibit different behaviors in two-photon interference. In the stimulated process, the amplified beam cannot perform quantum lithography but creates rich interference patterns. The paper is organized as follows: in Sec. II we briefly review double-slit interference for a coherent state and a two-photon state. In Sec. III we cite several formulas as a review of the optical parametric down-conversion process. We analyze two-photon double-slit interference in Secs.
IV and V for the spontaneous and stimulated processes, respectively. The final section VI is the conclusion and discussion. II Double-slit Interferences for a Coherent State and a Two-Photon State We consider the scheme of Young's double-slit experiment as shown in Fig. 1. The double-slit function is defined by $$T(x)=\mathrm{rect}\left(\frac{x-d/2}{b}\right)+\mathrm{rect}\left(\frac{x+d/2}{b}\right)\text{,}$$ (1) where $d$ is the distance between the centers of the two slits and $b$ is the width of each slit. In Fig. 1, both the double-slit and the detection screen are placed at the two focal planes of a lens. Ignoring the thickness of the double-slit, the transverse envelope operators of the input field $e(x,t)$ and the output field $e^{\prime}(x,t)$ of the double-slit are related as $$e^{\prime}(x,t)=T(x)e(x,t)+[1-T(x)]e_{vac}(x,t)\text{,}$$ (2) where the vacuum field operator $e_{vac}(x,t)$ is introduced so that $e^{\prime}(x,t)$ satisfies the bosonic commutation relation. Since the vacuum field makes no contribution to normally ordered correlations, it can be neglected in the calculations below. In the paraxial approximation, the field $r(x,t)$ in the detection plane $P_{2}$ is expressed by the Fourier transform of the lens $$r(x,t)=\sqrt{\frac{k}{2\pi f}}\int e^{\prime}(x^{\prime},t)\exp[-i\frac{k}{f}x^{\prime}x]dx^{\prime}.$$ (3) By substituting Eq. (2) into Eq. (3), one obtains $$r(x,t)=\frac{1}{2\pi}\sqrt{\frac{k}{f}}\iint\widetilde{T}(\frac{kx}{f}-q)\widetilde{e}(q,\Omega)\exp[-i\Omega t]\,dq\,d\Omega\text{,}$$ (4) where $$\widetilde{T}(q)=\frac{1}{\sqrt{2\pi}}\int T(x)e^{-iqx}dx=\frac{2b}{\sqrt{2\pi}}\mathrm{sinc}(qb/2)\cos(qd/2)$$ (5) is the Fourier transform of the double-slit function $T(x)$, and $\widetilde{e}(q,\Omega)$ is the Fourier transform of $e(x,t)$ in both the spatial and temporal variables.
First, we consider the input field to be a stationary, monochromatic plane wave in a coherent state $$\left\langle e(x,t)\right\rangle=A\text{,}$$ (6) where $A$ is a constant. It follows that $$\left\langle\widetilde{e}(q,\Omega)\right\rangle=2\pi A\delta(q)\delta(\Omega)\text{.}$$ (7) In the detection plane, the first-order correlation is calculated as $$G^{(1)}(x_{1},x_{2},t)\equiv\left\langle r^{\dagger}(x_{1},t)r(x_{2},t)\right\rangle=\frac{kA^{2}}{f}\widetilde{T}^{\ast}(\frac{kx_{1}}{f})\widetilde{T}(\frac{kx_{2}}{f}).$$ (8) Hence, the intensity distribution in the detection plane is written as $$G^{(1)}(x,x,t)\equiv\left\langle r^{\dagger}(x,t)r(x,t)\right\rangle=\frac{kA^{2}}{f}\widetilde{T}^{2}(\frac{kx}{f})=I_{0}\,\mathrm{sinc}^{2}(\frac{\pi bx}{\lambda f})\cos^{2}(\frac{\pi dx}{\lambda f})$$ (9) where $I_{0}=\frac{2kb^{2}A^{2}}{\pi f}$ and $\lambda=2\pi/k$. Note that $\widetilde{T}(x)$ is a real function. Equation (9) represents an interference fringe with interval $\lambda f/d$ within the envelope range $\lambda f/b$, as shown in Fig. 2. Similarly, the second-order correlation function can be obtained as $$G^{(2)}(x_{1},x_{2},t)=\left\langle r^{\dagger}(x_{1},t)r^{\dagger}(x_{2},t)r(x_{2},t)r(x_{1},t)\right\rangle=\frac{k^{2}A^{4}}{f^{2}}\widetilde{T}^{2}(\frac{kx_{1}}{f})\widetilde{T}^{2}(\frac{kx_{2}}{f}).$$ (10) According to the theory of field coherence, the separability of the spatial variables in the first- and second-order correlation functions verifies the perfect coherence of the field. Since the field operators at different positions commute, the second-order correlation of the field is in fact the spatial intensity correlation, and it can be observed by a coincidence measurement as shown in Fig. 1a.
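The transform in Eq. (5) and the fringe scaling in Eq. (9) are easy to verify numerically. The sketch below (Python; the slit width $b$, separation $d$, wavelength, and focal length are illustrative values, not taken from the paper) compares the closed-form $\widetilde{T}(q)$ with a direct numerical Fourier integral of the aperture of Eq. (1), and records the fringe interval $\lambda f/d$:

```python
import numpy as np

# Illustrative parameters (assumed): slit width b, separation d, wavelength, focal length
b, d = 1e-4, 5e-4
lam, f = 0.5e-6, 0.1
k = 2*np.pi/lam

def T(x):
    """Double-slit aperture of Eq. (1): rect((x-d/2)/b) + rect((x+d/2)/b)."""
    return ((np.abs(x - d/2) < b/2) | (np.abs(x + d/2) < b/2)).astype(float)

def T_tilde(q):
    """Analytic transform, Eq. (5); note np.sinc(z) = sin(pi z)/(pi z)."""
    return 2*b/np.sqrt(2*np.pi) * np.sinc(q*b/2/np.pi) * np.cos(q*d/2)

# Direct numerical Fourier integral (1/sqrt(2 pi)) * int T(x) exp(-i q x) dx
x = np.linspace(-2*d, 2*d, 400001)
dx = x[1] - x[0]
def T_tilde_num(q):
    return np.sum(T(x)*np.exp(-1j*q*x))*dx/np.sqrt(2*np.pi)

for q in (0.0, 2e3, 1e4):
    assert abs(T_tilde_num(q) - T_tilde(q)) < 1e-7

# Eq. (9): zeros of cos^2(pi d x/(lam f)) repeat every lam*f/d,
# so the fringe interval in the detection plane is lam*f/d (here 0.1 mm).
fringe_interval = lam*f/d
```

Note also that rescaling the argument $q \to 2q$ halves both the $\mathrm{sinc}$ envelope and the $\cos$ fringe period of $\widetilde{T}$; this simple scaling property is what underlies the sub-wavelength fringes discussed later in the paper.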
The spatial patterns related to $G^{(1)}(x,x,t)$ and $G^{(2)}(x_{1},x_{2},t)$ are called the one-photon and two-photon interferences, respectively. According to Eq. (10), in the coincidence measurement, if we scan one detector while fixing the other, the same interference fringe as in the one-photon interference is observed. We now introduce two other ways of observing the two-photon double-slit interference. One is the spatial intensity-correlation measurement, in which two detectors are scanned synchronously at a pair of symmetric positions, $x_{1}=-x_{2}=x$. The other is the two-photon intensity measurement using a two-photon detector, which generates a photo-electron by absorbing two photons. Applying these two observations to Eq. (10), one has $$G^{(2)}(x,x,t)=G^{(2)}(x,-x,t)=I_{0}^{2}\,\mathrm{sinc}^{4}(\frac{\pi bx}{\lambda f})\cos^{4}(\frac{\pi dx}{\lambda f}).$$ (11) In Fig. 2, we plot $G^{(2)}(x,x,t)$ ($G^{(2)}(x,-x,t)$) in comparison with $G^{(1)}(x,x,t)$ for the coherent beam; the two interference patterns are alike. The above discussion of the coherent state is analogous to the classical field. Next, we consider a two-photon state as input, a quantum state without classical analogue. A general two-photon state can be written as $$|\psi\rangle=\int dq_{s}dq_{i}\,C(q_{s},q_{i})a_{s}^{\dagger}(q_{s})a_{i}^{\dagger}(q_{i})|0\rangle,$$ (12) where $a_{s}^{\dagger}$ and $a_{i}^{\dagger}$ are the creation operators for the $s$ and $i$ photons, which are assumed to be distinguishable, and $q_{s}$ and $q_{i}$ are the transverse wave-vectors. When the input field is stationary, Eq. (4) can be simplified as $$r(x)=\sqrt{\frac{k}{2\pi f}}\int\widetilde{T}(\frac{kx}{f}-q)\widetilde{e}(q)dq.$$ (13) By using Eq.
(13), the first-order correlation functions for the $s$-photon and the $i$-photon in the detection plane are obtained as $$G_{s}^{(1)}(x_{1},x_{2})=\frac{k}{2\pi f}\int dqdq_{1}dq_{2}\,C^{\ast}(q_{1},q)C(q_{2},q)\widetilde{T}^{\ast}(\frac{kx_{1}}{f}-q_{1})\widetilde{T}(\frac{kx_{2}}{f}-q_{2}),$$ (14a) $$G_{i}^{(1)}(x_{1},x_{2})=\frac{k}{2\pi f}\int dqdq_{1}dq_{2}\,C^{\ast}(q,q_{1})C(q,q_{2})\widetilde{T}^{\ast}(\frac{kx_{1}}{f}-q_{1})\widetilde{T}(\frac{kx_{2}}{f}-q_{2}),$$ (14b) respectively. For the two-field case, the second-order correlation function is defined by $$G^{(2)}(x_{1},x_{2},t)=\left\langle r_{i}^{\dagger}(x_{1},t)r_{s}^{\dagger}(x_{2},t)r_{s}(x_{2},t)r_{i}(x_{1},t)\right\rangle,$$ (15) which describes the coincidence probability of the $s$-photon at position $x_{2}$ and the $i$-photon at position $x_{1}$. In the case $x_{1}=x_{2}=x$, it describes a two-photon intensity distribution. First, for the two-photon state (12), we calculate the two-photon wavepacket in the detection plane $$\langle 0|r_{s}(x_{2})r_{i}(x_{1})|\psi\rangle=\frac{k}{2\pi f}\int dq_{s}dq_{i}\,C(q_{s},q_{i})\widetilde{T}(\frac{kx_{2}}{f}-q_{s})\widetilde{T}(\frac{kx_{1}}{f}-q_{i}).$$ (16) The second-order correlation is then obtained as $$G^{(2)}(x_{1},x_{2})=|\langle 0|r_{s}(x_{2})r_{i}(x_{1})|\psi\rangle|^{2}.$$ (17) We discuss two extreme cases: photons that are independent, and photons that are perfectly entangled in the transverse wave-vector.
In the unentangled case, $C(q_{s},q_{i})=C_{s}(q_{s})C_{i}(q_{i})$, the first- and second-order correlations are written as $$G_{m}^{(1)}(x_{1},x_{2})=\frac{k}{2\pi f}\int dq\,C_{m}^{\ast}(q)\widetilde{T}^{\ast}(\frac{kx_{1}}{f}-q)\int dq\,C_{m}(q)\widetilde{T}(\frac{kx_{2}}{f}-q),\qquad(m=s,i),$$ (18a) $$G^{(2)}(x_{1},x_{2})=\left|\frac{k}{2\pi f}\int dq\,C_{s}(q)\widetilde{T}(\frac{kx_{2}}{f}-q)\int dq\,C_{i}(q)\widetilde{T}(\frac{kx_{1}}{f}-q)\right|^{2}=G_{i}^{(1)}(x_{1},x_{1})G_{s}^{(1)}(x_{2},x_{2}).$$ (18b) Equations (18) show two separabilities. One is the separability of positions in both the first- and second-order correlation functions. The other is that the second-order correlation factorizes into two one-photon intensity distributions. That is, the two-photon interference consists of two individual one-photon interferences, verifying Dirac's statement: “Each photon interferes only with itself. Interference between two different photons never occurs.” These features imply perfect first- and second-order coherence for the two independent photons. In the opposite extreme, perfect entanglement in wavevector, $q_{s}+q_{i}=0$, holds in state (12).
For simplicity, we assume $C(q_{s},q_{i})\rightarrow\delta(q_{s}+q_{i})$, and the first- and second-order correlations are written as $$G_{s}^{(1)}(x_{1},x_{2})=G_{i}^{(1)}(x_{1},x_{2})=\frac{k}{2\pi f}\int dq\,\widetilde{T}^{\ast}(\frac{kx_{1}}{f}+q)\widetilde{T}(\frac{kx_{2}}{f}+q)=\frac{k}{f\sqrt{2\pi}}\widetilde{T}[\frac{k}{f}(x_{2}-x_{1})],$$ (19a) $$G^{(2)}(x_{1},x_{2})=\left|\frac{k}{2\pi f}\int dq\,\widetilde{T}(\frac{kx_{2}}{f}+q)\widetilde{T}(\frac{kx_{1}}{f}-q)\right|^{2}=\frac{k^{2}}{2\pi f^{2}}\widetilde{T}^{2}[\frac{k}{f}(x_{1}+x_{2})],$$ (19b) where we use the integrals $$\int dq\,\widetilde{T}^{\ast}(\frac{kx_{1}}{f}\pm q)\widetilde{T}(\frac{kx_{2}}{f}\pm q)=\frac{1}{2\pi}\iiint dqdx_{1}^{\prime}dx_{2}^{\prime}\,T(x_{1}^{\prime})T(x_{2}^{\prime})e^{i(\frac{kx_{1}}{f}\pm q)x_{1}^{\prime}-i(\frac{kx_{2}}{f}\pm q)x_{2}^{\prime}}=\iint dx_{1}^{\prime}dx_{2}^{\prime}\,T(x_{1}^{\prime})T(x_{2}^{\prime})\delta(x_{1}^{\prime}-x_{2}^{\prime})e^{i\frac{kx_{1}}{f}x_{1}^{\prime}-i\frac{kx_{2}}{f}x_{2}^{\prime}}=\int dx_{1}^{\prime}\,T(x_{1}^{\prime})T(x_{1}^{\prime})e^{i\frac{k}{f}(x_{1}-x_{2})x_{1}^{\prime}}=\sqrt{2\pi}\widetilde{T}[\frac{k}{f}(x_{2}-x_{1})],$$ (20) and $$\int dq\,\widetilde{T}^{\ast}(\frac{kx_{1}}{f}\pm q)\widetilde{T}(\frac{kx_{2}}{f}\mp q)=\sqrt{2\pi}\widetilde{T}[\frac{k}{f}(x_{2}+x_{1})].$$ (21) Note that $T^{2}(x)=T(x)$. Equations (19) show that, for the wavevector-entangled two-photon state, the first- and second-order correlation functions in double-slit interference exhibit a position correlation which results in decoherence. In the measurement, setting $x_{1}=x_{2}$ in Eq.
(19), one obtains $$G_{s}^{(1)}(x,x)=G_{i}^{(1)}(x,x)=\frac{k}{f\sqrt{2\pi}}\widetilde{T}[0],$$ (22a) $$G^{(2)}(x,x)=\frac{k^{2}}{2\pi f^{2}}\widetilde{T}^{2}[\frac{k}{f}(2x)].$$ (22b) Therefore, the one-photon double-slit interference disappears completely, while the two-photon double-slit interference shows a sub-wavelength property, since $$\widetilde{T}^{2}(\frac{k}{f}2x)\propto\mathrm{sinc}^{2}[\frac{\pi bx}{(\lambda/2)f}]\cos^{2}[\frac{\pi dx}{(\lambda/2)f}].$$ (23) The fringe is the same as the ordinary double-slit interference fringe at half the wavelength. The above analysis explains the complementarity of coherence and entanglement.sal ,ab1 ,ab2 We emphasize that the discussion also holds when the $s$ and $i$ photons are indistinguishable. III The Basic Formula in the Optical Parametric Down-Conversion In optical parametric down-conversion, in which a plane-wave pump field of frequency $\omega_{p}$ activates a $\chi^{(2)}$ nonlinear crystal, the basic unitary transformation is described bygigi , 12 -14 $$\widetilde{e}_{m}(q,\Omega)=U_{m}(q,\Omega)\widetilde{a}_{m}(q,\Omega)+V_{m}(q,\Omega)\widetilde{a}_{n}^{\dagger}(-q,-\Omega)\qquad(m\neq n=s,i),$$ (24) where $\widetilde{e}_{m}(q,\Omega)$ and $\widetilde{a}_{m}(q,\Omega)$ are the output and input field operators, respectively, $q$ is the transverse wavevector, and $\Omega$ is the frequency deviation from the carrier frequency.
The transfer coefficients $U_{m}(q,\Omega)$ and $V_{m}(q,\Omega)$ are given by14 $$U_{s}(q,\Omega)=\Theta_{s}(q,\Omega)[\cosh\Gamma(q,\Omega)+i\frac{\Delta(q,\Omega)}{2\Gamma(q,\Omega)}\sinh\Gamma(q,\Omega)],$$ (25) $$V_{s}(q,\Omega)=\Theta_{s}(q,\Omega)\frac{g}{\Gamma(q,\Omega)}\sinh\Gamma(q,\Omega),$$ (26) $$U_{i}(q,\Omega)=\Theta_{i}(q,\Omega)[\cosh\Gamma(-q,-\Omega)+i\frac{\Delta(-q,-\Omega)}{2\Gamma(-q,-\Omega)}\sinh\Gamma(-q,-\Omega)],$$ (27) $$V_{i}(q,\Omega)=\Theta_{i}(q,\Omega)\frac{g}{\Gamma(-q,-\Omega)}\sinh\Gamma(-q,-\Omega),$$ (28) where $$\Theta_{m}(q,\Omega)=e^{i[k_{mz}(q,\Omega)-k_{nz}(-q,-\Omega)-2k_{m}+k_{p}]l_{c}/2},\qquad(m\neq n=s,i),$$ (29) $$\Gamma(q,\Omega)=\sqrt{g^{2}-\Delta^{2}(q,\Omega)/4},$$ (30) $$\Delta(q,\Omega)=[k_{sz}(q,\Omega)+k_{iz}(-q,-\Omega)-k_{p}]l_{c},$$ (31) $$\Delta_{0}=(k_{s}+k_{i}-k_{p})l_{c}.$$ (32) Here $g$ is the coupling strength and $l_{c}$ is the length of the crystal. $\Delta_{0}$ is the collinear phase mismatch of the central frequency components, which correspond to the wave-numbers $k_{j}$ ($j=s,i,p$). For simplicity, we assume that the two down-converted beams have the degenerate carrier frequency $\omega_{p}/2$. Hence, Eq. (31) can be reduced to an even function of both $q$ and $\Omega$, $$\Delta(q,\Omega)\approx\Delta_{0}+\Omega^{2}/\Omega_{0}^{2}-q^{2}/q_{0}^{2},$$ (33) where $\Omega_{0}$ and $q_{0}$ are the frequency and spatial-frequency bandwidths, respectively. Equations (24)-(33) describe the SPDC process in a type II crystal but are also applicable to a type I crystal. For the former, the two converted beams are orthogonally polarized, whereas for the latter, they are degenerate in both polarization and frequency but spatially separated. Therefore, Eq. (24) can describe a type I crystal by omitting the subscripts. In fact, under the assumption of carrier-frequency degeneracy, Eqs. (25) and (26) are the same as Eqs. (27) and (28).
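The transformation (24) is a Bogoliubov transformation, so preserving the bosonic commutation relations requires $|U_{m}|^{2}-|V_{m}|^{2}=1$. A short numeric sketch (Python; the gain $g$, mismatch $\Delta_{0}$, bandwidths, and phase of $\Theta$ below are arbitrary assumed values, not taken from the paper) checks this identity for the coefficients of Eqs. (25), (26), (30), and (33), including detunings where $\Gamma$ becomes imaginary:

```python
import numpy as np

# Assumed, arbitrary parameters for the check (not values from the paper)
g, Delta0 = 1.2, 0.3
Omega0, q0 = 1.0, 1.0
theta_phase = 0.7   # stand-in for the phase of Theta; it drops out of |U|^2 - |V|^2

def Delta(q, Om):
    """Eq. (33): phase mismatch, even in q and Omega."""
    return Delta0 + (Om/Omega0)**2 - (q/q0)**2

def UV(q, Om):
    """Eqs. (25)-(26) with Gamma of Eq. (30); Theta is a pure phase."""
    D = Delta(q, Om)
    G = np.sqrt(complex(g**2 - D**2/4))   # Gamma may be imaginary below threshold
    Theta = np.exp(1j*theta_phase)
    U = Theta*(np.cosh(G) + 1j*D/(2*G)*np.sinh(G))
    V = Theta*g/G*np.sinh(G)
    return U, V

for q in (0.0, 0.5, 2.0):
    for Om in (0.0, 1.0, 3.0):
        U, V = UV(q, Om)
        # unitarity of the Bogoliubov transformation (24)
        assert np.isclose(abs(U)**2 - abs(V)**2, 1.0)
```

The identity follows because $\cosh^{2}\Gamma-\sinh^{2}\Gamma=1$ and $(\Delta^{2}/4-g^{2})/\Gamma^{2}=-1$ by Eq. (30); the complex square root handles the oscillatory regime $g^{2}<\Delta^{2}/4$ automatically.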
IV Double-Slit Interference in Spontaneous Parametric Down-Conversion In Fig. 1, the two down-converted beams generated from the crystal illuminate a double-slit and are then detected in the focal plane of the lens. We designate by $a_{m}(x,t)$, $e_{m}(x,t)$, $e_{m}^{\prime}(x,t)$, and $r_{m}(x,t)$ the slowly varying field operators at the input surface $P_{in}$ and output surface $P_{out}$ of the crystal, the output plane of the double-slit $P_{1}$, and the detection plane $P_{2}$, respectively. Substituting Eq. (24) into Eq. (4), we may calculate the first- and second-order correlations for the field in the detection plane $P_{2}$. In this section, we consider the case of spontaneous parametric down-conversion, in which the input fields are in the vacuum state. The first-order correlations for the two beams are obtained to be $$G_{m}^{(1)}(x_{1},x_{2})=M_{m}(x_{1},x_{2})\equiv\langle 0|r_{m}^{\dagger}(x_{1},t)r_{m}(x_{2},t)|0\rangle=\frac{k/f}{(2\pi)^{2}}\iint dqd\Omega\,\left|V_{m}(q,\Omega)\right|^{2}\widetilde{T}^{\ast}(\frac{kx_{1}}{f}-q)\widetilde{T}(\frac{kx_{2}}{f}-q),\qquad(m=s,i).$$ (34) $G_{m}^{(1)}(x,x)$ gives the intensity distribution of beam $m$ in the detection plane. We then consider the second-order correlation function defined by Eq. (15), which now describes the intensity correlation between the signal beam at position $x_{2}$ and the idler beam at position $x_{1}$. In the case $x_{1}=x_{2}=x$, it describes a two-photon intensity distribution. For a type II crystal, the two-photon intensity distribution can be measured experimentally by the scheme shown in Fig. 1b, in which the coincidence measurement of two orthogonally polarized photons is performed at the same position $x_{1}=x_{2}=x$. However, for a type I crystal, the subscripts $s$ and $i$ should be omitted in Eq. (15).
If a two-photon detector is available, one may observe the two-photon intensity distribution at position $x_{1}=x_{2}=x$. Alternatively, a realistic detection scheme for a type I crystal is shown in Fig. 1a, in which the intensity correlation is measured by two one-photon detectors at different positions $x_{1}$ and $x_{2}$. For the vacuum state set in Eq. (15), by using Eqs. (4) and (24), the second-order correlation is calculated as $$G^{(2)}(x_{1},x_{2})=M_{i}(x_{1},x_{1})M_{s}(x_{2},x_{2})+\left|N_{is}(x_{1},x_{2})\right|^{2}+\delta_{is}\left|M(x_{1},x_{2})\right|^{2},$$ (35) where $\delta_{is}$ is 1 for a type I and 0 for a type II crystal. $M_{m}(x_{1},x_{2})$ is given by Eq. (34), and $$N_{mn}(x_{1},x_{2})=\frac{k/f}{(2\pi)^{2}}\iint dqd\Omega\,V_{m}(q,\Omega)U_{n}(-q,-\Omega)\widetilde{T}(\frac{kx_{1}}{f}-q)\widetilde{T}(\frac{kx_{2}}{f}+q).\qquad(m\neq n=s,i)$$ (36) Equation (35) shows that the second-order correlation is related to the first-order correlation. In Eq. (35), the first term, which is separable in terms of both polarization and position, describes the part of the two-photon interference contributed by two individual single-photon double-slit processes. The second and third terms describe two-photon interference effects related to photon entanglement. The difference between the two types of crystals is governed by the third term, that is, by the first-order correlation $|M(x_{1},x_{2})|^{2}$. We note that, since $\Theta_{i}(q,\Omega)\Theta_{s}(-q,-\Omega)=\Theta_{s}(q,\Omega)\Theta_{i}(-q,-\Omega)=\exp[-i\Delta_{0}]$, we have $V_{i}(q,\Omega)U_{s}(-q,-\Omega)=V_{s}(q,\Omega)U_{i}(-q,-\Omega)$, and hence $N_{is}(x_{1},x_{2})=N_{si}(x_{1},x_{2})$. This result is obvious, because exchanging the indices $i$ and $s$ makes no difference physically. Moreover, due to Eq.
(33), $\Delta(q,\Omega)$ is an even function of $q$ and $\Omega$ under the assumption of frequency degeneracy, so that $\left|V_{i}(q,\Omega)\right|^{2}=\left|V_{s}(q,\Omega)\right|^{2}$. Therefore, the two first-order correlation functions for the signal and idler beams are equal in a type II crystal, i.e. $M_{i}(x_{1},x_{2})=M_{s}(x_{1},x_{2})$. Nevertheless, we keep the subscripts in Eq. (35) for a general description of a type II crystal in case the two converted beams have different carrier frequencies. In order to obtain analytical results for the integrals, we discuss two bandwidth limits of the SPDC process: broad and narrow bandwidths. In the broadband limit, $q_{0}\gg 2\pi/d$, $U_{m}(q,\Omega)$ and $V_{m}(q,\Omega)$ are much flatter than $\widetilde{T}(q)$, and one can set $U_{m}(q,\Omega)\approx U_{m}(0,\Omega)$ and $V_{m}(q,\Omega)\approx V_{m}(0,\Omega)$ in the integrals. Taking into account Eqs. (20) and (21), Eqs. (34) and (36) can be rewritten as $$M_{m}(x_{1},x_{2})=\frac{\eta_{m}}{\sqrt{2\pi}}\int dq\,\widetilde{T}^{\ast}(\frac{kx_{1}}{f}-q)\widetilde{T}(\frac{kx_{2}}{f}-q)=\eta_{m}\widetilde{T}[\frac{k}{f}(x_{2}-x_{1})],\qquad(m=s,i),$$ (37) and $$N_{mn}(x_{1},x_{2})=\xi_{mn}\widetilde{T}[\frac{k}{f}(x_{1}+x_{2})],\qquad(m\neq n=s,i),$$ (38) respectively, where we define $$\eta_{m}=\frac{k/f}{(2\pi)^{3/2}}\int\left|V_{m}(0,\Omega)\right|^{2}d\Omega$$ and $$\xi_{mn}=\frac{k/f}{(2\pi)^{3/2}}\int V_{m}(0,\Omega)U_{n}(0,-\Omega)d\Omega.$$ In the broadband limit, which exhibits the maximum entanglement in the transverse wavevector for the two converted beams, we again see the position correlation in the correlation functions. The first-order correlation, Eq. (37), shows the same position correlation as Eq. (19a) for the two-photon state with maximum wavevector entanglement.
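The broadband-limit reduction of $M_{m}$ rests on the convolution identity $\int dq\,\widetilde{T}^{\ast}(u_{1}-q)\widetilde{T}(u_{2}-q)=\sqrt{2\pi}\,\widetilde{T}(u_{2}-u_{1})$ derived in Sec. II, which in turn uses $T^{2}(x)=T(x)$. A quick numeric sanity check (Python; slit parameters are illustrative, not values from the paper):

```python
import numpy as np

# Illustrative slit parameters (assumed)
b, d = 1e-4, 5e-4

def T_tilde(q):
    """Eq. (5): Fourier transform of the double-slit function."""
    return 2*b/np.sqrt(2*np.pi) * np.sinc(q*b/2/np.pi) * np.cos(q*d/2)

def overlap(u1, u2):
    """Numerically evaluate  int dq  T~*(u1 - q) T~(u2 - q)  on a truncated grid."""
    q = np.linspace(-2e6, 2e6, 400001)
    dq = q[1] - q[0]
    return np.sum(np.conj(T_tilde(u1 - q))*T_tilde(u2 - q))*dq

# Identity: the overlap depends only on u2 - u1 and equals sqrt(2*pi)*T~(u2 - u1)
for u1, u2 in [(0.0, 0.0), (1e3, 3e3), (-2e3, 5e3)]:
    assert np.isclose(overlap(u1, u2), np.sqrt(2*np.pi)*T_tilde(u2 - u1), rtol=1e-2)
```

The case $u_{1}=u_{2}$ is just Parseval's theorem: $\int|\widetilde{T}|^{2}dq=\int T^{2}dx=2b$, matching $\sqrt{2\pi}\,\widetilde{T}(0)=2b$.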
This makes the one-photon intensity distribution in the detection plane $P_{2}$ homogeneous, $M_{m}(x,x)=\eta_{m}\widetilde{T}(0)$, so that the one-photon double-slit interference disappears completely. In this limit, the second-order correlation (35) is obtained to be $$G^{(2)}(x_{1},x_{2})=\eta_{i}\eta_{s}\left\{\widetilde{T}^{2}(0)+\delta_{is}\widetilde{T}^{2}[\frac{k}{f}(x_{2}-x_{1})]\right\}+\left|\xi_{is}\right|^{2}\widetilde{T}^{2}[\frac{k}{f}(x_{1}+x_{2})].$$ (39) The first term in braces comes from the two individual single-photon double-slit processes, which are now homogeneous. The second term in braces and the last term manifest the position correlation explicitly. If we fix one detector at a position and scan the other in the coincidence measurement, the observed interference fringe is the same as the single-photon one. To show the position correlation in the two-photon interference, we discuss two special observations, namely $x_{1}=x_{2}=x$ and $x_{1}=-x_{2}=x$. Equation (39) reduces to $$G^{(2)}(x,x)=\eta_{i}\eta_{s}(1+\delta_{is})\widetilde{T}^{2}(0)+\left|\xi_{is}\right|^{2}\widetilde{T}^{2}(\frac{k}{f}2x),$$ (40) and $$G^{(2)}(x,-x)=(\eta_{i}\eta_{s}+\left|\xi_{is}\right|^{2})\widetilde{T}^{2}(0)+\delta_{is}\eta_{i}\eta_{s}\widetilde{T}^{2}(\frac{k}{f}2x).$$ (41) The former exhibits a two-photon intensity distribution, and the latter the intensity correlation at a pair of symmetric positions. Both Eqs. (40) and (41) include a term $\widetilde{T}^{2}(\frac{k}{f}2x)$ which characterizes a sub-wavelength interference pattern, scaled by a factor of $\lambda/2$ in comparison with the ordinary interference shown in Eq. (9). Obviously, due to Eq. (40), the sub-wavelength interference in the two-photon intensity distribution can occur in both type I and type II crystals. However, Eq.
(41) shows that, for a type I crystal, when a pair of single-photon detectors are placed at symmetric positions and moved synchronously in opposite directions, the sub-wavelength interference can also be observed. This effect never happens in a type II crystal. According to Eqs. (40) and (41), the visibilities of the fringes designated by $G^{(2)}(x,x)$ and $G^{(2)}(x,-x)$ are calculated to be $$\mathcal{V}_{1}=\frac{1}{1+2(1+\delta_{is})\theta},$$ (42) and $$\mathcal{V}_{2}=\frac{1}{3+2/\theta},$$ (43) respectively, where $\theta\equiv\eta_{i}\eta_{s}/\left|\xi_{is}\right|^{2}$. Note that $\mathcal{V}_{2}$ is meaningful only for a type I crystal, for which $\theta\equiv\eta^{2}/\left|\xi\right|^{2}$. As the parameter $\theta$ increases from a very small value, $\mathcal{V}_{1}$ decreases monotonically from unity while $\mathcal{V}_{2}$ increases from zero up to 1/3. Since the parameter $\theta$ is related to the coupling strength $g$ of SPDC, we plot the visibilities as functions of $g$ in Figs. 3, in which $\mathcal{V}_{1}$ for type II and type I crystals is indicated by the solid and dashed lines, respectively, and $\mathcal{V}_{2}$ for the type I crystal is indicated by the dotted line. At weak SPDC coupling, which approximately generates a two-photon entangled state, the visibility $\mathcal{V}_{1}$ approaches unity for both type I and type II crystals. The sub-wavelength interference in weak SPDC has been observed experimentally.fon1 ,shih2 ,shi ,eda Importantly, Figs. 3 show that the sub-wavelength interference can still exist, with substantial visibility, even at very high SPDC gain, at which the beams contain a large number of photons. On the other hand, at strong coupling in a type I crystal, the sub-wavelength interference can occur in the joint-intensity measurement at a pair of symmetric positions.
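The limiting behavior of Eqs. (42) and (43) can be sketched directly (plain Python; the $\theta$ values below are illustrative, since $\theta$ depends on the gain through the integrals $\eta$ and $\xi$): for weak SPDC ($\theta\rightarrow 0$) the two-photon fringe visibility $\mathcal{V}_{1}$ approaches unity while $\mathcal{V}_{2}$ vanishes, and $\mathcal{V}_{2}$ saturates at 1/3 for large $\theta$.

```python
def V1(theta, type_I=True):
    """Eq. (42): visibility of the sub-wavelength fringe in G2(x, x)."""
    delta = 1 if type_I else 0
    return 1.0 / (1.0 + 2.0*(1.0 + delta)*theta)

def V2(theta):
    """Eq. (43): visibility of the symmetric joint-intensity fringe (type I only)."""
    return 1.0 / (3.0 + 2.0/theta)

assert abs(V1(1e-6, type_I=True) - 1.0) < 1e-4     # weak SPDC: near-perfect V1
assert abs(V1(1e-6, type_I=False) - 1.0) < 1e-4
assert V2(1e-6) < 1e-6                              # ...while V2 vanishes
assert abs(V2(1e9) - 1/3) < 1e-8                    # V2 saturates at 1/3
assert V1(1.0, type_I=True) == V2(1.0)              # the two coincide at theta = 1
```

This makes the competition described in the text explicit: for a type I crystal, lowering $\theta$ raises $\mathcal{V}_{1}$ at the expense of $\mathcal{V}_{2}$, and vice versa.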
Equations (42) and (43) show that, for a type I crystal, the two observations of sub-wavelength interference compete in visibility: when $\mathcal{V}_{1}$ approaches unity, $\mathcal{V}_{2}$ vanishes. However, at very high SPDC gain in a type I crystal, the two visibilities both equal about 25%. In the opposite limit, we assume that the SPDC has a very narrow bandwidth, $q_{0}\ll 2\pi/d$. In the extreme case $q_{0}\rightarrow 0$, the transfer coefficient $V_{m}(q,\Omega)$ ($m=s,i$) tends to a delta function, $$V_{m}(q,\Omega)\rightarrow V_{m}(0,\Omega)\delta(q).$$ (44) Equations (34) and (36) are then written as $$M_{m}(x_{1},x_{2})=\frac{1}{\sqrt{2\pi}}\eta_{m}\widetilde{T}(\frac{kx_{1}}{f})\widetilde{T}^{\ast}(\frac{kx_{2}}{f}),$$ (45) and $$N_{mn}(x_{1},x_{2})=\frac{1}{\sqrt{2\pi}}\xi_{mn}\widetilde{T}(\frac{kx_{1}}{f})\widetilde{T}(\frac{kx_{2}}{f})\text{,}$$ (46) respectively. In this limit, the position correlation disappears completely. The second-order correlation is then $$G^{(2)}(x_{1},x_{2})=\frac{1}{2\pi}[(1+\delta_{is})\eta_{i}\eta_{s}+|\xi_{is}|^{2}]\widetilde{T}^{2}(\frac{kx_{1}}{f})\widetilde{T}^{2}(\frac{kx_{2}}{f}).$$ (47) Therefore, the one-photon intensity distribution $M_{m}(x,x)$ and the second-order correlation $G^{(2)}(x_{1},x_{2})$ in the plane $P_{2}$ are the same as in the coherent-state case. In the narrow-bandwidth limit, since each converted beam is monochromatic, the one-photon double-slit interference occurs with perfect visibility. On the other hand, the two monochromatic converted beams no longer have any correlation in the transverse wavevector, so the position correlation degrades completely in the two-photon interference. We plot the two-photon interference patterns for varying SPDC bandwidth $q_{0}$ in Figs. 4, in which Figs. 4a and 4b (4c and 4d) are for the type I (type II) crystal. In Figs.
4a and 4c, a low SPDC gain, $g=(1/2)\log 1.5$ (amplification rate 1.5), is taken in the two-photon intensity measurement, so that sub-wavelength interference with better visibility is achieved as the normalized bandwidth $q_{0}b/(2\pi)$ is increased. The two sets of patterns are very similar, except for a slightly higher intensity for the type I crystal. Figures 4b and 4d show the interference patterns for the joint-intensity measurement with two one-photon detectors at a pair of symmetric positions. The sub-wavelength interference can be observed only for the type I crystal when the SPDC gain is higher; for instance, $g=(1/2)\log 10$ (amplification rate 10) is taken in the figures. Though the visibilities are lower, the intensities of the patterns become much higher. This also happens in the case of the two-photon intensity measurement. The three plots of Figs. 4a-4c show that the bandwidth of SPDC governs the two-photon sub-wavelength interference. For a very small SPDC bandwidth, the two converted beams are disentangled in the transverse wavevector, so the nonclassical sub-wavelength interference disappears. V Double-Slit Interference in Stimulated Parametric Down-Conversion In the stimulated optical parametric process, a signal beam is injected into the nonlinear crystal and is then amplified; the nonlinear crystal becomes an optical parametric amplifier (OPA). We assume a stable plane-wave input beam in a coherent state $$\left\langle\widetilde{a}_{s}(q,\Omega)\right\rangle=2\pi A\delta(q-Q)\delta(\Omega),$$ (48) where $Q$ designates the transverse wavevector of the input beam, deviated from normal incidence. For a type II crystal, we set the input beam as the signal, which can be identified by its polarization, while the idler beam is in the vacuum state. For a type I crystal, the subscript $s$ in the input beam (48) is omitted. Considering the input beam described by Eq.
(48) instead of the vacuum state, we calculate the first-order correlation in the plane $P_{2}$, $$G_{m}^{(1)}(x_{1},x_{2})=W_{m}^{\ast}(x_{1},Q)W_{m}(x_{2},Q)+M_{m}(x_{1},x_{2}),\qquad(m=s,i)$$ (49) where $$W_{s}(x,Q)=A\sqrt{k/f}\,U_{s}(Q,0)\widetilde{T}(kx/f-Q),$$ (50a) $$W_{i}(x,Q)=A\sqrt{k/f}\,V_{i}(-Q,0)\widetilde{T}(kx/f+Q)$$ (50b) for a type II crystal, and $$W(x,Q)=A\sqrt{k/f}[U(Q,0)\widetilde{T}(kx/f-Q)+V(-Q,0)\widetilde{T}(kx/f+Q)]$$ (51) for a type I crystal. $M_{m}(x_{1},x_{2})$ is defined by Eq. (34). In Eq. (49), the first and second terms show the contributions of the stimulated and spontaneous processes, respectively, and they are independent in the one-photon interference pattern. Obviously, the stimulated part shows first-order coherence due to the separability of the spatial variables. When the input beam is strong enough, the spontaneous process can be neglected, and $G_{m}^{(1)}(x,x)\simeq|W_{m}(x,Q)|^{2}$ describes an amplified double-slit interference pattern compared with the case without the crystal. For a type II crystal, the input signal beam creates two interference patterns: one for the signal beam with the amplification rate $|U_{s}(Q,0)|^{2}$, and the other for the idler beam with the amplification rate $|V_{i}(-Q,0)|^{2}$.gigi2 According to Eqs. (50), these two patterns are the same as the ordinary fringe (see Eq. (9)) and can be identified by polarization and separated in space when the input beam is sufficiently tilted in incidence.
However, for the type I crystal, the one-photon interference pattern is written as $$|W(x,Q)|^{2}=\frac{kA^{2}}{f}\{|U(Q,0)|^{2}\widetilde{T}^{2}(\frac{kx}{f}-Q)+|V(-Q,0)|^{2}\widetilde{T}^{2}(\frac{kx}{f}+Q)+[U(Q,0)V^{\ast}(-Q,0)\widetilde{T}(\frac{kx}{f}-Q)\widetilde{T}(\frac{kx}{f}+Q)+\text{c.c.}]\}.$$ (52) The first and second terms describe the two interference patterns created by the two stimulated beams, i.e. the signal beam with the transverse wavevector $Q$ and the idler beam with the transverse wavevector $-Q$. The third term shows an additional coherent superposition of the two interferences of the converted beams. As a result, the interference pattern can differ from the ordinary one due to the "interference term" in the square brackets of Eq. (52). Only for normal incidence, $Q=0$, is the interference pattern the same as the ordinary one. Figures 5a and 5b show the density patterns of the one-photon interference obtained by varying the transverse wavevector of the input beam for the type I and type II crystals, respectively. In Fig. 5a, the fringes alternately appear and vanish as the transverse wavevector $Q$ of the input field is increased, until the two fringes are well apart. The vanishing is due to the destructive interference of the two indistinguishable stimulated beams when they are folded. For the type II crystal, in contrast, we put the two interference patterns for the signal and idler beams together in Fig. 5b, in which the right part, for the signal field, is stronger than the left part (idler). This corresponds to a measurement in which the detection is insensitive to polarization. Differently from the type I crystal, when the transverse wavevector $Q$ of the input field is increased, the bright and dark spots of the patterns exchange alternately until the two fringes are apart. This feature comes from the incoherent superposition of the two patterns for the signal and idler beams.
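To make the structure of Eq. (52) concrete, the sketch below (ours, for illustration only) evaluates $|W(x,Q)|^{2}$ up to the overall factor $kA^{2}/f$. It assumes the textbook double-slit aperture transform $\widetilde{T}(q)\propto\cos(qd/2)\,\mathrm{sinc}(qb/2)$ (the paper fixes only the ratio $b/d=0.2$) and illustrative phase-matched OPA gains $U=\cosh g$, $V=\sinh g$, which are assumptions; in the paper $U$ and $V$ depend on the transverse wavevector. At $Q=0$ the pattern collapses to $(U+V)^{2}\widetilde{T}^{2}(kx/f)$, i.e. the ordinary fringe shape, as stated in the text.

```python
import math

def T_tilde(q, b=0.2, d=1.0):
    # Fourier transform of a double-slit aperture: slit width b,
    # center-to-center separation d (hypothetical values; only the
    # ratio b/d = 0.2 is fixed by the paper).
    sinc = 1.0 if q == 0 else math.sin(q * b / 2) / (q * b / 2)
    return math.cos(q * d / 2) * sinc

def one_photon_pattern(x, Q, g=0.5, k_over_f=1.0):
    # |W(x, Q)|^2 of Eq. (52) for a type I crystal, up to the overall
    # factor k A^2 / f, with the assumed real gains U = cosh(g),
    # V = sinh(g); expanding the square reproduces the three terms
    # of Eq. (52), including the coherent cross term.
    U, V = math.cosh(g), math.sinh(g)
    amp = U * T_tilde(k_over_f * x - Q) + V * T_tilde(k_over_f * x + Q)
    return amp * amp
```

For $Q\neq 0$ the coherent cross term makes the pattern differ from the incoherent sum of the two fringes, which is the "onset/offset" alternation seen in Fig. 5a.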
We go through a long derivation, using the unitary transformation (24) and the bosonic commutation relations, and obtain the second-order correlation function $$G^{(2)}(x_{1},x_{2})=[\left|W_{i}(x_{1},Q)\right|^{2}+M_{i}(x_{1},x_{1})][\left|W_{s}(x_{2},Q)\right|^{2}+M_{s}(x_{2},x_{2})]+[W_{i}(x_{1},Q)W_{s}(x_{2},Q)N_{is}^{\ast}(x_{1},x_{2})+\text{c.c.}]+\left|N_{is}(x_{1},x_{2})\right|^{2}+\delta_{is}\{[W^{\ast}(x_{1},Q)W(x_{2},Q)M(x_{1},x_{2})+\text{c.c.}]+\left|M(x_{1},x_{2})\right|^{2}\}.$$ (53) Again, Eq. (53) can describe both types of crystals. For the type I crystal, the subscripts $i$ and $s$ are omitted and $\delta_{is}=1$, whereas for the type II crystal $\delta_{is}=0$. Similar to Eq. (35), the first term is the product of the two one-photon interferences for the converted beams. The second and the fourth terms exhibit the coupling of the interferences between the stimulated and spontaneous processes. This result is similar to the discussion of image amplification in optical parametric amplification, in which $W(x)$ describes an amplified image.wkg We note that the interferences coming from the stimulated process are uncorrelated in position. This reflects the fact that the stimulated process does not include entanglement in the transverse wavevectors of the two converted beams. Therefore, quantum lithography cannot be achieved in stimulated parametric amplification by injecting a coherent beam. In Fig. 6, we plot the density patterns of the two-photon interferences obtained by varying the input direction of the signal field. Figures 6a and 6b show the patterns of $G^{(2)}(x,x)$ and $G^{(2)}(x,-x)$ for the type I crystal, respectively. These two patterns are similar to those of the one-photon interference case, except that the sub-wavelength fringe coming from the spontaneous process appears in the central part. In Fig.
6a, the fringe of the signal part (the right side) is stronger than that of the idler part (the left side), whereas in Fig. 6b the pattern is mirror-symmetric, since exchanging the two detectors at a pair of symmetric positions makes no difference. Figures 6c and 6d show the patterns of $G^{(2)}(x,x)$ and $G^{(2)}(x,-x)$ for the type II crystal, respectively. Correspondingly, Fig. 6c is similar to the one-photon interference case, except for the spontaneous contribution. In Fig. 6d, we define $x$ as the transverse position of the detector sensitive to the polarization of the signal beam, so that it is comparable to the one-photon case for the signal beam (the right part in Fig. 5b). Furthermore, in Fig. 6d, the spontaneous process contributes a homogeneous background instead of the sub-wavelength fringe of Fig. 6c. If the two detectors in the joint-intensity measurement are polarization-insensitive, one observes a symmetric pattern which is the mirror-symmetrized version of Fig. 6d. VI Conclusions In summary, we formulate the first- and second-order correlation functions in Young's double-slit interference for both spontaneous and stimulated parametric down-conversion. The results reveal the relations between the first- and second-order correlations, and hence they can explain the complementarity of coherence and entanglement. We show that the nonclassical sub-wavelength two-photon interference can occur macroscopically in a general spontaneous parametric process. For a high gain of SPDC, in which the converted beams contain a huge number of photons, the sub-wavelength interference pattern is intense with a substantial visibility. This makes the quantum lithography technology practicable. Moreover, we find an alternative way to observe the sub-wavelength interference, for the type I crystal only, in which a joint-intensity measurement is performed by a pair of one-photon detectors placed at symmetric positions.
The advantage of this method is that quantum lithography with type I SPDC can be performed with two one-photon detectors instead of a two-photon detector, which may be unavailable. Since this effect occurs only at a higher gain of SPDC, it reflects the macroscopic quantum nature of the entangled beams containing a huge number of photons. The two ways of observation compete in visibility, so that at low gain of type I SPDC the interference in the first observation reaches perfect visibility while that in the second observation disappears. In the stimulated process, the one-photon and two-photon interference patterns generated by a pair of stimulated down-converted beams are alike. For the type I crystal, since the two converted beams are polarization-indistinguishable, the coherent superposition of the two converted beams causes a secondary interference which may fade the fringe when the two beams are not well apart. However, this effect does not exist for the type II crystal because of the distinguishability in polarization of the two stimulated beams. VII Acknowledgment This research was supported by the National Program of Fundamental Research No. 2001CB309310 and the National Natural Science Foundation of China, Project Nos. 10074008, 60278021 and 10174007. References (1) D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, Phys. Rev. Lett. 74, 3600 (1995). (2) P. H. S. Ribeiro, S. Pádua, J. C. M. da Silva, and G. A. Barbosa, Phys. Rev. A 49, 4176 (1994). (3) P. H. S. Ribeiro and G. A. Barbosa, Phys. Rev. A 54, 3489 (1996). (4) G. A. Barbosa, Phys. Rev. A 54, 4473 (1996). (5) E. J. S. Fonseca, C. H. Monken, and S. Pádua, Phys. Rev. Lett. 82, 2868 (1999). (6) E. J. S. Fonseca, C. H. Monken, S. Pádua, and G. A. Barbosa, Phys. Rev. A 59, 1608 (1999). (7) E. J. S. Fonseca, P. H. S. Ribeiro, S. Pádua, and C. H. Monken, Phys. Rev. A 60, 1530 (1999); E. J. S. Fonseca, Z. Paulini, P. Nussenzveig, C. H. Monken, and S. Pádua, Phys. Rev. A 63, 043819 (2001). (8) B. E. A.
Saleh, A. F. Abouraddy, A. V. Sergienko, and M. C. Teich, Phys. Rev. A 62, 043816 (2000). (9) M. D'Angelo, M. V. Chekhova, and Y. Shih, Phys. Rev. Lett. 87, 013602 (2001). (10) A. F. Abouraddy, M. B. Nasr, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, Phys. Rev. A 63, 063803 (2001). (11) A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, J. Opt. B: Quantum Semiclass. Opt. 3, S50 (2001). (12) S. P. Walborn, M. O. T. Cunha, S. Pádua, and C. H. Monken, Phys. Rev. A 65, 033818 (2002). (13) R. Shimizu, K. Edamatsu, and T. Itoh, Phys. Rev. A 67, 041805 (2003). (14) G. Brida, E. Cagliero, G. Falzetta, M. Genovese, M. Gramegna, and E. Predazzi, Phys. Rev. A 68, 033803 (2003). (15) A. Gatti, E. Brambilla, and L. A. Lugiato, Phys. Rev. Lett. 90, 133603 (2003). (16) J. Jacobson, G. Björk, I. Chuang, and Y. Yamamoto, Phys. Rev. Lett. 74, 4835 (1995). (17) A. N. Boto, P. Kok, D. S. Abrams, S. L. Braunstein, C. P. Williams, and J. P. Dowling, Phys. Rev. Lett. 85, 2733 (2000). (18) E. M. Nagasako, S. J. Bentley, R. W. Boyd, and G. S. Agarwal, Phys. Rev. A 64, 043802 (2001). (19) K. Edamatsu, R. Shimizu, and T. Itoh, Phys. Rev. Lett. 89, 213601 (2002). (20) M. I. Kolobov and L. A. Lugiato, Phys. Rev. A 52, 4930 (1995). (21) I. V. Sokolov, M. I. Kolobov, and L. A. Lugiato, Phys. Rev. A 60, 2420 (1999). (22) E. Brambilla, A. Gatti, M. Bache, and L. A. Lugiato, arXiv:quant-ph/0306116 (2003). (23) K. Wang, G. Yang, A. Gatti, and L. A. Lugiato, J. Opt. B: Quantum Semiclass. Opt. 5, S1 (2003). (24) A. Gatti, E. Brambilla, L. A. Lugiato, and M. I. Kolobov, Phys. Rev. Lett. 83, 1763 (1999). Captions of Figures: Fig. 1. Schemes of Young's double-slit interference with a convex lens: (a) a one-photon (two-photon) detector measures the one-photon (two-photon) intensity distribution; two one-photon detectors measure the joint-intensity distribution at a pair of symmetric positions.
(b) for type II crystal, the two-photon intensity distribution is measured by a polarizing beamsplitter (PBS) and two one-photon detectors. Fig. 2. One-photon (solid line) and two-photon (dashed line) double-slit interference patterns for a coherent beam. Fig. 3. Visibilities versus the gain of SPDC for different collinear phase mismatches: (a) $\Delta_{0}=-5.85$; (b) $\Delta_{0}=0$; and (c) $\Delta_{0}=5.85$. Solid and dashed lines designate $\mathcal{V}_{1}$ for type II and type I crystals, respectively; the dotted line designates $\mathcal{V}_{2}$ for type I crystal. Fig. 4. Two-photon interference patterns versus the normalized bandwidth of SPDC $q_{0}b/(2\pi)$: (a) $G^{(2)}(X,X)$ for type I crystal; (b) $G^{(2)}(X,-X)$ for type I crystal; (c) $G^{(2)}(X,X)$ for type II crystal; and (d) $G^{(2)}(X,-X)$ for type II crystal, where $X=xkb/(2\pi f)$ is the normalized transverse position in the detection plane. The gains are set as $g=(1/2)\log 1.5$ in (a) and (c), and $g=(1/2)\log 10$ in (b) and (d). The other parameters in Figs. 4-6 are taken as the phase matching $\Delta_{0}=0$, the ratio $b/d=0.2$, and $\frac{kb\Omega_{0}}{4\pi^{2}f}=1$ for an arbitrary unit of the correlation functions. Fig. 5. Stimulated one-photon interference patterns obtained by varying the normalized transverse wavevector of the input field $Qb/(2\pi)$ for (a) type I crystal and (b) type II crystal. The normalized bandwidth and the input intensity are taken as $q_{0}b/(2\pi)=2$ and $\frac{2kb^{2}}{\pi f}A^{2}=1$, respectively. Fig. 6. Stimulated two-photon interference patterns obtained by varying the normalized transverse wavevector of the input field $Qb/(2\pi)$: (a) $G^{(2)}(x,x)$ for type I crystal; (b) $G^{(2)}(x,-x)$ for type I crystal; (c) $G^{(2)}(x,x)$ for type II crystal; and (d) $G^{(2)}(x,-x)$ for type II crystal. The normalized bandwidth and the input intensity are the same as in Fig. 5.
LPENSL-TH-10/02 Large distance asymptotic behavior of the emptiness formation probability of the $XXZ$ spin-$\textstyle{\frac{1}{2}}$ Heisenberg chain N. Kitanine (Graduate School of Mathematical Sciences, University of Tokyo, Japan, [email protected]; on leave of absence from the Steklov Institute at St. Petersburg, Russia), J. M. Maillet (Laboratoire de Physique, UMR 5672 du CNRS, ENS Lyon, France, [email protected]), N. A. Slavnov (Steklov Mathematical Institute, Moscow, Russia, [email protected]), V. Terras (Department of Physics and Astronomy, Rutgers University, USA, [email protected]; on leave of absence from LPMT, UMR 5825 du CNRS, Montpellier, France) Abstract Using its multiple integral representation, we compute the large distance asymptotic behavior of the emptiness formation probability of the $XXZ$ spin-$\frac{1}{2}$ Heisenberg chain in the massless regime. 1 Emptiness formation probability at large distance The Hamiltonian of the $XXZ$ spin-$\frac{1}{2}$ Heisenberg chain is given by $$H=\sum_{m=1}^{M}\left(\sigma^{x}_{m}\sigma^{x}_{m+1}+\sigma^{y}_{m}\sigma^{y}_{m+1}+\Delta(\sigma^{z}_{m}\sigma^{z}_{m+1}-1)\right).$$ (1.1) Here $\Delta$ is the anisotropy parameter, and $\sigma^{x,y,z}_{m}$ denote the usual Pauli matrices acting on the quantum space at site $m$ of the chain. The emptiness formation probability $\tau(m)$ (the probability to find in the ground state a ferromagnetic string of length $m$) is defined as the following expectation value $$\tau(m)=\langle\psi_{g}|\prod_{k=1}^{m}\frac{1-\sigma_{k}^{z}}{2}|\psi_{g}\rangle,$$ (1.2) where $|\psi_{g}\rangle$ denotes the normalized ground state. In the thermodynamic limit ($M\to\infty$), this quantity can be expressed as a multiple integral with $m$ integrations [1, 2, 3, 4, 5]. Recently, in the article [6], a new multiple integral representation for $\tau(m)$ was obtained.
It leads in a direct way to the known answer at the free fermion point $\Delta=0$ [9], in particular using a saddle point method, and to its first exact determination outside the free fermion point, namely at $\Delta={\frac{1}{2}}$ [10]. The purpose of this letter is to present the evaluation of the asymptotic behavior of $\tau(m)$ at large distance $m$, in the massless regime $-1<\Delta<1$, via the saddle point method. We find $$\lim_{m\to\infty}\frac{\log\tau(m)}{m^{2}}=\log\frac{\pi}{\zeta}+\frac{1}{2}\int\limits_{\mathbb{R}-i0}\frac{d\omega}{\omega}\frac{\sinh\frac{\omega}{2}(\pi-\zeta)\cosh^{2}\frac{\omega\zeta}{2}}{\sinh\frac{\pi\omega}{2}\sinh\frac{\omega\zeta}{2}\cosh\omega\zeta},$$ (1.3) where $\cos\zeta=\Delta$, $0<\zeta<\pi$. If $\zeta$ is commensurate with $\pi$ (in other words, if $e^{i\zeta}$ is a root of unity), then the integral in (1.3) can be evaluated explicitly in terms of the $\psi$-function (the logarithmic derivative of the $\Gamma$-function). In particular, for $\zeta=\frac{\pi}{2}$ and $\zeta=\frac{\pi}{3}$ (respectively $\Delta=0$ and $\Delta=1/2$) we obtain from (1.3) $$\begin{array}[]{l}{\displaystyle\lim_{m\to\infty}\frac{\log\tau(m)}{m^{2}}=-\frac{1}{2}\log 2,\qquad\Delta=0,}\\ \rule{0.0pt}{30.0pt}{\displaystyle\lim_{m\to\infty}\frac{\log\tau(m)}{m^{2}}=\frac{3}{2}\log 3-3\log 2,\qquad\Delta=\frac{1}{2},}\end{array}$$ which coincides with the known results obtained respectively in [7, 8, 9] and in [11, 10] (note that the limit must be negative, since the probability $\tau(m)$ tends to zero). For the particular case of the $XXX$ chain ($\Delta=1$, $\zeta=0$) the asymptotic behavior can also be evaluated by the saddle point method, and it is given by $$\lim_{m\to\infty}\frac{\log\tau(m)}{m^{2}}=\log\left(\frac{\Gamma(\frac{3}{4})\Gamma(\frac{1}{2})}{\Gamma(\frac{1}{4})}\right)\approx\log(0.5991),$$ (1.4) which is in good agreement with the known numerical result $\log(0.598)$, obtained in [12]. Below, we explain the main features of our method. A more detailed account of the proofs and techniques involved will be published later.
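The special values quoted above can be cross-checked numerically from Eq. (1.3). Near $\omega=0$ the integrand behaves as $c/\omega^{2}$ with $c=2(\pi-\zeta)/(\pi\zeta)$; on the contour $\mathbb{R}-i0$ this pole term integrates to exactly zero (and the integrand is even, so there is no $1/\omega$ residue), hence subtracting it leaves an ordinary convergent integral over the real line. A minimal Python sketch of this check (ours, not part of the letter):

```python
import math

def _integrand(w, zeta):
    # Integrand of Eq. (1.3) with the double pole at w = 0 subtracted:
    # f(w) - c/w^2, c = 2(pi - zeta)/(pi zeta). The subtracted term
    # contributes exactly zero on the contour R - i0, so the value of
    # the contour integral is unchanged.
    num = math.sinh(w * (math.pi - zeta) / 2) * math.cosh(w * zeta / 2) ** 2
    den = w * math.sinh(math.pi * w / 2) * math.sinh(w * zeta / 2) * math.cosh(w * zeta)
    c = 2 * (math.pi - zeta) / (math.pi * zeta)
    return num / den - c / w ** 2

def efp_rate(zeta, a=1e-4, b=50.0, n=100000):
    # lim_{m -> oo} log tau(m) / m^2 from Eq. (1.3), composite Simpson
    # rule on [a, b]; the regularized integrand is even, so the full-line
    # integral is twice the half-line one, and the factor 1/2 in (1.3)
    # cancels it.
    h = (b - a) / n
    s = _integrand(a, zeta) + _integrand(b, zeta)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * _integrand(a + k * h, zeta)
    half_line = s * h / 3
    c = 2 * (math.pi - zeta) / (math.pi * zeta)
    half_line -= c / b  # analytic tail of the subtracted -c/w^2 term
    return math.log(math.pi / zeta) + half_line
```

With these (assumed) discretization parameters, `efp_rate(math.pi/2)` and `efp_rate(math.pi/3)` reproduce $-\frac{1}{2}\log 2$ and $\frac{3}{2}\log 3-3\log 2$ to a few decimal places.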
2 The saddle point method The multiple integral representation for $\tau(m)$ obtained in [6] can be written in the form $$\tau(m)=\left(\frac{i}{2\zeta\sin\zeta}\right)^{m}\left(\frac{\pi}{\zeta}\right)^{\frac{m^{2}-m}{2}}\int\limits_{\cal D}d^{m}\lambda\cdot F(\{\lambda\},m)\times\prod\limits_{a>b}^{m}\frac{\sinh\frac{\pi}{\zeta}(\lambda_{a}-\lambda_{b})}{\sinh(\lambda_{a}-\lambda_{b}-i\zeta)\sinh(\lambda_{a}-\lambda_{b}+i\zeta)}\prod\limits_{a=1}^{m}\left(\frac{\sinh(\lambda_{a}-\frac{i\zeta}{2})\sinh(\lambda_{a}+\frac{i\zeta}{2})}{\cosh\frac{\pi}{\zeta}\lambda_{a}}\right)^{m},$$ (2.1) with $$F(\{\lambda\},m)=\lim_{\xi_{1},\dots,\xi_{m}\to-\frac{i\zeta}{2}}\frac{1}{\prod\limits_{a>b}^{m}\sinh(\xi_{a}-\xi_{b})}{\det}_{m}\left(\frac{-i\sin\zeta}{\sinh(\lambda_{j}-\xi_{k})\sinh(\lambda_{j}-\xi_{k}-i\zeta)}\right).$$ (2.2) Here the integration domain ${\cal D}$ is $-\infty<\lambda_{1}<\lambda_{2}<\cdots<\lambda_{m}<\infty$. Following the standard arguments of the saddle point method, we estimate the integral (2.1) by the maximal value of the integrand. Let $\{\lambda^{\prime}\}$ be the set of parameters corresponding to this maximum. They satisfy the saddle point equations, and for large $m$ we assume that their distribution can be described by a density function $\rho(\lambda^{\prime})$: $$\rho(\lambda^{\prime}_{j})=\lim_{m\to\infty}\frac{1}{m(\lambda^{\prime}_{j+1}-\lambda^{\prime}_{j})}.$$ (2.3) Thus for large $m$, one can replace sums over the set $\{\lambda^{\prime}\}$ by integrals.
Namely, if $f(\lambda)$ is integrable on the real axis, then $$\begin{array}[]{l}{\displaystyle\frac{1}{m}\sum_{j=1}^{m}f(\lambda^{\prime}_{j})\to\int_{-\infty}^{\infty}f(\lambda)\rho(\lambda)\,d\lambda,}\\ \rule{0.0pt}{30.0pt}{\displaystyle\frac{1}{m}\sum_{j=1\atop{j\neq k}}^{m}\frac{f(\lambda^{\prime}_{j})}{\lambda^{\prime}_{j}-\lambda^{\prime}_{k}}\to V.P.\int_{-\infty}^{\infty}\frac{f(\lambda)}{\lambda-\lambda^{\prime}_{k}}\rho(\lambda)\,d\lambda,}\end{array}\qquad\qquad\qquad m\to\infty.$$ Due to (2.3), it is easy to see that at the point $\lambda^{\prime}_{1},\dots,\lambda^{\prime}_{m}$ the products in the second line of (2.1) behave as $\exp(c\,m^{2})$. Our goal is now to estimate the behavior of the term $F(\{\lambda^{\prime}\},m)$. To do this we factorize the determinant in (2.2) as follows for large $m$: $${\det}_{m}\left(\frac{-i\sin\zeta}{\sinh(\lambda^{\prime}_{j}-\xi_{k})\sinh(\lambda^{\prime}_{j}-\xi_{k}-i\zeta)}\right)=(-2\pi i)^{m}{\det}_{m}\Bigl{(}\delta_{jk}-\frac{K(\lambda^{\prime}_{j}-\lambda^{\prime}_{k})}{2\pi im\rho(\lambda^{\prime}_{k})}\Bigr{)}{\det}_{m}\Bigl{(}\frac{i}{2\zeta\sinh\frac{\pi}{\zeta}(\lambda^{\prime}_{j}-\xi_{k})}\Bigr{)},$$ (2.4) with $$K(\lambda)=\frac{i\sin 2\zeta}{\sinh(\lambda-i\zeta)\sinh(\lambda+i\zeta)}.$$ (2.5) Indeed, for $m\to\infty$ one has $${\det}_{m}\Bigl{(}\delta_{jk}-\frac{K(\lambda^{\prime}_{j}-\lambda^{\prime}_{k})}{2\pi im\rho(\lambda^{\prime}_{k})}\Bigr{)}{\det}_{m}\Bigl{(}\frac{i}{2\zeta\sinh\frac{\pi}{\zeta}(\lambda^{\prime}_{j}-\xi_{k})}\Bigr{)}={\det}_{m}\Bigl{(}\frac{i}{2\zeta\sinh\frac{\pi}{\zeta}(\lambda^{\prime}_{j}-\xi_{k})}-\sum_{l=1}^{m}\frac{K(\lambda^{\prime}_{j}-\lambda^{\prime}_{l})}{2\pi im\rho(\lambda^{\prime}_{l})}\frac{i}{2\zeta\sinh\frac{\pi}{\zeta}(\lambda^{\prime}_{l}-\xi_{k})}\Bigr{)}$$ (2.6) $$\longrightarrow{\det}_{m}\Bigl{(}\frac{i}{2\zeta\sinh\frac{\pi}{\zeta}(\lambda^{\prime}_{j}-\xi_{k})}-\int_{-\infty}^{\infty}\frac{K(\lambda^{\prime}_{j}-\mu)}{2\pi i}\frac{i\,d\mu}{2\zeta\sinh\frac{\pi}{\zeta}(\mu-\xi_{k})}\Bigr{)}=\left(\frac{1}{2\pi}\right)^{m}{\det}_{m}\left(\frac{\sin\zeta}{\sinh(\lambda^{\prime}_{j}-\xi_{k})\sinh(\lambda^{\prime}_{j}-\xi_{k}-i\zeta)}\right).$$ Here we have used the fact that the function $i/2\zeta\sinh\frac{\pi}{\zeta}(\lambda_{j}-\xi)$ solves the Lieb integral equation for the density of the ground state of the $XXZ$ magnet [13] (we have used the notations of [6]). The second determinant in the r.h.s. of (2.4) is a Cauchy determinant, hence $$F(\{\lambda^{\prime}\},m)=(-i)^{m}\left(\frac{\pi}{\zeta}\right)^{\frac{m^{2}+m}{2}}\frac{\prod\limits_{a>b}^{m}\sinh\frac{\pi}{\zeta}(\lambda^{\prime}_{a}-\lambda^{\prime}_{b})}{\prod\limits_{a=1}^{m}\cosh^{m}\frac{\pi}{\zeta}\lambda^{\prime}_{a}}\cdot{\det}_{m}\Bigl{(}\delta_{jk}-\frac{K(\lambda^{\prime}_{j}-\lambda^{\prime}_{k})}{2\pi im\rho(\lambda^{\prime}_{k})}\Bigr{)}.$$ (2.7) The behavior of the determinant in (2.7) can be estimated via the Hadamard inequality $$|{\det}_{m}(a_{jk})|\leq(\max|a_{jk}|)^{m}m^{\frac{m}{2}},$$ (2.8) applied to the above determinant and to the determinant of the inverse matrix, which shows that $$\lim_{m\to\infty}\frac{1}{m^{2}}\log{\det}_{m}\left(\delta_{jk}-\frac{K(\lambda^{\prime}_{j}-\lambda^{\prime}_{k})}{2\pi im\rho(\lambda^{\prime}_{k})}\right)=0.$$ (2.9) The last equation means that ${\det}_{m}\left(\delta_{jk}-K(\lambda^{\prime}_{j}-\lambda^{\prime}_{k})/{2\pi im\rho(\lambda^{\prime}_{k})}\right)$ does not contribute to the leading term of the asymptotics. Hence, it can be excluded from our considerations.
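The bound (2.8) follows from the classical Hadamard inequality: each row of an $m\times m$ matrix has Euclidean norm at most $\sqrt{m}\,\max|a_{jk}|$, and $|\det|$ is bounded by the product of the row norms. A minimal numerical illustration (ours, for illustration only):

```python
import numpy as np

def hadamard_bound(a):
    # Right-hand side of Eq. (2.8): (max |a_jk|)^m * m^(m/2).
    m = a.shape[0]
    return np.abs(a).max() ** m * m ** (m / 2)

# Illustration on a random 6 x 6 matrix: |det a| never exceeds the bound.
rng = np.random.default_rng(seed=0)
a = rng.standard_normal((6, 6))
det = abs(np.linalg.det(a))
bound = hadamard_bound(a)
```

Applying the bound to the matrix in (2.9) and to its inverse gives $|\log\det| \le m\log(\dots) + \frac{m}{2}\log m = o(m^{2})$, which is the statement used in the text.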
Thus, up to subleading corrections of the exponential type, the emptiness formation probability behaves as $$\tau(m)\longrightarrow\left(\frac{\pi}{\zeta}\right)^{m^{2}}e^{m^{2}S_{0}},\qquad m\to\infty,$$ (2.10) with $$S_{0}\equiv S(\{\lambda^{\prime}\})=\frac{1}{m^{2}}\sum_{a>b}^{m}\log\left(\frac{\sinh^{2}\frac{\pi}{\zeta}(\lambda^{\prime}_{a}-\lambda^{\prime}_{b})}{\sinh(\lambda^{\prime}_{a}-\lambda^{\prime}_{b}-i\zeta)\sinh(\lambda^{\prime}_{a}-\lambda^{\prime}_{b}+i\zeta)}\right)+\frac{1}{m}\sum_{a=1}^{m}\log\left(\frac{\sinh(\lambda^{\prime}_{a}-i\zeta/2)\sinh(\lambda^{\prime}_{a}+i\zeta/2)}{\cosh^{2}\frac{\pi}{\zeta}\lambda^{\prime}_{a}}\right).$$ (2.11) Here the parameters $\{\lambda^{\prime}\}$ are the solutions of the saddle point equations $$\frac{\partial S_{0}}{\partial\lambda^{\prime}_{j}}=0.$$ (2.12) In our case the system (2.12) has the form $$\frac{2\pi}{\zeta}\tanh\frac{\pi\lambda^{\prime}_{j}}{\zeta}-\coth(\lambda^{\prime}_{j}-i\zeta/2)-\coth(\lambda^{\prime}_{j}+i\zeta/2)=\frac{1}{m}\sum_{k=1\atop{k\neq j}}^{m}\left(\frac{2\pi}{\zeta}\coth\frac{\pi}{\zeta}(\lambda^{\prime}_{j}-\lambda^{\prime}_{k})-\coth(\lambda^{\prime}_{j}-\lambda^{\prime}_{k}-i\zeta)-\coth(\lambda^{\prime}_{j}-\lambda^{\prime}_{k}+i\zeta)\right).$$ (2.13) Using (2.3) we transform (2.13) into the integral equation for the density $\rho(\lambda)$ $$\frac{2\pi}{\zeta}\tanh\frac{\pi\lambda}{\zeta}-\coth(\lambda-i\zeta/2)-\coth(\lambda+i\zeta/2)=V.P.\int_{-\infty}^{\infty}\left(\frac{2\pi}{\zeta}\coth\frac{\pi}{\zeta}(\lambda-\mu)-\coth(\lambda-\mu-i\zeta)-\coth(\lambda-\mu+i\zeta)\right)\rho(\mu)\,d\mu.$$ (2.14) Accordingly, the action $S_{0}$ takes the form
$$S_{0}=\int_{-\infty}^{\infty}d\lambda\,\rho(\lambda)\log\left(\frac{\sinh(\lambda-i\zeta/2)\sinh(\lambda+i\zeta/2)}{\cosh^{2}\frac{\pi}{\zeta}\lambda}\right)+\frac{1}{2}\int_{-\infty}^{\infty}d\mu\,d\lambda\,\rho(\lambda)\rho(\mu)\log\left(\frac{\sinh^{2}\frac{\pi}{\zeta}(\lambda-\mu)}{\sinh(\lambda-\mu-i\zeta)\sinh(\lambda-\mu+i\zeta)}\right).$$ (2.15) Since the kernel of the integral operator in (2.14) depends on the difference of the arguments, this equation can be solved via Fourier transform. Then $$\hat{\rho}(\omega)=\int_{-\infty}^{\infty}e^{i\omega\lambda}\rho(\lambda)\,d\lambda=\frac{\cosh\frac{\omega\zeta}{2}}{\cosh\omega\zeta}.$$ (2.16) Making the inverse Fourier transform we find $$\rho(\lambda)=\frac{\cosh\frac{\pi\lambda}{2\zeta}}{\zeta\sqrt{2}\cosh\frac{\pi\lambda}{\zeta}},$$ (2.17) which satisfies the required normalisation condition for the density (its integral over the real axis equals one). It remains to substitute (2.16), (2.17) into (2.15), and after straightforward calculations we arrive at $$S_{0}=\frac{1}{2}\int\limits_{\mathbb{R}-i0}\frac{d\omega}{\omega}\frac{\sinh\frac{\omega}{2}(\pi-\zeta)\cosh^{2}\frac{\omega\zeta}{2}}{\sinh\frac{\pi\omega}{2}\sinh\frac{\omega\zeta}{2}\cosh\omega\zeta}.$$ (2.18) Thus, we have obtained (1.3). In the case of the $XXX$ chain ($\Delta=1$) one should rescale $\lambda_{j}\to\zeta\lambda_{j}$, $\xi_{j}\to\zeta\xi_{j}$ in the original multiple integral representation (2.1) for $\tau(m)$ and then proceed to the limit $\zeta\to 0$. The remaining computations are then very similar to the ones described above, therefore we present here only the main results.
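As a quick numerical sanity check (ours, not part of the letter), the density (2.17) should indeed integrate to one, and its Fourier transform should reproduce the closed form (2.16). A short Python sketch using a composite Simpson rule:

```python
import math

def rho(lam, zeta):
    # Saddle-point density, Eq. (2.17).
    return math.cosh(math.pi * lam / (2 * zeta)) / (
        zeta * math.sqrt(2.0) * math.cosh(math.pi * lam / zeta))

def simpson(f, a, b, n=20000):
    # Composite Simpson rule on [a, b]; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

zeta = math.pi / 2
L = 20.0 * zeta          # rho decays like exp(-pi |lambda| / (2 zeta))
norm = simpson(lambda x: rho(x, zeta), -L, L)
w = 1.3                  # arbitrary test frequency (rho is even, so cos suffices)
ft = simpson(lambda x: math.cos(w * x) * rho(x, zeta), -L, L)
closed_form = math.cosh(w * zeta / 2) / math.cosh(w * zeta)
```

Here `norm` comes out equal to 1 and `ft` matches `closed_form` to high accuracy, confirming that (2.17) is the inverse Fourier transform of (2.16).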
The behavior of $\tau(m)$ is now given by $$\tau(m)\longrightarrow\pi^{m^{2}}e^{m^{2}S_{0}},\qquad m\to\infty.$$ (2.19) The action $S_{0}$ at the saddle point has the form $$S_{0}=\int_{-\infty}^{\infty}\log\left(\frac{(\lambda-i/2)(\lambda+i/2)}{\cosh^{2}\pi\lambda}\right)\rho(\lambda)\,d\lambda+\frac{1}{2}\int_{-\infty}^{\infty}d\mu\,d\lambda\,\rho(\lambda)\rho(\mu)\log\left(\frac{\sinh^{2}\pi(\lambda-\mu)}{(\lambda-\mu-i)(\lambda-\mu+i)}\right).$$ (2.20) The analog of the integral equation (2.14) in the $XXX$ case is $$2\pi\tanh\pi\lambda-\frac{2\lambda}{\lambda^{2}+\frac{1}{4}}=V.P.\int_{-\infty}^{\infty}\left(2\pi\coth\pi(\lambda-\mu)-\frac{2(\lambda-\mu)}{(\lambda-\mu)^{2}+1}\right)\rho(\mu)\,d\mu.$$ (2.21) The solution of this equation is $$\rho(\lambda)=\frac{\cosh\frac{\pi\lambda}{2}}{\sqrt{2}\cosh\pi\lambda}.$$ (2.22) Substituting (2.22) into (2.20) we finally arrive at (1.4). Acknowledgments N. K. is supported by JSPS grant P01177. N. K. would like to thank M. Jimbo for help. N. S. is supported by the grants RFBR-02-01-00484, the Foundation for the Support of Russian Science, Leading Scientific Schools 00-15-96046, the Program Nonlinear Dynamics and Solitons, and by CNRS. J. M. M. is supported by CNRS. V. T. is supported by DOE grant DE-FG02-96ER40959 and by CNRS. N. K., N. S. and V. T. would like to thank the Theoretical Physics group of the Laboratory of Physics at ENS Lyon for hospitality, which made this collaboration possible. We also would like to thank the organizers of the "6th International Workshop Conformal Field Theory and Integrable Models" held in Chernogolovka, September 15-21, 2002, for the nice and stimulating scientific (and extra-scientific) atmosphere they succeeded in generating. References [1] M. Jimbo, K. Miki, T. Miwa and A. Nakayashiki, Phys. Lett. A 168 (1992) 256. [2] M. Jimbo and T. Miwa, Journ. Phys. A: Math. Gen. 29 (1996) 2923. [3] M.
Jimbo and T. Miwa, Algebraic analysis of solvable lattice models (AMS, 1995). [4] N. Kitanine, J. M. Maillet and V. Terras, Nucl. Phys. B 554 [FS] (1999) 647; math-ph/9807020. [5] N. Kitanine, J. M. Maillet and V. Terras, Nucl. Phys. B 567 [FS] (2000) 554; math-ph/9907019. [6] N. Kitanine, J. M. Maillet, N. A. Slavnov and V. Terras, Nucl. Phys. B 641 [FS] (2002) 487; hep-th/0201045. [7] A. R. Its, A. G. Izergin, V. E. Korepin and N. A. Slavnov, Phys. Rev. Lett. 70 (1993) 1704. [8] M. Shiroishi, M. Takahashi and Y. Nishiyama, Emptiness Formation Probability for the One-Dimensional Isotropic $XY$ Model, cond-mat/0106062. [9] N. Kitanine, J. M. Maillet, N. A. Slavnov and V. Terras, Correlation functions of the $XXZ$ spin-$\scriptstyle{\frac{1}{2}}$ Heisenberg chain at the free fermion point from their multiple integral representations, hep-th/0203169, to appear in Nucl. Phys. B, 2002. [10] N. Kitanine, J. M. Maillet, N. A. Slavnov and V. Terras, J. Phys. A: Math. Gen. 35 (2002) L385-L388; hep-th/0201134. [11] A. V. Razumov and Yu. G. Stroganov, J. Phys. A: Math. Gen. 34 (2001) 3185-90; cond-mat/0012141. [12] H. E. Boos, V. E. Korepin, Y. Nishiyama and M. Shiroishi, Quantum Correlations and Number Theory, cond-mat/0202234. [13] E. Lieb, T. Schultz and D. Mattis, Ann. Phys. 16 (1961) 407.
Characteristic noise features in light transmission across membrane protein undergoing photocycle Anshuman J. Das    Sabyasachi Mukhopadhyay    K. S. Narayan [email protected] Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur, Bangalore-560064, India (November 20, 2020) Abstract We demonstrate a technique based on noise measurements which can be utilized to study dynamical processes in protein assemblies. Direct visualization of dynamics in membrane protein systems such as bacteriorhodopsin (bR) upon photostimulation is quite challenging. bR represents a model system where the stimulus-triggered structural dynamics and biological functions are directly correlated. Our method utilizes a pump-probe near-field microscopy method in the transmission mode and involves analyzing the transmittance fluctuations from a finite-size molecular assembly. Probability density distributions indicating the effects of finite size and statistical correlations appear as a characteristic frequency distribution in the noise spectra of bR, whose origin can be traced to the photocycle kinetics. Valuable insight into the molecular processes was obtained from the noise studies of bR and its mutant D96N as a function of external parameters such as temperature, humidity, or the presence of an additional pump source. To appear in the Journal of Chemical Physics, Vol. 134, Issue 6 I INTRODUCTION Noise features from a functioning system can reveal valuable insight into processes occurring over several time scales (Chen and Yu, 2007). Stochastic fluctuations are observed in systems such as gene expression, proteins, and nano-medicine and can be used to probe and characterize gene circuits and phase transitions (Raser and O'Shea, 2005; Simpson et al., 2009).
Identifying sources of noise, and controlling and processing noise-related data using sophisticated techniques based on frequency-domain analysis, provides insight into the molecular information that controls the system (Cox et al., 2008; Rao, Wolf, and Arkin, 2002; Grima, 2010a). Noise as a tool to study electronic events and processes at microscopic length scales is extensively utilized in condensed matter physics. There are, however, not many situations where noise studies of optical absorption processes at room temperature have proven to be useful. Noise in biological processes is usually dealt with at the macroscopic-systems level, and its correlation to molecular events is rarely emphasized. Fluctuations in a small system are expected to be pronounced, and we utilize that aspect by choosing a model biomolecular organization with the appropriate system parameters. Typical fluctuations from finite-sized and correlated systems yield probability density functions (PDF) that deviate from Gaussian or Lorentzian distributions (Montroll and Shlesinger, 1982; Hill, Dissado, and Jackson, 1981). Kinetics associated with biochemical reactions can be modeled based on modified Langevin equations (LEQ). The solution of the LEQ, expressed in terms of reaction rates for mRNA and protein systems, has been used to obtain noise frequency ranges which in turn are related to the decay rates (Simpson, Cox, and Sayler, 2004). It has also been shown that fluctuations in small systems are larger in magnitude and can become comparable to the mean values of the variables (Simpson et al., 2009). Hence there is a need to modify the rate equations to accurately model finite systems. Effective mesoscopic rate equations (EMRE) have been formulated to account for the breakdown of the linear noise approximation (van Kampen, 2007; Grima, 2010b; Thomas, Straube, and Grima, 2010).
Bacteriorhodopsin (bR), a retinal protein, is an ideal system to demonstrate the utility of noise studies, as it can be modeled by a 2-state-model LEQ in spite of a complex photocycle. The longest time constants are in the ms-s range and can be controlled by external parameters. Finite or small system sizes are essentially defined by the film thickness and the near-field scanning optical microscope (NSOM) probe geometry controlling the excitation volume. The NSOM-based sub-diffraction-limit imaging technique utilizes narrow optical fiber tips to image single molecules and nano-systems (Dunn, 1999). It was shown in our laboratory that it is possible to monitor the intermediate states of bR using transmission NSOM (Arun, Mukhopadhyay, and Narayan, 2010). The light-driven proton pump mechanism in bR is initiated by the isomerization of the retinal protein from the all-trans to the 13-cis configuration, followed by the formation of a series of intermediate states, referred to as the J, K, L, M, N and O states (Stoeckenius, Lozier, and Bogomolni, 1979; Lanyi, 2004). The M state (absorption maximum at wavelength $\lambda$ $\sim$ 412 nm), representing the deprotonated state, has the highest population among all the intermediate states and exhibits the strongest spectral shift following photo-excitation at $\lambda$ $\sim$ 570 nm; it finally decays thermally or upon blue light illumination. The distinct spectroscopic signature of the intermediate states in the photocycle of bR provides the platform to observe fluctuations associated with the different molecular states. To our knowledge, the intensity-fluctuation time series in the transmittance of light through a molecular assembly has not been analyzed and related to internal molecular changes, although single-molecule spectroscopic techniques have been demonstrated for fluorescent systems (Barkai, Jung, and Silbey, 2004).
A probable reason is the experimental constraint of the non-availability of systems with an appropriate inherent signal-to-noise ratio within the accessible data acquisition rates. The advent of NSOM techniques overcomes some of these constraints, and in the process signals from an assembly consisting of $<$ 600 molecules can be closely studied with a combination of probe and pump beams (SN.1)sup . A distinct time-series signal riding on a noise feature of sizable amplitude describes the transmission fluctuations Tr(t). The characteristic signature of bR, in the form of pump-induced absorption, is observed to accompany changes in Tr(t). Tr(t), upon transformation to the frequency domain, provides a consistent picture of the events. We implement an algorithm to process the data acquired over large time scales to arrive at a robust representation of the frequency distribution. II THEORY The kinetics of the reaction initiated by the photoexcitation of the ground state of bR is reasonably well understood. The photocycle kinetics, primarily the M-state lifetimes in bR, are known to be affected by pH conditions, an additional light pump at $\lambda_{405}$ corresponding to the M-state excitation, humidity, temperature and the presence of metal nanoparticlesBiesso et al. (2009). These changes introduced by external factors are manifested as peak and linewidth shifts in the noise frequency profile. A quantitative handle on the problem can be obtained using a simplified model where the bR molecule is reduced to a system with two excited states (Fig. 1a), where B, M and N represent the ground state, the excited state, and a long-lived intermediate, respectively, with a finite reversible rate and the associated rate constants $k^{f}_{i}(B\rightarrow M)$ and $k^{r}_{i}(M\rightarrow B)$ for the $i^{th}$ molecule. Here $k^{f}_{i}=\alpha(\lambda)I$, where $\alpha(\lambda)$ is the absorption coefficient and $I$ is the intensity.
$k^{r}_{i}$, on the other hand, is the inverse of the M-state lifetime ($\tau_{M}$). A set of LEQ can be formulated for the two-state model, with a noise term added to account for the nondeterministic outcome: $dN_{B}/dt=-k^{f}_{i}N_{B}+k^{r}_{i}N_{M}+\phi_{B}(t)$ and $dN_{M}/dt=k^{f}_{i}N_{B}-k^{r}_{i}N_{M}+\phi_{M}(t)$, where $N_{B}$ and $N_{M}$ are the populations of the B and M states of the same set of molecules, and $\phi_{B}$ and $\phi_{M}$ are noise terms. The terms $\phi_{B}$ and $\phi_{M}$ could be white noise with or without correlations; it should be mentioned that the choice of an Ornstein–Uhlenbeck process with correlations can lead to more realistic solutions. In the presence of $\lambda_{405}$ illumination exciting the M intermediate state, these equations are further modified by the additional pump rate. The noise terms in these equations can have a characteristic noise frequency range, analogous to the approach of Simpson et alSimpson, Cox, and Sayler (2004). The PDF can now be attributed predominantly to the M-state lifetime and can assume a sum of Lorentzian profiles (modified by a normal distribution). In finite systems the trimers in bR are known not to be simultaneously excited, owing to heterogeneityShibata et al. (2010) (Fig. 1b). The fluctuations arising from these sources are further modified by cooperative effects, random excitation and decay of molecules, reversible states in the photocycle and thermally driven transitions. Heterogeneity is averaged out in bulk systems, but in a finite-size ensemble it plays a significant role in extracting useful molecular informationQuaranta and Garbett (2010). The general principles of the method may be applicable to other analogous chromophore-containing membrane-protein systems. III MATERIALS AND METHODS Wild-type bR (WT-bR) films of different thickness were obtained from aqueous suspensions of purple membrane patches ($\sim$ 0.5 mg/ml, pH $\sim$ 9.2)(SF.1,2)sup ; He et al.
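A minimal numerical sketch of these two-state LEQ (not the authors' code; the rates $k^{f}$, $k^{r}$, the ensemble size and the noise amplitude are illustrative assumptions, and a single number-conserving white-noise term stands in for independent $\phi_{B}$, $\phi_{M}$):

```python
import numpy as np

# Euler–Maruyama sketch of the two-state Langevin equations
#   dN_B/dt = -kf*N_B + kr*N_M + phi_B(t)
#   dN_M/dt =  kf*N_B - kr*N_M + phi_M(t)
# with phi_M = -phi_B so that N_B + N_M stays conserved.
def simulate_two_state(kf=10.0, kr=2.0, n_total=600.0, noise_amp=1.0,
                       dt=1e-4, n_steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    nb, nm = n_total, 0.0
    nm_trace = np.empty(n_steps)
    for i in range(n_steps):
        phi = noise_amp * rng.normal() * np.sqrt(dt)   # white-noise kick
        step = (-kf * nb + kr * nm) * dt + phi
        nb += step
        nm -= step
        nm_trace[i] = nm
    return nm_trace

trace = simulate_two_state()
# after relaxation, N_M/N_total approaches kf/(kf + kr) ~ 0.83
print(trace[-10_000:].mean() / 600.0)
```

The stationary M-state fraction follows the balance ratio $k^{f}/(k^{f}+k^{r})$; adding an extra decay channel for M would mimic the effect of the $\lambda_{405}$ pump described in the text.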
(2005). These films were characterized by absorption spectroscopy and transient absorption to ascertain bR features (SF.3)sup . After the films were prepared on quartz substrates, the topography (AFM) (SF.4)sup and the corresponding transmission (NSOM) map of the sample segment with a probe laser (532 nm) were simultaneously recorded (SF.5)sup . Once the bR patch was located, the tip was positioned onto a point on the patch using software-controlled stepper motors and the piezo stage. Typically bR patches have lateral dimensions ranging from 500 nm to 2 $\mu m$ (for multilayers) and heights ranging from 7 nm (monolayer) to about 200 nm (30 layers). The NSOM tip (100 nm diameter) illuminates only a fraction of the 500 nm to 2 $\mu m$ bR patch. The photoexcitation volume is governed by the areal coverage of the NSOM tip and the thickness of the patch. A typical Tr(t) trace for WT-bR (Fig. 2a (inset)) was obtained using a customized Nanonics MultiView 4000 NSOM with cantilever tip geometry (SF.6)sup . In a typical experiment, a continuous-wave laser (probe) was coupled to the NSOM tip ($\sim$ 100 nm) and the transmitted optical signal from a bR patch was measured using a photomultiplier tube (PMT) through an objective with magnification 50x and N.A. = 0.45 (SF.7)sup . An additional pump beam (405 nm) could be introduced into the set-up, and measurements with appropriate filters were also carried out (SF.8,9)sup . Noise histograms for bare quartz do not reveal any distinct features even when the additional pump source is introduced (SF.10)sup . The noise histograms for the quartz substrate are primarily a measure of source and detector fluctuations. Temperature variation was carried out by connecting conducting adhesive tapes from the scanning stage to a hot plate. Increasing the hot-plate temperature heated the sample stage in the range 296 K to 318 K. The temperature at the sample was calibrated with a digital thermometer.
A scan was carried out at room temperature and the transmission data were recorded. Subsequent scans were carried out as the temperature was increased. Humidity variation was carried out by enclosing the scanning stage in a chamber with a steam inlet (SF.11)sup . A source of steam was connected and the scan was then carried out. The humidity of the enclosure was independently verified and controlled by the duration of the steam flow into the chamber. Scans were typically carried out in steps of 30 minutes. The algorithms used to analyze the data were similar to the ones used in photon-counting experiments to determine the coherence times of pseudo-thermal sources (SN.2)Ricci et al. (2007). Data from the PMT were captured using a digital oscilloscope (LeCroy, Wave Runner 6100A) at a 250 kHz sampling rate for WT-bR and 500 Hz for the D96N mutant. About 100 consecutive data sets of 1 s (for WT-bR) and 10 s (for D96N) were captured as separate windows. For the WT-bR case each window was further split into 15 sub-windows, and a Fast Fourier Transform (FFT) was applied to each of them. The frequency corresponding to the maximum amplitude in the FFT signal was stored for all the windows. A histogram of the frequencies and their occurrences was then constructed, leaving out the DC contribution. IV RESULTS The Tr(t) traces are utilized to obtain noise histograms using a rigorous, standardized procedure. The autocorrelation function (ACF) obtained from Tr(t) for quartz and WT-bR samples of different patch heights clearly indicates the absence of correlations for the bare quartz and a sizable correlation for the bR region (Fig. 2a). The frequency distribution arrived at from Tr(t) for a WT-bR patch was consistently in the form of a lognormal-type distribution about a characteristic $\omega_{max}$.
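The windowing and peak-extraction procedure described above can be sketched as follows (a simplified reconstruction rather than the authors' exact code; the synthetic 600 Hz test tone is purely illustrative):

```python
import numpy as np

def peak_frequency_histogram(trace, fs, n_windows):
    """Split a trace into sub-windows, FFT each, and return the
    peak (non-DC) frequency of every window."""
    peaks = []
    for w in np.array_split(trace, n_windows):
        amp = np.abs(np.fft.rfft(w))
        amp[0] = 0.0                              # leave out the DC term
        freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
        peaks.append(freqs[np.argmax(amp)])
    return np.array(peaks)

# Synthetic check: a 600 Hz tone buried in white noise should dominate.
fs = 250_000                                      # WT-bR sampling rate
t = np.arange(fs) / fs                            # one 1 s acquisition window
rng = np.random.default_rng(1)
tr = np.sin(2 * np.pi * 600.0 * t) + 0.5 * rng.normal(size=fs)
peaks = peak_frequency_histogram(tr, fs, 15)      # 15 sub-windows, as in text
print(np.median(peaks))                           # clusters near 600 Hz
```

Histogramming `peaks` over many acquisition windows then gives the noise frequency distribution discussed in the Results.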
A lognormal distribution refers to a skewed distribution, with an extended contribution from higher frequencies, whose natural logarithm follows a normal distribution; it is given by the expression $f(x;\mu,\sigma)=\frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln x-\mu)^{2}}{2\sigma^{2}}\right)$, where $\mu$ and $\sigma$ are the mean and standard deviation of $\ln x$, respectively. A common feature arrived at upon Tr(t) analysis is the presence of a characteristic maximum in the amplitude at a frequency $\omega_{max}$ riding on a distribution (Fig. 2b,c). The magnitude of $\omega_{max}\sim$ 500 - 700 Hz corresponds to the lifetime associated with the M state, which constitutes a large fraction of the entire molecular photocycle span. The attribution of the dominant frequency in the intensity fluctuations to the bR photocycle is consistent with a set of measurements in which the photocycle rates were intentionally affected by varying certain external factors. The overall profile of the noise spectrum is then expected largely to be controlled by a distribution arising from the heterogeneity of the molecular states in the ensemble and from the photocycle kinetics. The skewed nature, or the non-normal probability density function (PDF), can also point to the statistically correlated nature of these fluctuations (Fig. 2b,c). Trimers in bR are known to show effects arising from correlations in excitation and de-excitation processesShibata et al. (2010). It has been shown that the degree of skewness can be related to the correlation or to the system sizeHill, Dissado, and Jackson (1981). It is difficult from our results to quantify the relative contributions of these factors. We speculate that both correlations and finite-size effects play an important role in giving rise to the non-Gaussian PDFs that define our system.
We do not discount other possible representations, such as combinations of Lorentzians and Gaussians, but a straightforward lognormal fit yielded reasonable fit parameters. The appropriateness of the lognormal function is apparent in the form of good Gaussian fits when the distribution is plotted on a logarithmic frequency axis (Fig. 2b,c inset). Kolmogorov–Smirnov tests were further carried out to confirm that the statistics are lognormal (ST.1,2)sup . The lognormal behaviour of the distribution is an indicator of statistically correlated events or networks of interacting elements in the present finite-size systemKoulakov, Hromadka, and Zador (2009); it arises from the multiplicative product of many independent random variables. A shift of $\omega_{max}$ was observed when the additional photo-excitation pump source ($\lambda$ = 405 nm) was introduced in the Tr(t) measurements. It is known that the lifetime of the M state is shortened when the thermally driven events in the photocycle are hastened by pumping the intermediate states, thereby biasing the population toward the ground state. The blue shift of $\omega_{max}$ upon $\lambda_{405}$ excitation can then be associated with the reduction of the photocycle span. The parameters of the distribution also depend on the number of bR layers (film thickness) (Fig. 2b,c); the number of bR layers is a direct way to control the ensemble size. The Tr(t) data exhibit a maximum at 505 Hz for a 40 nm thick patch and at 630 Hz for a 100 nm thick patch, which shift to 630 Hz (Fig. 2b) and 720 Hz (Fig. 2c), respectively, upon additional photo-excitation (ST.1)sup . Changes in the mean value upon fitting to a lognormal profile, where the mean takes the value $m=\exp(\mu+\sigma^{2}/2)$, parallel the changes in $\omega_{max}$ as a function of thickness and pump excitation. It is observed that 25 nm to 150 nm thick patches illuminated by the 100 nm tip yield PDFs that reflect the internal molecular (M-state) dependence on external conditions.
In the case of a monolayer, which presents about 200 trimers in the cross section of the incident light beam and for which quantifiable near-field absorption is observed, the characteristic noise pattern is nevertheless missing (SF.9)sup . The noise features appear to be dominated by the laser fluctuations and detector characteristics (similar to bare quartz substrates). On the other hand, in the regime of thick films ($>$ 200 nm), heterogeneity and other random molecular events are averaged out and the noise pattern is again featurelessQuaranta and Garbett (2010). Hence there seems to be a critical size regime (14 nm to 150 nm) which yields characteristic stochastic features. The temperature-dependent bR photocycle kinetics in dried-film form is modified by the competing factor of moisture concentration, which is related to the proton uptake process. Upon heating the bR films (to 320 K), the increasing moisture depletion with T reduces the photocycle rate. These changes are observed in the form of a red shift of $\omega_{m}$ with increasing T (Fig. 3a, inset). Conversely, upon increasing the moisture concentration, the expected increase in the photocycle rate is observed in the form of a blue shift of $\omega_{m}$ in the lognormal distribution fits (Fig. 3b, inset). It is interesting to note that the time series data Tr(t) are also a gauge of the ambient conditions, as clearly noted from measurements where the proton concentration gets depleted and enriched upon switching the moisture source off and on, respectively. In order to generalize the procedure to other systems we demonstrate the method using genetically modified bR (variant D96N), which has a longer M-state lifetime ($>$ 100 ms in film) due to the replacement of aspartic acid (Asp-96) by asparagine (Asn-96)Otto et al. (1989). The noise measurements were suitably extended to the low-frequency range, and the data were collected at 250 kHz and 500 Hz sampling rates.
Noise features in D96N: (i) A noise distribution around a maximum of 120 mHz was observed, which shifted to 480 mHz upon the additional $\lambda_{405}$ excitation (Fig. 3c). A blue shift was also observed upon increasing the humidity (Fig. 3c inset)(SF.12)sup . (ii) A frequency distribution was also observed about 750 Hz, which, however, is independent of the additional $\lambda_{405}$ illumination. This can correspond to the rise-time fluctuations, which are of the order of ms and depend only on the intensity of illumination and the absorption coefficient (SF.13)sup . Low-frequency ($<$ 2 Hz) histograms for WT-bR do not exhibit a distinct pattern or shifts with pump excitation (SF.14)sup . The Tr(t) measurements on bR films provide an important toolkit, with bR as a model system, to examine the role and implications of fluctuations. The signature of a characteristic noise in the light-induced proton pump function, which appears to be ensemble-size dependent, throws open interesting questions. Molecular processes involved in the functioning of the protein are expected to depend on the organization prevailing at much larger length scales, and the collective behaviour which emerges depends on these interactions. The interactions can further be probed by varying external parameters, and the lognormal distribution analysis of Tr(t) can help quantify the collective response. The general principles of the method should be applicable to other interesting biological systems. Acknowledgements. We thank Prof. Mudi Sheves for providing the WT-bR suspensions and Prof. David Cahen for help with the electrostatic film assembly technique for bR. We acknowledge support from DAE and DST, Govt. of India, for partial funding. References Chen and Yu (2007) Z. Chen and C. C. Yu, “Measurement noise maximum as a signature of a phase transition,” Phys. Rev. Lett. 98, 057204 (2007) Raser and O’Shea (2005) J. M. Raser and E. K. O’Shea, “Noise in gene expression: origins, consequences and control,” Science 309, 2010 (2005) Simpson et al. (2009) M.
L. Simpson, C. D. Cox, M. S. Allen, J. M. McCollum, R. D. Dar, D. K. Karig,  and J. F. Cooke, “Noise in biological circuits,” Nanomed. and Nanobio. 1, 214 (2009) Cox et al. (2008) C. D. Cox, J. M. McCollum, M. S. Allen, R. D. Dar,  and M. L. Simpson, “Using noise to probe and characterize gene circuits,” PNAS 105, 10809 (2008) Rao, Wolf, and Arkin (2002) C. V. Rao, D. M. Wolf,  and A. P. Arkin, “Control, exploitation $\&$ tolerance of intracellular noise,” Nature 420, 231 (2002) Grima (2010a) R. Grima, “Intrinsic biochemical noise in crowded intracellular conditions,” J. Chem. Phys. 132, 185102 (2010a) Montroll and Shlesinger (1982) E. W. Montroll and M. F. Shlesinger, “On 1/f noise and other distributions with long tails,” PNAS 79, 3380 (1982) Hill, Dissado, and Jackson (1981) R. M. Hill, L. A. Dissado,  and R. Jackson, “The examination of correlated noise,” J. Phys. C: Solid State Phys. 14, 3915 (1981) Simpson, Cox, and Sayler (2004) M. L. Simpson, C. D. Cox,  and G. S. Sayler, “Frequency domain chemical langevin analysis of stochasticity in gene transcriptional regulation,” J. of Theo. Bio. 229, 383 (2004) Kampen (2007) N. G. V. Kampen, Stochastic Processes in Physics and Chemistry (Elsevier, Amsterdam, 2007) Grima (2010b) R. Grima, “An effective rate equation approach to reaction kinetics in small volumes: Theory and application to biochemical reactions in nonequilibrium steady-state conditions,” J. Chem. Phys. 133, 035101 (2010b) Thomas, Straube, and Grima (2010) P. Thomas, A. V. Straube,  and R. Grima, “Stochastic theory of large-scale enzyme-reaction networks: Finite copy number corrections to rate equation models,” J. Chem. Phys. 133, 195101 (2010) Dunn (1999) R. C. Dunn, “Near-field scanning optical microscopy,” Chem.Rev. 99, 2891 (1999) Arun, Mukhopadhyay, and Narayan (2010) N. Arun, S. Mukhopadhyay,  and K. S. Narayan, “Monitoring intermediate states of bacteriorhodopsin monolayers using near field optical microscopy,” Appl. Opt. 
49, 1131 (2010) Stoeckenius, Lozier, and Bogomolni (1979) W. Stoeckenius, R. H. Lozier,  and R. A. Bogomolni, “Bacteriorhodopsin and the purple membrane of halobacteria,” Biochem.Biophys. Acta 505, 215 (1979) Lanyi (2004) J. K. Lanyi, “Bacteriorhodopsin,” Annu. Rev. Physiol. 66, 665 (2004) Barkai, Jung, and Silbey (2004) E. Barkai, Y. J. Jung,  and R. Silbey, ‘‘Theory of single molecule spectroscopy: beyond the ensemble average,” Annu. Rev. Phys. Chem. 55, 457 (2004) (18) See supplementary information for experimental setup, sample preparation technique, data analysis and additional details. [URL will be inserted by AIP] Biesso et al. (2009) A. Biesso, W. Qian, X. Huang,  and M. A. El-Sayed, “Gold nanoparticles surface plasmon field effects on the proton pump process of the bacteriorhodopsin photosynthesis,” J. Am. Chem. Soc. 131, 2442 (2009) Shibata et al. (2010) M. Shibata, H. Yamashita, T. Uchihashi, H. K. H.,  and T. Ando, “High-speed atomic force microscopy shows dynamic molecular processes in photoactivated bacteriorhodopsin,” Nat. Nano. 5, 208 (2010) Quaranta and Garbett (2010) V. Quaranta and S. P. Garbett, “Not all noise is waste,” Nat. Methods 7, 269 (2010) He et al. (2005) T. He, N. Friedman, D. Cahen,  and M. Sheves, “Bacteriorhodopsin monolayers for optoelectronics: orientation $\&$ photoelectric response on solid supports,” Adv. Mater. 17, 1023 (2005) Ricci et al. (2007) M. L. M. Ricci, J. Mazzaferri, A. V. Bragas,  and O. E. Martinez, “Photon counting statistics using a digital oscilloscope,” Am. J. Phy. 75, 707 (2007) Koulakov, Hromadka, and Zador (2009) A. A. Koulakov, T. Hromadka,  and A. M. Zador, ‘‘Correlated connectivity and the distribution of firing rates in the neocortex,” J. Neuroscience 29, 3685 (2009) Otto et al. (1989) H. Otto, T. Marti, M. Holz, T. Mogi, M. Lindau, H. G. Khorana,  and M. P. Heyn, “Aspartic acid-96 is the internal proton donor in the reprotonation of the schiff base of bacteriorhodopsin,” PNAS 86, 9228 (1989)
Homology versus homotopy in fibrations and in limits Manuel Amann (Date: June 1st, 2020) Abstract. Motivated by prominent problems like the Hilali conjecture, Yamaguchi–Yokura recently proposed certain estimates relating the dimensions of the rational homotopy and rational cohomology groups of fibre, base and total space in a fibration of rationally elliptic spaces. In this article we prove these estimates in the category of formal elliptic spaces and, in general, whenever the total space in addition has positive Euler characteristic or has the rational homotopy type of a homogeneous manifold (respectively of a known example) of positive sectional curvature. Additionally, we provide general estimates approximating the conjectured ones. Moreover, we suggest studying families of rationally elliptic spaces under certain asymptotics, and we discuss the conjectured estimates from this perspective for two-stage spaces. Key words and phrases: fibrations, Hilali conjecture, rational homotopy, rational cohomology, elliptic spaces, formal elliptic spaces, asymptotic behaviour, positively curved manifolds 2010 Mathematics Subject Classification: 55P62 (Primary), 57N65, 53C20 (Secondary) Introduction The Hilali conjecture ([8]) speculates that for a rationally elliptic space, i.e. a simply-connected space with both finite-dimensional rational cohomology and finite-dimensional rational homotopy groups, the dimension of the rational cohomology is at least as large as the dimension of the rational homotopy groups; in other words, their quotient $$\displaystyle h(X)=\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}$$ is well-defined and at most one. While the conjecture is still open, this quotient has been considered in several further circumstances (for example see [16]). Recently, it was asked by Yamaguchi–Yokura how this quotient behaves in fibrations of rationally elliptic spaces.
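As an illustration of the quotient $h(X)$ (a standard computation, added here as an example), spheres realise the two simplest values:

```latex
% Example: rationally elliptic spheres.
% An even sphere S^{2n} has rational homotopy concentrated in
% degrees 2n and 4n-1, and total cohomology of dimension 2:
\[
  h(S^{2n}) = \frac{\dim \pi_{*}(S^{2n})\otimes\mathbb{Q}}{\dim H^{*}(S^{2n})}
            = \frac{2}{2} = 1,
  \qquad
  h(S^{2n+1}) = \frac{1}{2},
\]
% since an odd sphere has rational homotopy only in degree 2n+1,
% while \dim H^{*}(S^{2n+1}) = 2. Both values are consistent with
% the Hilali conjecture h(X) <= 1.
```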
It is the goal of this article to provide several special cases of their conjectured estimates on the one hand and, on the other hand, to study this quotient asymptotically—first suggesting, specifying and discussing different reasonable notions of “asymptotic behaviour” for families of rationally elliptic spaces. Throughout this article we denote by $X$ a simply-connected CW-complex. Cohomology is considered with rational coefficients. As stated above, we call $X$ rationally elliptic if it is simply-connected and both $\dim\pi_{*}(X)\otimes{\mathbb{Q}}<\infty$ and $\dim H^{*}(X)<\infty$. It is called $F_{0}$ or positively elliptic if, in addition, $\chi(X)>0$. (By abuse of notation, we shall refer to rationally elliptic spaces as just being elliptic. This also reflects the fact that the article builds on rational methods. In particular, whenever we speak of a “fibration”, it is actually enough to have a “rational fibration” structure.) Recall that the prominent subclass of elliptic spaces, the class of two-stage spaces, is defined as follows: their minimal Sullivan models $(\Lambda V,{\operatorname{d}})$ admit, up to isomorphism, decompositions of the form $V^{\textrm{odd}}=W_{0}\oplus W_{1}$ with ${\operatorname{d}}(V^{\textrm{even}}\oplus W_{0})=0$ and ${\operatorname{d}}W_{1}{\,\subseteq\,}\Lambda(V^{\textrm{even}}\oplus W_{0})$. We provide several notions of convergence for families of elliptic spaces; see Section 2 for an elaborate discussion, where in particular we rigorously define $\pi$-convergence. It appears to be very hard to control the possible values of $h(X)$, so it seems reasonable to consider their asymptotic behaviour. To our knowledge this is the first time such a discussion is launched. As a first step this then permits us to prove Theorem A. The family of two-stage spaces $\mathcal{X}$ $\pi$-converges to $0$, i.e. with $\dim\pi_{*}(X)\otimes{\mathbb{Q}}$ (for $X\in\mathcal{X}$) tending to $\infty$, $h(X)$ tends to $0$.
The following results deal with the behaviour of $h(X)$ in fibrations. Hence consider a fibration $F\overset{}{\hookrightarrow}X\to B$ of rationally elliptic spaces. Conjecture 0.1 (Yamaguchi–Yokura, [22]). (1) $$\displaystyle\frac{1}{2}\cdot h(F\times B)\leq h(X)<h(F)+h(B)+\frac{1}{4}$$ As an application of Theorem A we first discuss an asymptotic version of this problem. We say that a class $\mathcal{X}$ of rationally elliptic spaces asymptotically satisfies Conjecture 0.1 if the following holds: There is $k\in{\mathbb{N}}$ such that if $\dim\pi_{*}(X)\otimes{\mathbb{Q}}\geq k$ for $X\in\mathcal{X}$, then $X$ satisfies the conjecture. Clearly, any family $\mathcal{X}$ $\pi$-converging to $0$ asymptotically satisfies the right-hand side inequality, i.e. for $X\in\mathcal{X}$ with large enough rational homotopy groups, the quotient $h(X)$ is smaller than $1/4$. Corollary B. Let $\mathcal{X}$ be a family $\pi$-convergent to $0$. Then $\mathcal{X}$ asymptotically satisfies the right-hand side of Inequality (1). In particular, this holds true for two-stage spaces. Clearly, there are two-stage spaces (already products of spheres) of arbitrarily large rational homotopy. Actually, our estimates for two-stage spaces are explicit, and from there it is easy to provide concrete values of $\dim\pi_{*}(X)\otimes{\mathbb{Q}}$ beyond which the inequality holds. The following results aim to verify Conjecture 0.1 in particular cases. As a vital tool to do so, we first verify a more or less close approximation to the left-hand side of Conjecture 0.1 in Theorem C. For any fibration $F\overset{}{\hookrightarrow}X\to B$ of elliptic spaces it holds that $$\displaystyle h(F\times B)\leq 3\cdot h(X)$$ Next we prove the conjecture whenever $X$ is positively elliptic, as well as in the category of formal elliptic spaces. Recall that a space is formal if its rational homotopy type is determined by its cohomology algebra, i.e. its minimal model can be computed directly from $H^{*}(X)$.
Formal spaces form one of the most prominent classes of spaces in rational homotopy theory. Theorem D. Let $F\overset{}{\hookrightarrow}X\to B$ be a fibration of elliptic spaces. If $F$ is an $F_{0}$-space it holds that $$\displaystyle h(F\times B)\leq 2\cdot h(X)$$ Moreover, Conjecture 0.1 holds with respect to any such fibration whenever • $X$ is an $F_{0}$-space, or • $F$ is an $F_{0}$-space and satisfies the Halperin conjecture. For a brief discussion of the Halperin conjecture see Section 1.1. In particular, there we provide a list of several classes of spaces for which the conjecture is verified. The last formulation is stricter than necessary: For a fixed totally non-homologous to zero fibration $F\overset{}{\hookrightarrow}X\to B$ with $F$ an $F_{0}$-space the required inequalities hold already. With the presented formulation it is our goal to stress that conjecturally any $F_{0}$-space $F$ should render the fibration totally non-homologous to zero. Combining, refining and extending the previous arguments we finally obtain Theorem E. Conjecture 0.1 holds for a fibration $F\overset{}{\hookrightarrow}X\to B$ of elliptic formal spaces. This is proved in Propositions 5.3 and 5.4. As a corollary to this we can prove Conjecture 0.1 whenever $X$ is a known example of positive sectional curvature, respectively a homogeneous space of positive curvature—see Section 1.2 for more details on these classes. This is particularly interesting for different reasons: First of all these spaces constitute a nice class of highly important geometric examples. Second, maybe more strikingly, let us recall the Petersen–Wilhelm conjecture which states that whenever $X\to B$ is a Riemannian submersion with $X$ (and consequently $B$) positively curved Riemannian manifolds, then $2\dim B>\dim X$; respectively, in the case of compact spaces, when this submersion is a fibration $F\overset{}{\hookrightarrow}X\to B$, $\dim B>\dim F$. 
We recall the general property (for example see [1], [7, Proposition 1, p. 5]) that with $X$ being elliptic (and not necessarily a manifold with a curvature bound) the fibration already lies in the category of elliptic spaces. In particular, this would provide an a priori weaker formulation of Conjecture 0.1. In [1] we proved the Petersen–Wilhelm conjecture in a much more general context for the known examples of positive curvature of even dimensions, using only their rational structure. (Since several odd-dimensional examples rationally split as a product, rational tools are not enough in odd dimensions; in [7] the odd-dimensional examples are verified using finite coefficients.) Note further that due to the Bott–Grove–Halperin conjecture positively curved manifolds should be elliptic. In even dimensions the equally famous Hopf conjecture speculates that they have positive Euler characteristic. That is, conjecturally the case of even-dimensional positively curved manifolds should be completely covered by Theorem C. In summary, in this context Conjecture 0.1 controls much more complicated invariants of fibration decompositions of positively curved manifolds than merely dimensions. Given all previous observations and conjectures one might speculate the following: Conjecture. For a closed simply-connected positively curved manifold $M$ and any fibration $F\overset{}{\hookrightarrow}M\to B$ of simply-connected spaces, Estimate (1) is well-defined and holds true. Viewed from a different angle, we once again observe that positively curved manifolds seem to constitute a class of spaces behaving extremely well with respect to several different topological approaches. We verify this speculation on the known examples. Corollary F. Conjecture 0.1 holds true whenever the cohomology algebra $H^{*}(X)$ is generated by at most one even-degree and at most one odd-degree element.
In particular, this is true if $X$ has the rational type of a simply-connected closed homogeneous space of positive sectional curvature respectively of any known example of a closed manifold admitting positive sectional curvature. The content of this is the observation that any fibration then only involves formal spaces. We remark that the confirmation of the conjecture for $X$ positively elliptic is yet another corollary of Theorem E as well: If $\chi(X)>0$, by the multiplicativity of the Euler characteristic in fibrations, so are $\chi(B),\chi(F)>0$. It is well-known that $F_{0}$-spaces are formal. We leave it to the reader to reformulate our results in larger generality for nilpotent spaces and nilpotent fibrations. Structure of the article. In Section 1 we discuss some relevant aspects from Rational Homotopy Theory. In Section 2 we note first observations on the conjecture before, in the second part of the section, we elaborately consider and discuss different notions of convergence for families of elliptic spaces. In Section 3 we explain the proof of Theorem A, which is rather independent of the following arguments. Section 4 is devoted to the proof of Theorem C. In particular, there we prove Lemma 4.2, which is central to our arguments and underlies nearly all further (and partly even previous) work. Finally, in Section 5 we refine and massively extend the previous arguments in order to provide proofs of Theorems D and E. As an application of the obtained results we use this to show Corollary F in a subsequent step. Acknowledgements. The author was supported both by a Heisenberg grant and his research grant AM 342/4-1 of the German Research Foundation; he is moreover associated to the DFG Priority Programme 2026. 1. Some tools from Rational Homotopy Theory 1.1. Excerpts from Rational Homotopy Theory This section cannot provide and is not intended to give an introduction to the theory. 
We expect the reader to have gained a certain familiarity with the necessary concepts, for example from [5] or [6]. We merely recall some tools and aspects which play a larger role in the article. Many computations of the article rely on the theory of (minimal) Sullivan models of simply-connected spaces $X$. Just to recall, these are certain commutative differential graded algebras $(\Lambda V,{\operatorname{d}})$ encoding the rational homotopy type of $X$, with $V$ a positively graded rational vector space and $\Lambda V$ the tensor product of the symmetric algebra on the evenly-graded part $V^{\textrm{even}}$ with the exterior algebra on the oddly-graded part $V^{\textrm{odd}}$. Moreover, we use relative models and models of fibrations as constructed in [5, Proposition 15.5]. That is, for a fibration of simply-connected spaces $F\overset{}{\hookrightarrow}E\to B$ and for Sullivan models $(\Lambda V,\bar{\operatorname{d}})$ of $F$ and $(\Lambda W,{\operatorname{d}})$ of $B$, a model for $E$ is given by a tensor product $(\Lambda V\otimes\Lambda W,{\operatorname{d}})$ where $(\Lambda W,{\operatorname{d}})$ is a differential subalgebra, and the projection induced by $W\to 0$ yields the model $(\Lambda V,\bar{\operatorname{d}})$ of $F$. We investigate fibrations and their Sullivan models from a cohomological and a homotopical point of view. For the associated Serre spectral sequence see [5, Chapter 18]. For the associated long exact sequence of homotopy groups in terms of models see [5, Section 15(e), p. 214]. In particular, recall that with the terminology from the last paragraph, the long exact homotopy sequence dualises to the exact sequence $$\displaystyle\ldots\to W^{k}\to H^{k}(W\oplus V,{\operatorname{d}}_{0})\to V^{k}\xrightarrow{{\operatorname{d}}_{0}}W^{k+1}\to\ldots$$ with transgression ${\operatorname{d}}_{0}$, with respect to which the cohomology $H^{k}(W\oplus V,{\operatorname{d}}_{0})$ is also taken.
This ${\operatorname{d}}_{0}$ denotes the linear part of the differential ${\operatorname{d}}$ on $\Lambda(V\oplus W)$ defined by ${\operatorname{im\,}}({\operatorname{d}}-{\operatorname{d}}_{0}){\,\subseteq\,}\Lambda^{\geq 2}(V\oplus W)$. (Clearly, $V^{k}\cong\pi_{k}(F)\otimes{\mathbb{Q}}$, $W^{k}\cong\pi_{k}(B)\otimes{\mathbb{Q}}$—see [5, Theorem 15.11, p. 208]—and $H^{k}(W\oplus V,{\operatorname{d}}_{0})\cong\pi_{k}(E)\otimes{\mathbb{Q}}$ taking into account that the model of the fibration is not necessarily minimal.) We shall speak of rational homotopy groups of $F$ and $B$ being contracted when passing to $X$, which is supposed to indicate that such a homotopy group does not lie in the kernel respectively lies in the image of ${\operatorname{d}}_{0}$ and hence exists in $F$ respectively $B$, but no longer contributes non-trivial homotopy to $X$. We shall moreover draw on Euler and homotopy Euler characteristics. We use the convention to define the latter for an elliptic minimal Sullivan algebra $(\Lambda V,{\operatorname{d}})$ as $$\displaystyle\chi_{\pi}(\Lambda V,{\operatorname{d}})=\dim V^{\textrm{odd}}-\dim V^{\textrm{even}}$$ The Euler characteristic is multiplicative (which can be proved using the Serre spectral sequence), the homotopy Euler characteristic is additive in fibrations (as follows from the depicted long exact homotopy sequence). The formal dimension $n$ of an elliptic space, i.e. the largest degree with non-trivial cohomology, can be computed via the following dimension formula using the degrees and dimensions of its homotopy groups—see [5, Theorems 32.2 (iii), p. 436 and 32.6 (i), p. 441]. For this we recall the even and odd exponents $a_{i}$ and $b_{i}$ of a minimal Sullivan algebra $(\Lambda V,{\operatorname{d}})$ defined by the property that the $2a_{i}$ are the degrees of a basis of $V^{\textrm{even}}$ and the $2b_{i}-1$ are the degrees of a basis of $V^{\textrm{odd}}$.
It then holds that $$\displaystyle\sum(2b_{i}-1)-\sum(2a_{i}-1)=n$$ Compare Remark 4.1. Elliptic spaces $X$ of positive Euler characteristic, so-called $F_{0}$-spaces or positively elliptic spaces possess a very rigid structure: Their rational cohomology is concentrated in even degrees; actually it is given by a polynomial algebra modulo a regular sequence whence these spaces are (hyper-/intrinsically) formal. Moreover, from [5, Formula (32.14), p. 446] we recall that the total dimension of their cohomology, i.e. their Euler characteristic, which equals the sum of all Betti numbers in this case, is given by (2) $$\displaystyle\dim H^{*}(X)=\prod_{i=1}^{q}\frac{b_{i}}{a_{i}}$$ where the $b_{i}$ and $a_{i}$ range over the odd respectively the even exponents of a minimal model of $X$. As a consequence, positively elliptic spaces admit pure models—see [5, Chapter 32, p. 434]. This contributes to the importance of pure spaces in rational homotopy theory. There are many prominent classes of pure spaces featuring homogeneous spaces and biquotients as well as cohomogeneity one manifolds. Recall our definition of two-stage spaces in the introduction which clearly constitutes a slight generalisation of pureness. Two-stage spaces gain special importance due to the following Proposition 1.1 (see Proposition 5.10, p. 32, in [3], cf. [10]). Let $X$ be a formal elliptic space. Then rationally it is the total space of a totally non-homologous to zero fibration with model $$\displaystyle(\Lambda B,0)\rightarrow(\Lambda B\otimes\Lambda V,{\operatorname% {d}})\rightarrow(\Lambda V,\bar{\operatorname{d}})$$ where $B=B^{odd}$, and $(\Lambda V,\bar{\operatorname{d}})$ is positively elliptic. That is, in particular, formal elliptic spaces are two-stage. 
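Both the dimension formula and formula (2) depend only on the even and odd exponents; as a quick numerical illustration, the following Python sketch (the function names `formal_dimension` and `total_cohomology_F0` are ours, and the exponents of ${\mathbb{C}}{\mathbf{P}}^{n}$ and ${\mathbb{S}}^{2n}$ are read off from their standard minimal models) evaluates both on these two basic examples.

```python
from fractions import Fraction
from math import prod

def formal_dimension(even_exponents, odd_exponents):
    # Dimension formula: n = sum(2b_i - 1) - sum(2a_i - 1), where the even
    # generators sit in degrees 2a_i and the odd generators in degrees 2b_i - 1.
    return sum(2 * b - 1 for b in odd_exponents) - sum(2 * a - 1 for a in even_exponents)

def total_cohomology_F0(even_exponents, odd_exponents):
    # Formula (2) for a positively elliptic space: dim H^*(X) = prod(b_i) / prod(a_i).
    return Fraction(prod(odd_exponents), prod(even_exponents))

# CP^n has minimal model Lambda(x_2, y_{2n+1}), i.e. a_1 = 1 and b_1 = n + 1.
n = 5
assert formal_dimension([1], [n + 1]) == 2 * n      # dim CP^n = 2n
assert total_cohomology_F0([1], [n + 1]) == n + 1   # one Betti number in each even degree

# S^{2n} has minimal model Lambda(x_{2n}, y_{4n-1}), i.e. a_1 = n and b_1 = 2n.
assert formal_dimension([n], [2 * n]) == 2 * n
assert total_cohomology_F0([n], [2 * n]) == 2       # chi(S^{2n}) = 2
```

This merely checks the formulas against known answers; it does not replace the structural statements above.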
Recall that a fibration $F\overset{}{\hookrightarrow}X\to B$ is called totally non-homologous to zero if the induced map $H^{*}(X)\to H^{*}(F)$ is surjective, or, equivalently, if the associated Serre spectral sequence degenerates at the $E_{2}$-term. Remark 1.2. Moreover, we may assume that the model $(\Lambda B\otimes\Lambda V,{\operatorname{d}})$ is minimal, i.e. that the fibration is $\pi$-trivial as well (and then decompose it as depicted). This follows from the proof of [3, Proposition 5.10] in which we decomposed a two-stage model of $X$ with stage one mapping to a regular sequence in the algebra generated by stage $0$ in this form. Without restriction, we may choose the model we start with to be minimal. Indeed, the model comes from [4, Theorem II, p. 577], and can be chosen as a minimal model of the hyperformal cohomology $H^{*}(X)$. $\boxbox$ One of the most famous and most influential conjectures in the area is Conjecture 1.3 (Halperin). Let $F\overset{}{\hookrightarrow}X\to B$ be a fibration of simply-connected spaces with $F$ positively elliptic. Then the fibration is totally non-homologous to zero. In particular, this conjecture was verified • on compact homogeneous spaces of positive Euler characteristic ([19]). • on simply-connected Hard-Lefschetz spaces ([15]). • in the case of at most three generators of the cohomology algebra (see [20] and [11]). • for spaces of formal dimension at most $16$ or Euler characteristic at most $16$ (see [2, Theorem 11.6]). • in the “generic case” (cf. [18]). • to be closed under fibrations of simply-connected spaces of finite type (cf. [13]), i.e. if both base and fibre satisfy the Halperin conjecture, so does the total space. These classes of examples enrich Theorem D. Note further that it is known that the Halperin conjecture for an elliptic space $F$ holds if and only if it holds in the category of elliptic spaces, more precisely, already for base spaces being odd-dimensional spheres (see [12, Theorem 1.5, p.
6], [14]). 1.2. Positively curved spaces and their rational structure By “positive curvature” we shall always denote positive sectional curvature. The known examples of simply-connected positively curved closed manifolds are the following (cf. [7]): • the subsequent homogeneous spaces, namely compact rank one symmetric spaces ${\mathbb{S}}^{n}$, ${\mathbb{C}}{\mathbf{P}}^{n}$, ${\mathbb{H}}{\mathbf{P}}^{n}$, $\operatorname{CaP}^{2}$, the Wallach flag manifolds $W^{6}$, $W^{12}$, $W^{24}$, the Aloff–Wallach spaces $W_{p,q}^{7}$, and the Berger spaces $B^{7}$, $B^{13}$. • the biquotients $E^{6}$ due to Eschenburg, the family ${\mathbf{SU}}(3)\;\!\!\!\sslash\;\!\!\!{\mathbb{S}}^{1}$ (parametrised by different inclusions) generalising and comprising the Aloff–Wallach spaces, the family of Bazaikin spaces ${\mathbf{SU}}(5)\;\!\!\!\sslash\;\!\!\!{\mathbf{Sp}}(2){\mathbb{S}}^{1}$ in dimension $13$ containing $B^{13}$, and • a cohomogeneity one example $P^{2}$ of dimension $7$ due to Dearricott and Grove–Verdiani–Ziller. Without going into details, collecting the information for example from [23], [1], [7] we derive that all these spaces are formal and elliptic, and, in any case, the following holds: • If $M$ is even-dimensional, it is positively elliptic. • If $M$ is odd-dimensional, it satisfies $\chi_{\pi}(M)=1$. It is either rationally a sphere, or its rational cohomology algebra has exactly one generator in positive even-degree and exactly one in odd-degree. This is the necessary information underlying the proof of Corollary F for positively curved manifolds (see Property $(*)$ and Remark 5.6). We remark further that there is a classification of simply-connected positively curved homogeneous spaces by Wallach and Bérard–Bergery (for example see [21]) which states that the cited homogeneous examples are actually the only ones. 2. First observations 2.1. Fibrations Let $F\overset{}{\hookrightarrow}X\to B$ be a fibration of elliptic spaces.
We call it $\pi$-trivial, if $\pi_{*}(X)\otimes{\mathbb{Q}}=\pi_{*}(F)\otimes{\mathbb{Q}}\oplus\pi_{*}(B)\otimes{\mathbb{Q}}$, or, equivalently, if the relative minimal model of the fibration is actually a minimal model of $X$. It is interesting to observe that $\pi$-trivial fibrations play a role converse to the one of totally non-homologous to zero ones with respect to Conjecture 0.1; more precisely, • if the fibration is totally non-homologous to zero, then $h(X)\leq h(F)+h(B)$ (note that the computations of [22, Page 3] for the product fibration apply similarly to yield this inequality), and the right hand side of (1) is satisfied, in particular. • if the fibration is $\pi$-trivial, then $h(F\times B)\leq h(X)$, and the left hand side of (1) is satisfied, in particular. As for the latter, it suffices to recall that the Serre spectral sequence of the fibration $F\overset{}{\hookrightarrow}X\to B$ of simply-connected spaces of finite-dimensional rational cohomology satisfies $E_{2}^{p,q}=H^{p}(B)\otimes H^{q}(F)$, whence (3) $$\displaystyle\dim H^{*}(X)\leq\dim H^{*}(F)\cdot\dim H^{*}(B)$$ If the fibration is $\pi$-trivial, it follows that $\pi_{*}(F\times B)\otimes{\mathbb{Q}}=\pi_{*}(X)\otimes{\mathbb{Q}}$. We deduce the given estimate for $h(X)$. See also [22, Proposition 3.2, p. 3] where the same arguments are used to verify the conjecture for fibrations which are both $\pi$-trivial and totally non-homologous to zero. If the fibration is not $\pi$-trivial, which usually is the case, it is in particular necessary to understand how much rational homotopy is contracted when passing to $\pi_{*}(X)\otimes{\mathbb{Q}}$. This is dealt with in Theorems C (in the general situation) and D (for positively elliptic fibres). So the situation of $\pi$-trivial fibrations, or, more generally, both degeneracy properties of “$\pi$-triviality” and “totally non-homologous to zero”, nicely motivate these further generalisations. 2.2.
Convergence. There are several notions possible for defining “convergence” of a family of elliptic spaces. Let us start discussing them. Definition 2.1. Let $\mathcal{X}$ be a family of elliptic spaces. We say that $\mathcal{X}$ has accumulation point $c\in{\mathbb{R}}\cup\{\infty\}$ if for any $\varepsilon>0$ there exist infinitely many $X\in\mathcal{X}$ with $|h(X)-c|<\varepsilon$. We say that $\mathcal{X}$ converges to $c\in{\mathbb{R}}$ if $c$ is its only accumulation point. Example 2.2. • Clearly, by definition, no finite family $\mathcal{X}$ of elliptic spaces can have accumulation points nor converge. No family with universally bounded cohomology can converge to zero. • Every infinite family $\mathcal{X}$ of elliptic spaces has a (possibly infinite) accumulation point. If $\mathcal{X}$ converges to $c$, then so does any infinite subfamily $\mathcal{Y}{\,\subseteq\,}\mathcal{X}$. • The family $\{{\mathbb{C}}{\mathbf{P}}^{n}\}_{n\geq 1}$ is a family of universally bounded homotopy $\dim\pi_{*}({\mathbb{C}}{\mathbf{P}}^{n})\otimes{\mathbb{Q}}=2$ converging to zero. Compare Proposition 2.6. • The family $\{{\mathbb{S}}^{n}\}_{n\geq 2}$ realises infinitely many rational homotopy groups; each element satisfies $\dim\pi_{*}(X)\otimes{\mathbb{Q}}\leq 2$ for $X\in\mathcal{X}$. Clearly $h(X)\in\{1/2,1\}$ for $X$ in $\mathcal{X}$, and the family has two accumulation points (although the set $\{h(X)\mid X\in\mathcal{X}\}$ is finite and hence does not have any accumulation points). Odd spheres converge to $\tfrac{1}{2}$, even ones to $1$. • There are infinite families $\mathcal{X}$ of elliptic spaces realising only finitely many Betti numbers (for example, see [6, Chapter 6.2, p. 243]). Hence these families have positive accumulation points. • Such families can already be found to realise the same cohomology algebras (see [17]). • All of these example families only realise finitely many rational homotopy groups, hence the accumulation points are positive, but not infinite.
Taking products or more elaborate constructions one may easily adapt limit points. $\boxbox$ The next Proposition generalises our observation on the family of spheres. Proposition 2.3. Let $\mathcal{X}$ be a family of elliptic spaces, let $\mathcal{P}$ denote the family of pure spaces, $\mathcal{Q}$ the one of two-stage spaces. If $\mathcal{P}{\,\subseteq\,}\mathcal{X}$ respectively $\mathcal{Q}{\,\subseteq\,}\mathcal{X}$, then any number $h(X)$ for $X\in\mathcal{P}$ respectively for $X\in\mathcal{Q}$ is an accumulation point of $\mathcal{X}$. Proof. For every pure respectively two-stage space $X$ we construct an infinite sequence of pure respectively two-stage spaces $X_{i}$ satisfying $h(X)=h(X_{i})$ for all $i\geq 1$. This pureness/two-stage property will be obvious from the construction. This can be done as follows. Recall that pure spaces are two-stage in particular. Let $(\Lambda(V_{0}\oplus V_{1}),{\operatorname{d}})$ be the two-stage decomposition of the minimal model of $X$. We choose the minimal model in its isomorphism class such that we display $V_{1}$ with minimal possible dimension. Hence, the differential is injective on $V_{1}$ and differentials have a well-defined degree. Let $v_{1},\ldots,v_{k}$ be a homogeneous basis of $V_{0}$ and $v^{\prime}_{1},\ldots,v^{\prime}_{k^{\prime}}$ be a homogeneous basis of $V_{1}$. Up to spatial realisation, it suffices to construct a two-stage minimal model $(\Lambda W,{\operatorname{d}})=(\Lambda(W_{0}\oplus W_{1}),{\operatorname{d}})$ of $X_{i}$—for the sake of simplicity we suppress the index $i$ in the models. For this we construct a homogeneous basis $w_{1},\ldots,w_{k}$ of $W_{0}$ and $w^{\prime}_{1},\ldots,w^{\prime}_{k^{\prime}}$ of $W_{1}$. The $w_{j}$ and $w_{j}^{\prime}$ will be degree shifts of the corresponding $v_{j}$, $v^{\prime}_{j}$. We extend degrees multiplicatively.
Hence it remains to define $$\displaystyle\deg w_{j}$$ $$\displaystyle:=3^{i}\cdot\deg v_{j}$$ $$\displaystyle\deg w_{j}^{\prime}$$ $$\displaystyle:=3^{i}\cdot\deg({\operatorname{d}}v_{j}^{\prime})-1$$ and extend degrees multiplicatively as usual. We write ${\operatorname{d}}v_{j}^{\prime}=p_{j}(v_{i})$ as a polynomial $p_{j}$ in the $v_{i}$, and we denote by $p_{j}(w_{i})$ the corresponding polynomial replacing the $v_{i}$ by the $w_{i}$. Hence set $$\displaystyle{\operatorname{d}}w_{j}$$ $$\displaystyle:=0$$ $$\displaystyle{\operatorname{d}}w_{j}^{\prime}$$ $$\displaystyle:=p_{j}(w_{i})$$ which is well-defined by construction. Hence all the $X_{i}$ are well-defined pure respectively two-stage spaces. They are all mutually distinct due to degrees. Then all $X_{i}$ have isomorphic minimal models, however, using isomorphisms not respecting the grading. Indeed, by construction, the isomorphism to $(\Lambda V,{\operatorname{d}})$ is induced by the correspondence $v_{i}\sim w_{i}$, $v_{i}^{\prime}\sim w_{i}^{\prime}$. (For this note that due to multiplication with $3^{i}$ the parity of the basis is preserved.) In particular, $h(X_{i})=h(X)$ for all $i$. This proves the result. ∎ As a consequence the family of pure or two-stage spaces or any family containing them like the family of all elliptic spaces does not converge to any limit point. Note that the elements in the sequences we constructed in the last proof all had the same rational homotopy groups. It seems more interesting to understand what happens if rational homotopy tends to infinity. Definition 2.4. A family $\mathcal{X}$ of elliptic spaces has $\pi$-accumulation point $c\in{\mathbb{R}}\cup\{\infty\}$, if for all $\varepsilon>0$ there exist $n(\varepsilon)\in{\mathbb{N}}$ and infinitely many $X\in\mathcal{X}$ with $\dim\pi_{*}(X)\otimes{\mathbb{Q}}\geq n(\varepsilon)$ and $|h(X)-c|<\varepsilon$. The family $\pi$-converges to $c\in{\mathbb{R}}\cup\{\infty\}$, i.e.
$$\displaystyle\lim_{\dim\pi_{*}(X)\otimes{\mathbb{Q}}\to\infty}h(X)=c$$ if $c$ is the only $\pi$-accumulation point. In the following let us discuss zero as an accumulation point. We need some preparatory results first. We provide an easy and coarse estimate on the dimension of the cohomology of a pure space. Note that the important aspect for us is that this estimate is given purely in terms of degrees and dimensions of rational homotopy groups, since the formal dimension $d$ of an elliptic space can be computed just using this degree information. Lemma 2.5. Let $(\Lambda V,{\operatorname{d}})$ be a pure minimal Sullivan algebra of formal dimension $d$. Denote by $a_{1},\ldots,a_{k}$ the degrees of a homogeneous basis of $V^{\textrm{even}}$. Then $$\displaystyle\dim H(\Lambda V,{\operatorname{d}})\leq 2^{\dim V^{\textrm{odd}}-\dim V^{\textrm{even}}}\cdot\prod_{1\leq i\leq k}\lceil d/a_{i}\rceil$$ Proof. Let $w_{1},\ldots,w_{k}$ be a homogeneous basis of $V^{\textrm{even}}$ with $\deg w_{i}=a_{i}$. Consider the rational fibration given by the relative model (4) $$\displaystyle(\Lambda\langle v_{1},\ldots,v_{k}\rangle\otimes\Lambda V,{\operatorname{d}})$$ with fibre $(\Lambda\langle v_{1},\ldots,v_{k}\rangle,0)$ generated by elements of degrees $\deg v_{i}=a_{i}\cdot(\lceil d/a_{i}\rceil+1)-1$ satisfying ${\operatorname{d}}v_{i}=w_{i}^{\lceil d/a_{i}\rceil+1}$. By construction—we chose the $v_{i}$ to map to elements of degree larger than the formal dimension $d$ of $(\Lambda V,{\operatorname{d}})$ under the differential ${\operatorname{d}}$—the total space actually has the following minimal model up to isomorphism.
$$\displaystyle(\Lambda\langle v_{1},\ldots,v_{k}\rangle\otimes\Lambda V,{% \operatorname{d}})\cong(\Lambda V,{\operatorname{d}})\otimes(\Lambda\langle v_% {1},\ldots,v_{k}\rangle,0)$$ Hence the relative model (4) admits a second rational fibration structure with base space $(\Lambda(V^{\textrm{even}}\oplus\langle v_{1},\ldots,v_{k}\rangle),{% \operatorname{d}})$ and fibre $(\Lambda V^{\textrm{odd}},0)$. The dimension of the cohomology of the base space is $\prod_{1\leq i\leq k}\lceil d/a_{i}\rceil$. By the $E_{2}$-term of the associated Serre spectral sequence of this new fibration we deduce that $$\displaystyle\dim H(\Lambda V,{\operatorname{d}})\cdot 2^{k}$$ $$\displaystyle=$$ $$\displaystyle\dim H((\Lambda V,{\operatorname{d}})\otimes(\Lambda\langle v_{1}% ,\ldots,v_{k}\rangle,0))$$ $$\displaystyle\leq$$ $$\displaystyle\dim H(\Lambda V^{\textrm{odd}},0)\cdot\dim H(\Lambda(V^{\textrm{% even}}\oplus\langle v_{1},\ldots,v_{k}\rangle),{\operatorname{d}})$$ $$\displaystyle\leq$$ $$\displaystyle 2^{\dim V^{\textrm{odd}}}\cdot\prod_{1\leq i\leq k}\lceil d/a_{i}\rceil$$ The assertion follows. ∎ For the next proposition it would have been enough to work with the well-known estimate $\dim H^{*}(X)\leq 2^{\dim X}$ for an elliptic space (and again to use that formal dimension can be expressed via the degrees of rational homotopy groups). As a service to the reader we provided the last lemma with its concise proof instead. Proposition 2.6. If the family $\mathcal{X}$ of elliptic spaces has $0$ as an accumulation point, then $$\displaystyle\sup_{X\in\mathcal{X}}\{\dim\pi_{*}(X)\otimes{\mathbb{Q}}\}=% \infty\qquad\textrm{or}\qquad\sup_{X\in\mathcal{X}}\{i\in{\mathbb{N}}\mid\pi_{% i}(X)\otimes{\mathbb{Q}}\neq 0\}=\infty$$ In any case formal dimensions are unbounded, i.e. $$\displaystyle\sup_{X\in\mathcal{X}}\{\dim X\}=\infty$$ Proof. 
It suffices to show that fixing the rational homotopy groups $\pi_{*}(X)\otimes{\mathbb{Q}}$ of an elliptic space $X$, there exists $\alpha\in{\mathbb{N}}$ such that $\dim H^{*}(X^{\prime})\leq\alpha$ for all elliptic $X^{\prime}$ satisfying $\pi_{*}(X^{\prime})\otimes{\mathbb{Q}}=\pi_{*}(X)\otimes{\mathbb{Q}}$. Indeed, this implies that for $\dim H^{*}(X)$ to be unbounded within $\mathcal{X}$ (which is clearly necessary for accumulation point zero), it is required to have infinitely many configurations $\pi_{*}(X)$. That is, either infinitely many homotopy Betti numbers or infinitely many degrees of rational homotopy groups (or both). In any case the dimension formula (see Section 1.1) for elliptic spaces (together with the observation that the existence of an even-degree basis element of the rational homotopy groups requires the existence of an additional odd-degree one of at least twice the degree, see [5, Proposition 32.9]) yields the unboundedness of formal dimensions. So let us show the existence of $\alpha$. By the odd spectral sequence ([5, Chapter 32(b), p. 438]) $\dim H(\Lambda V,{\operatorname{d}})\leq\dim H(\Lambda V,{\operatorname{d}}_{% \sigma})$ for a minimal Sullivan algebra $(\Lambda V,{\operatorname{d}})$ with associated pure one $(\Lambda V,{\operatorname{d}}_{\sigma})$. Hence, without restriction, we may assume that the $X^{\prime}$ are pure spaces, and we have to show that fixing rational homotopy groups there are only finitely many $\dim H^{*}(X^{\prime})$ for pure $X^{\prime}$ realising the homotopy groups. This follows from Lemma 2.5 in which we provide an upper bound on cohomology merely in terms of the degrees and dimensions of the rational homotopy groups. ∎ Recall the family of complex projective spaces from Example 2.2 with constant dimension of rational homotopy groups and diverging cohomology. Here the top degrees of rational homotopy diverge. 
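The behaviour of this family can be made explicit in a short numerical sketch (the helper `h` is ours and mirrors the quotient $\dim\pi_{*}(X)\otimes{\mathbb{Q}}/\dim H^{*}(X)$ used throughout; the rational data of ${\mathbb{C}}{\mathbf{P}}^{n}$ are as recalled in Example 2.2):

```python
def h(dim_pi, dim_H):
    # The invariant h(X) = dim pi_*(X) tensor Q / dim H^*(X) considered throughout.
    return dim_pi / dim_H

# CP^n: dim pi_*(CP^n) tensor Q = 2 (generators in degrees 2 and 2n + 1),
# while dim H^*(CP^n) = n + 1.
values = [h(2, n + 1) for n in range(1, 101)]

assert values[0] == 1.0   # CP^1 = S^2, consistent with even spheres having h = 1
assert values[-1] == 2 / 101
assert all(x > y for x, y in zip(values, values[1:]))  # monotone decrease towards zero
# Meanwhile the top homotopical degree 2n + 1 of CP^n diverges with n.
```

This is only a numerical illustration of the convergence statement, not part of the argument.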
The family of products of spheres of a fixed dimension clearly has bounded top homotopical degree and diverging homotopical dimension. Both families $\pi$-converge to zero. This illustrates that both cases in the proposition really can occur. So we already started to answer Question. Which accumulation points can be realised by a family of elliptic spaces? Or, much more interestingly, which $\pi$-accumulation-points can be realised? Remark 2.7. Instead of merely looking at limits, we also suggest to have a closer look at the rate of convergence. For example, if convergence is governed by $n\mapsto\tfrac{n}{2^{n}}$, then the elements of $\mathcal{X}$ satisfy the toral rank conjecture. (Of course, this is a rather restrictive condition.) Clearly, it is well-known (see [6, Theorem 7.13, p. 279]) that the toral rank ${\operatorname{rk\,}}(X)$ of an elliptic $X$ satisfies ${\operatorname{rk\,}}(X)\leq\chi_{\pi}(X)\leq\dim\pi_{\textrm{odd}}(X)$. Then $$\displaystyle\frac{{\operatorname{rk\,}}(X)}{\dim H^{*}(X)}$$ $$\displaystyle\leq\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}\leq\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{2^{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}}$$ and (ignoring trivial cases of contractible $X$ or vanishing rank) $$\displaystyle\dim H^{*}(X)$$ $$\displaystyle\geq 2^{{\operatorname{rk\,}}(X)}\cdot\frac{{\operatorname{rk\,}}(X)}{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}\cdot\frac{2^{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}}{2^{{\operatorname{rk\,}}(X)}}$$ Since $\dim\pi_{*}(X)\otimes{\mathbb{Q}}-{\operatorname{rk\,}}(X)+\log_{2}{\operatorname{rk\,}}(X)-\log_{2}\dim\pi_{*}(X)\otimes{\mathbb{Q}}$ is greater than or equal to $0$ for all relevant values, the toral rank conjecture holds in this situation. $\boxbox$ Next we investigate how convergence to zero behaves under fibrations whence extending the class to more instances. Proposition 2.8.
Let $\mathcal{FIB}=(F\overset{}{\hookrightarrow}X\to B)$ be a family of totally non-homologous to zero fibrations of elliptic spaces. • The family $\mathcal{X}$ of total spaces $\pi$-converges to zero if both the family $\mathcal{F}$ of fibres and the family $\mathcal{B}$ of base spaces do. • The family of total spaces $\mathcal{X}$ is $\pi$-convergent to zero if and only if so is the family of spaces $\mathcal{F}\times\mathcal{B}=(F\times B)$. Proof. Assume first that both $\mathcal{F}$ and $\mathcal{B}$ $\pi$-converge to zero, and we shall show that $\mathcal{X}$ also $\pi$-converges to $0$. Now with $h(F)$ and $h(B)$ also $$\displaystyle h(X)=\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}\leq% \frac{\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{% \dim H^{*}(X)}\leq h(F)+h(B)$$ tends to zero (using the assumption that both $\dim H^{*}(F),\dim H^{*}(B)\leq\dim H^{*}(X)$). Due to Lemma 4.2 and the formula $$\displaystyle 3\dim\pi_{*}(X)\otimes{\mathbb{Q}}\geq\dim\pi_{*}(F)\otimes{% \mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}$$ which we obtain from there, we derive that with $\pi_{*}(F)\otimes{\mathbb{Q}}$ and $\pi_{*}(B)\otimes{\mathbb{Q}}$ also $\dim\pi_{*}(X)\otimes{\mathbb{Q}}$ is unbounded. Hence $\mathcal{X}$ $\pi$-converges to zero. Now we deal with the second assertion. By the very last argument we derive that $\mathcal{X}$ has unbounded rational homotopy if and only if $\mathcal{F}\times\mathcal{B}$ has. It remains to show that $h(X)$ tends to zero if and only if $h(F\times B)$ does. 
Due to the fibrations being totally non-homologous to zero we have $$\displaystyle\tfrac{1}{3}\cdot h(F\times B)$$ $$\displaystyle=\frac{\tfrac{1}{3}\cdot\dim\pi_{*}(F\times B)\otimes{\mathbb{Q}}% }{\dim H^{*}(X)}$$ $$\displaystyle\leq h(X)$$ $$\displaystyle=\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}$$ $$\displaystyle\leq\frac{\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes% {\mathbb{Q}}}{\dim H^{*}(F\times B)}=h(F\times B)$$ and the assertion follows. ∎ To avoid confusion: for the family $\mathcal{F}\times\mathcal{B}$ the respective spaces $F$ and $B$ belong to the same fibration. As the proof shows, for the first part of the statement instead of a totally non-homologous to zero fibration it would be enough to have the weaker properties $\dim H^{*}(F),\dim H^{*}(B)\leq\dim H^{*}(X)$. 3. Proof of Theorem A We use the two-stage decomposition for the space $X$ described in the introduction. Proof of Theorem A. We need to show that for arbitrarily large numbers there exist infinitely many two-stage spaces $X$ with larger homotopy and with $h(X)$ tending to zero. It is clear (for example just by taking products of two-stage spaces) that the class of two-stage spaces has unbounded rational homotopy. Hence it remains to see that the number $h(X)$ tends to zero with $\dim\pi_{*}(X)\otimes{\mathbb{Q}}$ going to infinity. We recall from [9, Theorem 2.3, p. 195] that $$\displaystyle\dim H^{*}(X)\geq 2^{\dim W^{1}-\dim V^{\textrm{even}}}$$ Moreover, by word-length $$\displaystyle\dim H^{*}(X)\geq$$ $$\displaystyle 1+\dim\Lambda^{\leq 2}V^{\textrm{even}}+\dim\Lambda^{\leq 2}W^{0}$$ $$\displaystyle+\dim V^{\textrm{even}}\cdot\dim W^{0}-\dim W^{1}$$ $$\displaystyle=$$ $$\displaystyle 1+2\dim V^{\textrm{even}}+{\dim V^{\textrm{even}}\choose 2}+\dim W% ^{0}+{\dim W^{0}\choose 2}$$ $$\displaystyle+\dim V^{\textrm{even}}\cdot\dim W^{0}-\dim W^{1}$$ Set $n:=\dim V^{\textrm{even}}$, $m:=\dim W^{0}$, $r:=\dim W^{1}-\dim V^{\textrm{even}}$. 
(It is clear that $r\geq 0$.) It follows that $$\displaystyle h(X)\leq\frac{2n+m+r}{\max(\tfrac{1}{2}(n^{2}+n+m^{2}+m+2nm+2-2r),2^{r})}$$ and we have to show that as one of $n,m,r$ goes to infinity, this expression falls below $1/k$ for any $k\in{\mathbb{N}}$. We consider two different cases. Case 1. Suppose that $r\leq\frac{n+m}{2}$, which implies that $$\displaystyle h(X)\leq\frac{5n+3m}{n^{2}+m^{2}+2nm+2}$$ In this case, whenever $n\to\infty$ or $m\to\infty$ the right hand side becomes arbitrarily small. Case 2. Suppose that $r\geq\frac{n+m}{2}$, i.e., in particular, $4r\geq 2n+m$. Then $r$ tends to infinity if so do $n$ or $m$. Moreover, $$\displaystyle h(X)\leq\frac{2n+m+r}{2^{r}}\leq\frac{5r}{2^{r}}$$ This converges to $0$ whenever any of $n,m,r$ tend to infinity. ∎ We leave it to the reader to make use of the fact that the estimates are explicit, i.e. to provide concrete numbers for $n,m,r$ for which the estimates hold. 4. Proof of Theorem C Remark 4.1. In the following we shall draw on the dimension formula (see Section 1.1). We remark that this formula for general elliptic Sullivan algebras $(\Lambda V,{\operatorname{d}})$ is obtained by a reduction to the pure case, i.e. by passing to the associated pure model $(\Lambda V,{\operatorname{d}}_{\sigma})$. For this (see [5, Proposition 32.4, p. 438]) it is shown that in the case when $(\Lambda V,{\operatorname{d}})$ is minimal its cohomology is finite dimensional if and only if so is $H(\Lambda V,{\operatorname{d}}_{\sigma})$. In [5, Proposition 32.7, p. 442] it is shown that $(\Lambda V,{\operatorname{d}}_{\sigma})$ and $(\Lambda V,{\operatorname{d}})$ have the same maximal degrees of non-vanishing cohomology. However, although, as it seems, not explicitly required in the assertions, the proof of this latter result also assumes the minimality of $(\Lambda V,{\operatorname{d}})$.
Clearly, already the cohomological finiteness result is wrong without the minimality assumption, as already the example of the contractible algebra $(\Lambda\langle x,y\rangle,x\mapsto y,\deg x=2,\deg y=3)$ with associated pure algebra $(\Lambda\langle x,y\rangle,0)$ of infinite-dimensional cohomology shows. Also [5, Theorems 32.6, p. 441, and 32.9, p. 442] draw on minimality although, putatively, not stated. Clearly, the difference between minimal and non-minimal models is eradicated when formulating the dimension formula in terms of homotopy groups (see [5, p. 434]). The dimension formula, however, stays correct the way it is formulated via even and odd exponents of Sullivan algebras (see Section 1.1) also for non-minimal algebras if either $(\Lambda V,{\operatorname{d}})$ is pure or under the following restriction: Up to isomorphism a Sullivan algebra can be written as the product of a minimal and a contractible one (see [5, Theorem 14.9, p. 187]). Hence it remains to verify when the dimension formula holds for the contractible factor, i.e. basically for the two situations $(\Lambda\langle x,y\rangle,x\mapsto y)$ once for $\deg x$ odd and $\deg y=\deg x+1$ even, and once for $\deg x$ even and $\deg y=\deg x+1$ odd. In the first case we obtain dimension $\deg x-(\deg y-1)=\deg x-\deg x=0$, the dimension formula holds; in the second one it fails due to $\deg y-(\deg x-1)=\deg y-\deg y+2=2$. (Note that a pureness assumption excludes the second case.) However, in the proof of the following lemma we see that the latter algebra cannot be decomposed as the total space of a fibration of elliptic spaces by exactly comparing the dimension formula of such total spaces with the ones of the corresponding product spaces of potential fibre and base. Indeed, this boils down to exactly the same “$(+2)$-contradiction” we just observed. $\boxbox$ We now prove a crucial lemma underlying several results. 
Note that we already drew on it in Section 2 (which we do not use at all for the subsequent reasoning). Lemma 4.2. Let $F\overset{}{\hookrightarrow}X\to B$ be a fibration of rationally elliptic spaces. Then it holds that $$\displaystyle\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq\dim\pi_{\textrm{odd}}(B)\otimes{\mathbb{Q}}\geq\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}$$ $$\displaystyle\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq\dim\pi_{\textrm{even}}(X)\otimes{\mathbb{Q}}\geq\dim\pi_{\textrm{even}}(F)\otimes{\mathbb{Q}}$$ $$\displaystyle\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq\dim\pi_{\textrm{odd}}(F)\otimes{\mathbb{Q}}$$ and, in total, $$\displaystyle\dim\pi_{*}(X)\otimes{\mathbb{Q}}+2\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}$$ Proof. We recall the dimension formula (see Section 1.1) for the rationally elliptic space $X$. $$\displaystyle\dim X=\sum_{i}b_{i}-\sum_{j}(a_{j}-1)$$ where the $b_{i}$ range over the degrees of a homogeneous basis of $\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$, and the analog for the $a_{j}$ and $\pi_{\textrm{even}}(X)\otimes{\mathbb{Q}}$, and $\dim X$ denotes formal dimension. We now fix models and homogeneous bases of base and fibre, namely $(\Lambda\langle f_{i}\rangle_{i},\bar{\operatorname{d}})$ a minimal model of $F$, and $(\Lambda\langle b_{j}\rangle_{j},{\operatorname{d}})$ one of $B$ yielding the model of the fibration, i.e.
a (not necessarily minimal) Sullivan model for $X$ given by $$\displaystyle(\Lambda\langle f_{i},b_{j}\rangle_{i,j},{\operatorname{d}})$$ Consider the long exact homotopy sequence $$\displaystyle\pi_{i}(F)\otimes{\mathbb{Q}}\to\pi_{i}(X)\otimes{\mathbb{Q}}\to\pi_{i}(B)\otimes{\mathbb{Q}}\xrightarrow{\partial}\pi_{i-1}(F)\otimes{\mathbb{Q}}\to\pi_{i-1}(X)\otimes{\mathbb{Q}}$$ Up to a change of basis we may assume (passing to the dual sequence) that $\langle f_{i}\rangle_{1\leq i\leq m}$ is a complement of $\ker\partial^{*}$. Consequently (see Section 1.1 and the description of the differential there), $$\displaystyle{\operatorname{d}}_{0}|_{\langle f_{i}\rangle_{1\leq i\leq m}}\colon\thinspace\langle f_{i}\rangle_{1\leq i\leq m}\to\Lambda\langle b_{i},f_{j}\rangle_{i,j}/\Lambda^{\geq 2}\langle b_{i},f_{j}\rangle_{i,j}\cong\langle b_{i},f_{j}\rangle_{i,j}$$ is injective with image in $\langle b_{i}\rangle_{i}$. Again, up to change of basis, we may assume that ${\operatorname{d}}_{0}(f_{i})=b_{i}$, and $\deg f_{i}+1=\deg b_{i}$ for $1\leq i\leq m$. Hence a minimal model of $X$ is given by $(\Lambda\langle f_{i},b_{j}\rangle_{i,j>m},\tilde{\operatorname{d}})$ with a suitably adapted differential $\tilde{\operatorname{d}}$. Next, we use the equality of formal dimensions $\dim X=\dim F+\dim B$ (which can easily be deduced from the Serre spectral sequence and the fact that $E_{2}^{\dim B,\dim F}={\mathbb{Q}}$, which is left invariant by the differentials), and compute both sides separately.
By applying the dimension formula to the two respective minimal models of $X$ and of $F\times B$ it follows that $$\displaystyle\sum_{i>m}\deg b_{i}^{\textrm{odd}}+\deg f_{i}^{\textrm{odd}}-\sum_{j>m}(\deg b_{j}^{\textrm{even}}+\deg f_{j}^{\textrm{even}}-2)$$ $$\displaystyle=$$ $$\displaystyle\sum_{i}\deg b_{i}^{\textrm{odd}}+\deg f_{i}^{\textrm{odd}}-\sum_{j}(\deg b_{j}^{\textrm{even}}+\deg f_{j}^{\textrm{even}}-2)$$ That is, $$\displaystyle 0$$ $$\displaystyle=\sum_{i\leq m}\deg b_{i}^{\textrm{odd}}+\deg f_{i}^{\textrm{odd}}-\sum_{j\leq m}(\deg b_{j}^{\textrm{even}}+\deg f_{j}^{\textrm{even}}-2)$$ $$\displaystyle=\sum_{i\leq m}(\deg f_{i}^{\textrm{even}}+1)-(\deg f_{i}^{\textrm{even}}-1)+\sum_{i\leq m}\deg f_{i}^{\textrm{odd}}-\big(\deg f_{i}^{\textrm{odd}}+1-1\big)$$ $$\displaystyle=2\cdot\#_{1\leq i\leq m}f_{i}^{\textrm{even}}$$ It follows that no even-degree element is contracted by ${\operatorname{d}}_{0}=\partial^{*}$, i.e. any even-degree rational homotopy group of $F$ passes non-trivially to $X$. Analogously, the equation can be rewritten as $$\displaystyle 0$$ $$\displaystyle=\sum_{i\leq m}\deg b_{i}^{\textrm{odd}}+\deg f_{i}^{\textrm{odd}}-\sum_{j\leq m}(\deg b_{j}^{\textrm{even}}+\deg f_{j}^{\textrm{even}}-2)$$ $$\displaystyle=\sum_{i\leq m}\deg b_{i}^{\textrm{odd}}-(\deg b_{i}^{\textrm{odd}}-1-1)+\sum_{i\leq m}(\deg b_{i}^{\textrm{even}}-1)-\big(\deg b_{i}^{\textrm{even}}-1\big)$$ $$\displaystyle=2\cdot\#_{1\leq i\leq m}b_{i}^{\textrm{odd}}$$ and odd-degree rational homotopy groups of the base space $B$ pass injectively to $X$. Both observations taken together prove that $$\displaystyle\dim\pi_{\textrm{even}}(X)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq\dim\pi_{\textrm{even}}(F)\otimes{\mathbb{Q}}\qquad\textrm{and}$$ $$\displaystyle\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq\dim\pi_{\textrm{odd}}(B)\otimes{\mathbb{Q}}$$ It is well known (see [5, Proposition 32.10, p.
444]) that a rationally elliptic $Y$ satisfies $\dim\pi_{\textrm{odd}}(Y)\otimes{\mathbb{Q}}\geq\dim\pi_{\textrm{even}}(Y)\otimes{\mathbb{Q}}$. Hence it remains to prove that $\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}\geq\dim\pi_{\textrm{odd}}(F)\otimes{\mathbb{Q}}$, whence the formula $$\displaystyle\dim\pi_{*}(X)\otimes{\mathbb{Q}}+2\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}$$ follows by summation. The linear part of the differential ${\operatorname{d}}$, namely $$\displaystyle{\operatorname{d}}_{0}\colon\thinspace\langle f_{i}^{\textrm{odd}}\rangle_{i}\to\big(\Lambda\langle f_{i},b_{j}\rangle_{i,j}/\Lambda^{\geq 2}\langle f_{i},b_{j}\rangle_{i,j}\big)^{\textrm{even}}\cong\langle f_{i}^{\textrm{even}},b_{j}^{\textrm{even}}\rangle_{i,j}$$ maps into $\langle b_{i}^{\textrm{even}}\rangle_{i}$. From the proof on [5, p. 443] we cite that for each $b_{i}^{\textrm{even}}$ there exists a basis element $b_{i}^{\textrm{odd}}$ (of degree at least $2\deg b_{i}^{\textrm{even}}-1$). Hence we derive that $\ker{\operatorname{d}}_{0}|_{\langle f_{i}^{\textrm{odd}}\rangle_{i}}$ passes directly to $\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$, and that also its image, ${\operatorname{im\,}}{\operatorname{d}}_{0}$, is injectively represented in the odd-degree rational homotopy of $X$. The intersection of those odd-degree elements contributed by the fibre and those by the base is clearly trivial. It follows that $$\displaystyle\dim\pi_{\textrm{odd}}(F)\otimes{\mathbb{Q}}$$ $$\displaystyle=\dim\langle f_{i}^{\textrm{odd}}\rangle_{i}$$ $$\displaystyle=\dim\ker{\operatorname{d}}_{0}|_{\langle f_{i}^{\textrm{odd}}\rangle_{i}}+\dim{\operatorname{im\,}}{\operatorname{d}}_{0}|_{\langle f_{i}^{\textrm{odd}}\rangle_{i}}$$ $$\displaystyle\leq\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}$$ ∎ Remark 4.3.
We remark that the estimate $$\displaystyle\dim\pi_{*}(X)\otimes{\mathbb{Q}}+2\dim\pi_{\textrm{odd}}(X)\otimes{\mathbb{Q}}\geq\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}$$ is sharp, as is shown by the example of the Hopf fibration ${\mathbb{S}}^{3}\overset{}{\hookrightarrow}{\mathbb{S}}^{7}\to{\mathbb{S}}^{4}$. $\boxbox$ We are finally in a position to provide the Proof of Theorem C. From (3) we recall that $\dim H^{*}(X)\leq\dim H^{*}(F)\cdot\dim H^{*}(B)$. It follows from Lemma 4.2 that for elliptic spaces $F\overset{}{\hookrightarrow}X\to B$ the following estimate holds: (5) $$\displaystyle h(F\times B)=\frac{\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{\dim H^{*}(F)\cdot\dim H^{*}(B)}\leq\frac{3\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}=3h(X)$$ ∎ 5. Proofs of Theorems D and E We shall now refine the previous arguments to the case of positively elliptic $F$ or $X$. Proof of Theorem D. Let us first prove the right inequality in (1) in the depicted cases. For this we observe the following: Due to the multiplicativity of the Euler characteristic in fibrations (see Section 1.1), the space $X$ is $F_{0}$ if and only if so are both $F$ and $B$. Hence if $X$ is $F_{0}$, so are all spaces involved. Moreover, a positively elliptic space has rational cohomology concentrated in even degrees (see [5, Proposition 32.10]). Hence the Serre spectral sequence degenerates for lacunary reasons, and the fibration is totally non-homologous to zero, whence the right inequality in (1) holds (see Section 2.1). The degeneration at the $E_{2}$-term is enforced by the assumption that $F$ satisfies the Halperin conjecture. Let us now deal with the left inequality in (1). We observed that, in both settings of the assertion, $F$ is positively elliptic. As in the proof of Theorem C, we recall that $\dim H^{*}(X)\leq\dim H^{*}(F)\cdot\dim H^{*}(B)$. From Inequality (5) we recall that $h(F\times B)\leq 3h(X)$.
In the case when $F$ is positively elliptic we improve this to $h(F\times B)\leq 2h(X)$ by refining the respective proof. Indeed, it now suffices to show that (6) $$\displaystyle 2\dim\pi_{*}(X)\otimes{\mathbb{Q}}\geq\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}$$ since then $$\displaystyle h(F\times B)$$ $$\displaystyle=\frac{\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{\dim H^{*}(F\times B)}$$ $$\displaystyle\leq 2\cdot\frac{\tfrac{1}{2}\cdot(\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}})}{\dim H^{*}(X)}$$ $$\displaystyle\leq 2\cdot\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}$$ $$\displaystyle=2\cdot h(X)$$ (Note that the first inequality is actually an equality using that our fibration is totally non-homologous to zero; yet, this is irrelevant for the argument at this stage of the proof.) As we observed in the proof of Lemma 4.2, $({\operatorname{im\,}}{\operatorname{d}}_{0})_{\textrm{odd}}=0$, i.e. only odd-degree homotopy groups from $F$ contract even-degree ones from $B$.
This implies that $$\displaystyle\dim\pi_{*}(X)\otimes{\mathbb{Q}}=\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}-2c$$ where $$\displaystyle c\leq\min\{\dim\pi_{\textrm{odd}}(F)\otimes{\mathbb{Q}},\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}\}$$ Since $F$ is an $F_{0}$-space, we derive that $$\displaystyle\dim\pi_{\textrm{odd}}(F)\otimes{\mathbb{Q}}$$ $$\displaystyle=\dim\pi_{\textrm{even}}(F)\otimes{\mathbb{Q}}\qquad\dim\pi_{*}(F)\otimes{\mathbb{Q}}=2\dim\pi_{\textrm{odd}}(F)\otimes{\mathbb{Q}}$$ Since $\chi_{\pi}(B)\geq 0$, we always have for elliptic $B$ that $$\displaystyle\dim\pi_{\textrm{odd}}(B)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}\qquad\dim\pi_{*}(B)\otimes{\mathbb{Q}}\geq 2\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}$$ It follows that $$\displaystyle\dim\pi_{*}(X)\otimes{\mathbb{Q}}$$ $$\displaystyle\geq$$ $$\displaystyle\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}-\min\{\dim\pi_{*}(B)\otimes{\mathbb{Q}},\dim\pi_{*}(F)\otimes{\mathbb{Q}}\}$$ $$\displaystyle\geq$$ $$\displaystyle\max\{\dim\pi_{*}(B)\otimes{\mathbb{Q}},\dim\pi_{*}(F)\otimes{\mathbb{Q}}\}$$ whence Inequality (6). ∎ Remark 5.1. Inequality (6) certainly does not hold when $B$ is positively elliptic (instead of $F$). For this just consider the Hopf fibration ${\mathbb{S}}^{3}\overset{}{\hookrightarrow}{\mathbb{S}}^{7}\to{\mathbb{S}}^{4}$ (with corresponding inequality $2<3$) again. $\boxbox$ In the proof of Theorem D we came to a point where we had to discuss fibrations of $F_{0}$-spaces. Those are necessarily totally non-homologous to zero. In the proof of Theorem E we have to deal with fibrations of formal elliptic spaces. As $F_{0}$-spaces are formal, this generalises the previous discussion. However, such a fibration is no longer necessarily totally non-homologous to zero, as the example of the Hopf fibration ${\mathbb{S}}^{3}\overset{}{\hookrightarrow}{\mathbb{S}}^{7}\to{\mathbb{S}}^{4}$ again shows.
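To make the failure of total non-homologous-to-zero-ness concrete, here is a brief check via Sullivan models. This verification sketch is added for illustration and is not part of the original argument:

```latex
% Sketch (not in the original text): why the Hopf fibration
% S^3 -> S^7 -> S^4 is not totally non-homologous to zero.
% A relative Sullivan model of the fibration is
%   (\Lambda(a,b) \otimes \Lambda(x), d),
%   \deg a = 4, \deg b = 7, \deg x = 3,
%   d a = 0, \quad d b = a^2, \quad d x = a,
% so the fundamental class x of the fibre is transgressive
% (d_0 x = a) and does not survive to the total space:
% H^*(S^7) is spanned by 1 and the closed element b - ax.
% In particular
\dim H^{*}({\mathbb{S}}^{7}) = 2
  \;<\; 4
  = \dim H^{*}({\mathbb{S}}^{3})\cdot\dim H^{*}({\mathbb{S}}^{4}),
% whereas for a totally non-homologous to zero fibration the two
% sides would agree. In the notation above this is the contraction
% c = 1: the pair (x, a) cancels, leaving
% \dim\pi_{*}({\mathbb{S}}^{7})\otimes{\mathbb{Q}} = 1 + 2 - 2\cdot 1.
```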
Hence we shall have to discuss the trade-off between homotopy and cohomology degeneration. Let $F\overset{}{\hookrightarrow}X\to B$ be a fibration of formal elliptic spaces. Due to Proposition 1.1 we know that any such formal elliptic space has the structure of the total space of a totally non-homologous to zero fibration of an $F_{0}$-space over a product of odd-dimensional spheres. Hence $X$ admits the following Sullivan model $$\displaystyle(\Lambda V_{F}\otimes\Lambda T_{F}\otimes\Lambda V_{B}\otimes\Lambda T_{B},{\operatorname{d}})$$ with $T_{F}$, $T_{B}$ concentrated in odd degrees, $\dim V_{B}^{\textrm{even}}=\dim V_{B}^{\textrm{odd}}$, $\dim V_{F}^{\textrm{even}}=\dim V_{F}^{\textrm{odd}}$, $(\Lambda V_{F}\otimes\Lambda T_{F},\bar{\operatorname{d}})$ a model of $F$, and $(\Lambda V_{B}\otimes\Lambda T_{B},{\operatorname{d}})$ a model of $B$. Next we prove that whenever we contract an element of $T_{F}$, the cohomology at least halves. Lemma 5.2. $$\displaystyle\dim H^{*}(X)\leq\frac{\dim H^{*}(F\times B)}{2^{\dim{\operatorname{im\,}}({\operatorname{d}}_{0}|_{T_{F}})}}$$ Proof. We denote by $$\displaystyle 2a_{1},\ldots,2a_{\dim V_{F}^{\textrm{even}}},2a_{\dim V_{F}^{\textrm{even}}+1},\ldots,2a_{\dim V_{F}^{\textrm{even}}+\dim V_{B}^{\textrm{even}}}$$ the degrees of a homogeneous basis of $V_{F}^{\textrm{even}}\oplus V_{B}^{\textrm{even}}$, and by $$\displaystyle 2b_{1}-1,\ldots,2b_{\dim V_{F}^{\textrm{odd}}}-1,2b_{\dim V_{F}^{\textrm{odd}}+1}-1,\ldots,2b_{\dim V_{F}^{\textrm{odd}}+\dim V_{B}^{\textrm{odd}}}-1,$$ $$\displaystyle 2b_{\dim V_{F}^{\textrm{odd}}+\dim V_{B}^{\textrm{odd}}+1}-1,\ldots,2b_{\dim V_{F}^{\textrm{odd}}+\dim V_{B}^{\textrm{odd}}+\dim T_{F}}-1,$$ $$\displaystyle 2b_{\dim V_{F}^{\textrm{odd}}+\dim V_{B}^{\textrm{odd}}+\dim T_{F}+1}-1,\ldots,2b_{\dim V_{F}^{\textrm{odd}}+\dim V_{B}^{\textrm{odd}}+\dim T_{F}+\dim T_{B}}-1$$ the degrees of a homogeneous basis of $V_{F}^{\textrm{odd}}\oplus V_{B}^{\textrm{odd}}\oplus T_{F}\oplus T_{B}$.
Since $X$ is formal as well, we can compute its cohomology using the degrees of the rational homotopy groups of $F$ and $B$. Thus it holds that (7) $$\displaystyle\dim H^{*}(X)=2^{\dim T_{F}+\dim T_{B}}\cdot\prod_{1\leq i\leq\dim V_{B}^{\textrm{even}}+\dim V_{F}^{\textrm{even}}}b_{\pi(i)}/a_{i}$$ for some permutation $\pi$ of $\{1,\ldots,\dim V_{F}^{\textrm{odd}}+\dim V_{B}^{\textrm{odd}}+\dim T_{F}\}$, in particular satisfying $b_{\pi(i)}\leq b_{i}$. The first factor comes from the product of odd spheres over which $X$ (being formal elliptic) fibres rationally and in a totally non-homologous to zero manner; actually $\dim T_{F}+\dim T_{B}=\chi_{\pi}(X)$. The second factor computes the cohomological dimensions of possible positively elliptic fibre parts of this totally non-homologous to zero fibration decomposition of $X$. For this we observe that odd-degree homotopy of this part a priori may come from all of $V_{F}^{\textrm{odd}}\oplus V_{B}^{\textrm{odd}}\oplus T_{F}$. The degree restrictions for the $b_{i}$ essentially draw on this factor being positively elliptic, i.e. a non-trivial relation of lower degree cannot be replaced by one of higher degree, whence the one of higher degree must be trivial and yields a free factor of odd degree; indeed, the number of relations in the positively elliptic part equals the number of cohomology generators. As we need to take into account that some homotopy groups might be contracted, we may even have that $b_{\pi(i)}=a_{i}$. We shall make this more precise. Let $c:=\dim{\operatorname{im\,}}({\operatorname{d}}_{0}|_{T_{F}})$ denote the dimension of the subspace of $\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}$ which is contracted by the rational homotopy groups of $F$ dual to $T_{F}$ and hence does not contribute to $\pi_{\textrm{even}}(X)\otimes{\mathbb{Q}}$. (We focus on this homotopy solely, although, clearly, more homotopy groups may be contracted by means of ${\operatorname{d}}_{0}(V_{F}^{\textrm{odd}})$.)
We may express the $F_{0}$-part in the previous estimate as $$\displaystyle\prod_{1\leq i\leq\dim V_{B}^{\textrm{even}}+\dim V_{F}^{\textrm{even}}}b_{\pi(i)}/a_{i}$$ $$\displaystyle=$$ $$\displaystyle\prod_{1\leq i\leq\dim V_{B}^{\textrm{even}}+\dim V_{F}^{\textrm{even}}-c}\underbrace{b_{\pi(i)}/a_{i}}_{\geq 2\textrm{ or }=1}$$ $$\displaystyle\cdot\prod_{\dim V_{B}^{\textrm{even}}+\dim V_{F}^{\textrm{even}}-c+1\leq i\leq\dim V_{B}^{\textrm{even}}+\dim V_{F}^{\textrm{even}}}\underbrace{b_{\pi(i)}/a_{i}}_{=1}$$ where we reordered such that the last $c$ factors are those contracted by $T_{F}$ as depicted. For this, recall again from [5, p. 443] that, after decomposing the algebra into a minimal one times a contractible one, up to reordering the quotients $b_{\pi(i)}/a_{i}$ are at least $2$ on the minimal factor; they equal $1$ on the contractible factor, since, from the proof of Lemma 4.2, we recall that ${\operatorname{d}}_{0}$ is trivial on $V_{F}^{\textrm{even}}$, and only an odd-degree element of $V_{F}$ can map non-trivially to $V_{B}$. Hence the dimension formula yields $b_{\pi(i)}/a_{i}=1$ in this situation. Next we draw some consequences from this description: The minimal model of $F\times B$ is just the product of the minimal models of $F$ and $B$. Hence, in order to compute its cohomology, every factor $b_{i}/a_{i}\geq 2$ for $1\leq i\leq\dim V_{B}^{\textrm{even}}+\dim V_{F}^{\textrm{even}}$ yields a factor of at least $2$. In other words, every basis element of $T_{F}$ contracted via ${\operatorname{d}}_{0}$ hence reduces the dimension of the cohomology of $F\times B$ by a factor of $2$ at least. (Note that this is not true for elements contracted by ${\operatorname{d}}_{0}(V_{F}^{\textrm{odd}})$.)
This together with $b_{\pi(i)}\leq b_{i}$ from above implies that $$\displaystyle\dim H^{*}(X)\leq$$ $$\displaystyle 2^{\dim T_{F}+\dim T_{B}}\cdot\prod_{1\leq i\leq\dim V_{F}^{\textrm{even}}+\dim V_{B}^{\textrm{even}}-c}b_{\pi(i)}/a_{i}$$ $$\displaystyle\leq$$ $$\displaystyle 2^{\dim T_{F}+\dim T_{B}-c}\cdot\prod_{1\leq i\leq\dim V_{F}^{\textrm{even}}+\dim V_{B}^{\textrm{even}}}b_{i}/a_{i}$$ $$\displaystyle=$$ $$\displaystyle 2^{-c}\cdot\dim H^{*}(F\times B)$$ which proves the asserted estimate. ∎ This now enables us to prove Theorem E in the form of the next two propositions, one for each estimate in (1). Proposition 5.3. Let $F\overset{}{\hookrightarrow}X\to B$ be a fibration of formal elliptic spaces. Then $h(F\times B)\leq 2\cdot h(X)$. Proof. We recall from Lemma 4.2 that $$\displaystyle\dim\pi_{*}(F\times B)\otimes{\mathbb{Q}}\leq 3\dim\pi_{*}(X)\otimes{\mathbb{Q}}$$ We combine this with Lemma 5.2 and the notation from its proof (in particular, $c=\dim{\operatorname{im\,}}({\operatorname{d}}_{0}|_{T_{F}})$), leading to $$\displaystyle h(X)$$ $$\displaystyle=\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}$$ $$\displaystyle\geq\frac{2^{c}\cdot\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(F\times B)}$$ $$\displaystyle\geq\frac{2^{c}\cdot\big(\tfrac{1}{3}\cdot(\dim\pi_{*}(F\times B)\otimes{\mathbb{Q}})\big)}{\dim H^{*}(F\times B)}$$ $$\displaystyle=2^{c}\cdot\tfrac{1}{3}\cdot h(F\times B)$$ Hence, in order to establish $h(F\times B)\leq 2h(X)$, it remains to observe that $2^{c+1}\cdot\tfrac{1}{3}\geq 1$ is equivalent to $c\geq 1$, and to discuss the case $c=0$. If $c=0$, we argue as follows. We decompose the minimal model of $F$ as $(\Lambda V_{F}\otimes\Lambda T_{F},\bar{\operatorname{d}})$ as in the proof of Lemma 5.2. Hence, by the arguments from the proof of Lemma 4.2, ${\operatorname{d}}_{0}$ can only be non-trivial on a space of dimension $\dim V_{F}^{\textrm{odd}}$.
Clearly, $\dim V_{F}^{\textrm{odd}}\leq\tfrac{1}{2}\dim(V_{F}\oplus T_{F})$. Analogously, ${\operatorname{im\,}}{\operatorname{d}}_{0}{\,\subseteq\,}V_{B}^{\textrm{even}}$, and $\dim{\operatorname{im\,}}{\operatorname{d}}_{0}\leq\tfrac{1}{2}\dim(V_{B}\oplus T_{B})$. That is, both from fibre and from base space at most half-dimensional rational homotopy is contracted. Hence $$\displaystyle\dim\pi_{*}(X)\otimes{\mathbb{Q}}\geq\frac{1}{2}\dim\pi_{*}(F\times B)\otimes{\mathbb{Q}}$$ Adapting the inequalities above and using (3), we then have $$\displaystyle h(X)$$ $$\displaystyle=\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}$$ $$\displaystyle\geq\frac{\tfrac{1}{2}\cdot\dim\pi_{*}(F\times B)\otimes{\mathbb{Q}}}{\dim H^{*}(F\times B)}$$ $$\displaystyle=\tfrac{1}{2}\cdot h(F\times B)$$ and the result follows also in this case. ∎ Proposition 5.4. Let $F\overset{}{\hookrightarrow}X\to B$ be a fibration of formal elliptic spaces. Then $$\displaystyle h(X)<h(F)+h(B)+\frac{1}{4}$$ Proof. The proof basically consists of refining Equation (7), in particular drawing on the terminology and results established there. Hence we recall that $$\displaystyle\dim H^{*}(X)=2^{\dim T_{F}+\dim T_{B}}\cdot\prod_{1\leq i\leq\dim V_{B}^{\textrm{even}}+\dim V_{F}^{\textrm{even}}}b_{\pi(i)}/a_{i}$$ for some permutation $\pi$ of $\{1,\ldots,\dim V_{F}^{\textrm{odd}}+\dim V_{B}^{\textrm{odd}}+\dim T_{F}\}$ satisfying $b_{\pi(i)}\leq b_{i}$. We now claim and prove that, up to renumbering, $(\pi(1),\ldots,\pi(\dim V_{F}^{\textrm{even}}))=(1,\ldots,\dim V_{F}^{\textrm{even}})$, that is, the first $\dim V_{F}^{\textrm{even}}$ many $b_{i}$ come from the $F_{0}$-part of the fibre $F$; in other words, they are given by the fact that the $2b_{i}-1$ are the degrees of a homogeneous basis of $V_{F}^{\textrm{odd}}$. In order to prove this, we again draw on the observations from [5, p. 443] respectively on [5, Proposition 32.9, p. 442] and its proof.
That is, we have seen that $V_{F}^{\textrm{even}}$ corresponds injectively (respecting degrees) to a subspace of $\pi_{\textrm{even}}(X)\otimes{\mathbb{Q}}$. Hence in the (not necessarily minimal) model of the fibration there must exist a homogeneous subspace $S$ of $V_{F}^{\textrm{odd}}\oplus V_{B}^{\textrm{odd}}\oplus T_{F}\oplus T_{B}$ of dimension at least $\dim V_{F}^{\textrm{even}}$ with the property that $\bar{\operatorname{d}}S{\,\subseteq\,}\Lambda V_{F}^{\textrm{even}}$ and that $(\Lambda V_{F}^{\textrm{even}}\otimes\Lambda S,{\operatorname{d}})$ is elliptic. Here, as usual, $\bar{\operatorname{d}}$ denotes the projection of ${\operatorname{d}}$ to the fibre $\Lambda(V_{F}\oplus T_{F})$. Since $\bar{\operatorname{d}}|_{T_{F}\oplus V_{B}\oplus T_{B}}=0$, the only such subspace is gradedly isomorphic to $V_{F}^{\textrm{odd}}$ itself. In other words, since $\dim V_{F}^{\textrm{odd}}=\dim V_{F}^{\textrm{even}}$, these first degrees $b_{i}$ for $1\leq i\leq\dim V_{F}^{\textrm{even}}$ are uniquely determined by a homogeneous basis of $V_{F}^{\textrm{odd}}$. Hence using (2) we can refine Formula (7) by (8) $$\displaystyle\dim H^{*}(X)=2^{\dim T_{B}}\cdot\dim H^{*}(F)\cdot\prod_{1\leq i\leq\dim V_{B}^{\textrm{even}}}b_{\pi(i)}/a_{i}$$ where the $b_{i}$ are the odd exponents of $V_{B}\oplus T_{F}$, and $\pi$ now is a permutation of $\{1,\ldots,\dim V_{B}^{\textrm{odd}}+\dim T_{F}\}$ satisfying $b_{\pi(i)}\leq b_{i}$. Indeed, we have seen that both even and also odd exponents of the $F_{0}$-part of $F$ appear in the product. That is, the product of their quotients computes the dimension of the cohomology of the $F_{0}$-part of $F$, i.e. $\dim H(\Lambda V_{F},\bar{\operatorname{d}})$. With the decomposition of formal elliptic spaces (see Proposition 1.1 and Remark 1.2) yielding $\dim H^{*}(F)=\dim H(\Lambda T_{F},0)\cdot\dim H(\Lambda V_{F},\bar{\operatorname{d}})$, and with $2^{\dim T_{F}}=\dim H(\Lambda T_{F},0)$, the refined formula follows.
We derive the estimate $$\displaystyle\dim H^{*}(X)\geq 2^{\dim T_{B}+\dim V_{B}^{\textrm{even}}}\cdot\dim H^{*}(F)$$ or, equivalently, (9) $$\displaystyle\dim H^{*}(X)\geq 2^{\chi_{\pi}(B)+\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}}\cdot\dim H^{*}(F)$$ Clearly, we have that $\chi_{\pi}(B)+\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}\geq\tfrac{1}{2}\cdot\dim\pi_{*}(B)\otimes{\mathbb{Q}}$, and $\dim\pi_{*}(X)\otimes{\mathbb{Q}}\leq\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}$. Hence we can estimate $$\displaystyle h(X)=$$ $$\displaystyle\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{\dim H^{*}(X)}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{\dim\pi_{*}(X)\otimes{\mathbb{Q}}}{2^{\chi_{\pi}(B)+\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}}\cdot\dim H^{*}(F)}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{\dim\pi_{*}(F)\otimes{\mathbb{Q}}+\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{2^{(\dim\pi_{*}(B)\otimes{\mathbb{Q}})/2}\cdot\dim H^{*}(F)}$$ (10) $$\displaystyle=$$ $$\displaystyle\frac{\dim\pi_{*}(F)\otimes{\mathbb{Q}}}{2^{(\dim\pi_{*}(B)\otimes{\mathbb{Q}})/2}\cdot\dim H^{*}(F)}+\frac{\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{2^{(\dim\pi_{*}(B)\otimes{\mathbb{Q}})/2}\cdot\dim H^{*}(F)}$$ The formula $h(X)<h(F)+h(B)+\tfrac{1}{4}$ trivially holds true whenever one of $F$ and $B$ is contractible. Hence we may assume this not to be the case. As an elliptic space satisfies Poincaré duality, it follows that both $\dim H^{*}(F),\dim H^{*}(B)\geq 2$. We derive that (11) $$\displaystyle h(X)<h(F)+\frac{\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{2^{(\dim\pi_{*}(B)\otimes{\mathbb{Q}})/2+1}}$$ and we need to discuss the inequality $\tfrac{n}{2^{n/2+1}}\leq\frac{1}{4}$ for $n\in{\mathbb{N}}$ (playing the role of $\dim\pi_{*}(B)\otimes{\mathbb{Q}}$). This holds true unless $n\in[1,7]$.
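As a quick arithmetic check (added for illustration, not part of the original text), the borderline behaviour of this inequality can be read off directly:

```latex
% Check (not in the original): n / 2^{n/2+1} <= 1/4 is equivalent to
% n <= 2^{n/2 - 1}. Sample values of the left-hand side:
n=1:\ \tfrac{1}{2^{3/2}}\approx 0.35,\quad
n=4:\ \tfrac{4}{2^{3}}=0.5,\quad
n=7:\ \tfrac{7}{2^{9/2}}\approx 0.31,\quad
n=8:\ \tfrac{8}{2^{5}}=\tfrac{1}{4},\quad
n=9:\ \tfrac{9}{2^{11/2}}\approx 0.20.
% The ratio of consecutive values is ((n+1)/n) \cdot 2^{-1/2},
% which is < 1 for all n >= 3, so the sequence decreases from n = 3 on;
% it first reaches 1/4 at n = 8 and stays below afterwards, while all
% values for n = 1, ..., 7 exceed 1/4. Hence the inequality fails
% exactly for n in [1,7].
```

The same monotonicity argument applies verbatim to the variant $\tfrac{n}{2^{n/2+2}}\leq\tfrac{1}{4}$ appearing later in the proof, whose only exceptional value is $n=3$.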
So, in order to finish the proof, we need to distinguish and discuss the following particular cases: (i) $\dim H^{*}(F)=2$, implying that either (1) $F\simeq_{\mathbb{Q}}{\mathbb{S}}^{2k+1}$, $k\geq 0$, or (2) $F\simeq_{\mathbb{Q}}{\mathbb{S}}^{2k}$, $k\geq 1$. (ii) $\dim H^{*}(F)=3$, equivalent to $H^{*}(F;{\mathbb{Q}})\cong{\mathbb{Q}}[x]/x^{3}$. (iii) $\dim H^{*}(F)\geq 4$. Case (i.1). Let us first deal with Case (i.1). That is, we consider a fibration $$\displaystyle{\mathbb{S}}^{2k+1}\overset{}{\hookrightarrow}X\to B$$ with $\dim\pi_{*}(B)\otimes{\mathbb{Q}}\leq 7$. It follows that $\dim\pi_{*}(X)\otimes{\mathbb{Q}}\leq 8$. As $X$ is formal, from Proposition 1.1 we derive that, depending on the dimension of its rational homotopy, the cohomology of $X$ satisfies the following: for $\dim\pi_{*}(X)\otimes{\mathbb{Q}}=n$ we have $\dim H^{*}(X)\geq 2^{\lceil n/2\rceil}$. Correspondingly, in the respective cases, $(\dim\pi_{*}(X)\otimes{\mathbb{Q}},h(X))$ can be estimated by $(1,\leq\tfrac{1}{2})$, $(2,\leq 1)$, $(3,\leq\tfrac{3}{4})$, $(4,\leq 1)$, $(5,\leq\tfrac{5}{8})$, $(6,\leq\tfrac{3}{4})$, $(7,\leq\tfrac{7}{16})$, $(8,\leq\tfrac{1}{2})$. Since in our case $h(F)=h({\mathbb{S}}^{2k+1})=1/2$, we derive that the inequality $h(X)\leq 1/2+1/4=3/4$ (and hence the asserted strict inequality $h(X)<h(F)+h(B)+1/4$) holds unless $\dim\pi_{*}(X)\otimes{\mathbb{Q}}\in\{2,4\}$. In these latter two cases the estimates, however, are sharp only when $X$ is positively elliptic. That is, given that by the additivity of the homotopy Euler characteristic $\chi_{\pi}(X)\geq\chi_{\pi}({\mathbb{S}}^{2k+1})=1$, the actual bounds in these two cases are given by $(2,\geq 4)$, $(4,\geq 16)$ for $(\dim\pi_{*}(X)\otimes{\mathbb{Q}},\dim H^{*}(X))$ and by $(2,\leq\tfrac{1}{2})$, $(4,\leq\tfrac{1}{4})$ for $(\dim\pi_{*}(X)\otimes{\mathbb{Q}},h(X))$. Hence we are also done in these cases.
We remark that the arguments underlying this are our usual estimates of the cohomology of the $F_{0}$-factor: given the decomposition $(\Lambda B\otimes\Lambda V,{\operatorname{d}})$ from Proposition 1.1 and Remark 1.2, we estimate $\dim H(\Lambda B,0)\geq 2^{\dim B}$ and $\dim H(\Lambda V,{\operatorname{d}})\geq 2^{\dim V/2}$. That is, once we have fixed the dimension $\dim\pi_{*}(X)\otimes{\mathbb{Q}}=\dim V+\dim B$ (see Remark 1.2) of the total rational homotopy, the smaller the dimension of the rational homotopy of the $F_{0}$-part, $\dim V$, the larger the overall cohomology $\dim H^{*}(X)$ predicted by this estimate. Cases (i.2) and (ii). Since the Halperin conjecture is confirmed for spaces with cohomology algebra generated by one element (see Section 1.1; for singly generated cohomology this is just a trivial computation), the fibration is totally non-homologous to zero in Cases (i.2) and (ii). As we recalled in Section 2.1, the formula $h(X)\leq h(F)+h(B)$ holds whenever the fibration is totally non-homologous to zero. Hence we are done in these cases. Case (iii). In Case (iii) we refine Inequality (11) to (12) $$\displaystyle h(X)<h(F)+\frac{\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{2^{(\dim\pi_{*}(B)\otimes{\mathbb{Q}})/2+2}}$$ and find that $\tfrac{n}{2^{n/2+2}}\leq\frac{1}{4}$ holds for $n\in{\mathbb{N}}\setminus\{3\}$. Hence, eventually, assume $\dim\pi_{*}(B)\otimes{\mathbb{Q}}=n=3$. Let us see that this case is merely an artefact of the proof. Indeed, we provide a refined version of Estimate (11), derived directly from Inequality (9) and not from the simplification $\chi_{\pi}(B)+\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}\geq\tfrac{1}{2}\cdot\dim\pi_{*}(B)\otimes{\mathbb{Q}}$.
That is, we obtain $$\displaystyle h(X)<h(F)+\frac{\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{2^{\chi_{\pi}(B)+\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}}\cdot\dim H^{*}(F)}$$ The cases when $\dim\pi_{*}(B)\otimes{\mathbb{Q}}=n=3$ now are the following: • Either $\chi_{\pi}(B)=3$ and, due to formality, $B$ rationally is a product of three odd-dimensional spheres, or • $\chi_{\pi}(B)=1$ and $B$ has an $F_{0}$-component with cohomology algebra generated by one element. In the first case respectively the second case we derive that $$\displaystyle\frac{\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{2^{\chi_{\pi}(B)+\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}}\cdot\dim H^{*}(F)}\leq\frac{3}{2^{3}\cdot 4}<\frac{1}{4}$$ $$\displaystyle\frac{\dim\pi_{*}(B)\otimes{\mathbb{Q}}}{2^{\chi_{\pi}(B)+\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}}\cdot\dim H^{*}(F)}\leq\frac{3}{2^{2}\cdot 4}<\frac{1}{4}$$ and we are done. ∎ As a corollary of the proof let us record Observation (9) again, as it may be of independent interest. Corollary 5.5. For a fibration of formal elliptic spaces $F\overset{}{\hookrightarrow}X\to B$ we have the estimate $$\displaystyle\dim H^{*}(X)\geq 2^{\chi_{\pi}(B)+\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}}\cdot\dim H^{*}(F)$$ ∎ We sum up the results of these propositions. Proof of Theorem E. Clearly, Theorem E is a combination of Propositions 5.3 and 5.4. ∎ We finally prove the Conjecture for an interesting class of manifolds, namely for any fibration with $X$ rationally one of the known simply-connected manifolds of positive sectional curvature. In particular, this includes all simply-connected homogeneous spaces admitting homogeneous metrics of positive curvature; see Section 1.2 for details. The key observation for this is that any such manifold $M$ is formal and has the rational structure of an $F_{0}$-space, if $\dim M$ is even.
If $\dim M$ is odd, it satisfies ($*$) $$\displaystyle\dim\pi_{\textrm{odd}}(M)\otimes{\mathbb{Q}}=2\qquad\textrm{and}\qquad\dim\pi_{\textrm{even}}(M)\otimes{\mathbb{Q}}=1$$ unless $M$ rationally is an odd-dimensional sphere. Remark 5.6. Indeed, clearly, the class of spaces with finite-dimensional rational cohomology satisfying $(*)$ is exactly the class of spaces with rational cohomology algebra generated by one even-degree and one odd-degree element. $\boxbox$ Lemma 5.7. An elliptic Sullivan algebra $(\Lambda V,{\operatorname{d}})$ satisfying $(*)$ is formal. Proof. A minimal model of this algebra is of the form $(\Lambda V,{\operatorname{d}})=(\Lambda\langle x,y,z\rangle,{\operatorname{d}})$ with $\deg x$ even, and $\deg y$, $\deg z$ odd. Since the associated pure algebra has finite-dimensional cohomology if and only if the original one has, we derive that, without restriction, $\deg z\geq\deg y$ and $\deg z>\deg x$. We derive that $(\Lambda V,{\operatorname{d}})$ decomposes as the total space of a fibration with fibre $(\Lambda\langle x,z\rangle,\bar{\operatorname{d}})$ over $(\Lambda\langle y\rangle,0)$. Since $(\Lambda V,{\operatorname{d}})$ is simply-connected, whence $\deg y>1$, we obtain that ${\operatorname{d}}x=0$. Consequently, $(\Lambda V,{\operatorname{d}})\cong(\Lambda\langle x,z\rangle,{\operatorname{d}})\otimes(\Lambda\langle y\rangle,0)$ with ${\operatorname{d}}x=0$ and ${\operatorname{d}}z=x^{k}$ for some $k>0$. Such an algebra is clearly formal. ∎ Proof of Corollary F. We need to discuss the potential fibrations for all total spaces $X$ which we depicted around $(*)$. Case 1. If $X$ is even-dimensional, then $X$ is an $F_{0}$-space. Hence the result follows from Theorem D. Case 2. Let us assume that $X$ rationally is an odd-dimensional sphere.
By Lemma 4.2 we obtain that $\dim\pi_{\textrm{even}}(F)\otimes{\mathbb{Q}}=0$ and $\dim\pi_{\textrm{odd}}(B)\otimes{\mathbb{Q}}\leq 1$, whence $\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}\leq 1$ and $\dim\pi_{\textrm{odd}}(F)\otimes{\mathbb{Q}}\leq 1$. In total, it follows that both $F$ and $B$ have one of the models $(\Lambda\langle x\rangle,0)$, $\deg x$ odd, or $(\Lambda\langle x,y\rangle,{\operatorname{d}})$ with $\deg x$ even, $\deg y$ odd, ${\operatorname{d}}x=0$, ${\operatorname{d}}y=x^{k}$ for some $k>1$. Both are formal, and the result follows from Propositions 5.3 and 5.4. Case 3. Now suppose $X$ is odd-dimensional and not a sphere. Then $(*)$ applies. By the additivity of the homotopy Euler characteristic we know that $1=\chi_{\pi}(M)=\chi_{\pi}(F)+\chi_{\pi}(B)$. Hence either $\chi_{\pi}(F)=0$ and $\chi_{\pi}(B)=1$, or $\chi_{\pi}(F)=1$ and $\chi_{\pi}(B)=0$. In the first case $F$ is positively elliptic. Moreover, since $\dim\pi_{\textrm{even}}(F)\otimes{\mathbb{Q}}\leq\dim\pi_{\textrm{even}}(M)\otimes{\mathbb{Q}}$, it follows that $H^{*}(F)$ is generated by one element. Since the Halperin conjecture is confirmed for at most $3$ cohomology algebra generators (see Section 1.1), the result follows again from Theorem D. In the second case, $\chi_{\pi}(F)=1$, the space $B$ is positively elliptic, and we shall have to distinguish yet two more non-trivial cases (using Lemma 4.2 again). For this we first note that $\dim\pi_{\textrm{even}}(F)\otimes{\mathbb{Q}}\leq 1$, $\dim\pi_{\textrm{odd}}(F)\otimes{\mathbb{Q}}\leq 2$, $\dim\pi_{\textrm{odd}}(B)\otimes{\mathbb{Q}}\leq 2$, and $\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}\leq 2$. Combining these pieces of information leads to the following cases.
(i) $F$ rationally is an odd-dimensional sphere, and either $\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}=\dim\pi_{\textrm{odd}}(B)\otimes{\mathbb{Q}}=1$ or $\dim\pi_{\textrm{even}}(B)\otimes{\mathbb{Q}}=\dim\pi_{\textrm{odd}}(B)\otimes{\mathbb{Q}}=2$, or (ii) $F$ is of the type $(*)$ described before the proof. Hence $H^{*}(B)$ is either generated by one element or by two elements, or $B$ is rationally contractible. In order to apply Propositions 5.3 and 5.4 it remains to observe, using Lemma 5.7, that in any case all of $X$, $F$, and $B$ are formal. ∎ As we noted in [1, Corollary 4.15, p. 2292], there are several geometric conditions which require a positively curved manifold to be a compact rank one symmetric space, and hence in particular to satisfy Conjecture 0.1, such as a $2$-positive curvature operator, weakly quarter-pinched curvature, or the sectional curvature bound $\sec\geq 1$ together with the diameter bound $\operatorname{diam}M\geq\pi/2$. References [1] M. Amann and L. Kennard. Positive curvature and rational ellipticity. Algebr. Geom. Topol., 15(4):2269–2301, 2015. [2] M. Amann and L. Kennard. Positive curvature and symmetry in small dimensions. arXiv:1512.01302, 2019. DOI: 10.1142/S0219199719500536. [3] M. Amann and L. Zoller. The toral rank conjecture and variants of equivariant cohomology. arXiv:1910.04746, 2019. [4] Y. Félix and S. Halperin. Formal spaces with finite-dimensional rational homotopy. Trans. Amer. Math. Soc., 270(2):575–588, 1982. [5] Y. Félix, S. Halperin, and J.-C. Thomas. Rational homotopy theory, volume 205 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2001. [6] Y. Félix, J. Oprea, and D. Tanré. Algebraic models in geometry, volume 17 of Oxford Graduate Texts in Mathematics. Oxford University Press, Oxford, 2008. [7] D. González-Álvaro and M. Radeschi. A note on the Petersen-Wilhelm conjecture. Proc. Amer. Math. Soc., 146(10):4447–4458, 2018. [8] M. R. Hilali. Action du tore $T^{n}$ sur les espaces simplement connexes. PhD thesis, Université catholique de Louvain, 1980.
[9] B. Jessup and G. Lupton. Free torus actions and two-stage spaces. Math. Proc. Cambridge Philos. Soc., 137(1):191–207, 2004. [10] Y. Kotani and T. Yamaguchi. A lower bound for the LS category of a formal elliptic space. Math. J. Okayama Univ., 47:141–145, 2005. [11] G. Lupton. Note on a conjecture of Stephen Halperin’s. In Topology and combinatorial group theory (Hanover, NH, 1986/1987; Enfield, NH, 1988), volume 1440 of Lecture Notes in Math., pages 148–163. Springer, Berlin, 1990. [12] G. Lupton. Variations on a conjecture of Halperin. In Homotopy and geometry (Warsaw, 1997), volume 45 of Banach Center Publ., pages 115–135. Polish Acad. Sci., Warsaw, 1998. [13] M. Markl. Towards one conjecture on collapsing of the Serre spectral sequence. In Proceedings of the Winter School on Geometry and Physics (Srní, 1989), number 22, pages 151–159, 1990. [14] W. Meier. Rational universal fibrations and flag manifolds. Math. Ann., 258(3):329–340, 1981/82. [15] W. Meier. Some topological properties of Kähler manifolds and homogeneous spaces. Math. Z., 183(4):473–481, 1983. [16] O. Nakamura and T. Yamaguchi. Lower bounds of Betti numbers of elliptic spaces with certain formal dimensions. Kochi J. Math., 6:9–28, 2011. [17] T. Nishimoto, H. Shiga, and T. Yamaguchi. Rationally elliptic spaces with isomorphic cohomology algebras. J. Pure Appl. Algebra, 187(1-3):241–254, 2004. [18] S. Papadima and L. Paunescu. Reduced weighted complete intersection and derivations. J. Algebra, 183(2):595–604, 1996. [19] H. Shiga and M. Tezuka. Rational fibrations, homogeneous spaces with positive Euler characteristics and Jacobians. Ann. Inst. Fourier (Grenoble), 37(1):81–106, 1987. [20] J.-C. Thomas. Rational homotopy of Serre fibrations. Ann. Inst. Fourier (Grenoble), 31(3):v, 71–90, 1981. [21] B. Wilking and W. Ziller. Revisiting homogeneous spaces with positive curvature. arXiv:1503.06256, 2015. [22] T. Yamaguchi and S. Yokura. On ratios of homotopy and homology ranks of fibrations. 
preprint, 2020. [23] W. Ziller. Riemannian manifolds with positive sectional curvature. arXiv:1210.4102v1, 2012. Manuel Amann Institut für Mathematik Differentialgeometrie Universität Augsburg Universitätsstraße 14 86159 Augsburg Germany [email protected]
Rotating Supertubes Jin-Ho Cho and Phillial Oh BK21 Physics Research Division and Institute of Basic Science Sungkyunkwan University, Suwon 440-746, Korea E-mail: [email protected], [email protected] Abstract: We study the rotating tubular D2-brane as a time dependent supersymmetric solution of type-IIA string theory. We show that the Poynting angular momentum of the supertube can be replaced by the mechanical angular momentum without disturbing the 8 supersymmetries. Unlike the non-rotating supertube, whose cross section can take an arbitrary shape, the rotating supertube admits only the circular cross section. When there is no electric field on the world-volume, the supersymmetry dictates the angular velocity of the tubular D2-brane to be inversely proportional to the magnetic field. This rotating supertube can be considered as the ‘blown-up’ configuration of an array of spinning D0-particles and is T-dual to the spiraling D-helix whose pitch moves at the speed of light. Supertube, D-Helix, Time dependent system, Supersymmetry preprint: hep-th/0302172 1 Introduction The experimental evidence for the positive cosmological constant [1] calls our attention to time dependent solutions in superstring theory [2, 3, 4, 5]. These solutions involve brane configurations which are unstable by construction, therefore not supersymmetric, and usually have tachyonic modes on the world-volumes of the branes. (See Ref. [6] for an example.) On the other hand, a time dependent but supersymmetric solution was recently found in Ref. [7]. The solution describes a tubular D2-brane, named the ‘supertube’, carrying some angular momentum. Although the supersymmetry algebra $\{Q,\,Q\}\sim{\cal H}$ seems to imply an inconsistency between time dependence and supersymmetry in general, the supertube is a completely consistent solution.
This is because the angular momentum is the Poynting vector provided by the Born-Infeld (BI) gauge fields on its world-volume, and only the gauge connection component $A_{x}$ is time dependent [8]. By taking T-duality on this supertube, one can obtain another interesting time dependent supersymmetric solution, the D-helix [9], that is, a coiled D-string in real motion along the axial direction at the speed of light. (See Ref. [10] for another derivation of the solution.) This solution is also consistent with supersymmetry because its (time dependent) position in the embedding flat spacetime geometry does not specify the state of the D-helix. Since we lack an effective tool for the theory involving tachyons, the time dependent supersymmetric solutions could provide good toy models to build up our intuition for time dependent systems, and could possibly be starting points in the arena of the aforementioned stringy cosmology. In fact, for the problems of the spacelike (or null) singularity, time dependent supersymmetric solutions have been explored to see whether they provide stable backgrounds amenable to exact perturbative string analysis [11, 12, 13, 14, 15, 16, 17]. Along this line, several time dependent systems of supersymmetric intersecting D-strings were analyzed in type-IIB theory [16, 18, 19, 20]. On the IIA side, an interesting time dependent solution of a rotating ellipsoidal D2-brane was constructed in Ref. [21]. Although the solution is classically stable [22], it is not supersymmetric. In this paper, we consider a rotating tubular D2-brane as a possible time dependent supersymmetric configuration that is in real motion. We recall that BPS Q-balls or some topological solitons are stabilized by the angular momentum [23, 24]. In the non-rotating supertube case, the angular momentum is given by the BI gauge fields.
We show that the Poynting angular momentum of the supertube can be replaced by the mechanical angular momentum without disturbing the 8 supersymmetries. Unlike the non-rotating supertube, whose cross section can take an arbitrary shape [25, 26], the rotating supertube admits only the circular cross section. When there is no electric field on the world-volume, the supersymmetry dictates the angular velocity of the tubular D2-brane to be inversely proportional to the magnetic field. This rotating supertube can be considered as the ‘blown-up’ configuration of an array of spinning D0-particles and is T-dual to the spiraling D-helix whose pitch moves at the speed of light. The paper is organized as follows. In Sec. 2, we construct the BI action for a rotating tubular D2-brane and obtain its equations of motion. In Sec. 3, we analyze the supersymmetry for the rotating tube and show that 8 supersymmetries are preserved regardless of the rotation. In particular, we use a specific representation for the spinors and find the supersymmetric conditions explicitly. In the subsequent subsections, we apply these supersymmetric conditions to the equations of motion and find the solutions. For the non-rotating case, we recover the results discovered in Refs. [7, 26, 27] and obtain the equation governing the supersymmetric profile of the radius-varying configuration (Sec. 3.1). For the rotating case, the supersymmetric condition allows only a circular cross section of the configuration (Sec. 3.2). Without any electric field assumed over the world-volume of the tubular D2-brane, the rotation of the magnetic flux induces an electric field over the configurations and stabilizes them. As for the BIon spike solution, the mechanical rotation of the planar D2-brane carrying magnetic flux triggers the electric charge. In Sec. 4, we show that the tensionless conditions agree with the supersymmetric conditions.
In the rotating case, the tension around the angular direction vanishes only when the cross section takes the circular shape. In Sec. 5, we discuss the BPS bound of the Hamiltonian and argue that the charge associated with the rotation is due to the fundamental strings induced by the moving magnetic flux. In Sec. 6, we discuss the angular momentum bound which forbids the supertube from rotating at a superluminal linear speed. The bound tells us that the charge density of the induced fundamental strings cannot exceed that of the D0-branes dissolved on the tube. Sec. 7 is devoted to a T-dual configuration of the rotating supertube. Taking T-duality along the axial direction of the rotating supertube, we obtain the spiraling D-helix, whose pitch moves at the speed of light. Sec. 8 concludes the paper with some discussions on the M-brane configuration of the rotating supertube. We also confirm the circular shape of the rotating supertube by showing (in the Appendix) that two tilted D-strings passing by each other cannot preserve any supersymmetry even when the intersection point moves at the speed of light. Lastly, we give some remarks on the zero radius of the rotating supertube. 2 A Tubular D2-brane in Rotational Motion Let us consider a tubular D2-brane embedded in a trivial type-IIA background, that is, with no dilaton field, no NS-NS $B$ field, and with flat geometry; $$\displaystyle ds^{2}=-dT^{2}+dX^{2}+R^{2}d\Phi^{2}+dR^{2}+ds^{2}({\mathbb{E}}^{(6)}).$$ (1) In order to allow the rotational motion of the tube, we embed the world-volume angular coordinate $\varphi$ into the above flat spacetime background in a time dependent manner; $\Phi=\Phi(t,\,\varphi)$. The BI action has the world-volume reparametrization symmetry.
For our purpose, it is convenient to choose the following ‘comoving gauge’ for the world-volume coordinates $(t,\,x,\,\varphi)$; $$\displaystyle dT=dt,\quad dX=dx,\quad d\Phi=d\varphi+\Phi_{t}dt.$$ (2) The world-volume coordinate frame is therefore rotating with respect to the spacetime coordinate frame with the angular velocity $\Phi_{t}\equiv\partial\Phi/\partial t$, and the coordinate field $\Phi(t)$ will be considered as one of the dynamical variables of the system. In the above specific gauge, we let the radial profile $R$ of the tube vary with the coordinates $(x,\,\varphi)$ but not with time $t$. In other words, the tubular D2-brane has a rigid radial profile (in the world-volume coordinates) but is rotating with respect to the spacetime coordinate frame. The geometry induced on the tube becomes $$\displaystyle ds^{2}=g_{\alpha\beta}d\sigma^{\alpha}d\sigma^{\beta}=-\left(1-R^{2}\Phi_{t}^{2}\right)dt^{2}+\left(1+R_{x}^{2}\right)dx^{2}+\left(R^{2}+R_{\varphi}^{2}\right)d\varphi^{2}+2R_{x}R_{\varphi}dx\,d\varphi+2R^{2}\Phi_{t}\,dt\,d\varphi,$$ where $R_{x}\equiv\partial R/\partial x$ and $R_{\varphi}\equiv\partial R/\partial\varphi$. We consider the following BI 2-form field strength in the above specific gauge (2): $$\displaystyle F=E\,dt\wedge dx+B\,dx\wedge d\varphi.$$ (4) We absorbed the scale $2\pi l_{s}^{2}$ into the above definition so that the field $E$ is dimensionless while the component $B$ has the dimension of length. Note that the electric field $E$ and the magnetic field $B$ are defined in the comoving frame. Actually, this setup of the fields gives the true meaning of the ‘rotation’ of the world-volume. The magnetic flux, written in the rotating world-volume coordinates as $$\displaystyle\int\limits dx\,d\varphi\,\,B,$$ (5) is due to the D0-branes dissolved over the world-volume of the tubular D2-brane.
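As a quick consistency check (not part of the original paper), the induced metric above can be reproduced by substituting the comoving-gauge relations (2), together with $dR=R_{x}dx+R_{\varphi}d\varphi$, into the flat background metric (1). A minimal plain-Python sketch, treating the differentials as randomly sampled numbers:

```python
import random

def pullback_ds2(R, Rx, Rp, Pt, dt, dx, dp):
    # Flat background ds^2 = -dT^2 + dX^2 + R^2 dPhi^2 + dR^2,
    # with dT = dt, dX = dx, dPhi = dp + Pt*dt, dR = Rx*dx + Rp*dp.
    dT, dX = dt, dx
    dPhi = dp + Pt * dt
    dR = Rx * dx + Rp * dp
    return -dT**2 + dX**2 + R**2 * dPhi**2 + dR**2

def induced_ds2(R, Rx, Rp, Pt, dt, dx, dp):
    # The induced metric quoted in the text.
    return (-(1 - R**2 * Pt**2) * dt**2 + (1 + Rx**2) * dx**2
            + (R**2 + Rp**2) * dp**2 + 2 * Rx * Rp * dx * dp
            + 2 * R**2 * Pt * dt * dp)

random.seed(0)
for _ in range(100):
    args = [random.uniform(-2, 2) for _ in range(7)]
    assert abs(pullback_ds2(*args) - induced_ds2(*args)) < 1e-9
print("induced metric check passed")
```

The agreement at arbitrary sample points confirms the coefficient-by-coefficient match.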
The electric field $E$ is produced by the end points of the macroscopic strings laid along the axial direction of the tube. We have therefore arranged the D0-branes and macroscopic strings so that they are stalled with respect to the world-volume frame of Eq. (2) but rotate with respect to the spacetime coordinate frame. The whole situation is depicted in Fig. 1. Fig. 1: The tubular D2-brane is in rotational motion. The magnetic flux and the electric field are stalled on the tube and are rotating with respect to the spacetime coordinates. The BI Lagrangian for the tube will be $$\displaystyle{\cal L}=-\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}}\sqrt{-\det\left(g+F\right)}=-\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}}\sqrt{R^{2}\left[1-\left(E+\Phi_{t}B\right)^{2}\right]+B^{2}+\left(1-E^{2}-R^{2}\Phi_{t}^{2}\right)R_{\varphi}^{2}+R^{2}R_{x}^{2}}\equiv-\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}}\Delta.$$ (6) A few words about the reparametrization are in order. One might think that the rotation is just a gauge artifact that could be removed by a different gauge choice. However, this is not the case. Even when we use the following static gauge, $$\displaystyle T=\bar{t},\quad X=\bar{x},\quad\Phi=\bar{\varphi},$$ (7) to make $\Phi_{\bar{t}}=0$, the mechanical motion cannot be removed. The dynamical degree is just transferred to the radial profile: $R_{\bar{t}}\neq 0$ in general, because $R_{t}=R_{\bar{t}}+R_{\varphi}\Phi_{t}=0$. Owing to the reparametrization symmetry, the above Lagrangian (6) remains the same but with the replacement of $-R_{\varphi}\Phi_{t}$ by $R_{\bar{t}}$. The BI fields of Eq. (4) are also translated into the new language as $\bar{E}=E+\Phi_{t}B,\,\bar{B}=B$, and all the results below are valid in this new gauge too. However, we stress again that the gauge dependent statement about the magnetic flux (5) should be kept in mind.
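The square root in Eq. (6) can be spot-checked numerically: assembling the $3\times 3$ matrix $g+F$ from the induced metric and the field strength (4), the quantity $-\det(g+F)$ should reproduce $\Delta^{2}$ at arbitrary parameter values. A plain-Python sketch (an illustration, not part of the paper):

```python
import random

def minus_det_g_plus_F(R, Rx, Rp, Pt, E, B):
    # Rows/columns ordered (t, x, phi); g from the induced metric,
    # F_{tx} = E and F_{x phi} = B added antisymmetrically.
    M = [[-(1 - R**2 * Pt**2), E,            R**2 * Pt   ],
         [-E,                  1 + Rx**2,    Rx * Rp + B ],
         [R**2 * Pt,           Rx * Rp - B,  R**2 + Rp**2]]
    det = (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
         - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
         + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))
    return -det

def Delta_squared(R, Rx, Rp, Pt, E, B):
    # The expression under the square root in Eq. (6).
    return (R**2 * (1 - (E + Pt*B)**2) + B**2
            + (1 - E**2 - R**2 * Pt**2) * Rp**2 + R**2 * Rx**2)

random.seed(1)
for _ in range(100):
    args = [random.uniform(-1.5, 1.5) for _ in range(6)]
    assert abs(minus_det_g_plus_F(*args) - Delta_squared(*args)) < 1e-9
print("Delta^2 check passed")
```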
We still interpret the magnetic flux in the specific gauge (2) as the net charge of the D0-branes, and the vanishing of the electric field $E$ (not $\bar{E}$) as the absence of macroscopic IIA strings dissolved in the D2-brane. In fact, the two gauge dependent statements ($R_{t}=0$ and Eq. (4)) define our system. Since both the coordinate $\Phi$ and the BI gauge field component $A_{x}$ are dynamical, we have to consider their conjugate momenta $$\displaystyle\Pi=\frac{\partial{\cal L}}{\partial\Phi_{t}}=\frac{R^{2}\left[\Phi_{t}\left(R_{\varphi}^{2}+B^{2}\right)+EB\right]}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta}$$ (8) and $$\displaystyle\Pi_{A}=\frac{\partial{\cal L}}{\partial E}=\frac{E\left(R^{2}+R_{\varphi}^{2}\right)+\Phi_{t}BR^{2}}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta}$$ (9) to obtain the Hamiltonian, $$\displaystyle{\cal H}=\Pi\Phi_{t}+\Pi_{A}E-{\cal L}=\frac{R_{\varphi}^{2}+B^{2}+R^{2}\left(R_{x}^{2}+1\right)}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta}.$$ (10) Varying the Lagrangian (6) with respect to $\delta\Phi$ and $\delta A_{t,x,\varphi}$, one can obtain the equations of motion respectively as $$\displaystyle g_{s}l_{s}(2\pi l_{s})^{2}\partial_{t}\Pi+\partial_{x}\frac{R^{2}R_{x}R_{\varphi}}{\Delta}+\partial_{\varphi}\frac{R^{2}\left(-1-R_{x}^{2}+E^{2}+\Phi_{t}BE\right)}{\Delta}=0,$$ (11) $$\displaystyle g_{s}l_{s}(2\pi l_{s})^{2}\partial_{x}\Pi_{A}-\partial_{\varphi}\frac{ER_{x}R_{\varphi}}{\Delta}=0,$$ (12) $$\displaystyle g_{s}l_{s}(2\pi l_{s})^{2}\partial_{t}\Pi_{A}+\partial_{\varphi}\frac{B\left(1-R^{2}\Phi_{t}^{2}\right)-ER^{2}\Phi_{t}}{\Delta}=0,$$ (13) $$\displaystyle\partial_{t}\frac{ER_{x}R_{\varphi}}{\Delta}+\partial_{x}\frac{B\left(1-R^{2}\Phi_{t}^{2}\right)}{\Delta}=0,$$ (14) where the second equation is the Gauss law constraint corresponding to the $U(1)$ local symmetry. 3 Supersymmetry Analysis in General In this section, we perform the supersymmetry analysis explicitly.
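Before proceeding, note that the collapse of the Legendre transform $\Pi\Phi_{t}+\Pi_{A}E-{\cal L}$ to the compact expression (10) is easy to verify numerically. The sketch below (plain Python, not part of the paper) sets the overall constant $g_{s}l_{s}(2\pi l_{s})^{2}=1$ for convenience, a unit choice of the illustration only:

```python
import math, random

def Delta(R, Rx, Rp, Pt, E, B):
    return math.sqrt(R**2 * (1 - (E + Pt*B)**2) + B**2
                     + (1 - E**2 - R**2 * Pt**2) * Rp**2 + R**2 * Rx**2)

def hamiltonian_legendre(R, Rx, Rp, Pt, E, B):
    # H = Pi*Phi_t + Pi_A*E - L, with g_s l_s (2 pi l_s)^2 = 1.
    D = Delta(R, Rx, Rp, Pt, E, B)
    Pi   = R**2 * (Pt * (Rp**2 + B**2) + E*B) / D   # Eq. (8)
    Pi_A = (E * (R**2 + Rp**2) + Pt * B * R**2) / D  # Eq. (9)
    L = -D
    return Pi * Pt + Pi_A * E - L

def hamiltonian_closed_form(R, Rx, Rp, Pt, E, B):
    # Eq. (10).
    return (Rp**2 + B**2 + R**2 * (Rx**2 + 1)) / Delta(R, Rx, Rp, Pt, E, B)

random.seed(2)
for _ in range(100):
    # Parameter ranges chosen so that Delta^2 stays positive.
    R = random.uniform(0.5, 1.5)
    Rx, Rp = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
    Pt, E = random.uniform(-0.3, 0.3), random.uniform(-0.3, 0.3)
    B = random.uniform(-1.0, 1.0)
    a = hamiltonian_legendre(R, Rx, Rp, Pt, E, B)
    b = hamiltonian_closed_form(R, Rx, Rp, Pt, E, B)
    assert abs(a - b) < 1e-9
print("Hamiltonian check passed")
```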
With an appropriate choice of $\Gamma$-matrix representation, we solve the Killing spinor equations in the general setup including the case where $E\neq 0$ and $\Phi_{t}\neq 0$. The number of supersymmetries preserved by the rotating tubular D2-brane is determined by the number of the Killing spinors $\epsilon$ satisfying $$\displaystyle\Delta\,\Gamma\,\epsilon=\left(\gamma_{tx\varphi}+E\gamma_{\varphi}\Gamma_{\natural}+B\gamma_{t}\,\Gamma_{\natural}\right)\epsilon=\left[R_{\varphi}\Gamma_{TXR}+R\,\Gamma_{TX\Phi}+RR_{x}\Gamma_{TR\Phi}+\Phi_{t}RR_{\varphi}\Gamma_{XR\Phi}+E\left(R_{\varphi}\Gamma_{R}+R\Gamma_{\Phi}\right)\Gamma_{\natural}+B\left(\Gamma_{T}+R\Phi_{t}\,\Gamma_{\Phi}\right)\Gamma_{\natural}\right]\epsilon=\Delta\,\epsilon,$$ (15) where $\Gamma$ is the matrix defining kappa transformation on the world-volume of D-branes [28] and $\gamma_{\alpha}$’s and $\Gamma_{X^{\mu}}$’s are gamma matrix components in the world-volume coordinates and in the spacetime coordinates respectively. The operator $\Gamma_{\natural}$ is the chiral operator. Making use of the Killing spinor expression $\epsilon=\exp{(\Phi\Gamma_{R\Phi}/2)}\epsilon_{0}$ adapted for the embedding flat spacetime, one can obtain the following two independent equations; $$\displaystyle\left[R\left(R_{x}\Gamma_{T}+\Phi_{t}R_{\varphi}\Gamma_{X}\right)\Gamma_{R\Phi}+B\Gamma_{T}\Gamma_{\natural}-\Delta\right]\epsilon_{0}=0,\qquad\left[\left(R_{\varphi}\Gamma_{R}+R\,\Gamma_{\Phi}\right)\Gamma_{TX}+\left(E+\Phi_{t}B\right)R\,\Gamma_{\Phi}\Gamma_{\natural}+ER_{\varphi}\Gamma_{R}\Gamma_{\natural}\right]\epsilon_{0}=0,$$ (16) where $\epsilon_{0}$ is a constant 32-component Majorana spinor.
In order to work out these Killing spinor equations, it is convenient to recast them in the rectangular coordinates as $$\displaystyle\left[\Phi_{t}RR_{\varphi}\Gamma_{XYZ}+RR_{x}\Gamma_{TYZ}+B\Gamma_{T}\Gamma_{\natural}-\Delta\right]\epsilon_{0}=0,\qquad\left[\left(YR_{\varphi}-ZR\right)\Gamma_{TXY}+\left(ZR_{\varphi}+YR\right)\Gamma_{TXZ}+\left(EYR_{\varphi}-ZR\left(E+\Phi_{t}B\right)\right)\Gamma_{Y}\Gamma_{\natural}+\left(EZR_{\varphi}+YR\left(E+\Phi_{t}B\right)\right)\Gamma_{Z}\Gamma_{\natural}\right]\epsilon_{0}=0.$$ (17) The s-basis, which is well-illustrated in Ref. [29] and was applied in Ref. [19] for several intersecting D-branes, is a powerful tool to analyze the Killing spinor equations. The constant spinor $\epsilon_{0}$ can be written in this basis as $$\displaystyle\epsilon_{0}=(a,b,c,d)\equiv a\,\,|+1,\,+1,\,2s_{2},2s_{3},2s_{4}>+b\,\,|+1,\,-1,\,2s_{2},2s_{3},2s_{4}>+c\,\,|-1,\,+1,\,2s_{2},2s_{3},2s_{4}>+d\,\,|-1,\,-1,\,2s_{2},2s_{3},2s_{4}>,$$ (18) where the first and second entry $\pm 1$’s denote the eigenvalues of $2S_{0}\equiv\Gamma^{T}\Gamma^{X}$ and $2S_{1}\equiv-i\Gamma^{Y}\Gamma^{Z}$ respectively and $2s_{j}$ are the eigenvalues of the operators $2S_{j}\equiv-i\Gamma^{2j}\Gamma^{2j+1},\,\,(j=2,3,4)$. Since the operators $2S_{0}$ and $2S_{1}$ do not commute with the operators of the above eigenspinor equations, the spinor $\epsilon_{0}$ must be some combination of all possible eigenstates of $2S_{0}$ and $2S_{1}$.
With $\sigma\equiv\prod\limits_{j=2}^{4}(2s_{j})$, the Killing spinor equations become $$\displaystyle i\Phi_{t}RR_{\varphi}(c,-d,a,-b)+iRR_{x}(-c,d,a,-b)-\sigma B(-c,d,-a,b)-\Delta(a,b,c,d)=0,$$ (19) $$\displaystyle\left(YR_{\varphi}-ZR\right)(-b,-a,d,c)+\sigma\left(EYR_{\varphi}-ZR\left(E+\Phi_{t}B\right)\right)(-b,a,d,-c)+i\left(ZR_{\varphi}+YR\right)(b,-a,-d,c)-i\sigma\left(EZR_{\varphi}+YR\left(E+\Phi_{t}B\right)\right)(-b,-a,d,c)=0.$$ (20) The former equation breaks the supersymmetries by half, and the latter equation breaks another half. Eq. (19) has nontrivial solutions only when $$\displaystyle\Delta^{2}-\left(i\Phi_{t}RR_{\varphi}+\sigma B\right)^{2}-R^{2}R_{x}^{2}=0.$$ (21) Therefore we obtain the necessary conditions for the supersymmetries; $$\displaystyle\Phi_{t}R_{\varphi}=0,\qquad R^{2}\left[1-\left(E+\Phi_{t}B\right)^{2}\right]+\left(1-E^{2}\right)R_{\varphi}^{2}=0,$$ (22) for which Eq. (19) relates the coefficients as $$\displaystyle\sigma Ba=\sqrt{B^{2}+R^{2}R_{x}^{2}}\,c,\quad\sigma Bb=\sqrt{B^{2}+R^{2}R_{x}^{2}}\,d,$$ (23) which implies that half of the supersymmetries are broken by Eq. (19), provided the coefficients are constant. This is possible when $B=B_{0}RR_{x}$ with some constant $B_{0}$, if $R_{x}\neq 0$; otherwise, the magnetic field $B$ can take an arbitrary value. For these supersymmetric cases, $\Delta$ simplifies to $$\displaystyle\Delta=\sqrt{B^{2}+R^{2}R_{x}^{2}}=\left\{\begin{array}[]{ll}R|R_{x}|\sqrt{B_{0}^{2}+1}&\quad(R_{x}\neq 0)\cr|B|&\quad(R_{x}=0)\end{array}\right.$$ (24) It is easy to see from Eq. (20) that for the states satisfying (22), the coefficients $a$ and $c$ vanish if $\sigma=E+\Phi_{t}B$, while $b$ and $d$ vanish if $\sigma=-(E+\Phi_{t}B)$. Hence $4+4=8$ supersymmetries are preserved. Now let us check the equations of motion (11)-(14) upon the imposition of the supersymmetric conditions. 3.1 Non-rotating Case More specifically, when $\Phi_{t}=0$, the second condition of Eq.
(22) requires $E^{2}=1$. Without loss of generality, we let $E=1$; then Eq. (20) has eight solutions, four of which are of the form $(a,0,\mbox{sgn}(B)a,0)$ with $\sigma=1$, while the other four spinors take the form $(0,b,0,-\mbox{sgn}(B)b)$ with $\sigma=-1$. i) In particular, if $R_{x}=0$, the equations of motion (12)-(14) dictate that $\Pi_{A}$ may depend only on $\varphi$ and that $\mbox{sgn}(B)$ is constant over the tube world-volume. Therefore 8 supersymmetries are preserved for the tubular D2-branes with arbitrary cross sections (that is, $R_{\varphi}\neq 0$). The relation between the radial profile and the conserved charges $\Pi_{A}$ and $B$ is provided by Eq. (9); $R^{2}+R_{\varphi}^{2}=g_{s}l_{s}(2\pi l_{s})^{2}|\Pi_{A}||B|$. Hence the results for the circular supertubes [7] and the supertubes with arbitrary cross sections of Ref. [26] are recovered, and the radial profile agrees with the specific result for the elliptic supertube case discussed in Ref. [27]. ii) If $R_{x}\neq 0$, then $B=B_{0}RR_{x}$, so that the coefficients $a,b,c,$ and $d$ in Eq. (23) are constant. From the equation of motion (13) we see that, in order for the conjugate momentum $\Pi_{A}$ to be conserved, the $B$ field ought to keep its signature along the angular direction of the ‘tube’. In view of Eq. (14), it is reasonable to assume that the field $B$ does not change its signature over the D2-brane world-volume; in other words, there is no abrupt orientation flip on the world-volume. (This relation between the signature of the $B$ field and the orientation of the D2-brane was shown in detail in Ref. [19] for several supersymmetric configurations of D-branes.) The radial profile $R(x,\varphi)$ can then be obtained by solving the Gauss law constraint (11): $$\displaystyle\partial_{x}\left(\frac{1+\left(\partial_{\varphi}\ln{R}\right)^{2}}{\partial_{x}\ln{R}}\right)=\partial_{\varphi}^{2}\ln{R}.$$ (25) When $R_{\varphi}=0$, it was shown in Ref.
[7] that the solution describes the two-dimensional version of the BIon spike of Ref. [30] carrying the magnetic flux. In the general case, it is not easy to solve this nonlinear equation. However, we emphasize that there might be a more general class of solutions for the ‘supertube’, with the radial profile $R(x,\varphi)$ satisfying the above equation. 3.2 Rotating Case When $\Phi_{t}\neq 0$, the supersymmetry conditions (22) require $R_{\varphi}=0$ and $(E+\Phi_{t}B)^{2}=1$. The former condition is very interesting because it implies that the rotating supertube can take only the circular-shaped cross section. This result is in contrast with the situation of the non-rotating case discussed previously. As we will see below, this is related to the fact that the tube is no longer tensionless if its cross section takes an arbitrary shape. In the latter condition, the combination $E+\Phi_{t}B$ takes over the role of the electric field in the non-rotating supertube case. The rotation of the magnetic sources (D0-branes) induces the electric field $\Phi_{t}B$. In view of the equations of motion (11) and (13), we reasonably assume that $\partial_{\varphi}B=\partial_{\varphi}\Phi_{t}=0$ to achieve the conservation of the momenta $\Pi$ and $\Pi_{A}$. Upon the imposition of the supersymmetric conditions (without loss of generality we may set $E+\Phi_{t}B=1$), the Gauss law constraint (12) and Eq. (14) reduce to $$\displaystyle\partial_{x}\frac{R^{2}}{\sqrt{B^{2}+R^{2}R_{x}^{2}}}=0,\qquad\partial_{x}\frac{B\left(1-R^{2}\Phi_{t}^{2}\right)}{\sqrt{B^{2}+R^{2}R_{x}^{2}}}=0.$$ (26) i) For the tubular case ($R_{x}=0$), these equations constrain the magnetic field $B$ and the angular velocity $\Phi_{t}$ to be $x$-independent.
The radius of the supertube is determined by the conserved charges as $$\displaystyle R^{2}=g_{s}l_{s}(2\pi l_{s})^{2}|\Pi_{A}B|=g_{s}l_{s}(2\pi l_{s})^{2}|\Pi|.$$ (27) ii) As for the non-tubular case ($R_{x}\neq 0$), the above equations determine the radius $R(x)$ and the angular velocity $\Phi_{t}$ as $$\displaystyle R(x)=C\exp{\left[x/e_{0}\right]},\qquad R\Phi_{t}=v_{0},$$ (28) where $C,e_{0}$ and $v_{0}$ are integration constants. Remarkably, the BI field strength can now be rephrased in the spacetime coordinates $(T,R,\Phi)$ as $$\displaystyle F=E\,dt\wedge dx+B\,dx\wedge d\varphi=E\,dT\wedge\frac{e_{0}}{R}dR+\frac{Be_{0}}{R}dR\wedge\left(d\Phi-\Phi_{t}dT\right)=\left(E+\Phi_{t}B\right)\frac{e_{0}}{R}dT\wedge dR+B_{0}dR\wedge Rd\Phi.$$ (29) Since the supersymmetric condition always requires $E+\Phi_{t}B=1$ regardless of the value of $E$, the solution describes a BIon spike on a D2-brane with a constant magnetic field $B_{0}$, even when $E=0$. The spike plays the role of the electric source with the charge $e_{0}$. The whole configuration carries inhomogeneous angular momentum about the axial direction; $$\displaystyle|\Pi|=\frac{|B_{0}|R^{2}}{g_{s}l_{s}(2\pi l_{s})^{2}\sqrt{1+B_{0}^{2}}}.$$ (30) This angular momentum is due to the mechanical rotation of the whole configuration with constant linear speed $R\,\Phi_{t}=v_{0}$ at every point. (See Fig. 2 for the case when $E=0$.) Although the BIon spike produces the electric field of the charge $e_{0}$, one can no longer interpret this as a fundamental string ending on a D0-charged D2-brane. The above solution is valid even when $E=0$, in which case the field momentum $\Pi_{A}$ is totally transferred to the mechanical momentum $\Pi$ via their relation (27). We will discuss its interpretation later. Fig.
2: The figure shows the BIon spike solution, especially when the BI electric field $E=0$. The whole configuration carries the angular momentum due to the mechanical rotation, which triggers the electric charge $e_{0}$ for the BIon. The linear speed is constant, $v_{0}$, everywhere over the configuration. 4 Tensionless Brane? In the rotating supertube case, the supersymmetric condition $R_{\varphi}=0$ forbids the solution from having an arbitrary cross section, in contrast with the non-rotating supertube case. A simple way to confirm this is to check the tension along the angular direction (as was done in Ref. [26]). The spacetime stress-energy tensor can be obtained by the well-known Schwinger method; $$\displaystyle T^{\mu\nu}(\bar{X})=\left.\frac{2}{\sqrt{-\det{G}}}\frac{\delta S}{\delta G_{\mu\nu}(\bar{X})}\right|_{G=\eta}=-\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}}\int\limits d^{3}\sigma\sqrt{-\det\left(g+F\right)}\left(g+F\right)^{-1(\alpha\beta)}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}\,\,\delta^{(10)}\left(X(\sigma)-\bar{X}\right)\equiv\int\limits d^{3}\sigma{\cal T}^{\mu\nu}\left(X(\sigma)\right)\,\,\delta^{(10)}\left(X(\sigma)-\bar{X}\right)\qquad(\mu,\nu=0,\cdots,9)$$ (31) Making use of the relation (2) between the spacetime coordinates and the world-volume coordinates, we obtain the stress-energy density as $$\displaystyle{\cal T}^{TT}=\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta}\left[R^{2}\left(1+R_{x}^{2}\right)+R_{\varphi}^{2}+B^{2}\right],\qquad{\cal T}^{TX}=\frac{R^{2}R_{x}R_{\varphi}\Phi_{t}}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta},\qquad{\cal T}^{T\Phi}=\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta}\left[B\left(E+\Phi_{t}B\right)+\Phi_{t}R_{\varphi}^{2}\right],$$ $$\displaystyle{\cal T}^{XX}$$
$$\displaystyle=$$ $$\displaystyle-\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta}\left[R^{2}+R_{\varphi}^{2}\left(1-R^{2}\Phi_{t}^{2}\right)\right]$$ $$\displaystyle{\cal T}^{X\Phi}=\frac{R_{x}R_{\varphi}}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta},\qquad{\cal T}^{\Phi\Phi}=\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}\Delta}\left[\left(E+\Phi_{t}B\right)^{2}-1-R_{x}^{2}+\Phi_{t}^{2}R_{\varphi}^{2}\right].$$ (32) Note in the above that all the $\Phi_{t}$-dependent terms come with $R_{\varphi}$ upon the imposition of the supersymmetric condition $E+\Phi_{t}B=1$. Therefore these terms disappear when we impose the supersymmetric condition $R_{\varphi}=0$. In particular, the ${\cal T}^{\Phi\Phi}$ component, which describes the tension along the angular direction, vanishes only for the tubular D2-brane with circular cross section and $(E+\Phi_{t}B)^{2}=1$. Consequently, the tensionless condition agrees with the supersymmetric conditions (22). If the tube violates either of these conditions, it is no longer tensionless and is not supersymmetric. 5 Fundamental Strings Induced from Moving Flux From the stress-energy tensor, one can see what the rotating supertube is composed of. Under the supersymmetric conditions (22), the components ${\cal T}^{TT}$, ${\cal T}^{T\Phi}$, and ${\cal T}^{XX}$ survive. The component ${\cal T}^{T\Phi}$ concerns the energy flow around the tube, which is entirely due to the rotation of the D0-branes dissolved on the tube. This is because the rotation of a pure tubular D2-brane (without any D0-brane on its world-volume) is not physical and can be regarded as a world-volume reparametrization. The component ${\cal T}^{XX}=-\Pi_{A}$ ($=-\Pi/B$ hereafter, in the case when $E=0$) is difficult to understand. Unlike the non-rotating case, this component survives even when $E=0$ and, correspondingly, $\Pi_{A}=0$. It is not a contribution of the D0 gas on the world-volume, since the D0-branes behave like tensionless dust.
It is also very difficult to conceive of a tubular D2-brane that has tension only along one direction. Therefore we inevitably have to consider some one-dimensional object as its source. The only candidate for this, in type-IIA theory, is the fundamental string. Where, then, does it come from (especially when $\Pi_{A}=0$)? Here we recall that the fundamental string is polarized when it moves in a magnetic field background [31]. The fundamental strings living on the tubular D2-brane will be polarized into long strings as the BI flux rotates. The net effect is an electric field induced by the movement of the magnetic flux. For the supersymmetric cases, the component ${\cal T}^{TT}$, being the Hamiltonian ${\cal H}$ in Eq. (10), should be a sum of charges. Let us see the Hamiltonian for the rotating case ($\Phi_{t}\neq 0$ and $R_{\varphi}=0$): $$\displaystyle{\cal H}=\left\{\begin{array}[]{ll}|\Pi_{A}|+\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}}|B|&\qquad(R_{x}=0)\cr|\Pi_{A}|+\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}}\frac{\sqrt{1+B_{0}^{2}}}{|B_{0}|}|B|&\qquad(R_{x}\neq 0)\end{array}\right.$$ (33) Therefore it takes the same form as in the non-rotating case, but now $|\Pi_{A}|$ should be understood as the charge density of the fundamental strings, composed of the ‘bare’ strings or the ‘induced’ strings. Fig. 3: As the D0-branes dissolved in the tube rotate (gray arrows), the fundamental strings (black wiggly arrows) living on the tube are polarized, which results in the electric field induction. 6 Angular Momentum Bound In the rotating case, we may expect an angular momentum bound; otherwise the tangential linear speed of the dissolved D-particles would become superluminal. This is indeed the case.
Making use of the supersymmetric condition $E+\Phi_{t}B=1$ and the radius $R$ in (27), one can easily obtain the following bound: $$\displaystyle|\Pi_{A}|=\left|\frac{\Pi}{B}\right|\leq\frac{1}{g_{s}l_{s}(2\pi l_{s})^{2}}\frac{|B|}{\left(1-E\right)^{2}}.$$ (34) When $E=0$, the bound shows that the charge density of the ‘induced’ open strings cannot exceed that of the D0-branes and that the maximal tube radius is $|B|$. In the non-rotating limit ($\Phi_{t}\rightarrow 0$), the supersymmetric condition requires $E\rightarrow 1$, and the bound therefore disappears. (This bound is not to be confused with the angular momentum bound for the system composed of a supertube and D0-charged superstrings along its axis, which was discussed in [7]. What we are discussing in this paper is the bound on $|\Pi|=|\Pi_{A}B|$.) However, this is a bit confusing. Although the angular momentum $\Pi_{A}$ is not mechanical, the open strings living on the supertube will be subject to the Lorentz force and will move along the angular direction. This is because the open string end points are oppositely charged with respect to the BI fields. Therefore the same logic applied to the moving open strings might give some bound on $\Pi_{A}$. Although we will not discuss this point further in this paper, it should be clarified elsewhere. 7 Spiraling D-Helix When $R_{x}=0$, the axial direction acquires an isometry. Let this direction be compactified on a circle of radius $\lambda$. Performing T-duality on the rotating supertube along its axial direction, one obtains the spiraling D-helix. This can be seen by inspecting the open string boundary conditions, as was done in Ref. [9].
Since the spacetime components of the BI fields are $$\displaystyle F=B\,dX\wedge\left(d\Phi-\Phi_{t}dT\right)+E\,dT\wedge dX,$$ (35) the type-IIA open string on the tube is subject to the following boundary conditions: $$\displaystyle\left.\partial_{\sigma}T+\partial_{\tau}X\left(E+\Phi_{t}B\right)\right|_{\sigma=0,\pi}=0$$ $$\displaystyle\left.\partial_{\sigma}X+\partial_{\tau}T\left(E+\Phi_{t}B\right)-\partial_{\tau}\Phi B\right|_{\sigma=0,\pi}=0$$ $$\displaystyle\left.\partial_{\sigma}\Phi\,R^{2}+\partial_{\tau}X\,B\right|_{\sigma=0,\pi}=0.$$ (36) Under T-duality along the $X$-direction, the boundary conditions become $$\displaystyle\left.\partial_{\sigma}\left[\tilde{T}+\tilde{X}\left(E+\Phi_{t}B\right)\right]\right|_{\sigma=0,\pi}=0$$ $$\displaystyle\left.\partial_{\tau}\left[\tilde{X}+\tilde{T}\left(E+\Phi_{t}B\right)-\tilde{\Phi}\,B\right]\right|_{\sigma=0,\pi}=0$$ $$\displaystyle\left.\partial_{\sigma}\left[\tilde{\Phi}\,R^{2}+\tilde{X}\,B\right]\right|_{\sigma=0,\pi}=0,$$ (37) where $(\tilde{T},\tilde{X},\tilde{\Phi})$ denote the dual coordinates in type-IIB theory. The second condition defines the hypersurface $\tilde{X}+\tilde{T}\left(E+\Phi_{t}B\right)-\tilde{\Phi}\,B=\text{constant}$, which describes nothing but the D-string profile on which the dual string lives.
In the same comoving gauge as was used in the type-IIA case, that is, $t=\tilde{T}$ and $\varphi=\tilde{\Phi}-\Phi_{t}\tilde{T}$, one can easily see that the D-string is coiled to form a helix with the tilting angle $$\displaystyle\tan{\theta}=\frac{\partial\tilde{X}}{R\partial\varphi}=\frac{B}{R},$$ (38) moving up with the speed $$\displaystyle\frac{\partial\tilde{X}}{\partial t}=-E\equiv v_{||}.$$ (39) The Lagrangian is exactly the same as that of the type-IIA case: $$\displaystyle{\cal L}_{IIB}=-\frac{1}{g^{\prime}_{s}l_{s}(2\pi l_{s})}\sqrt{R^{2}\left[1-\left(E+\Phi_{t}B\right)^{2}\right]+B^{2}},$$ (40) where $g^{\prime}_{s}=g_{s}l_{s}/\lambda$ is the string coupling constant of type-IIB theory. Therefore the Hamiltonian is minimized at the same radius, $R^{2}=|\Pi|$. One obtains the same supersymmetric condition as in the type-IIA case: $$\displaystyle E+\Phi_{t}B=E+\frac{B}{R}\cdot\left(R\Phi_{t}\right)=E+\tan{\theta}\cdot v_{\bot}=1.$$ (41) The latter part, $\tan{\theta}\cdot v_{\bot}$, describes the speed at which the pitch appears to move down as the helix spirals, henceforth called the ‘virtual speed’. The supersymmetric condition requires that the sum of the actual speed and the virtual speed be the speed of light. Fig. 4: The rotating supertube (Left) is T-dual to the spiraling super D-helix (Right). The charge density of the bare fundamental strings (and therefore the winding) on the tube gives the momentum to the helix in the dual picture. The rotation of the tube corresponds to the spiral motion of the helix. The pitch of the helix is in real motion due to the momentum and is, at the same time, in virtual motion due to the spiraling of the helix. The net speed (real plus virtual) amounts to the speed of light. 8 Discussions In this paper, we considered the generalization of the supertube configuration of Ref. [7] to the nonstatic case.
We allowed angular motion of an arbitrarily shaped tubular D2-brane and checked whether it preserves any supersymmetry. When there is no angular motion ($\Phi_{t}=0$), a large class of radial profiles $R(x,\varphi)$ preserves 8 supersymmetries. These include the straight ($R_{x}=0$) tube with an arbitrary cross section, which was analyzed in [26, 27]. As for non-straight ($R_{x}\neq 0$) supertubes, we found the governing equation (25) for their radial profiles $R(x,\varphi)$. We showed that any rotational motion of a noncircular tubular D2-brane breaks supersymmetry completely. Only the circular supertube can rotate while preserving 8 supersymmetries. Although the rotational motion of the circular supertube is tangential to its world-volume, and thus can be absorbed into a reparametrization, this motion is physical in the presence of a magnetic field as in Eq. (4). Any motion of D0-branes is transverse to their world-volumes and gives rise to a term proportional to the angular velocity in the momentum flow ${\cal T}^{T\Phi}$. In particular, under T-duality along the axial direction of a rotating straight supertube, the rotational motion becomes the spiraling motion, which is transverse to the world-sheet of the resulting D-helix. The upshot is as follows. The electric field on a D2-brane is caused by the end points of superstrings [7], while the magnetic flux corresponds to the net RR-charge of the D0-branes dissolved in the D2-brane [32]. Since the notion of ‘electric’ and ‘magnetic’ fields changes under a reparametrization transformation, we have to define them in some specific gauge. Hence this gauge choice determines the particular system. The whole physics depends on whether the BI field strengths (thus the macroscopic strings and D0-branes) are static or rotating with respect to the spacetime.
Had we worked in the static gauge of (7) and defined the BI field strengths as $F=\bar{E}d\bar{t}\wedge d\bar{x}+\bar{B}d\bar{x}\wedge d\bar{\varphi}$, that is, with the BI field strengths given in the static world-volume coordinates as in the ordinary supertube case [7], the problem would have changed into checking whether any supersymmetry-preserving radial motion ($R_{\bar{t}}\neq 0$) is allowed. The answer can be inferred from our analysis: the supertube is possible only when $R_{\bar{t}}=0$ and $\bar{E}=1$. There is no constraint on the radial profile in this case. Let us look at the M-brane configuration corresponding to the rotating supertube case ($R_{x}=0$). The M2-brane action constructed from the D2-brane action gives the following relation between the M2-brane profile and the BI fields over the D2-brane [33, 34, 35]: $$\displaystyle\partial_{\alpha}X^{11}=\frac{1}{2}\frac{g_{\alpha\beta}\epsilon^{\alpha\beta\gamma}F_{\beta\gamma}}{\Delta}.$$ (42) Since the metric (1) induced on the supertube is stationary rather than static, the motion of the corresponding M2-brane becomes $$\displaystyle\partial_{t}X^{11}=-\mbox{sgn}(B)+g_{s}l_{s}(2\pi l_{s})^{2}|\Pi_{A}|\Phi_{t}.$$ (43) Therefore the motion along the $X^{11}$-axis is due to the D0-branes on the one hand and to the rotation of the tube on the other. The M2-brane extends along the $x$-coordinate, i.e., $\partial_{x}X^{11}=0$, and is coiled to form an ‘M-ribbon’ [36] as $$\displaystyle\partial_{\varphi}X^{11}=g_{s}l_{s}(2\pi l_{s})^{2}|\Pi_{A}|.$$ (44) Interestingly, the rotating motion of the supertube, $d\Phi=d\varphi+\Phi_{t}dt$, has been traded for the linear motion along $X^{11}$ (43) via the relation $$\displaystyle X^{11}=-\mbox{sgn}(B)T+g_{s}l_{s}(2\pi l_{s})^{2}|\Pi_{A}|\Phi.$$ (45) The supersymmetric condition $R_{\varphi}=0$ can be understood from a different view point.
If the rotating supertube admitted arbitrary cross sections, one could deform the tube to make a D2/$\overline{\mbox{D}2}$ pair, as was done in Ref. [25]. By taking T-duality along the $x$-axis, one would then get two D-strings, tilted at the angles $\theta$ and $\pi-\theta$ respectively, passing by each other with the same speed. (See Fig. 2.) The resulting configuration looks very similar to the scissors configuration discussed in [16, 18, 19]. However, it is not supersymmetric for any values of the tilting angle and the speed (even for the null scissors). See the Appendix for details. This confirms that the rotating supertube does not admit arbitrary cross sections. A couple of remarks concerning future work are in order. The first concerns the zero radius limit of the rotating supertube. The zero radius limit of the non-rotating supertube corresponds to D0-charged fundamental strings. The charge densities of the D0-branes and the fundamental strings are well encoded on the tubular D2-brane as the magnetic flux and the electric displacement. As for the rotating supertube, the zero radius limit looks a bit ambiguous. Especially when $E=0$, the angular momentum of the tube is entirely due to the mechanical rotation of the tube. Although the ‘induced’ strings play the role of the ‘bare’ strings at any nonzero radius of the tube, it is unclear whether the ‘induced’ strings make sense in the zero radius limit. It is rather likely that an array of D0-branes with net spin angular momentum is blown up into the rotating supertube. These points deserve further investigation. After the work on the supertube [7], many related $1/4$ supersymmetric configurations have been discovered [37, 38, 39, 40, 41, 42]. It would be interesting to generalize those configurations to include mechanical angular momentum and to check their supersymmetric conditions. Acknowledgments. The authors thank S. Hyun, O.-K. Kwon, T. Mateos, J.-H. Park and H.
Shin for helpful discussions and comments. This work is supported in part by KOSEF through Project No. R01-2000-000-00021-0. Appendix A Two Tilted D-strings Passing by Each Other The supersymmetry preserved by a tilted D-string that is in motion to the right is written as the combination of the left-moving and right-moving parts of the supercharges on the open string world-sheet: $Q_{\alpha}+(\bar{\beta}^{2}\beta_{2}^{\bot}\tilde{Q})_{\alpha}$ [29]. Here $$\displaystyle\beta_{2}^{\bot}=\prod\limits_{m=3}^{9}\beta^{m},$$ (46) and $\beta^{m}=\Gamma^{m}\Gamma$ is the spacetime parity operator (the inversion along the $x^{m}$-axis) on the world-sheet. The factor $\bar{\beta}^{2}=\rho(\gamma)\rho(\theta)\beta^{2}\rho(-\theta)\rho(-\gamma)$ tilts (by an angle $\theta$) and boosts (with the boost parameter $\gamma$) the D-string. Here $\rho(\gamma)=\exp{\left(i\gamma\Sigma^{01}\right)}$ and $\rho(\phi_{i})=\exp{\left(i\phi_{i}\Sigma^{2i-1,2i}\right)}$, with the Lorentz rotation elements $\Sigma^{\mu\nu}=-\frac{i}{4}[\Gamma^{\mu},\,\Gamma^{\nu}]$ of $SO(2k,1)$ in the spinor representation. As for the other D-string, tilted at an angle $\pi-\theta$ and moving to the left with the same speed, the factor $\bar{\beta}^{2}$ should be replaced by $\bar{\beta}^{\prime 2}=\rho(-\gamma)\rho(\pi-\theta)\beta^{2}\rho(\theta-\pi)\rho(\gamma)$. The unbroken supersymmetries for both D-strings are determined by the 16-component spinors $\epsilon$ invariant under $$\displaystyle\left(\bar{\beta}^{2}\beta_{2}^{\bot}\right)^{-1}\bar{\beta}^{\prime 2}\beta_{2}^{\bot}=2\cosh{\gamma}\sin{\theta}\left(\cosh{\gamma}\sin{\theta}+\Gamma^{0}\Gamma^{1}\sinh{\gamma}\sin{\theta}-\Gamma^{1}\Gamma^{2}\cos{\theta}\right)-1.$$ (47) Fig. 5: Two tilted D-strings passing by each other. This configuration is not supersymmetric in general. Only when the boost parameter $\gamma=0$ and the tilting angle $\theta=\pi/2$ does the configuration preserve 16 supersymmetries. As in Sec.
3, the spinor $\epsilon$ is represented in the s-basis as $$\displaystyle\epsilon=(a,b,c,d)\equiv a\,\,|+1,\,+1,\,2s_{2},2s_{3},2s_{4}>+b\,\,|+1,\,-1,\,2s_{2},2s_{3},2s_{4}>+c\,\,|-1,\,+1,\,2s_{2},2s_{3},2s_{4}>+d\,\,|-1,\,-1,\,2s_{2},2s_{3},2s_{4}>,$$ (48) where the first and second entries $\pm 1$ denote the eigenvalues of $2S_{0}\equiv\Gamma^{0}\Gamma^{9}$ and $2S_{j}\equiv-i\Gamma^{2j-1}\Gamma^{2j}$ (a bit different from Sec. 3) respectively, and the $2s_{j}$ are the corresponding eigenvalues, which are subject to the Weyl condition $\prod_{a=0}^{4}2s_{a}=1$. The spinors $\epsilon$ invariant under the operator (47) satisfy $$\displaystyle\left(\cosh^{2}{\gamma}\sin^{2}{\theta}-1\right)(a,\,b,\,c,\,d)+\cosh{\gamma}\sinh{\gamma}\sin^{2}{\theta}(d,\,c,\,-b,\,-a)+i\cosh{\gamma}\cos{\theta}\sin{\theta}(a,\,-b,\,c,\,-d)=0.$$ (49) This equation has nontrivial solutions for $a,\,b,\,c,\,d$ only when $$\displaystyle\left(\cosh^{2}{\gamma}\sin^{2}{\theta}-1\right)^{2}+\cosh^{2}{\gamma}\cos^{2}{\theta}\sin^{2}{\theta}+\cosh^{2}{\gamma}\sinh^{2}{\gamma}\sin^{4}{\theta}=0.$$ (50) Since each term on the left-hand side is non-negative, all three must vanish simultaneously; the only possibility is $\gamma=0$ and $\theta=\pi/2$, that is, when the two D-strings are parallel and at rest. For this case, $a,\,b,\,c,\,d$ are arbitrary, and therefore 16 supersymmetries are preserved. References [1] A.G. Riess et al., “Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant,” Astron. J. 116 (1998) 1009, [astro-ph/9805201]. [2] M. Gutperle and A. Strominger, “Spacelike Branes,” J. High Energy Phys. 0204 (2002) 018, [hep-th/0202210]. [3] A. Sen, “Rolling Tachyon,” J. High Energy Phys. 0204 (2002) 048, [hep-th/0203211]. [4] L. Cornalba and M.S. Costa, “A New Cosmological Scenario in String Theory,” Phys. Rev. D 66 (2002) 066001, [hep-th/0203031]. [5] J.E.
Wang, “Spacelike and Time Dependent Branes from DBI,” J. High Energy Phys. 0210 (2002) 037, [hep-th/0207089]. [6] A. Sen, “Time Evolution in Open String Theory,” J. High Energy Phys. 0210 (2002) 003, [hep-th/0207105]. [7] D. Mateos and P.K. Townsend, “Supertubes,” Phys. Rev. Lett. 87 (2001) 011602, [hep-th/0103030]. [8] P.K. Townsend, “Surprises with Angular Momentum,” hep-th/0211008. [9] J.-H. Cho and P. Oh, “Super D-Helix,” Phys. Rev. D 64 (2001) 106010, [hep-th/0105095]. [10] O. Lunin and S.D. Mathur, “Metric of the Multiply Wound Rotating String,” Nucl. Phys. B 610 (2001) 49, [hep-th/0105136]. [11] H. Liu, G. Moore, and N. Seiberg, “Strings in a Time-Dependent Orbifold,” J. High Energy Phys. 0206 (2002) 045, [hep-th/0204168]. [12] L. Cornalba, M.S. Costa, and C. Kounnas, “A Resolution of the Cosmological Singularity with Orientifolds,” Nucl. Phys. B 637 (2002) 378, [hep-th/0204261]. [13] H. Liu, G. Moore, and N. Seiberg, “Strings in Time-Dependent Orbifolds,” J. High Energy Phys. 0210 (2002) 031, [hep-th/0206182]. [14] G.T. Horowitz and J. Polchinski, “Instability of Spacelike and Null Orbifold Singularities,” Phys. Rev. D 66 (2002) 103512, [hep-th/0206228]. [15] J. Figueroa-O’Farrill and J. Simón, “Supersymmetric Kaluza-Klein Reductions of M2 and M5-branes,” hep-th/0208107. [16] C. Bachas and C. Hull, “Null Brane Intersections,” J. High Energy Phys. 0212 (2002) 035, [hep-th/0210269]. [17] M. Fabinger and S. Hellerman, “Stringy Resolutions of Null Singularities,” hep-th/0212223. [18] R.C. Myers and D.J. Winters, “From D-$\overline{\mbox{D}}$ Pairs to Branes in Motion,” J. High Energy Phys. 0212 (2002) 061, [hep-th/0211042]. [19] J.-H. Cho and P. Oh, “Supersymmetric Boost on Intersecting D-branes,” J. High Energy Phys. 0301 (2003) 046, [hep-th/0212009]. [20] C. Bachas, “Relativistic String in a Pulse,” hep-th/0212217. [21] T. Harmark and K.G. Savvidy, “Ramond-Ramond Field Radiation from Rotating Ellipsoidal Membranes,” Nucl. Phys. 
B 585 (2000) 567, [hep-th/0002157]. [22] K.G. Savvidy and G.K. Savvidy, “Stability of the Rotating Ellipsoidal D0-brane System,” Phys. Lett. B 501 (2001) 283, [hep-th/0009029]. [23] R. Leese, “Q-lumps and Their Interactions,” Nucl. Phys. B 366 (1991) 283. [24] R.S. Ward, “Topological Q-Solitons,” hep-th/0302045. [25] D. Bak and A. Karch, “Supersymmetric Brane-Antibrane Configurations,” Nucl. Phys. B 626 (2002) 165, [hep-th/0110039]. [26] D. Mateos, S. Ng, and P.K. Townsend, “Tachyons, Supertubes and Brane/Anti-Brane Systems,” J. High Energy Phys. 0203 (2002) 016, [hep-th/0112054]. [27] J.-H. Cho and P. Oh, “Elliptic supertube and a Bogomol’nyi-Prasad-Sommerfield D2-brane–anti-D2-brane Pair,” Phys. Rev. D 65 (2002) 121901R, [hep-th/0112106]. [28] E. Bergshoeff, R. Kallosh, T. Ortin, and G. Papadopoulos, “$\kappa$-Symmetry, Supersymmetry and Intersecting Branes,” Nucl. Phys. B 502 (1997) 145, [hep-th/9705040]. [29] J. Polchinski, “String Theory,” (Cambridge University Press, Cambridge, England, 1998), Vol. II, Chapter 13. [30] C.G. Callan and J.M. Maldacena, “Brane Dynamics From the Born-Infeld Action,” Nucl. Phys. B 513 (1998) 198, [hep-th/9708147]. [31] D. Bigatti and L. Susskind, “Magnetic Fields, Branes and Noncommutative Geometry,” Phys. Rev. D 62 (2000) 066004, [hep-th/9908056]. [32] P.K. Townsend, “D-branes from M-branes,” Phys. Lett. B 373 (1996) 68, [hep-th/9512062]. [33] C. Schmidhuber, “D-brane actions,” Nucl. Phys. B 467 (1996) 146, [hep-th/9601003]. [34] S.P. de Alwis and K. Sato, “D-strings and F-strings from string loops,” Phys. Rev. D 53 (1996) 7187, [hep-th/9601167]. [35] E. Bergshoeff and P.K. Townsend, “Super D-branes,” Nucl. Phys. B 490 (1997) 145, [hep-th/9611173]. [36] Y. Hyakutake and N. Ohta, “Supertubes and Supercurves from M-ribbons,” Phys. Lett. B 539 (2002) 153, [hep-th/0204161]. [37] M. Kruczenski, R.C. Myers, A.W. Peet, and D.J. Winters, “Aspects of supertubes,” J. High Energy Phys. 0205 (2002) 017, [hep-th/0204103]. [38] D. Bak and K. 
Lee, “Supertubes connecting D4 branes,” Phys. Lett. B 544 (2002) 329, [hep-th/0206185]. [39] D.K. Park, S. Tamaryan, and H.J.W. Müller-Kirsten, “Supersphere,” Phys. Lett. B 551 (2003) 187, [hep-th/0210306]. [40] N.E. Grandi and A.R. Lugo, “Supertubes and special holonomy,” Phys. Lett. B 553 (2003) 293, [hep-th/0212159]. [41] O. Lunin, J. Maldacena and L. Maoz, “Gravity solutions for the D1-D5 system with angular momentum,” hep-th/0212210. [42] E. Verlinde and M. Vonk, “String networks and supersheets,” hep-th/0301028.
Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction Zitao Chen University of British Columbia [email protected]    Karthik Pattabiraman University of British Columbia [email protected] Abstract Machine learning (ML) models are vulnerable to membership inference attacks (MIAs), which determine whether a given input was used to train the target model. While there have been many efforts to mitigate MIAs, they often suffer from limited privacy protection, large accuracy drops, and/or the need for additional data that may be difficult to acquire. This work proposes a defense technique, HAMP, that can achieve both strong membership privacy and high accuracy, without requiring extra data. To mitigate MIAs in their different forms, we observe that they can be unified: they all exploit the ML model’s overconfidence in predicting training samples, through different proxies. This motivates our design to enforce less confident prediction by the model, hence forcing the model to behave similarly on training and testing samples. HAMP consists of a novel training framework with high-entropy soft labels and an entropy-based regularizer to constrain the model’s predictions while still achieving high accuracy. To further reduce privacy risk, HAMP uniformly modifies all prediction outputs into low-confidence outputs while preserving accuracy, which effectively obscures the differences between the predictions on members and non-members. We conduct extensive evaluation on five benchmark datasets and show that HAMP provides consistently high accuracy and strong membership privacy. Our comparison with seven state-of-the-art defenses shows that HAMP achieves a superior privacy-utility trade-off.
Network and Distributed System Security (NDSS) Symposium 2024, 26 February – 1 March 2024, San Diego, CA, USA. ISBN 1-891562-93-2. https://dx.doi.org/10.14722/ndss.2024.23014. www.ndss-symposium.org I Introduction Machine learning (ML) models are often trained with sensitive or private user data such as clinical records [22], financial information [31] and personal photos [21]. Unfortunately, ML models can also unwittingly leak private information [37, 10, 43, 12, 4]. One prominent example is membership inference attacks (MIAs) [37, 30, 48, 38, 27, 47, 3], which determine whether an input was used to train the target model. Hence, MIAs constitute a fundamental threat to data privacy. For instance, by knowing that an individual’s clinical record was used to train a hospital’s diagnostic model, the adversary can directly infer his/her health status. MIAs exploit the ML model’s differential behavior on members and non-members [37, 30, 48, 27, 38, 8, 3]. Members are the samples used to train the model (i.e., training samples) and non-members are samples not used for training (e.g., testing samples). Existing MIAs can be divided into score-based [37, 30, 17, 48, 38, 3] and label-only attacks [8, 27]: the former require access to the model’s output scores indicating the class probabilities, while the latter need only the prediction label. These attacks all seek to learn distinctive statistical features from the model’s predictions in different ways, such as training an attack inference model [30, 37], computing metrics like the prediction loss [48] and entropy [37, 38], or using a Gaussian likelihood estimate [3]. Defenses against MIAs can be categorized into provable and practical defenses. Provable defenses provide provable guarantees through differential privacy (DP) [2], but they often incur severe accuracy degradation.
Practical defenses, instead, offer empirical membership privacy with the goal of maintaining high model accuracy [29, 41, 36, 19]. However, existing defenses still suffer from the following limitations: (1) limited privacy protection [19, 29]; (2) large accuracy drops [2, 29, 41]; (3) the need for additional public datasets that may not always be available in practice [32, 36]. To the best of our knowledge, no technique satisfies all these constraints, though individual techniques may address individual issues, e.g., high model accuracy but limited privacy protection [19], or strong privacy but significant accuracy loss [2]. Our Approach. This paper proposes a practical defense called HAMP that can achieve both High Accuracy and Membership Privacy without requiring additional data. Existing MIAs employ diverse approaches to inferring membership, e.g., score-based MIAs may exploit the prediction loss or entropy [48, 38, 30], while label-only MIAs [8, 27] can leverage adversarial robustness. Despite the different manifestations of these attacks, we identify a common exploitation thread among them: they are all learning to distinguish whether the model is overly confident in predicting the training samples, via different proxies. Our defense is therefore to reduce the model’s overconfident predictions on training samples while preserving the model’s prediction performance, which simultaneously reduces membership leakage (from different MIAs) and maintains model accuracy. HAMP consists of a training-time and a testing-time defense. Training-time defense. Our key idea is to explicitly enforce the model to be less confident in predicting training samples during training. We first identify that the prevailing use of hard labels in common training algorithms is one of the main factors that lead to the model’s excessive confidence in predicting training samples. Hard labels assign 1 to the ground-truth label class and 0 elsewhere.
The model is trained to produce outputs that match the labels, i.e., near-100% probability for the ground-truth class and 0% otherwise. On the other hand, a non-member sample that is not seen during training is usually predicted with lower confidence, and can hence be distinguished from member samples by the adversary. We therefore propose a new training framework that gets rid of hard labels and instead uses: (1) high-entropy soft labels, which assign a much lower probability to the ground-truth class and non-zero probability to the other classes. This explicitly enforces the model to make less confident predictions on training samples. (2) An entropy-based regularizer, which penalizes the model for predicting any high-confidence outputs by regularizing the prediction entropy during training. The proposed training framework is able to significantly reduce the model’s overconfident predictions and improve membership privacy, without (severely) degrading the model accuracy. Section III-B explains how it prevents privacy leakage from different sources (output scores and prediction labels). On the other hand, stronger membership privacy can also be achieved (e.g., by increasing the strength of the regularization), but it would come at the cost of accuracy, which is undesirable as both privacy and accuracy are important considerations. This motivates our testing-time defense, whose goal is to gain higher membership privacy without degrading accuracy. Testing-time defense. We propose to uniformly modify all the outputs (from members and non-members) into low-confidence outputs, without changing the prediction labels. Our idea is to leverage the output scores of randomly-generated samples, which are often predicted with low confidence due to the high dimensionality of the input space.
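To make the defense concrete, here is a minimal NumPy sketch of the two training-time components (high-entropy soft labels and an entropy regularizer) together with a rank-preserving low-confidence substitution of the kind used at testing time. The function names and the hyperparameters `top_prob` and `alpha` are illustrative assumptions, not HAMP's published values.

```python
import numpy as np

def high_entropy_soft_labels(y, num_classes, top_prob=0.4):
    """Soft labels: `top_prob` (an assumed value) on the true class,
    the remainder spread uniformly over the other classes."""
    rest = (1.0 - top_prob) / (num_classes - 1)
    labels = np.full((len(y), num_classes), rest)
    labels[np.arange(len(y)), y] = top_prob
    return labels

def prediction_entropy(p, eps=1e-12):
    """Shannon entropy of each row of predicted probabilities."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def hamp_style_loss(probs, soft_labels, alpha=0.01):
    """Cross-entropy against the soft labels minus an entropy bonus
    (weight `alpha`, assumed), penalizing over-confident outputs."""
    ce = -np.sum(soft_labels * np.log(probs + 1e-12), axis=-1)
    return float(np.mean(ce - alpha * prediction_entropy(probs)))

def replace_with_low_confidence(score, random_score):
    """Testing-time step: return the donor (low-confidence) values of
    `random_score`, rearranged so the class ranking of `score` -- and
    hence its predicted label -- is preserved."""
    order = np.argsort(score)      # class indices, least to most likely
    donor = np.sort(random_score)  # donor values, ascending
    out = np.empty_like(score)
    out[order] = donor             # i-th ranked class gets i-th donor value
    return out
```

Because the soft label already carries nonzero entropy, the loss is minimized by an output that is correct but not saturated at 100% confidence; and since a confident member output and a hesitant non-member output receive the same donor value set, the released score itself no longer separates them.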
In our defense, all the values in each output score are replaced by those from random samples, and we keep the relative ordering of the different classes unchanged to maintain the same prediction labels (e.g., a dog image is still predicted as a dog, but with different output scores). Both the high-confidence outputs (on training samples) and the low-confidence outputs (on testing samples) are uniformly replaced by such low-confidence outputs from random samples. This further reduces the membership leakage from the output scores. Evaluation. We evaluate HAMP on five benchmark datasets (Purchase100, Texas100, Location30, CIFAR100 and CIFAR10), and perform a comprehensive evaluation against a total of nine diverse MIAs (including the state-of-the-art LiRA attack [3]). We compare HAMP with seven leading defenses: AdvReg [29], MemGuard [19], SELENA [41], DMP [36], Label Smoothing (LS) [40], Early-stopping [38], and DP-SGD [2]. An ideal privacy defense should offer strong protection for both members and non-members. Therefore, we follow Carlini et al. [3] in using the attack true positive rate (TPR) at a low false positive rate (FPR), and the attack true negative rate (TNR) at a low false negative rate (FNR), to evaluate membership privacy. The former metric evaluates the privacy protection for members, and the latter for non-members. Contributions. We summarize our contributions below. • Develop a novel training framework with high-entropy soft labels and an entropy-based regularizer to enforce less confident prediction by the model, which can significantly mitigate diverse MIAs and incurs minimal accuracy drop. • Propose a novel testing-time defense technique to modify all the output scores into low-confidence outputs, which further improves membership privacy without degrading accuracy. • Integrate the training and testing framework as HAMP, and conduct rigorous evaluation under a wide range of attacks on five different datasets.
We compare HAMP against seven leading defenses and show that HAMP outperforms existing defenses by achieving a superior privacy-utility trade-off. Fig. 1 summarizes the results of HAMP versus the other defenses. We find that existing defenses often bias towards either privacy (e.g., DP-SGD) or utility (e.g., MemGuard). In contrast, HAMP is able to provide strong membership privacy for both members and non-members, and preserves model accuracy. HAMP reduces the attack TPR @0.1% FPR by 94% and the attack TNR @0.1% FNR by 97%, with only 0.46% accuracy loss on average. This represents a much better privacy-utility trade-off than the other defenses. II Background II-A Machine Learning Primer This work focuses on supervised training for classification problems. An ML model can be expressed as a function $F_{\theta}:X\rightarrow Y$, where $X\in\mathbb{R}^{d}$ denotes the input space, $Y\in\mathbb{R}^{k}$ the output space, and $F$ is parameterized by weights $\theta$. During training, the network is given a training set $(x,y)\in D_{tr}$, where $y$ is the ground-truth label. $y$ is commonly expressed in the one-hot encoding format, where the ground-truth class is indicated with 1 and 0 elsewhere. The training objective is to minimize the prediction loss on the training set: $$\operatorname*{min}_{\theta}\frac{1}{|D_{tr}|}\sum_{x\in D_{tr}}\mathcal{L}(F_{\theta}(x),{y}),$$ (1) where $|D_{tr}|$ denotes the size of the training set and $\mathcal{L}$ the prediction loss, such as the cross-entropy loss. The model’s output $F_{\theta}(x)$ indicates the probability of $x$ belonging to each class, with $\sum_{j=0}^{k-1}F_{\theta}(x)_{j}=1$. To prevent the model from overfitting on the training set, a separate validation set disjoint from $D_{tr}$ is commonly used as an unbiased proxy of the testing set. One can use the accuracy on the validation set to assess how good the model will be when evaluated on test data and to prevent overfitting.
Hereafter, we refer to $F$ as the trained model $F_{\theta}$, $F(x)$ as the output score of $F$ on $x$, and $D_{te}$ as the test set. II-B Threat Model Attacker. Following prior work [19, 41, 29], we assume a black-box adversary who can query the target ML model with any input and observe the prediction output. The adversary’s goal is to infer the membership of the training samples $(x,{y})\in D_{tr}$ for a given model $F$. Like previous defenses [29, 41, 36], we assume a strong adversary with the knowledge of half of the training members and an equal number of non-members. Further, we assume the adversary has full knowledge of the defense technique and can therefore train shadow models in the same way as the target model is trained, which facilitates a strong adversary in evaluating the defenses. Defender. We assume the defender has a private set $D_{tr}$ and his/her goal is to train a model that can both achieve high classification accuracy and protect against MIAs. We do not assume the defender has access to any additional data. II-C Membership Inference Attacks The attack model $h(x,{y},F(x))\rightarrow[0,1]$ outputs the membership probability. We refer to $D_{tr}^{A},D_{te}^{A}$ as the set of members and non-members that are known to the adversary. The adversary’s goal is to find a $h$ that can best distinguish between $D_{tr}^{A}$ and $D_{te}^{A}$. The empirical gain of the attack can be measured as: $$\sum_{(x,y)\in D_{tr}^{A}}\frac{h(x,{y},F(x))}{|D_{tr}^{A}|}+\sum_{(x,y)\in D_{te}^{A}}\frac{1-h(x,{y},F(x))}{|D_{te}^{A}|}$$ (2) We categorize existing MIAs into score-based and label-only attacks as follows. Score-based MIAs This class of attacks either trains an inference model to infer membership [30, 37] or computes custom metrics such as prediction loss [48] to derive a threshold for distinction. 
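Eq. (2) can be read as a simple empirical score: the average membership probability assigned to the known members plus the average non-membership probability assigned to the known non-members. A direct transcription (the helper name is hypothetical, not from the paper's code):

```python
import numpy as np

def attack_gain(h_members, h_nonmembers):
    """Empirical gain of an attack h, per Eq. (2): a random guesser that
    always outputs 0.5 scores about 1.0; a perfect attack scores 2.0."""
    h_members = np.asarray(h_members, dtype=float)
    h_nonmembers = np.asarray(h_nonmembers, dtype=float)
    return float(np.mean(h_members) + np.mean(1.0 - h_nonmembers))
```

The adversary's goal of finding the best-distinguishing $h$ corresponds to maximizing this quantity over the known sets $D_{tr}^{A}$ and $D_{te}^{A}$.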
NN-based attack [30, 37] trains a neural network (NN) $A$ to distinguish the target model’s predictions on members and non-members: $A:F(x)\rightarrow[0,1],x\in[D_{tr}^{A},D_{te}^{A}]$. By querying the target model with $D_{tr}^{A},D_{te}^{A}$, the resulting outputs $(F(D_{tr}^{A}),1)$, $(F(D_{te}^{A}),0)$ form the training set for $A$. In addition to output scores, other features like the ground-truth labels and prediction loss can also be used to train the inference model. Loss-based attack [48] is based on the observation that the prediction loss on training samples is often lower than that on testing samples, as the loss on training samples is explicitly minimized during training. Specifically, the adversary can query the target model with $D_{tr}^{A}$, and obtain the average loss on $D_{tr}^{A}$ as the threshold $\tau=\frac{1}{|D_{tr}^{A}|}\sum_{(x,y)\in D_{tr}^{A}}\mathcal{L}(F_{\theta}(x),{y})$. Any sample with loss lower than $\tau$ is considered a member. Entropy-based attack [37, 48] leverages the fact that the output score of a training sample should be close to the one-hot encoded label, and hence its prediction entropy should be close to 0, which is lower than that on testing samples. The prediction entropy of a sample can be computed as $-\sum_{j}F(x)_{j}\text{log}(F(x)_{j})$, where $j$ is the class index. Modified-entropy-based attack [38] is an enhanced version of the entropy-based attack that computes the following metric: $-(1-F(x)_{y})\text{log}(F(x)_{y})-\sum_{j\neq y}F(x)_{j}\text{log}(1-F(x)_{j})$. It improves on the entropy-based attack by taking into account class-dependent thresholds, as well as the ground-truth label $y$, and is shown to achieve higher attack effectiveness. Confidence-based attack [48, 38] exploits the observation that the prediction confidence on training samples $F(x)_{y}$ is often higher than that on testing samples.
The attack threshold can be derived similarly to the entropy-based attacks, and samples predicted with high confidence are deemed members. Likelihood Ratio Attack (LiRA) [3] is a state-of-the-art attack that can successfully infer membership when calibrated at low false positive rates. In LiRA, the adversary trains N shadow models, half of which are trained with the target sample (called IN models) and the remaining half without it (called OUT models). It then fits two Gaussian distributions to approximate the output distributions of the IN and OUT models (a logit-scaling step is applied to ensure the outputs follow a Gaussian). Finally, LiRA performs a parametric likelihood-ratio test to infer membership (e.g., a sample is deemed a member if its output is estimated to come from the IN models with high probability). Label-only MIAs These attacks exploit training members’ higher degree of robustness to different perturbations (like adversarial perturbations or random noise), and develop different proxies to distinguish the degree of robustness of members and non-members. Prediction-correctness attack [48] is the baseline label-only attack that simply deems any correctly classified sample a member. This attack is effective when the training accuracy is higher than the testing accuracy. Boundary attack [8, 27] is based on the observation that it is easier to perturb a testing sample to change the prediction label than a training sample. This is because testing samples are often closer to the decision boundary and therefore more susceptible to perturbations. Using common attacks such as the CW2 attack [5], the adversary measures the magnitude of perturbation needed to perturb $x\in[D_{tr}^{A},D_{te}^{A}]$, based on which $\tau$ can be derived. A sample is deemed a member if the amount of perturbation needed to change the prediction label is higher than $\tau$ (i.e., it is more difficult to perturb).
The adversary can also inject random noise into the samples (instead of adversarial perturbations), which is more efficient and useful in cases where constructing the adversarial sample is difficult (e.g., for inputs with binary features) [8]. Augmentation attack [8] makes use of the samples’ robustness to data augmentation: the idea is that training samples are often more resilient to data augmentation than testing samples. For instance, if an image was used to train a model, it should still be classified correctly when it is slightly translated. For each input $x$, the adversary first generates multiple augmented versions of $x$, and computes how many of them are correctly classified. Based on the classification outcome, the adversary trains an attack inference model to predict whether or not $x$ is a member. II-D Defenses against MIAs This section presents an overview of representative defenses against MIAs (a comprehensive survey of existing defenses is in Section VI). Adversarial regularization (AdvReg) [29] trains the model to both achieve good model performance and protect against a shadow MIA adversary. During training, the defender first trains an attack inference model that tries to maximize the MIA gain, after which the protected model is trained to minimize the MIA gain and maximize the classification accuracy. This is instantiated as a min-max game in [29]. Distillation for membership privacy (DMP) [36]. Shejwalkar et al. propose DMP to defend against MIAs based on knowledge distillation. The idea is to distill the knowledge from an undefended model (trained on a private dataset) into a new public model using a new reference set. Privacy protection is enabled by preventing the public model from accessing the private dataset, as the public model is trained on a separate reference set. Such a reference set can be curated by assuming the availability of a public dataset or by using synthetic data.
We consider the latter since we do not assume access to external data. This is because in many domains such as healthcare, the training data is private/proprietary, and thus such a public dataset may not be available. We hence consider a more realistic scenario in which the defender has no access to external data (similar to [41]). SELf ENsemble Architecture (SELENA) [41]. SELENA also uses knowledge distillation. Its key idea is to partition the private dataset into different subsets and train a sub model on each subset (another technique with a similar idea is proposed in [9]). For each sub model, there exists a subset of the private dataset that was not used in its training, i.e., a “reference set” for that sub model. Each sub model assigns the output scores on its “reference set”, which constitutes the knowledge to be distilled. The knowledge from the ensemble of sub models is finally distilled into a new public model. Early stopping [38, 6]. As the training proceeds, the model tends to overfit the training data and become susceptible to MIAs. Early stopping is a general solution for reducing overfitting [6] by training models with fewer epochs. Song et al. [38] find that this is useful in mitigating MIAs and we therefore include it as a benchmark defense mechanism. Differential privacy (DP) based defenses [2]. DP-based defenses leverage the formal framework of differential privacy to achieve rigorous privacy guarantees. This is done by injecting noise during training, e.g., DP-SGD adds noise to the gradients [2]. However, DP-based defenses often produce models with a considerable accuracy drop, resulting in a poor privacy-utility trade-off. MemGuard [19]. Jia et al. propose to defend against MIAs via obfuscating the prediction scores.
The idea is to fool the MIA adversary by constructing a noise vector to be added to the output scores (analogous to constructing adversarial samples), making the outputs on members and non-members indistinguishable to the adversary. Label Smoothing [40]. LS is a common regularization technique that improves model accuracy by using soft labels. LS replaces the one-hot label with a mixture of the one-hot label and the uniform distribution, controlled by a smoothing-intensity parameter. E.g., for a smoothing intensity of 0.3, the soft label becomes 80% cat, 10% dog, 10% frog; and a smoothing intensity of 0.6 yields 60% cat, 20% dog, 20% frog. LS trains with different smoothing intensities to produce models with high accuracy. Both LS and HAMP use soft labels in their training, but they are two techniques built on different principles that require different soft labels. LS is used to improve model performance, which necessitates training with low-entropy soft labels. Unlike LS, HAMP consists of high-entropy soft labels, an entropy-based regularizer and a novel testing-time defense (details in the next section), and aims to improve membership privacy while preserving model accuracy. This results in different privacy implications for the two techniques: LS improves model performance but the resulting model still suffers from high MIA risk [20], while HAMP consistently yields very low MIA risk. We refer the reader to the detailed comparison in Section IV-G. III Methodology The main insight behind HAMP in mitigating diverse MIAs is to identify a common exploitation thread among different MIAs. HAMP is designed to overcome this exploitation so that it can defend against different MIAs regardless of their specific approaches. We first explain how existing MIAs can be unified via a common thread in Section III-A, and then discuss how we build HAMP to overcome this exploitation.
III-A Overconfident Prediction Leads to Membership Leakage While existing MIAs employ diverse approaches to infer membership, we unify them by viewing them all as exploiting the model’s overconfidence in predicting training samples. We explain below how different attacks can be viewed as different forms to quantify whether a model is overly confident in predicting a specific sample, in order to infer its membership. Score-based MIAs leverage the prediction scores to infer membership through different proxies. The model’s overconfident prediction on training samples can be exposed through high confidence scores [48], low prediction entropy [37, 38], low prediction loss [48], or using a neural network [37, 30]. For boundary and augmentation attacks, samples predicted with high confidence can be viewed as exhibiting high robustness against adversarial perturbations and data augmentation. Training samples can therefore be identified by the adversary based on whether they are more resilient to adversarial perturbation [8, 27] or data augmentation [8]. What leads to the model’s overconfidence in predicting training samples? As mentioned before, common training algorithms make use of the one-hot hard labels to minimize the prediction loss. Minimizing the training objective function (1) is equivalent to encouraging the model to produce outputs that are consistent with the labels, i.e., 100% for the ground-truth class and 0% for any other classes. While training with hard labels has achieved success in a broad class of classification problems, we find that it undesirably contributes to the model’s overconfidence in predicting training samples, which eventually leads to membership leakage. For example, on Purchase100, the difference between the average prediction confidence on training and testing samples is $>$25%, which means the model is much more confident in predicting training samples. 
Such differential behavior can be identified by the adversary to obtain $>$14% attack TPR @0.1% FPR. This indicates that training with one-hot hard labels undesirably enables the adversary to identify a large fraction of training samples with very low false positives (and similarly to identify testing samples with low false negatives). This inspires our design principle of enforcing less confident predictions to mitigate MIAs, based on which we introduce a novel training- and testing-time defense that can achieve both strong membership privacy and high model accuracy. III-B Overview Fig. 2 shows an overview of HAMP. It has two parts. Training-time defense. Inspired by the observation in Section III-A, our main idea is to train the model to produce less confident predictions even on training samples, thereby forcing the model to behave similarly on training and testing samples. We achieve this by two innovations: (1) replacing the hard labels with high-entropy soft labels; and (2) introducing an entropy-based regularizer. The first step is to generate soft labels with high entropy from the hard labels. These high-entropy soft labels explicitly induce the model to produce less confident output during training by assigning a much lower probability to the ground-truth class. For instance, a hard label of [0, 1] can be changed into a soft label of [0.4, 0.6], which guides the model to predict the ground-truth class with 60% probability (instead of 100%). The probability of each class is determined by an entropy threshold parameter, and a higher threshold generates a soft label with higher entropy (e.g., [0.5, 0.5] has the highest entropy) - details in the next section. The ground-truth class remains the same so that the model can learn correctly, e.g., a dog image is still trained to be predicted as a dog. Second, we introduce an entropy-based regularizer to penalize the model for predicting any output with low entropy.
Prediction entropy measures the prediction uncertainty, and can be used to regularize the confidence level of the prediction: low entropy indicates a high-confidence output, which the proposed regularizer penalizes towards a low-confidence output. The high-entropy soft labels encourage the model to produce outputs consistent with the labels, while the regularization term allows the model to produce any low-confidence outputs, even if the outputs do not closely match the labels. Both components are important for HAMP to mitigate overconfident prediction and achieve strong membership privacy. How does HAMP’s training-time defense mitigate membership leakage from different sources? There are two sources of membership leakage, and we discuss below how HAMP reduces leakage from both. Output scores. With the high-entropy soft labels and entropy-based regularizer, HAMP forces the model to produce output scores on training samples with higher entropy (i.e., lower confidence), which resemble the output scores on testing samples. E.g., on Purchase100, the average prediction entropy on members and non-members is 0.389 and 0.576 on the undefended model, versus 4.485 and 4.490 on the HAMP model. HAMP therefore reduces the entropy difference by 31x (from 0.187 to 0.006) and effectively enforces the output scores on members and non-members to be indistinguishable (more details in Appendix A-B). Some score-based MIAs leverage both output scores and label information (e.g., [38, 30]) and we explain next how HAMP prevents membership leakage from the labels. Prediction labels. HAMP’s training-time defense mitigates privacy leakage from the prediction labels by pushing the training samples closer to the decision boundary, so that training samples lie similarly close to the decision boundary as the testing samples.
We next use the boundary and augmentation attacks to explain (both attacks exploit label information in different manners to infer membership). Boundary attack exploits the training samples’ higher adversarial robustness compared with testing samples. Without HAMP, the adversary can discern that the training samples require more perturbation than the testing samples. With HAMP however, training samples are predicted with lower confidence, and therefore it takes a similar amount of perturbation to perturb training and testing samples. For instance, on CIFAR100, the average amount of perturbation needed to perturb the training samples on the undefended model is 0.342, versus 0.226 on the testing samples. With HAMP, the perturbation becomes 0.289 on the training samples and 0.234 on the testing samples, which effectively reduces the perturbation difference between training and testing samples by $>$53%. This means the members and non-members become indistinguishable from the perspective of their adversarial robustness. Augmentation attack exploits the training samples’ higher resistance to data augmentation, i.e., the augmented variants of training samples are more likely to be classified correctly. Performing data augmentation on the original samples can be viewed as drawing neighboring variants around the original samples in the sample space. Since the training samples are closer to the decision boundary under HAMP, their augmented variants are more likely to cross the decision boundary, and hence be classified incorrectly, which is similar to how testing samples would behave. We also evaluate the model’s performance on inputs with random augmentations applied. We find that HAMP mainly reduces the performance on the augmented training samples (from 64.38% to 55.12%), while the performance on the augmented testing samples remains similar before and after HAMP (46.12% and 46.36%).
This reduces the accuracy difference between members and non-members from 18.26% to 8.76% (a 52% reduction), and enables them to exhibit similar resistance to data augmentation. HAMP’s training-time framework is able to reduce the model’s overconfident prediction on training samples without compromising the model’s performance, i.e., it achieves strong membership privacy and high prediction accuracy. Nevertheless, membership privacy could be further improved, e.g., by pushing the training samples even closer to the decision boundary, but at the cost of accuracy, which is undesirable. In light of this, we introduce a testing-time output modification defense that can attain higher membership privacy without degrading accuracy. Testing-time defense. Our idea is to modify all the output scores to become low-confidence scores, hence making the output scores from members and non-members less distinguishable. The key observation that underpins the testing-time defense is that randomly-generated samples are often predicted with low confidence, and these low-confidence output scores can be used for output modification. Specifically, we first uniformly generate random samples, which are highly unlikely to be part of the training set due to the high dimensionality of the input space (e.g., the entire Texas100 dataset contains only $67,330$ samples while the input space has $2^{6170}$ samples). As these random samples are unlikely to be members of the training set, they are often predicted by the model with low confidence. We then replace all the entries in each output score with those from a random sample, where the replacement keeps the predicted labels unchanged (all top-k labels) and modifies the output scores only. In essence, HAMP returns only the ordering of the confidence scores, and the ordering is represented by the random output scores arranged in a specific order.
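The ordering-preserving score replacement described above can be sketched as follows (a minimal hypothetical helper, not the paper's code): rank the classes of the real output, rank the random-sample output, and assign the $i$-th ranked random score to the $i$-th ranked class:

```python
import numpy as np

def modify_output(scores, rand_scores):
    """Sketch of HAMP-style testing-time output modification: keep only
    the ordering of `scores`, filling in the values from a low-confidence
    random-sample prediction `rand_scores`."""
    order = np.argsort(scores)          # class indices, ascending by score
    sorted_rand = np.sort(rand_scores)  # random scores, ascending
    out = np.empty_like(scores)
    out[order] = sorted_rand            # i-th ranked class gets i-th ranked random score
    return out

scores = np.array([0.85, 0.05, 0.10])  # confident, member-like output
rand = np.array([0.2, 0.3, 0.5])       # low-confidence random-sample output
out = modify_output(scores, rand)      # -> [0.5, 0.2, 0.3]
```

Note the result matches the running example: the top-1 class keeps the highest (random) score 0.5, so all prediction labels are unchanged while the confidence values themselves reveal nothing about membership.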
There are no prerequisites on the random samples (e.g., they do not need to come from a specific distribution, nor do they need to produce a specific prediction label), as long as they are valid inputs (e.g., pixel values are in [0, 255]). In HAMP, the high-confidence outputs on members and the low-confidence outputs on non-members all become low-confidence outputs after being modified. This significantly increases the difficulty for the adversary to identify differential behaviors on members and non-members. In Section V-A, we perform a detailed ablation study to show that all three defense components in HAMP are crucial in achieving strong membership privacy and preserving high model accuracy. We next explain HAMP in detail. III-C Training-time Defense Generating high-entropy soft labels. The first step is to generate high-entropy soft labels for training, where the class probabilities in the soft labels are controlled by an entropy threshold parameter, denoted as $\gamma$. The entropy of a soft label $y^{\prime}$ can be calculated as: $$\mathbb{H}(y^{\prime})=-\sum_{j=0}^{k-1}y^{\prime}_{j}*\text{log}(y^{\prime}_{j})$$ (3) A soft label with uniform probability on each dimension has the highest entropy, based on which we choose a smaller entropy threshold. For a $k$-class classification problem, our goal is to find a $y^{\prime}$ given $\gamma$ such that, $$\mathbb{H}(y^{\prime})\geq\gamma\mathbb{H}(y),y=\{{\frac{1}{k},...\frac{1}{k}}\}^{k},\gamma\in[0,1],$$ (4) where $y^{\prime}$ has the highest probability on its ground-truth class, and the probabilities on the remaining dimensions are the same.
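Since the soft label places probability $p$ on the ground-truth class and spreads $1-p$ equally over the other $k-1$ classes, its entropy decreases monotonically in $p$ on $(1/k, 1)$, so the largest $p$ satisfying the constraint of Eq. (4) can be found by bisection. A sketch (hypothetical helper names, assuming this closed-form label shape):

```python
import numpy as np

def soft_label_entropy(p, k):
    # entropy of [p, (1-p)/(k-1), ..., (1-p)/(k-1)]: ground-truth class
    # gets p, the remaining k-1 classes share 1-p equally
    q = (1 - p) / (k - 1)
    return -(p * np.log(p) + (k - 1) * q * np.log(q))

def solve_p(gamma, k, tol=1e-8):
    """Bisection for the largest ground-truth probability p whose soft
    label still meets the entropy threshold gamma * H(uniform)."""
    target = gamma * np.log(k)       # H(uniform k-class label) = log k
    lo, hi = 1.0 / k, 1.0 - 1e-9     # entropy is monotone decreasing in p here
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if soft_label_entropy(mid, k) >= target:
            lo = mid                 # constraint still satisfied: p can grow
        else:
            hi = mid
    return lo

p = solve_p(gamma=0.9, k=100)  # ground-truth probability for the soft label
```

A larger $\gamma$ drives the target entropy up and hence $p$ down, matching the statement in the text that a higher threshold yields a soft label closer to uniform.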
For a hard label $y$ whose ground-truth class is $j_{truth}$ ($k$ classes in total), the resulting soft label becomes: $$\begin{split}\forall y^{\prime}_{j}\in y^{\prime},y^{\prime}_{j}={\begin{cases}p&\quad\text{if}~{}~{}j=j_{truth}\\ (1-p)/(k-1)&\quad\text{if}~{}~{}j\neq j_{truth}\end{cases}}\end{split}$$ (5) $p$ is the probability on the ground-truth class, and a larger $\gamma$ indicates higher prediction entropy, which leads to a smaller $p$ (i.e., a smaller probability on the ground-truth class). Entropy-based regularization. In addition, we introduce an entropy-based regularizer that measures the prediction entropy during training, and penalizes predictions that have low entropy, as such predictions indicate high-confidence output and may be exploited by the adversary. Finally, the overall training objective can be formulated as: $$\begin{split}\mathcal{L}_{\text{KL}}(F_{\theta}(x),y)=\sum_{j=0}^{k-1}y_{j}\text{log}(\frac{y_{j}}{F_{\theta}(x)_{j}}),\end{split}$$ (6) $$\begin{split}\operatorname*{min}_{\theta}[\mathcal{L}_{\text{KL}}(F_{\theta}(X_{tr}),Y^{\prime}_{tr})-\alpha\mathbb{H}(F_{\theta}(X_{tr}))],\end{split}$$ (7) where $Y^{\prime}_{tr}$ is the high-entropy soft labels, $\mathcal{L}_{\text{KL}}$ the Kullback-Leibler divergence loss, and $\alpha$ controls the strength of regularization. Our goal is to train the model to mitigate the overconfident prediction on training samples while maintaining high prediction accuracy. We achieve this by using a large $\gamma$ to train the model with soft labels of high entropy, and an $\alpha$ to regularize the prediction entropy. Section IV-A explains how to select the parameters $\gamma,\alpha$ in HAMP ($p$ in Equation 5 is determined by $\gamma$). III-D Testing-time Defense The testing-time defense uniformly modifies the runtime outputs to achieve stronger privacy without jeopardizing accuracy.
We first generate uniform random samples $x_{rand}$, e.g., for Purchase100 with binary features, each feature is assigned 0 or 1 with equal probability. For each runtime input $x\in[D_{tr},D_{te}]$, all the entries in $F(x)$ (that indicate the probability for each class) are replaced by those in $F(x_{rand})$; the resulting output is denoted as $F^{x_{rand}}(x)$. The replacement only modifies the entries in $F(x)$ while ensuring $F(x)$ and $F^{x_{rand}}(x)$ give the same prediction labels. For example, let $x\in[D_{tr},D_{te}],F(x)=[0.85,0.05,0.1]$, and $x^{\prime}\in X_{rand},F(x^{\prime})=[0.2,0.3,0.5]$; then the final output produced by the model becomes $F^{x^{\prime}}(x)=[0.5,0.2,0.3]$. This enforces the model to produce low-confidence outputs on both members and non-members, and reduces privacy leakage. Overall Algorithm. Algorithm 1 gives the overall algorithm of HAMP. $\gamma$ and $\alpha$ are the two parameters in HAMP that regulate the confidence level of the model’s prediction, e.g., a high entropy threshold or strong regularization can enforce the model to become less confident in prediction. Line 2 generates a template of high-entropy soft labels $y^{\prime}$, which is then used to generate soft labels for each of the hard labels. The condition in Line 3 ensures that the ground-truth labels remain unchanged so that the model can learn the correct labels. At test time, each output is replaced by the output of a random sample. The condition $\text{argsort}(F^{x_{rand}}(x))=\text{argsort}(F(x))$ in Line 13 ensures that both $F^{x_{rand}}(x)$ and $F(x)$ give the same labels (all top-k labels and not just the top-1 label). Line 11 and Line 12 are independent of each other, and hence can be executed concurrently to facilitate faster runtime inference (overhead evaluation in Appendix A-F). IV Evaluation IV-A Experimental Setup Datasets. We consider five common benchmark datasets, and we describe them below.
Purchase100 [37] includes 197,324 shopping records of customers, each with 600 binary features indicating whether a specific item is purchased. The goal is to predict the customer’s shopping habits (100 different classes in total). Texas100 [37] contains 67,330 hospital discharge records, each containing 6,170 binary features indicating whether the patient has a particular symptom or not. The data is divided into 100 classes, and the goal is to predict the treatment given the patient’s symptoms. Location30 [37] contains the location “check-in” records of different individuals. It has 5,010 data records with 446 binary features, each of which corresponds to a certain location type and indicates whether the individual has visited that particular location. The goal is to predict the user’s geosocial type (30 classes in total). CIFAR100 [23] is an image classification dataset that has 60,000 images in 100 object classes. Each image has a size of 32$\times$32$\times$3. CIFAR10 [23] is similar to CIFAR100: it also contains 60,000 images, but with 10 object classes. We follow [36] to use fully-connected (FC) networks on Purchase100, Texas100 and Location30, and a DenseNet-12 [16] on CIFAR100 and CIFAR10 (Appendix A-H conducts an evaluation on more network architectures, including ResNet-18 [14], MobileNet [15] and ShuffleNet [50]). Purchase100 is trained with 20,000 samples, Texas100 with 15,000 samples, Location30 with 1,500 samples, and CIFAR100 and CIFAR10 with 25,000 samples each. Section V-B reports additional experiments on more training sizes (from 2,500 to 50,000). Attacks. We consider all nine attacks from Section II-C. For the NN-based attack, we use the black-box NSH attack from Nasr et al. [30], which uses the model loss, logit values from the target model, and the ground-truth label to train an attack inference model. We consider the loss-based attack from Yeom et al. [48] and the confidence-, entropy- and modified-entropy-based attacks as in Song et al. [38].
For LiRA [3], we train 128 shadow models for each defense (64 IN and 64 OUT models), where each shadow model is trained following the same procedure as the targeted defense (as per our threat model). E.g., for HAMP, this means the shadow model is trained with the same high-entropy soft labels and entropy-based regularization as the defense model, and the shadow model also performs the same output modification as HAMP does. We consider the boundary and augmentation attacks from Choquette et al. [8]. For the boundary attack on the two image datasets, we use the CW2 attack [5] to generate adversarial samples and derive the perturbation-magnitude threshold to distinguish members and non-members. For the other three non-image datasets that contain binary features, we compute the sample’s robustness to random noise instead of adversarial perturbation. For each sample $x$, we generate hundreds of noisy variants of $x$, and the number of correctly classified noisy variants of $x$ is used to determine a threshold that best distinguishes between members and non-members. For the augmentation attack, we consider image translation as the augmentation method, and we similarly consider different degrees of translation to find the best attack. HAMP configuration. $\gamma,\alpha$ are the two parameters for configuring HAMP (for generating high-entropy soft labels and controlling the strength of regularization, respectively). We perform a grid search to select the parameters ($\gamma\in[0.5,0.99],\alpha\in[0.0001,0.5]$), and select the setting with a small train-validation gap and high validation accuracy. We also conduct an evaluation to study how HAMP’s performance varies under different parameters (please see Appendix A-E). For the testing-time defense, we generate random samples (e.g., random pixels in [0, 255]) and perform output modification as in Section III-D. There are no other requirements.
Our code is available at https://github.com/DependableSystemsLab/MIA_defense_HAMP. Related defenses. We consider seven major defenses: AdvReg [29], MemGuard [19], DMP [36], SELENA [41], Early stopping [38, 6], Label Smoothing (LS) [40] and DP-SGD [2]. We follow the original work to set up the defenses unless otherwise stated (more details in Appendix A-A). Evaluation metrics. An ideal privacy defense should provide strong protection for both members and non-members, for which we follow the best practice [3] and consider (1) the attack true positive rate (TPR) evaluated at 0.1% false positive rate (FPR), which evaluates the protection for members, and (2) the attack true negative rate (TNR) at 0.1% false negative rate (FNR), which quantifies the protection for non-members. Result organization. Table I reports the model accuracy for every defense. Fig. 3 compares each defense in terms of membership privacy and model utility. Each defense is evaluated with multiple attacks, and we report the ones that achieve the highest attack TPR or TNR (detailed results for each attack are in Appendix A-K). Fig. 4 presents the average attack AUC (area under curve) for each defense, and the full ROC curves are in Appendix A-J. We leave the comparison with early stopping to Appendix A-D due to space constraints. Section V-A presents an ablation study, and Appendix A-F reports the training and inference overhead evaluation. We next discuss the results by comparing HAMP with other defenses. IV-B Comparison with Undefended Models HAMP significantly reduces the MIA risk against both members and non-members. Compared with the undefended models, HAMP achieves significantly lower attack TPR and TNR. The average attack TPR on the undefended model is 13.48%, which is reduced to 0.8% by HAMP (a 94.1% reduction). Similarly, HAMP reduces the attack TNR by 97%, from 19.89% to 0.59%. This effectively thwarts the adversary in inferring members or non-members from the target model.
In addition, we find that the NN-based attack yields the highest attack TPR on the undefended models in many cases (as in Fig. 3), and we explain the reason in Appendix A-G. HAMP achieves strong membership privacy while preserving high model accuracy. Across the five diverse datasets, HAMP is able to consistently produce models with accuracy comparable to the undefended models. HAMP has an accuracy drop of at most 1.1% (on Location30), and the average accuracy drop by HAMP is only 0.46%. IV-C Comparison with MemGuard [19] Both MemGuard and HAMP are capable of preserving model accuracy. MemGuard does not incur any accuracy drop since it is a post-processing technique and does not change the prediction label. Likewise, HAMP only incurs a minor accuracy drop of 0.46%. HAMP achieves considerably stronger membership privacy than MemGuard. MemGuard offers very limited privacy protection because it only modifies the output scores without changing the prediction labels, which cannot prevent privacy leakage from the label information. On the contrary, HAMP includes a training-time defense that can mitigate membership leakage from both output scores and label information (explained in Section III-B), and achieves much stronger membership privacy than MemGuard. The average attack TPR on MemGuard is 6.7%, which is 8.4x that of HAMP. Similarly, the attack TNR on MemGuard is 10.9%, which is 18.3x that of HAMP. IV-D Comparison with AdvReg [29] HAMP outperforms AdvReg with higher model accuracy and stronger membership privacy. In terms of accuracy, HAMP consistently achieves higher accuracy than AdvReg. AdvReg incurs an average 7.45% accuracy drop, while HAMP incurs only 0.46% (94% lower than AdvReg). In terms of privacy, HAMP outperforms AdvReg with both much lower attack TPR and TNR. The attack TPR is 1.70% with AdvReg and 0.8% with HAMP, which translates to an 87% and a 94% reduction, respectively, from those of the undefended models.
Similarly, AdvReg reduces the attack TNR by 90%, while HAMP reduces it by 97%. IV-E Comparison with DMP [36] DMP [36] uses generative adversarial networks (GANs) trained on the private dataset to produce synthetic data as the reference set for knowledge distillation. We follow Shejwalkar et al. [36] in training DC-GAN [34] on the two image datasets. The defender can generate unlimited data from the GAN, and hence can create a reference set that is larger than the original training set. Therefore, we use 150K synthetic samples to train the model with higher accuracy (we do not consider more synthetic images, as the improvement is negligible). For the three datasets with binary features, we use CTGAN [45], which models tabular data. We use 100K synthetic samples for Texas100 and 10K for Location30. We do not consider Purchase100, as it incurs a significant accuracy drop (over 30%). To validate that synthetic samples are useful for the domain task, we compare the performance of models trained with GAN-generated synthetic data against models trained with random data (i.e., all features randomly set to 0 or 1 with equal probability) on Texas100. We find that models trained with random data only achieve accuracy from 5.8% to 14.8%, while those trained with GAN-generated data achieve over 40% accuracy. HAMP outperforms DMP by consistently achieving strong privacy protection with high model accuracy across different datasets. In terms of membership privacy, we find that DMP achieves strong results in many (but not all) cases: it achieves an average attack TPR of 0.44% and TNR of 0.38% on Texas100, CIFAR100 and CIFAR10, where HAMP achieves 0.9% TPR and 0.65% TNR (DMP is slightly better). However, DMP’s performance does not generalize across datasets. For instance, on Location30, DMP suffers from a much higher attack TPR of 7.26% and TNR of 23.33%.
This is because the model is trained with limited data (1,500 samples), and the GAN is not able to generate diverse data that differ from the original training data. As a result, the teacher model assigns high confidence to the synthetic data, from which the student model learns to predict the training members with high confidence, which eventually leads to high MIA risk. To validate this, we compare the difference between the prediction confidence on members and non-members for the DMP models. On Location30, the average difference is $>$30%, while it is only $<$5% on the other datasets, which is why DMP exhibits poor privacy protection on Location30. On the same dataset, HAMP yields a low TPR of 0.89% and TNR of 0.59%, and this trend is consistent across datasets. In terms of accuracy, DMP suffers from different degrees of accuracy loss that are much higher than HAMP’s. DMP incurs $>$30% accuracy loss on Purchase100 (as mentioned earlier), $\sim$12% accuracy drop on Texas100 and CIFAR100, 3.1% on Location30, and 1.2% on CIFAR10 (a smaller accuracy loss, as CIFAR10 has only 10 classes). In contrast, HAMP incurs an average accuracy drop of $<$0.5% (at most 1.1%), which is significantly better than DMP. IV-F Comparison with SELENA [41] Both SELENA and HAMP achieve similarly strong privacy protection. On average, HAMP has slightly better membership privacy than SELENA, but neither technique has consistently better membership privacy overall (Fig. 3). The attack TPR of SELENA is $0.53\%\sim 1.72\%$, with an average of 1.1%, and that of HAMP is $0.4\%\sim 1.2\%$, with an average of 0.8%. They reduce the attack TPR by 92% (SELENA) and 94% (HAMP), respectively. In addition, the attack TNR of SELENA is $0.42\%\sim 3.7\%$, with an average of 1.7%, and that of HAMP is $0.44\%\sim 0.77\%$, with an average of 0.6%. This translates to a TNR reduction of 91% (SELENA) and 97% (HAMP), respectively.
While providing comparable privacy benefits, HAMP outperforms SELENA by having lower accuracy loss, hence providing a better privacy-utility trade-off. The largest accuracy drop by SELENA is 4.4%, while that by HAMP is only 1.1%. On average, SELENA incurs a 2.25% accuracy drop, while HAMP incurs a much smaller drop of 0.46%. Moreover, our additional experiment in Section V-B shows that HAMP continues to outperform SELENA with a much lower accuracy drop when evaluated on a variety of different training sizes (2.2%$\sim$5.2% by SELENA vs. 0.04%$\sim$0.98% by HAMP). IV-G Comparison with Label Smoothing (LS) [40] Though LS is able to improve model accuracy, the model trained with LS still suffers from high MIA risk. In contrast, the model trained with HAMP maintains high model accuracy and exhibits very low MIA risk. For LS, we follow prior work by Kaya et al. [20] to train with different smoothing intensities from 0.01 to 0.995, and select the model with the highest accuracy (we omit CIFAR10 and Location30, as LS did not lead to accuracy improvement there). We first discuss the qualitative difference between LS and HAMP, and then quantitatively compare their privacy risk. While both LS and HAMP use soft labels in their training, they are built for different purposes that require different soft labels. LS is a regularization technique for improving model accuracy, which calls for training with low-entropy soft labels, and it increases accuracy by 2.4% on average. However, the resulting model still suffers from high MIA risk, as LS causes the model to overfit on the smoothed labels and exhibit discernible behavior on the training samples [20]. In contrast, HAMP is built to improve membership privacy: it consists of high-entropy soft labels, an entropy-based regularizer and a novel testing-time defense that force the model to make less confident predictions and to behave similarly on the training and testing samples.
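To make the qualitative difference concrete, the following sketch (our own illustration; the class count and the uniform split of the residual probability mass are assumptions, not taken from the paper's code) contrasts the entropy of an LS-style low-intensity soft label with a HAMP-style high-entropy soft label:

```python
import math

def soft_label(num_classes, true_class_prob):
    """Soft label: the true class gets true_class_prob and the
    remaining probability mass is split evenly over the other classes."""
    rest = (1.0 - true_class_prob) / (num_classes - 1)
    return [true_class_prob] + [rest] * (num_classes - 1)

def entropy(probs):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

K = 100                                  # e.g., a 100-class task
ls = soft_label(K, 1 - 0.03 + 0.03 / K)  # LS with smoothing intensity 0.03
hamp = soft_label(K, 0.20)               # high-entropy label, ~20% on the true class
# entropy(hamp) is roughly an order of magnitude larger than entropy(ls)
```

Under these assumed settings the high-entropy label carries over 15x the entropy of the LS label, in line with the 4x$\sim$50x range measured in the paper.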
To quantitatively compare the different soft labels used by the two techniques, we measure the soft-label entropy in LS and HAMP, and find that the label entropy in HAMP is considerably higher than that in LS: 4x$\sim$50x higher (9x on average). This contributes to the low MIA risk of HAMP, unlike LS. The average attack TPR on the LS models is 5.1%, 7.1x that of HAMP (on the same datasets). The attack TNR on LS is 6.3x that of HAMP (we observe a similar trend even when we train LS with other smoothing intensities that yield comparable accuracy improvement; see Appendix A-C). Moreover, our results reveal that LS may amplify the MIA risk and render the model more vulnerable than the undefended model. On Texas100, LS increases the attack TPR from 3.87% (on the undefended model) to 5.61%, which increases the MIA risk against training members by 45%. This suggests that LS may constitute a hidden privacy risk for practitioners (a similar finding was identified recently by Kaya et al. [20]). On the contrary, HAMP consistently leads to low MIA risk and outperforms LS with significantly better membership privacy. IV-H Comparison with DP-SGD [2] We use the canonical implementation of DP-SGD in PyTorch Opacus [1]. We first consider a fixed privacy budget $\epsilon=4$ as per Tang et al. [41], and then evaluate DP-SGD with different values of $\epsilon$. IV-H1 DP-SGD with fixed $\epsilon=4$. In this setting, the average attack TPR and TNR of the DP-SGD models are 0.36% and 0.3%, respectively, both of which are the lowest among all the defenses we evaluated. In comparison, HAMP yields 0.8% attack TPR and 0.6% TNR, which are slightly higher than DP-SGD’s. However, DP-SGD suffers from considerable accuracy loss, with an average loss of 23.84%, while HAMP incurs a significantly smaller loss of 0.46%. IV-H2 DP-SGD with different $\epsilon$. We next evaluate DP-SGD with different noise_multiplier and clipping-norm values.
We consider Purchase100, on which we used a noise_multiplier of 1.7 and a clipping norm of 1 for $\epsilon=4$ in the earlier evaluation. We select noise_multiplier values of 0.0 (no noise injected), 0.1 ($\epsilon=12069.1$), 0.5 ($\epsilon=62.5$) and 0.9 ($\epsilon=10.9$), and clipping-norm values of 1, 5 and 10, totaling 12 different configurations. We report the results in Fig. 5. Reducing the amount of injected noise and using a larger clipping norm allows DP-SGD to provide empirical privacy protection (though with a very large provable bound $\epsilon$) and reduces the accuracy loss. For instance, by using a clipping norm of 10 without injecting any noise, DP-SGD reduces the accuracy loss to $<$1%, while also reducing the attack TPR by 73% (from 14.37% to 3.86%) and the attack TNR by 36% (from 14.62% to 9.36%). Nevertheless, this performance is still considerably inferior to that of HAMP, which reduces the attack TPR and TNR by 97.2% and 96.7%, respectively. Using a tighter clipping norm or injecting more noise can improve membership privacy further, but this comes at the cost of accuracy loss (the earlier result has negligible accuracy loss). For example, with a small clipping norm of 1, the attack TPR can be reduced to 0.67% and the attack TNR to 0.62%; however, this results in an 8.2% accuracy loss. Increasing the noise_multiplier can further reduce privacy leakage, e.g., a noise_multiplier of 0.5 reduces the attack TPR to 0.5% and the attack TNR to 0.49% (with a large $\epsilon$ of 62.5), which are comparable to the 0.4% TPR and 0.44% TNR of HAMP. However, DP-SGD degrades the accuracy by 13.6%, while HAMP incurs a negligible accuracy drop. Therefore, training a model with a small amount of noise or with a tight clipping norm is also a viable defense against MIAs, though it still incurs much larger accuracy loss than HAMP and yields large provable bounds $\epsilon$.
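The noise_multiplier and the clipping norm interact exactly as in the canonical DP-SGD update rule, sketched below in plain Python (a toy illustration of the clip-then-noise step that Opacus applies to per-sample gradients; the real implementation operates on PyTorch tensors and tracks the privacy budget):

```python
import math
import random

def dpsgd_update(per_sample_grads, clip_norm, noise_multiplier, rng=random):
    """One DP-SGD step: clip each per-sample gradient to L2 norm
    <= clip_norm, average over the batch, then add Gaussian noise
    whose std scales with noise_multiplier * clip_norm."""
    clipped = []
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(clipped)
    avg = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_multiplier * clip_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]
```

A larger clipping norm alters fewer gradients (less bias) but, for a fixed noise_multiplier, injects proportionally more noise; setting noise_multiplier to 0 recovers plain clipped SGD with no provable guarantee, matching the no-noise configuration evaluated above.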
V Discussion V-A Ablation Study HAMP consists of three components, and we perform a detailed ablation study to investigate the effectiveness of each of them, covering a total of six configurations. We present the results in Table II. The second to fourth rows in Table II show the results for models using a single component of HAMP. For instance, training with high-entropy soft labels alone produces a model with similar accuracy to the undefended model (trained with one-hot hard labels), and reduces the attack TPR from 14.37% to 4.76% and the attack TNR from 14.62% to 4.22%. This also validates our earlier observation in Section III-A that training with one-hot hard labels can lead to high MIA risk, and that the proposed high-entropy soft labels mitigate it. However, this is not enough, as the model still suffers from relatively high TPR and TNR. We observe similar trends in the other two settings, where we either train with the entropy-based regularizer alone or directly perform output modification on the undefended model. Strengthening the model with more defense components further reduces the MIA risk while preserving model accuracy. For example, training with high-entropy soft labels and the entropy-based regularizer (fifth row in Table II) achieves a low TPR of 1.86% and a low TNR of 1.07%. We observe a similar trend with the other two-component configurations, as in the sixth and seventh rows in Table II, both of which exhibit better privacy protection than models equipped with a single component. Furthermore, we find that the resulting models continue to maintain high accuracy, which means the different defense components in HAMP can be combined to improve membership privacy without jeopardizing model accuracy. Finally, the full defense consisting of all three components, as in HAMP, exhibits the best privacy protection while maintaining competitive model accuracy.
V-B Evaluation on Different Training Sizes This section reports additional experiments in which we vary the size of the training set. We evaluate six additional training sizes on Purchase100, which is the largest dataset in our evaluation and allows us to comprehensively cover a wide range of sizes: 2,500, 5,000, 7,500, 10,000, 30,000 and 50,000 (up to a 20x difference). We trained 64 shadow models for the LiRA attack for each defense, over 2,300 different shadow models in total. Fig. 6 shows the results. We find that even when evaluated under a broad range of training sizes, HAMP consistently achieves superior performance on both privacy protection and model utility. The average attack TPR on the undefended model is 24.7% and the attack TNR 22.9%. MemGuard achieves an average attack TPR of 13% and attack TNR of 17.4%, both of which are significantly higher than the 1.3% and 1.5% of HAMP. AdvReg incurs an average accuracy loss of 6.3%, while HAMP incurs only 0.2%. HAMP also outperforms AdvReg with better privacy protection: AdvReg reduces the attack TPR by 83% and the attack TNR by 76.1%, while HAMP reduces them by 94.8% and 93.4%, respectively. LS improves the accuracy by 3.2%, but still suffers from high MIA risk: its attack TPR and TNR are 8x and 4.1x those of HAMP. Both SELENA and HAMP have similarly strong membership privacy: the average attack TPR is 1.2% on SELENA and 1.3% on HAMP; the attack TNRs are 1.4% and 1.5%, respectively. Under similar privacy protection, HAMP still outperforms SELENA with a much lower accuracy drop. On average, SELENA degrades the accuracy by 3.97% (up to 5.2%), while HAMP degrades it by only 0.15% (up to 0.98%). V-C Evaluation against Data-poisoning-based MIA [42] Recent work by Tramer et al. [42] shows that a more capable adversary can significantly amplify the MIA risk through data poisoning. Therefore, we conduct an additional evaluation of whether HAMP can protect against such a more capable attack. The Tramer et al.
attack increases the membership leakage of target points by poisoning the training set to transform the target points into outliers. Each target point is replicated $n$ times with a wrong label, and these replicas are added as the poison samples. If the target point is a member of the training set, the model is fooled into believing that the correctly labeled target point is “mislabeled” (due to the presence of the poisoned replicas), which has a large influence on the model’s output and can be identified by the adversary. We follow [42] to conduct the evaluation on CIFAR10, and select 250 random target points (containing both members and non-members), each replicated 8 times. We train 128 shadow models, which include a total of 32,000 target points. Without data poisoning, the adversary achieves 8.23% attack TPR and 10.15% attack TNR on the undefended model. These increase to 52.44% and 24.52%, respectively, after data poisoning. Even under such a powerful attack, HAMP reduces the attack TPR from 52.44% to 0.34%, and the attack TNR from 24.52% to 0.71%. Further, HAMP achieves such strong protection with a negligible accuracy drop of 0.6%. V-D Limitations First, HAMP requires re-training and hence incurs additional training overhead. Nevertheless, re-training is commonly required by many existing defenses [29, 36, 41], and training is a one-time effort prior to deployment. Further, our evaluation shows that HAMP incurs only a modest training overhead compared with other defenses (see Appendix A-F). The second limitation is that HAMP’s testing-time defense incurs an overhead on every inference, which may be undesirable for computations with stringent real-time constraints. Nevertheless, HAMP incurs a low latency of only 0.04$\sim$0.38ms per inference. In comparison, MemGuard, the other defense that also performs post-processing modification, introduces a latency of 335.42$\sim$391.75ms.
In addition, this process replaces the output scores with randomized scores, which may affect the usefulness of the scores. Nevertheless, we reduce the impact by ensuring that the prediction labels derived from the output scores remain unchanged (all top-k labels), so the model accuracy is unaffected. This still provides meaningful information in the output scores without leaking membership privacy. Finally, though HAMP empirically provides a superior privacy-utility trade-off, it does not offer provable guarantees. This is a limitation common to all practical defenses [29, 36, 41, 19]. Hence, a more capable adversary may mount stronger attacks, such as the data poisoning attack by Tramer et al. [42]. Our preliminary evaluation shows that HAMP still exhibits strong privacy protection and preserves model accuracy even in the presence of such a data-poisoning adversary, but we leave further investigation to future work. VI Related Work Membership inference attacks. Depending on the adversary’s capabilities, MIAs can be divided into black-box [37, 48, 17, 3, 38, 8, 47, 27] and white-box attacks [25, 18, 30]. The former has access only to the output of the target model, while the latter has visibility into information such as the internal model gradients to facilitate membership inference. Black-box MIA assumes a more realistic adversary, and is hence widely adopted in prior defense studies [19, 41, 29] (and in HAMP). Such attacks can be mounted either by shadow training [37, 29, 48] or by computing statistical metrics based on partial knowledge of the private dataset [38, 8, 27]. Many of these attacks require full or partial access to the output scores of the model, and may be defeated if the model reveals only the prediction label. This motivates a new class of attacks, called label-only attacks, which can be launched either with [8] or without [27] partial knowledge of the membership information. Carlini et al.
[3] introduce the LiRA attack, which succeeds in inferring membership even when controlled at low false positive or false negative rates, through a well-calibrated Gaussian likelihood estimate. In addition to supervised classification, MIAs have also been explored in other domains, including contrastive learning [28], generative models [7, 13], federated learning [30], graph neural networks [51], and recommender systems [49]. Defenses against membership inference attacks. These defenses can be divided into provable and practical defenses. The former provide rigorous privacy guarantees, such as DP-SGD [2] and PATE [32]. Nevertheless, these defenses often incur a severe accuracy drop when used with acceptable provable bounds [35, 33]. Another line of practical defenses aims to achieve empirical privacy without severely degrading accuracy. Common regularization techniques such as dropout [39] and weight decay [24] are shown to reduce privacy leakage, but with limited effectiveness [37, 36]. Other defenses enforce specific optimization constraints during training to mitigate MIAs [29, 26], or perform output obfuscation [19, 46]. Knowledge distillation is used by several techniques to mitigate MIAs, including PATE [32], DMP [36], SELENA [41] and KCD [9]. However, existing defenses are often biased towards either privacy or utility. In contrast, HAMP achieves both strong membership privacy and high accuracy, which offers a much better privacy-utility trade-off. Other privacy attacks. In addition to membership privacy, common ML models are found to leak other private properties [43, 44, 11, 10, 12, 4]. Model extraction attacks can duplicate the functionality of a proprietary model [43, 44]. Model inversion attacks can infer critical information in the input features, such as genomic information [11, 10]. Property inference attacks infer sensitive properties of the training dataset [12].
VII Conclusion This work introduces HAMP, a defense against Membership Inference Attacks (MIAs) that achieves both high accuracy and membership privacy. HAMP has two innovations: (1) a training framework that consists of high-entropy soft labels and an entropy-based regularizer; and (2) an output modification defense that uniformly modifies the runtime output. HAMP significantly constrains the model’s overconfidence in predicting training samples, and forces the model to behave similarly on both members and non-members, thereby thwarting MIAs. Our evaluation shows that HAMP outperforms seven leading defenses by offering a better trade-off between utility and membership privacy. Acknowledgment This work was funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), and a Four Year Fellowship and a Public Scholar Award from the University of British Columbia (UBC). References [1] “Pytorch opacus,” https://github.com/pytorch/opacus. [2] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 308–318. [3] N. Carlini, S. Chien, M. Nasr, S. Song, A. Terzis, and F. Tramer, “Membership inference attacks from first principles,” in 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022, pp. 1897–1914. [4] N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. Brown, D. Song, U. Erlingsson et al., “Extracting training data from large language models,” in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2633–2650. [5] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 39–57. [6] R. Caruana, S. Lawrence, and L.
Giles, “Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping,” Advances in Neural Information Processing Systems, pp. 402–408, 2001. [7] D. Chen, N. Yu, Y. Zhang, and M. Fritz, “Gan-leaks: A taxonomy of membership inference attacks against generative models,” in Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020, pp. 343–362. [8] C. A. Choquette-Choo, F. Tramer, N. Carlini, and N. Papernot, “Label-only membership inference attacks,” in International Conference on Machine Learning. PMLR, 2021, pp. 1964–1974. [9] R. Chourasia, B. Enkhtaivan, K. Ito, J. Mori, I. Teranishi, and H. Tsuchida, “Knowledge cross-distillation for membership privacy,” arXiv preprint arXiv:2111.01363, 2021. [10] M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, pp. 1322–1333. [11] M. Fredrikson, E. Lantz, S. Jha, S. Lin, D. Page, and T. Ristenpart, “Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing,” in 23rd USENIX Security Symposium (USENIX Security 14), 2014, pp. 17–32. [12] K. Ganju, Q. Wang, W. Yang, C. A. Gunter, and N. Borisov, “Property inference attacks on fully connected neural networks using permutation invariant representations,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018, pp. 619–633. [13] J. Hayes, L. Melis, G. Danezis, and E. De Cristofaro, “Logan: Membership inference attacks against generative models,” in Proceedings on Privacy Enhancing Technologies (PoPETs), vol. 2019, no. 1. De Gruyter, 2019, pp. 133–152. [14] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778. [15] A. G. Howard, M. Zhu, B.
Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017. [16] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708. [17] B. Hui, Y. Yang, H. Yuan, P. Burlina, N. Z. Gong, and Y. Cao, “Practical blind membership inference attack via differential comparisons,” arXiv preprint arXiv:2101.01341, 2021. [18] B. Jayaraman, L. Wang, K. Knipmeyer, Q. Gu, and D. Evans, “Revisiting membership inference under realistic assumptions,” Proceedings on Privacy Enhancing Technologies, vol. 2021, no. 2, 2021. [19] J. Jia, A. Salem, M. Backes, Y. Zhang, and N. Z. Gong, “Memguard: Defending against black-box membership inference attacks via adversarial examples,” in Proceedings of the 2019 ACM SIGSAC conference on computer and communications security, 2019, pp. 259–274. [20] Y. Kaya and T. Dumitras, “When does data augmentation help with membership inference attacks?” in International Conference on Machine Learning.   PMLR, 2021, pp. 5345–5355. [21] I. Kemelmacher-Shlizerman, S. M. Seitz, D. Miller, and E. Brossard, “The megaface benchmark: 1 million faces for recognition at scale,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 4873–4882. [22] K. Kourou, T. P. Exarchos, K. P. Exarchos, M. V. Karamouzis, and D. I. Fotiadis, “Machine learning applications in cancer prognosis and prediction,” Computational and structural biotechnology journal, vol. 13, pp. 8–17, 2015. [23] A. Krizhevsky, G. Hinton et al., “Learning multiple layers of features from tiny images,” 2009. [24] A. Krogh and J. A. Hertz, “A simple weight decay can improve generalization,” in Advances in neural information processing systems, 1992, pp. 950–957. [25] K. Leino and M. 
Fredrikson, “Stolen memories: Leveraging model memorization for calibrated white-box membership inference,” in 29th USENIX Security Symposium (USENIX Security 20), 2020, pp. 1605–1622. [26] J. Li, N. Li, and B. Ribeiro, “Membership inference attacks and defenses in classification models,” in Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy, 2021, pp. 5–16. [27] Z. Li and Y. Zhang, “Membership leakage in label-only exposures,” in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 880–895. [28] H. Liu, J. Jia, W. Qu, and N. Z. Gong, “Encodermi: Membership inference against pre-trained encoders in contrastive learning,” in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 2081–2095. [29] M. Nasr, R. Shokri, and A. Houmansadr, “Machine learning with membership privacy using adversarial regularization,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018, pp. 634–646. [30] ——, “Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning,” in 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019, pp. 739–753. [31] E. W. Ngai, Y. Hu, Y. H. Wong, Y. Chen, and X. Sun, “The application of data mining techniques in financial fraud detection: A classification framework and an academic review of literature,” Decision Support Systems, vol. 50, no. 3, pp. 559–569, 2011. [32] N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, and K. Talwar, “Semi-supervised knowledge transfer for deep learning from private training data,” arXiv preprint arXiv:1610.05755, 2016. [33] N. Papernot, A. Thakurta, S. Song, S. Chien, and Ú. Erlingsson, “Tempered sigmoid activations for deep learning with differential privacy,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 10, 2021, pp. 9312–9321.
[34] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015. [35] M. A. Rahman, T. Rahman, R. Laganière, N. Mohammed, and Y. Wang, “Membership inference attack against differentially private deep learning model,” Trans. Data Priv., vol. 11, no. 1, pp. 61–79, 2018. [36] V. Shejwalkar and A. Houmansadr, “Membership privacy for machine learning models through knowledge transfer,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 11, pp. 9549–9557, May 2021. [37] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 3–18. [38] L. Song and P. Mittal, “Systematic evaluation of privacy risks of machine learning models,” in 30th USENIX Security Symposium (USENIX Security 21), 2021. [39] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014. [40] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826. [41] X. Tang, S. Mahloujifar, L. Song, V. Shejwalkar, M. Nasr, A. Houmansadr, and P. Mittal, “Mitigating membership inference attacks by Self-Distillation through a novel ensemble architecture,” in 31st USENIX Security Symposium (USENIX Security 22), 2022, pp. 1433–1450. [42] F. Tramèr, R. Shokri, A. San Joaquin, H. Le, M. Jagielski, S. Hong, and N. Carlini, “Truth serum: Poisoning machine learning models to reveal their secrets,” in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022, pp. 2779–2792. [43] F.
Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” in 25th USENIX Security Symposium (USENIX Security 16), 2016, pp. 601–618. [44] J.-B. Truong, P. Maini, R. J. Walls, and N. Papernot, “Data-free model extraction,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 4771–4780. [45] L. Xu, M. Skoularidou, A. Cuesta-Infante, and K. Veeramachaneni, “Modeling tabular data using conditional gan,” Advances in Neural Information Processing Systems, vol. 32, 2019. [46] Z. Yang, B. Shao, B. Xuan, E.-C. Chang, and F. Zhang, “Defending model inversion and membership inference attacks via prediction purification,” arXiv preprint arXiv:2005.03915, 2020. [47] J. Ye, A. Maddi, S. K. Murakonda, V. Bindschaedler, and R. Shokri, “Enhanced membership inference attacks against machine learning models,” arXiv preprint arXiv:2111.09679, 2021. [48] S. Yeom, I. Giacomelli, M. Fredrikson, and S. Jha, “Privacy risk in machine learning: Analyzing the connection to overfitting,” in 2018 IEEE 31st Computer Security Foundations Symposium (CSF). IEEE, 2018, pp. 268–282. [49] M. Zhang, Z. Ren, Z. Wang, P. Ren, Z. Chen, P. Hu, and Y. Zhang, “Membership inference attacks against recommender systems,” in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 864–879. [50] X. Zhang, X. Zhou, M. Lin, and J. Sun, “Shufflenet: An extremely efficient convolutional neural network for mobile devices,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6848–6856. [51] Z. Zhang, M. Chen, M. Backes, Y. Shen, and Y. Zhang, “Inference attacks against graph neural networks,” in USENIX Security Symposium (USENIX Security). USENIX, vol. 2022, 2021, p. 13. Appendix A Appendix A-A Details of Defense Setup This section provides details of the defense setup in our evaluation.
For each dataset, we use 10% of the training set as a separate validation set (20% for Location30, as it has a smaller training set), and select the model with the highest validation accuracy. HAMP. The values of the entropy threshold $\gamma$ and the $\alpha$ parameter (for controlling the regularizer) are given in Table III. For model training on the two image datasets, in addition to the requirement of yielding high validation accuracy, we empirically set an additional condition that the model needs to gain at least 1% improvement in validation accuracy in order to be considered the best model. This prevents the model from gaining a marginal improvement in validation accuracy at the cost of significant overfitting on training samples, which could result in a large generalization gap. Adversarial regularization [29]: The $\alpha$ parameter balances classification accuracy and privacy protection. We set $\alpha$ to 3 for Purchase100 [29], 10 for Texas100 [38], 6 for CIFAR100 and CIFAR10 [29], and 10 for Location30. SELENA [41]: We follow the original authors in setting K=25 and L=10, where K is the total number of sub-models, and L means that for a given training sample there are L sub-models whose training sets do not contain that sample. For these L models, the given training sample can be viewed as an instance in their “reference set” for distillation. Label Smoothing (LS) [40]: We follow [20] to train LS with different smoothing intensities and select the model with the highest accuracy. Purchase100 is trained with a smoothing intensity of 0.03, Texas100 with 0.09 and CIFAR100 with 0.01. DP-SGD [2]: We use PyTorch Opacus [1] to train the DP-SGD models. We set the microbatch size to 1. Purchase100 is trained with a noise_multiplier of 1.7, a norm clipping bound of 1.0 and 200 epochs. Texas100 is trained with a noise_multiplier of 1.44, a norm clipping bound of 1.0 and 200 epochs.
Location30 is trained with a noise_multiplier of 2.91, a norm clipping bound of 3.0 and 50 epochs. A-B Measuring Prediction Entropy by HAMP As mentioned in Section III-B, HAMP reduces privacy leakage from the output scores by forcing the model to predict training samples with higher entropy (i.e., less confident predictions on training samples). We validate this by measuring the prediction entropy produced by the models before and after applying HAMP, and report the results in Table IV. On the undefended models, the member samples are predicted with much lower entropy than the non-members, and the entropy difference between members and non-members is 0.125$\sim$0.343. Such a large difference indicates differential behavior on members and non-members that can be distinguished by the MIAs. In contrast, the models trained with HAMP predict both members and non-members with much higher prediction entropy (an increase of 4.1x$\sim$19.8x), and the average difference between members and non-members is reduced from 0.125$\sim$0.337 (on the undefended models) to 0.006$\sim$0.058, which is 6.5x$\sim$32.7x smaller. This demonstrates how HAMP forces the model to behave similarly on members and non-members and thereby reduces privacy leakage. A-C Evaluating Label Smoothing with Different Smoothing Intensities In Section IV-G, we compared HAMP with LS using the smoothing intensity that achieves the highest accuracy, and found that HAMP achieves significantly lower MIA risk than LS. In this section, we evaluate LS with other intensities that achieve a similar accuracy improvement. On Purchase100, we select a smoothing intensity of 0.03, which yields the highest accuracy improvement of 4.75%, and we consider all seven other intensities that achieve a comparable accuracy improvement (3.8%$\sim$4.4%). Fig. 8 presents the results, which show that LS models trained with different intensities still exhibit very high MIA risk.
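The entropy comparison in Table IV can be reproduced from raw output scores with a standard Shannon-entropy computation; a minimal sketch in Python (the function names are ours, not part of the HAMP implementation):

```python
import math

def prediction_entropy(scores):
    """Shannon entropy (in nats) of one softmax output vector."""
    return -sum(p * math.log(p) for p in scores if p > 0.0)

def average_entropy(score_matrix):
    """Mean prediction entropy over a set of samples (members or non-members)."""
    return sum(prediction_entropy(s) for s in score_matrix) / len(score_matrix)

def entropy_gap(member_scores, nonmember_scores):
    """Member/non-member entropy difference, as reported in Table IV."""
    return abs(average_entropy(member_scores) - average_entropy(nonmember_scores))
```

A confident one-hot prediction has entropy 0, while a uniform prediction over K classes has the maximum entropy log K; a large `entropy_gap` is exactly the differential behavior that score-based MIAs exploit.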
For example, the attack TPR @0.1% FPR of LS is 13.7x$\sim$15.5x higher than that of HAMP, and the attack TNR is 8.2x$\sim$12.4x higher than that of HAMP. A-D Comparison with Early Stopping Early stopping produces models trained with fewer epochs to prevent overfitting. In our evaluation, we benchmark the classification accuracy and attack TPR/TNR of models trained with different numbers of epochs before convergence, and compare them with HAMP. The results are shown in Fig. 7. When the model is trained with only a few epochs in early stopping, it achieves privacy protection comparable to HAMP, but with a large accuracy drop. For example, on Purchase100, the model trained with 15 epochs yields an attack TPR of 0.67% and an attack TNR of 1.01%, which are slightly higher than the 0.4% and 0.44% of HAMP. However, its prediction accuracy is only $68.6\%$, which is much lower than the $81.15\%$ achieved by HAMP. The model’s accuracy improves with more training epochs, but so do the attack TPR and TNR. When the models derived by early stopping converge, there is a substantial gap between the attack TPR and TNR of HAMP and of early stopping (black dashed line vs. red solid line in Fig. 7). To summarize, under a similar MIA risk for members (i.e., similar attack TPR), HAMP achieves on average 12.5% higher accuracy than early stopping, and 28.6% higher accuracy under a similar attack TNR. A-E Varying the Parameters of HAMP This section evaluates the performance of HAMP under different parameters, $\gamma\in(0.1,0.9),\alpha\in(0.0001,0.5)$. We use Purchase100 and present the results in Table V. Entropy threshold. A higher entropy threshold assigns a lower probability to the ground-truth class in the soft labels and forces the model to become less confident in predicting training samples. For instance, for an entropy threshold of 0.9, the probability of the ground-truth class is only 20%, while for a threshold of 0.1 the probability is 94%.
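The quoted correspondence between the threshold $\gamma$ and the ground-truth probability is consistent with reading $\gamma$ as the entropy of the modified soft label, normalized by the maximum entropy log K, with the residual probability spread uniformly over the other K−1 classes. A small sketch under that assumption (helper names are ours) recovers both quoted figures for K=100 classes:

```python
import math

def normalized_label_entropy(p_true, num_classes):
    """Normalized entropy of a soft label putting p_true on the ground-truth
    class and spreading the remaining mass uniformly over the other classes."""
    rest = (1.0 - p_true) / (num_classes - 1)
    h = -p_true * math.log(p_true) - (num_classes - 1) * rest * math.log(rest)
    return h / math.log(num_classes)  # 1.0 for uniform, 0.0 for one-hot

def ground_truth_prob(gamma, num_classes, tol=1e-12):
    """Invert normalized_label_entropy by bisection on p_true in (1/K, 1)."""
    lo, hi = 1.0 / num_classes + 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if normalized_label_entropy(mid, num_classes) > gamma:
            lo = mid  # entropy still too high: put more mass on the true class
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For Purchase100 (K=100) this gives approximately 0.21 for $\gamma=0.9$ and approximately 0.946 for $\gamma=0.1$, close to the quoted 20% and 94%.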
Table V shows that a higher entropy threshold leads to a model with lower classification accuracy and also lower MIA risk (in terms of both attack TPR and attack TNR). The highest entropy threshold, 0.9, produces the model with the lowest test accuracy of 66.7%, the lowest attack TPR of 0.38% and the lowest attack TNR of 0.26%. Strength of regularization. Stronger entropy-based regularization forces the model to produce outputs with higher uncertainty (measured by the prediction entropy), and is useful in preventing the model from being overconfident in predicting training samples. The model exhibits strong resistance against MIAs when $\alpha$ is large (e.g., 0.05). On the other hand, strong regularization results in a model with low classification accuracy. This is because, when $\alpha$ is large, the overall loss in objective (7) is dominated by the second, regularization term, while the first loss term, which improves classification accuracy, is not optimized sufficiently. A-F Overhead Evaluation Training overhead. We compare the training overhead of HAMP with that of AdvReg, SELENA, LS and DMP. We do not compare training overhead with MemGuard, as it is a post-processing technique that modifies the prediction vector during inference; instead, we compare with its inference overhead. For Purchase100, Texas100, CIFAR100, CIFAR10 and Location30, the undefended models and the sub-models in SELENA are trained with 100, 20, 100, 100 and 50 epochs, respectively; for knowledge distillation in DMP and SELENA, we use 200, 100, 200, 200 and 100 epochs. LS and HAMP are trained with 200, 100, 200, 200 and 100 epochs. AdvReg is trained with 50, 20, 200, 200 and 50 epochs, respectively. All models converged after training. The overhead is measured on a single NVIDIA V100-SXM2 GPU with 16 GB of memory. Each measurement is repeated 5 times and we report the average overhead. The training overhead of each defense is shown in Table VI.
All defense techniques incur higher training cost than the undefended models (as expected); HAMP and LS incur the lowest training overhead among all the defenses (HAMP is slightly higher than LS). AdvReg’s overhead is 5.4x$\sim$11.4x that of HAMP, DMP’s overhead is 3.2x$\sim$5.6x that of HAMP, and SELENA’s overhead is 4x$\sim$8.8x that of HAMP. Even though the latency of training multiple sub-models in SELENA can be hidden by parallel training, its overhead is still 23%$\sim$66% higher than that of HAMP. Inference overhead. We compare HAMP with MemGuard on inference overhead (the other defenses have no post-processing procedure, so their inference overheads are the same as the undefended model’s). For HAMP, the generation of random samples is independent of the runtime inference, so we first generate the random samples and obtain their output scores, and measure only the overhead of performing the output modification (i.e., Line 13 in Algorithm 1). We measure the inference overhead by performing inference on 500 random member and 500 non-member samples (1,000 samples in total). Table VII shows the average inference overhead per sample. The overhead incurred by MemGuard is 25x$\sim$1048x that incurred by HAMP. This is because MemGuard requires solving a complex optimization problem to obfuscate the prediction scores, while HAMP only performs output modification on the prediction scores (Line 13 in Algorithm 1), which does not require solving any optimization problem. A-G Understanding the High Attack Performance of the NN-based Attack [30] Fig. 3 in our earlier evaluation shows that the NN-based attack [30] achieves the highest TPR at low FPR on the undefended models in many cases. We explain the reason here. The NN attack trains an attack inference model on the known member and non-member samples, which outputs large values on members and small ones on non-members. We first plot in Fig.
9 the output distribution of the attack inference model, to help understand how different thresholds affect the attack TPR and FPR. The default NN attack uses a threshold of 0.5 and predicts any sample with an output $>$0.5 as a member. As shown in Fig. 9, in order to maintain a low FPR, the attack has to switch to a larger threshold (over 0.99 in our experiment). In this case, a low FPR can be achieved because most non-members are predicted with low values (the left region in Fig. 9). Likewise, the attack achieves a high TPR because many members are predicted with large values (the rightmost region in Fig. 9) and are correctly recognized as members. A-H Evaluation on Different Network Architectures This section reports additional evaluation of models trained with different network architectures (using CIFAR10), including DenseNet-12 [16], ResNet-18 [14], MobileNet [15] and ShuffleNet [50]. The results are shown in Fig. 10. We find that models trained with different architectures exhibit disparate degrees of MIA risk, with the attack TPR @0.1% FPR being 6.47%$\sim$30%, and the attack TNR @0.1% FNR being 10.15%$\sim$31.12%. This gives an average attack TPR of 16.29% and an average attack TNR of 18.75%. HAMP consistently reduces the MIA risk, with the attack TPR on HAMP being 0.52%$\sim$0.92% and the attack TNR 0.31%$\sim$0.77%. On average, HAMP reduces the attack TPR by 95.6% (from 16.29% to 0.72%) and the attack TNR by 97.5% (from 18.75% to 0.47%). Further, HAMP achieves such strong privacy protection with only a minor accuracy drop of 0.59% (at most 1.28%). A-I Detailed Attack AUC Comparison In Section IV-A, we report the average attack AUC for each defense in Fig. 4; we provide the detailed results on each dataset in Fig. 11. A-J Full ROC Curves The full ROC curves from the evaluation in Section IV can be found in Fig. 13. A-K Detailed Results for Each Attack In Section IV, we reported the highest attack results among all evaluated attacks.
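Calibrating any score-based attack at a fixed low FPR follows the same recipe: choose the threshold from the non-member score distribution, then measure the member TPR. A self-contained sketch (ours, not the authors' code):

```python
def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.001):
    """Choose the largest threshold admitting at most `fpr` false positives,
    then report (TPR, achieved FPR). Scores are attack outputs; higher means
    more member-like."""
    ordered = sorted(nonmember_scores)
    allowed = int(fpr * len(ordered))            # tolerated false positives
    threshold = ordered[len(ordered) - allowed - 1]
    fp = sum(1 for s in nonmember_scores if s > threshold)
    tp = sum(1 for s in member_scores if s > threshold)
    return tp / len(member_scores), fp / len(nonmember_scores)
```

On a well-separated attack this data-driven threshold ends up far above the default 0.5, exactly the behavior visible in Fig. 9.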
We now provide the detailed results for each attack for completeness (the correctness-based attack is omitted, as it does not work when calibrated at a low FPR or FNR); they can be found in Table VIII. Our results also show that label-only attacks are unsuccessful in inferring members and non-members in the low-FPR and low-FNR regimes; a similar issue was found for many score-based attacks by Carlini et al. [3]. We use the boundary-based attack [8] as an example to illustrate. We first plot the perturbation distances on the members and non-members in Fig. 12. As shown, though perturbing the training members requires larger perturbations than perturbing the testing samples, the distances are not separated well enough to be calibrated for inferring members with few false positives (hence the low 0.12% TPR @0.1% FPR) or for inferring non-members with few false negatives (hence the low 0.1% TNR @0.1% FNR).
A comparison between Avila-Gouëzel-Yoccoz norm and Teichmüller norm Weixu Su and Shenxing Zhang Weixu Su: School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China [email protected] Shenxing Zhang: School of Mathematical Sciences, Fudan University, Shanghai 200433, China [email protected] Abstract. We give a comparison between the Avila-Gouëzel-Yoccoz norm and the Teichmüller norm on the principal stratum of holomorphic quadratic differentials. 1. Introduction Let $X$ be a compact Riemann surface of genus $g$. A holomorphic quadratic differential $q$ on $X$ is a tensor given locally by an expression $q=q(z)dz^{2}$, where $z$ is a conformal coordinate on $X$ and $q(z)$ is holomorphic. Such a (nonzero) quadratic differential $q$ defines a flat metric $|q|^{1/2}$ on $X$. This metric has conical singularities at the zeros of $q$. Its area is defined by $$\|q\|=\int_{X}|q(z)||dz|^{2}.$$ Fix $g\geq 2$ and let $\mathcal{P}_{g}$ be the principal stratum of the moduli space of quadratic differentials, consisting of isomorphism classes of holomorphic quadratic differentials $(X,q)$ with $4g-4$ distinct simple zeros. There is a Finsler metric on $\mathcal{P}_{g}$, called the AGY metric, which was introduced by Avila-Gouëzel-Yoccoz [2, §2.2.2] on each stratum of Abelian differentials. This norm plays an important role in the study of the Teichmüller flow. See [1, 2, 4]. Let $\mathcal{M}_{g}$ be the moduli space of Riemann surfaces of genus $g$. Let $\pi:\mathcal{P}_{g}\to\mathcal{M}_{g}$ be the natural projection, defined by $\pi(X,q)=X$. In this note, we consider the derivative of $\pi$ and compare the AGY norm with the Teichmüller norm. For each $(X,q)\in\mathcal{P}_{g}$, there is a canonical double cover $\rho:\hat{X}\to X$, ramified at the odd zeros of $q$, such that $\rho^{*}q$ is the square of an Abelian differential $\omega$ on $\hat{X}$. See [3] or [9, §2] for details.
The Abelian differential $\omega$ is a $-1$ eigenvector for the holomorphic involution $\tau:\hat{X}\to\hat{X}$ that permutes the sheets of the double cover, that is, $$\tau^{*}\omega=-\omega.$$ We can identify the tangent space of $\mathcal{P}_{g}$ at $(X,q)$ with $H^{1}_{-1}(\hat{X},\mathbb{C})$, the $-1$ eigenspace for the action of $\tau$ on the cohomology $H^{1}(\hat{X},\mathbb{C})$. Every element of $H^{1}(\hat{X},\mathbb{C})$ can be represented uniquely by a harmonic one-form. Consequently, there is a natural decomposition of $H^{1}_{-1}(\hat{X},\mathbb{C})$ into $H_{-1}^{1,0}(\hat{X})\oplus H_{-1}^{0,1}(\hat{X}).$ Note that the kernel of $D\pi$ is $H_{-1}^{1,0}(\hat{X})$. See Theorem 2.1 below. We consider $\eta\in H_{-1}^{0,1}(\hat{X})$ and compare the AGY norm of $\eta$ with the Teichmüller norm of $D\pi(\eta)$. The main result is: Theorem 1.1. Let $(X,q)\in\mathcal{P}_{g}$ with area $\|q\|=1$. Let $\rho:\hat{X}\to X$ be the canonical double cover such that $\rho^{*}q=\omega^{2}$. Then for any $\eta\in H_{-1}^{0,1}(\hat{X})$, we have (1) $$\frac{r}{\sqrt{2}}\ {\|\eta\|_{\mathrm{AGY}}}\leq{\|D\pi(\eta)\|_{\mathrm{Teich}}}\leq\frac{8}{\sqrt{\pi}r}\|\eta\|_{\mathrm{AGY}},$$ where $2r$ is the length of the shortest saddle connection on $(\hat{X},\omega)$. Remark 1.2. Note that the area of $\omega$ is $2$. Recently, Kahn-Wright [6] derived a comparison between the Hodge norm (another important norm on $\mathcal{P}_{g}$) and the Teichmüller norm. Our research is motivated by their work. The paper has the following structure. In §2, we present some basic properties of quadratic differentials. The upper bound in (1) is given in §3, where we use the Delaunay triangulation of the quadratic differential to construct quasiconformal maps with explicit Beltrami differentials. In §4, we give an upper bound of the AGY norm in terms of the Hodge norm, and then derive the lower bound in (1) from Kahn-Wright [6, Theorem 1.4]. 2. Preliminaries 2.1.
The moduli space of quadratic differentials Let $g\geq 2$. We denote by $\mathcal{M}_{g}$ the moduli space of compact Riemann surfaces of genus $g$. For $X\in\mathcal{M}_{g}$, the cotangent space of $\mathcal{M}_{g}$ at $X$ is canonically identified with the space $Q(X)$ of holomorphic quadratic differentials on $X$. We define the $L^{1}$-norm on $Q(X)$ by $$\|q\|=\int_{X}|q|.$$ A tangent vector of $\mathcal{M}_{g}$ at $X$ is represented by a Beltrami differential $\mu$. There is a natural pairing between quadratic differentials and Beltrami differentials given by $$\langle\mu,q\rangle=\int_{X}\mu q.$$ The Teichmüller norm of $\mu$ is defined by $$\|\mu\|_{\mathrm{Teich}}=\sup_{\|q\|=1}\operatorname{Re}\ \langle\mu,q\rangle.$$ This gives the infinitesimal form of the Teichmüller metric on $\mathcal{M}_{g}$. Let $\mathcal{Q}_{g}$ be the moduli space of quadratic differentials, consisting of pairs $(X,q)$ where $X$ is a compact Riemann surface of genus $g$ and $q$ is a holomorphic quadratic differential on $X$. The moduli space $\mathcal{Q}_{g}$ has a stratified structure: given an integral vector $\kappa=\left(\kappa_{1},\cdots,\kappa_{n}\right)$ with $\sum\kappa_{i}=4g-4$, we let $\mathcal{Q}_{g}(\kappa)\subset\mathcal{Q}_{g}$ be the set of quadratic differentials $(X,q)$ such that $q$ has $n$ zeros of orders $\kappa_{1},\cdots,\kappa_{n}$. In this paper, our study is mainly restricted to the principal stratum, consisting of those quadratic differentials all of whose zeros are simple. We denote the principal stratum by $\mathcal{P}_{g}$. This stratum is both open and dense in $\mathcal{Q}_{g}$. 2.2. Canonical double cover Let $\mathcal{Q}_{g}(\kappa)$ be a stratum of quadratic differentials. Given $(X,q)\in\mathcal{Q}_{g}(\kappa)$, let $\rho:\hat{X}\to X$ be the canonical double cover such that the pull-back $\rho^{*}q$ becomes the square of an Abelian differential $\omega$ on $\hat{X}$.
Let $\tau:\hat{X}\to\hat{X}$ be the involution that permutes the sheets of the double cover. By construction, $\tau^{*}\omega=-\omega$. Let $\Sigma$ be the set of zeros of $\omega$. Denote by $H^{1}_{-1}(\hat{X},\Sigma,\mathbb{C})$ the $-1$ eigenspace for the action of $\tau$ on the relative cohomology group $H^{1}(\hat{X},\Sigma,\mathbb{C})$. Note that the relative cohomology class of $\omega$ is an element of $H^{1}_{-1}(\hat{X},\Sigma,\mathbb{C})$. A neighborhood of $\omega$ in $H^{1}_{-1}(\hat{X},\Sigma,\mathbb{C})$ gives a local chart of $q$ in the stratum, via the period mapping. In the following, we shall identify the tangent space at $(X,q)$ with the cohomology $H^{1}_{-1}(\hat{X},\Sigma,\mathbb{C})$. If $(X,q)\in\mathcal{P}_{g}$, then $q$ has no zeros of even order. In this case, since $\Sigma$ is the set of fixed points of $\tau$, we have $$H^{1}_{-1}(\hat{X},\Sigma,\mathbb{C})\cong H^{1}_{-1}(\hat{X},\mathbb{C}).$$ Thus each element of $H^{1}_{-1}(\hat{X},\mathbb{C})$ can be uniquely represented by a harmonic $1$-form. The following result describes the tangent map of $\pi:\mathcal{P}_{g}\to\mathcal{M}_{g}$ in terms of the period coordinates. It was proved by Kahn-Wright [6, Corollary 1.2]. Theorem 2.1. Consider the projection $\pi:\mathcal{P}_{g}\to\mathcal{M}_{g}$. Let $(X,q)\in\mathcal{P}_{g}$ and let $\eta$ be a harmonic $1$-form on $\hat{X}$ that represents an element of $H^{1}_{-1}(\hat{X},\mathbb{C})$. Then for any $\phi\in Q(X)$, $$\langle D\pi(\eta),\phi\rangle=\frac{1}{2}\int_{\hat{X}}\rho^{*}(\phi)\frac{\eta^{0,1}}{\omega},$$ where $\eta^{0,1}$ is the anti-holomorphic part of $\eta$. 2.3. The AGY norm The AGY norm was defined by Avila-Gouëzel-Yoccoz [2] on any stratum of Abelian differentials. With the notations of §2.2, we consider the Abelian differential $\omega$ as an element of $H^{1}(\hat{X},\Sigma,\mathbb{C})$.
A saddle connection of $\omega$ is a geodesic segment for the flat metric defined by $|\omega|$ joining two zeros of $\omega$ and not passing through any zero in its interior. Each saddle connection $\gamma$ gives rise to a relative homology class $[\gamma]\in H_{1}(\hat{X},\Sigma,\mathbb{Z})$. Denote by $\left\{\gamma_{j}\right\}$ the set of saddle connections of $\omega$. Then for any $[\eta]\in H^{1}(\hat{X},\Sigma,\mathbb{C})$, its AGY norm is defined by $$\|\eta\|_{\mathrm{AGY}}=\sup_{j}\frac{\left|\int_{\gamma_{j}}\eta\right|}{\left|\int_{\gamma_{j}}\omega\right|},$$ where the supremum is taken over all saddle connections. Avila-Gouëzel-Yoccoz [2] showed that the AGY norm is continuous and induces a complete metric on the stratum. 3. The upper bound In this section, we give an upper bound of $\|D\pi(\eta)\|_{\mathrm{Teich}}$ in terms of $\|\eta\|_{\mathrm{AGY}}$, for any $\eta\in H^{1}_{-1}(\hat{X},\mathbb{C})$. The idea is to triangulate the surface and compute the Beltrami differentials of maps that are affine on each triangle. We remark that the proof applies to any other stratum of quadratic differentials or Abelian differentials. 3.1. Delaunay triangulation Given a quadratic differential $(X,q)$, there is an associated flat metric (with conical singularities) on $X$, defined by $|q|^{1/2}$. Denote by $\Sigma$ the set of zeros of $q$. For any $x\in X$, let $d(x,\Sigma)$ be the minimal $|q|^{1/2}$-distance from $x$ to $\Sigma$. The next result was proved by Masur-Smillie [8]. See also Farb-Masur [5]. Proposition 3.1. Let $(X,q)$ be a holomorphic quadratic differential of area $\|q\|\leq 1$. There is a triangulation $\Delta$ of $X$ with the following properties: (1) The vertices of $\Delta$ lie in the zero set of $q$. (2) The edges of $\Delta$ are saddle connections of $q$. (3) Each triangle is inscribed in a circle of radius $d(x,\Sigma)$ for some $x\in X$. The above construction is called a Delaunay triangulation of $q$.
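Given the periods of $\eta$ and $\omega$ over a list of saddle connections, the defining supremum is a one-line computation; a toy numerical sketch (note that a finite list of saddle connections only yields a lower bound for the full supremum):

```python
def agy_ratio(eta_periods, omega_periods):
    """max over the given saddle connections of |period of eta| / |period of omega|;
    with a finite list this is a lower bound for the AGY norm."""
    return max(abs(e) / abs(w) for e, w in zip(eta_periods, omega_periods))
```

The periods are complex numbers; the ratio is unchanged under simultaneous rotation of all periods, reflecting the $\mathrm{SO}(2)$-equivariance of the flat geometry.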
Let $s=\sqrt{\frac{2}{\pi}}$, and let $B_{s}$ be the set of points $x\in X$ with $d(x,\Sigma)\leq s$. By the proof of [8, Theorem 5.3], the complement of $B_{s}$ is contained in a union of disjoint maximal flat cylinders whose circumference is less than their height. 3.2. The proof of the upper bound Let $\eta\in H^{1}_{-1}(\hat{X},\mathbb{C})$. Denote by $(\hat{X}_{t},\omega_{t})$ the family of Abelian differentials corresponding to the cohomology classes $\omega+t\eta\in H^{1}_{-1}(\hat{X},\mathbb{C})$, for sufficiently small $t>0$. Let $\Delta$ be a Delaunay triangulation of $(\hat{X},\omega)$. By the construction, the vertices of $\Delta$ are the zeros of $\omega$, and the edges of $\Delta$ are saddle connections of $\omega$. For each $t$, we can straighten $\Delta$ to a triangulation of $\hat{X}_{t}$, denoted by $\Delta_{t}$, whose edges are saddle connections of $\omega_{t}$. The next step is to construct quasiconformal mappings $f_{t}$ from $\hat{X}$ to $\hat{X}_{t}$ that are affine on each triangle. Denote the Beltrami differential of $f_{t}$ by $\mu_{t}$. Then $$D\pi(\eta)\cong\frac{d\mu_{t}}{dt}\Big|_{t=0}.$$ Proposition 3.2. Let $2r$ be the length of the shortest saddle connection on $(\hat{X},\omega)$. Then $$\left\|D\pi(\eta)\right\|_{\mathrm{Teich}}\leq\frac{8}{\sqrt{\pi}r}\|\eta\|_{\mathrm{AGY}}.$$ Proof. Denote $$\mu=\frac{d\mu_{t}}{dt}\Big|_{t=0}.$$ Since $\|\mu\|_{\mathrm{Teich}}\leq\|\mu\|_{\infty}$, it suffices to give an upper bound for $\|\mu\|_{\infty}$. Let $T=\triangle OAB$ be any triangle of $\Delta$, where $O,A,B$ denote the vertices. For simplicity, we consider $T$ as a triangle in the complex plane and put $O=0$, $A=a>0$ and $B=b\in\mathbb{C}$. By definition, $$a=\int_{\gamma}\omega,\quad b=\int_{\gamma^{\prime}}\omega,$$ where $\gamma$ and $\gamma^{\prime}$ denote the saddle connections connecting $O$ to $A$ and $O$ to $B$, respectively.
For each $t$ sufficiently small, the corresponding triangle in $\hat{X}_{t}$ has vertices $0$, $a+t\alpha$ and $b+t\beta$, where $$\alpha=\int_{\gamma}\eta,\quad\beta=\int_{\gamma^{\prime}}\eta.$$ Denote the associated affine mapping between the triangles by $$f_{t}(z)=Rz+S\bar{z}.$$ Then we have $$Ra+S\bar{a}=a+t\alpha,$$ $$Rb+S\bar{b}=b+t\beta.$$ A simple computation shows that the Beltrami coefficient $\mu_{t}$ is equal to $$\frac{S}{R}=t\frac{\frac{\alpha}{a}-\frac{\beta}{b}}{1-\frac{\bar{b}}{b}}+o(t).$$ Now we give an upper bound of $$|\mu(z)|=\left|\frac{\frac{\alpha}{a}-\frac{\beta}{b}}{1-\frac{\bar{b}}{b}}\right|.$$ Let $\theta=\arg b$. Then $$\Big|1-\frac{\bar{b}}{b}\Big|=2|\sin\theta|.$$ To give an upper bound of the quasiconformal dilatation, we consider $\sin\theta$ in two cases. Let $s_{0}=\sqrt{\frac{4}{\pi}}$. We remark that the area of $(\hat{X},|\omega|)$ is $2$. Note that any edge of $T$ either has length $\leq 2s_{0}$ or crosses a maximal flat cylinder $C$ whose height $h$ is greater than its circumference $c$. Assume first that all edges of $T$ have length $\leq 2s_{0}$. In this case, the triangle $T$ is inscribed in a circle of radius $d(x,\Sigma)\leq 2s_{0}.$ Since $\sin\theta=|a-b|/(2d(x,\Sigma))$ and $|a-b|\geq 2r$, we have $$|\sin\theta|\geq\frac{r}{d(x,\Sigma)}\geq\frac{\sqrt{\pi}r}{4}.$$ Thus we have $$\left|\frac{\frac{\alpha}{a}-\frac{\beta}{b}}{1-\frac{\bar{b}}{b}}\right|\leq\frac{8\max\{|\frac{\alpha}{a}|,|\frac{\beta}{b}|\}}{\sqrt{\pi}r}\leq\frac{8}{\sqrt{\pi}r}\|\eta\|_{\mathrm{AGY}}.$$ The remaining case is that some edge of $T$ crosses a maximal flat cylinder $C$ whose height $h$ is greater than its circumference $c$. In this case, some other edge of $T$ also crosses $C$. Thus the triangle $T$ looks like an isosceles triangle with a short base. As a result, we may choose the angle $\theta$ such that $$\frac{\pi}{4}\leq\theta\leq\frac{\pi}{2}.$$ Then we have $\sin\theta\geq\frac{\sqrt{2}}{2}$.
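The first-order formula for $S/R$ can be checked numerically: solve the two linear equations exactly and compare with the linearization, which should agree to order $t^{2}$. A small sketch with complex arithmetic (function names are ours):

```python
def beltrami_exact(a, b, alpha, beta, t):
    """Beltrami coefficient S/R of the affine map z -> R z + S conj(z) sending
    the triangle (0, a, b) to (0, a + t*alpha, b + t*beta)."""
    det = a * b.conjugate() - a.conjugate() * b  # nonzero for a nondegenerate triangle
    R = ((a + t * alpha) * b.conjugate() - a.conjugate() * (b + t * beta)) / det
    S = (a * (b + t * beta) - (a + t * alpha) * b) / det
    return S / R

def beltrami_linear(a, b, alpha, beta, t):
    """First-order term t * (alpha/a - beta/b) / (1 - conj(b)/b)."""
    return t * (alpha / a - beta / b) / (1 - b.conjugate() / b)
```

For small $t$ the difference between the two values is of order $t^{2}$, confirming the $o(t)$ remainder in the displayed formula.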
It follows that $$\left|\frac{\frac{\alpha}{a}-\frac{\beta}{b}}{1-\frac{\bar{b}}{b}}\right|\leq\frac{2\max\{|\frac{\alpha}{a}|,|\frac{\beta}{b}|\}}{\sqrt{2}}\leq\sqrt{2}\|\eta\|_{\mathrm{AGY}}.$$ Note that $\pi r^{2}\leq 1$, hence $\sqrt{2}\leq\frac{8}{\sqrt{\pi}r}$ and the bound of the first case holds in both cases. ∎ Remark 3.3. It is known that for any quadratic differential $q$, in the direction of the Teichmüller flow, the AGY norm is less than the Teichmüller norm (see [2, Page 152]). As we have shown in the proof of Proposition 3.2, the order $\frac{1}{r}$ appears when the triangle is almost degenerate. If some angle of the triangle is neither close to $0$ nor to $\pi$, then the Beltrami coefficient is bounded above by $\|\eta\|_{\mathrm{AGY}}$ up to a multiplicative constant. 3.3. The order $\frac{1}{r}$ in Proposition 3.2 is sharp. We recall the following construction of Kahn-Wright [6, §3.3]. Let $\epsilon>0$ be a small constant. We take a square torus of side length $1$, make a horizontal slit of length $\epsilon$, and glue in a small cylinder with circumference $1$ and height $\epsilon$. The construction defines an Abelian differential $(X_{\epsilon},\omega_{\epsilon})$ with one double zero. Let ${\gamma}_{\epsilon}$ be the core curve of the small cylinder on $(X_{\epsilon},{\omega}_{\epsilon})$. Denote by ${\gamma}_{\epsilon}^{*}$ the harmonic differential dual to $\gamma_{\epsilon}$. Remark 3.4. We can write $\gamma_{\epsilon}^{*}=\beta_{\epsilon}+\bar{\beta}_{\epsilon}$, where $\beta_{\epsilon}$ is an Abelian differential. It is known that the Hodge norm of $\beta_{\epsilon}$ is bounded above and below independently of $\epsilon$. As shown in Kahn-Wright [6, §3.3], $$\left\|D\pi({\gamma}_{\epsilon}^{*})\right\|_{\mathrm{Teich}}\geq\frac{C}{\epsilon}$$ for some constant $C$. The path $\omega_{\epsilon}+t\epsilon\gamma_{\epsilon}^{*}$ corresponds to a family of translation surfaces, obtained by twisting along the core curve of the small cylinder.
When $t=1$, $\omega_{\epsilon}+\epsilon\gamma_{\epsilon}^{*}$ is a Dehn twist of $\omega_{\epsilon}$. The length of the shortest saddle connection of $\omega_{\epsilon}$ is equal to $\epsilon$. If $\alpha_{0}$ is the saddle connection contained in the small cylinder and crossing $\gamma_{\epsilon}$, then $$\frac{\left|\int_{\alpha_{0}}\gamma_{\epsilon}^{*}\right|}{\left|\int_{\alpha_{0}}\omega_{\epsilon}\right|}=\frac{\epsilon}{\epsilon}=1.$$ For any other saddle connection $\alpha$, crossing the small cylinder $n$ times, we have $$\frac{\left|\int_{\alpha}\gamma_{\epsilon}^{*}\right|}{\left|\int_{\alpha}\omega_{\epsilon}\right|}\leq\frac{n\epsilon}{n\epsilon}=1.$$ As a result, $\|{\gamma}_{\epsilon}^{*}\|_{\mathrm{AGY}}=1$. In conclusion, we have $$\left\|D\pi({\gamma}_{\epsilon}^{*})\right\|_{\mathrm{Teich}}\geq C\frac{\|{\gamma}_{\epsilon}^{*}\|_{\mathrm{AGY}}}{\epsilon},$$ where $\epsilon$ is the length of the shortest saddle connection of $\omega_{\epsilon}$. 4. The lower bound In this section, we consider tangent vectors to $\mathcal{P}_{g}$ of the form $\eta=\bar{\beta}$, where $\beta\in H_{-1}^{1,0}(\hat{X})$. By Theorem 2.1, the Beltrami differential $\mu=\bar{\beta}/\omega$ can be considered as the tangent vector $D\pi(\eta)$ via the pairing with holomorphic quadratic differentials: $$\int_{\hat{X}}\rho^{*}(\phi)\frac{\bar{\beta}}{\omega}.$$ The Hodge norm of $\beta\in H_{-1}^{1,0}(\hat{X})$ is defined by $$\|\beta\|_{\mathrm{Hodge}}=\sqrt{\int_{\hat{X}}|\beta|^{2}}.$$ We have (see [6, Theorem 3.1]): Theorem 4.1. For any $\eta=\bar{\beta}\in H_{-1}^{0,1}(\hat{X})$, we have $$\|D\pi(\eta)\|_{\mathrm{Teich}}\geq\frac{\|\beta\|_{\mathrm{Hodge}}}{\|\omega\|_{\mathrm{Hodge}}}\ .$$ Proof of the lower bound in Theorem 1.1.
Note that $\|q\|=1$ implies $\|\omega\|_{\mathrm{Hodge}}=\sqrt{2}.$ Applying Theorem 4.1 and the next proposition, we have (2) $$\|D\pi(\eta)\|_{\mathrm{Teich}}\geq\frac{\|\beta\|_{\mathrm{Hodge}}}{\sqrt{2}}\geq\frac{r}{\sqrt{2}}\|\eta\|_{\mathrm{AGY}}.$$ ∎ Proposition 4.2. Let $2r$ be the length of the shortest saddle connection. For any saddle connection $\gamma$ of $\omega$ and any $\beta\in H_{-1}^{1,0}(\hat{X})$, we have $$\frac{\left|\int_{\gamma}\beta\right|}{\left|\int_{\gamma}\omega\right|}\leq\frac{\|\beta\|_{\mathrm{Hodge}}}{r}.$$ Proof. Endow the surface with the metric $|\omega|$. Let $a,b$ be the zeros of $\omega$ that are connected by $\gamma$. Let $D_{a}$ and $D_{b}$ be the disks of radius $r$ around $a$ and $b$. We separate $\gamma$ into two parts by setting $\gamma^{\prime}=\gamma\cap\left(D_{a}\cup D_{b}\right)$ and $\gamma^{\prime\prime}=\gamma\setminus\gamma^{\prime}$. The next inequality is a corollary of [6, Lemma 3.2]: (3) $$\left|\int_{\gamma^{\prime}}\beta\right|\leq 2\|\beta\|_{\mathrm{Hodge}}.$$ On the other hand, we have $$\left|\int_{\gamma^{\prime\prime}}\beta\right|\leq\int_{\gamma^{\prime\prime}}\left|\frac{\beta}{\omega}\right|\left|\omega\right|.$$ Since $\gamma^{\prime\prime}$ is contained in a saddle connection, we have $\left|\int_{\gamma^{\prime\prime}}\omega\right|=\int_{\gamma^{\prime\prime}}|\omega|$. Thus $$\frac{\left|\int_{\gamma^{\prime\prime}}\beta\right|}{\left|\int_{\gamma^{\prime\prime}}\omega\right|}\leq\max_{\gamma^{\prime\prime}}\left|\frac{\beta}{\omega}\right|\leq\frac{\|\beta\|_{\mathrm{Hodge}}}{\sqrt{\pi}r},$$ where the last inequality was proved by Kahn-Wright in [6, §3.2].
Combining the above inequality with (3), we have $$\frac{\left|\int_{\gamma}\beta\right|}{\left|\int_{\gamma}\omega\right|}\leq\frac{\left|\int_{\gamma^{\prime}}\beta\right|+\left|\int_{\gamma^{\prime\prime}}\beta\right|}{\left|\int_{\gamma^{\prime}}\omega\right|+\left|\int_{\gamma^{\prime\prime}}\omega\right|}\leq\max\left\{\frac{\left|\int_{\gamma^{\prime}}\beta\right|}{\left|\int_{\gamma^{\prime}}\omega\right|},\frac{\left|\int_{\gamma^{\prime\prime}}\beta\right|}{\left|\int_{\gamma^{\prime\prime}}\omega\right|}\right\}\leq\max\left\{\frac{\|\beta\|_{\mathrm{Hodge}}}{r},\frac{\|\beta\|_{\mathrm{Hodge}}}{\sqrt{\pi}r}\right\}=\frac{\|\beta\|_{\mathrm{Hodge}}}{r}.$$ ∎ References [1] Avila A, Gouëzel S. Small eigenvalues of the Laplacian for algebraic measures in moduli space, and mixing properties of the Teichmüller flow[J]. Annals of Mathematics, 2013: 385-442. [2] Avila A, Gouëzel S, Yoccoz J C. Exponential mixing for the Teichmüller flow[J]. Publications Mathématiques de l’IHÉS, 2006, 104: 143-211. [3] Douady A, Hubbard J. On the density of Strebel differentials[J]. Inventiones mathematicae, 1975, 30(2): 175-179. [4] Eskin A, Mirzakhani M. Invariant and stationary measures for the $\mathrm{SL}(2,\mathbb{R})$ action on moduli space[J]. Publications mathématiques de l’IHÉS, 2018, 127(1): 95-324. [5] Farb B, Masur H. Teichmüller geometry of moduli space, I: distance minimizing rays and the Deligne-Mumford compactification[J]. Journal of Differential Geometry, 2010, 85(2): 187-228. [6] Kahn J, Wright A. Hodge and Teichmüller[J]. http://www-personal.umich.edu/~alexmw/HT.pdf [7] Kontsevich M, Zorich A. Connected components of the moduli spaces of Abelian differentials with prescribed singularities[J]. Inventiones mathematicae, 2003, 153(3): 631-678. [8] Masur H, Smillie J. Hausdorff dimension of sets of nonergodic measured foliations[J].
Annals of Mathematics, 1991, 134(3): 455-543. [9] Lanneau E. Hyperelliptic components of the moduli spaces of quadratic differentials with prescribed singularities[J]. Commentarii Mathematici Helvetici, 2004, 79(3): 471-501. [10] Wright A. Visualizing the unit ball of the AGY norm. http://www-personal.umich.edu/~alexmw/AGY.pdf
Virial shocks in galactic haloes? Yuval Birnboim & Avishai Dekel Racah Institute of Physics, The Hebrew University, Jerusalem, Israel [email protected]; [email protected] Abstract We investigate the conditions for the existence of an expanding virial shock in the gas falling within a spherical dark-matter halo. The shock relies on pressure support by the shock-heated gas behind it. When the radiative cooling is efficient compared to the infall rate, the post-shock gas becomes unstable; it collapses inwards and cannot support the shock. We find for a monoatomic gas that the shock is stable when the post-shock pressure and density obey $\gamma_{\rm eff}\equiv(d\ln P/dt)/(d\ln\rho/dt)>10/7$. When expressed in terms of the pre-shock gas properties at radius $r$, the criterion reads $\rho r\Lambda(T)/u^{3}<0.0126$, where $\rho$ is the gas density, $u$ is the infall velocity and $\Lambda(T)$ is the cooling function, with the post-shock temperature $T\propto u^{2}$. This result is confirmed by hydrodynamical simulations using an accurate spherically symmetric Lagrangian code. When the stability analysis is applied in cosmology, we find that a virial shock does not develop in most haloes that form before $z\sim 2$, and it never forms in haloes less massive than a few $10^{11}M_{\odot}$. In such haloes, the infalling gas is not heated to the virial temperature until it hits the disc, thus avoiding the cooling-dominated quasi-static contraction phase. The direct collapse of the cold gas onto the disc should have nontrivial effects on the star-formation rate and on outflows. The soft X-rays produced by the shock-heated gas in the disc are expected to ionize the dense disc environment, and the subsequent recombination would result in a high flux of $\mathrm{Ly}\alpha$ emission. This may explain both the puzzlingly low flux of the soft X-ray background and the $\mathrm{Ly}\alpha$ emitters observed at high redshift.
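The pre-shock stability criterion stated above is straightforward to evaluate for given infall conditions; a schematic helper (assuming all quantities are supplied in mutually consistent units, so that the combination $\rho r\Lambda/u^{3}$ is dimensionless):

```python
def virial_shock_is_stable(rho, r, u, cooling):
    """Pre-shock form of the stability criterion: the shock survives when
    rho * r * Lambda(T) / u**3 < 0.0126, where the post-shock temperature
    T (proportional to u**2) enters through the cooling function Lambda."""
    return rho * r * cooling / u**3 < 0.0126
```

Slow infall (small `u`) or efficient cooling (large `cooling`) pushes the combination above 0.0126, in which case the post-shock gas cannot support the shock and it collapses inwards.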
Keywords: cooling flows — dark matter — galaxies: formation — galaxies: ISM — hydrodynamics — shock waves. Published 2002. 1 Introduction The standard lore in the idealized picture of galaxy formation by spherical infall of gas inside dark-matter haloes is that the gas is first heated to the halo virial temperature behind an expanding virial shock. It is then supported by pressure in a quasi-static equilibrium while it is cooling radiatively and is slowly contracting to a disc where it can eventually form stars. The cooling process thus determines important galaxy properties such as the star-formation rate and the metal enrichment, so it is necessarily an important ingredient in the galaxy formation process. However, it is not at all clear that a stable shock can persist in the halo gas away from the disc under the conditions valid in many galactic haloes. In the absence of a virial shock, the gas is not heated to the virial temperature until it falls all the way to the disc, where the collapse stops and the gas is heated in a thin layer. This may alter some of the assumed processes of disc formation and in particular the star-formation rate in it. It may work against blowout by supernova-driven winds in dwarf galaxies. Heating near the disc instead of at the virial radius may also weaken the soft X-ray emission from such haloes and produce a high flux of Ly$\alpha$ instead. In this paper we evaluate the conditions for the existence of a virial shock in galactic haloes. Initial density perturbations are assumed to grow by gravitational instability, reach maximum expansion, and collapse into virial equilibrium at roughly half the maximum-expansion radius. During the initial phase, and roughly until shells start crossing each other near the virial radius, the gas pressure is negligible compared to the gravitational force, so the shells of gas and dark matter move in a similar manner. 
Once interior to the virial radius, where shells tend to cross and the gas density becomes high enough, the gas pressure becomes an important player in the dynamics. Its hydrodynamic properties allow transfer of bulk kinetic energy into internal energy, and the pressure prevents gas elements from passing through other gas elements and from being compressed without limit. This makes the infall velocity vanish at the centre. Since in the cold infalling gas the typical velocity is higher than the speed of sound, the information about this inner boundary condition cannot propagate outwards, and these supersonic conditions create a shock. After the gas crosses the shock, it is heated up, the speed of sound increases, and the flow becomes subsonic. The shock transfers the kinetic energy that has been built up during the collapse into internal gas energy just behind the shock. A stable spherical shock would slowly propagate outwards through the infalling gas, leaving behind it hot, high-entropy gas that is almost at rest. The temperature of the post-shock gas roughly equals the virial temperature. The persistence of the shock depends on sufficient pressure by the post-shock gas, which supports it against being swept inwards due to the gravitational pull together with the infalling matter. Radiative gas cooling makes the gas lose entropy and pressure, which weakens the pressure support behind the shock front. Our approach here is to evaluate the existence of a virial shock by analyzing the gravitational stability of the supporting gas behind the shock in the presence of significant cooling. In §2 we first summarize the standard analysis of an adiabatic shock and then generalize the gravitational stability criterion to the case where cooling is important. In §3 we describe our spherical hydrodynamic Lagrangian code, which includes gravitating dark-matter and gas shells, artificial viscosity, radiative cooling and centrifugal forces. 
We test the code in this section and in Appendix A. In §4 we apply the numerical code to simulations which demonstrate the shock formation and test the validity of the analytical model. In §5 we apply the shock stability criterion to realistic haloes forming in cosmological conditions. In §6 we summarize our results and discuss potential astrophysical implications. 2 Shock stability analysis Our goal here is to derive a criterion for the existence of a virial shock in terms of the properties of the infalling gas just in front of the shock front. It is based on a gravitational stability analysis of the post-shock gas. We first remind ourselves of the standard stability analysis in the simple adiabatic case, and then derive a more general criterion for stability in the radiative case, under certain assumptions and using a perturbation analysis. 2.1 The standard adiabatic case Throughout this paper, we treat the baryons as an ideal monoatomic gas. Their equation of state can therefore be written as $$P=(\gamma-1)\,e\,\rho\,,$$ (1) where $P$ is the pressure, $e$ is the specific internal energy, $\rho$ is the density of the gas and $\gamma$ is the adiabatic index. Along an isentrope (an adiabatic process of constant entropy) the pressure and density are related via $P\propto\rho^{\gamma}$, so the adiabatic index is defined by $$\gamma=\left({\partial\ln P\over\partial\ln\rho}\right)_{\rm s}\,.$$ (2) For a monoatomic gas $\gamma=5/3$. [Footnote 1: As the temperature exceeds the binding energy of the hydrogen and helium atoms, electrons become detached from the nuclei and $\gamma$ becomes smaller. Once the gas becomes fully ionized, the original value of $5/3$ is restored, but with a different effective density. This should have only a marginal effect on our results, and is ignored in this paper.] The virial shock is assumed to be a spherical accretion shock which propagates outwards slowly while infalling gas crosses it inwards. 
The kinetic energy of the infalling gas is transformed at the shock front into thermal energy — the post-shock gas is thus heated to a temperature close to the virial temperature of the system of dark-matter halo and gas, $V_{\rm infall}^{2}\approx k_{B}T_{\rm vir}$. Because the original temperature of the infalling gas is negligible compared to the virial temperature, the system obeys the strong-shock limit. When we denote the pre-shock and post-shock quantities by subscripts 0 and 1 respectively, the jump conditions across the shock are in this case (Zel'dovich & Raizer, 1966): $$\rho_{0}=\frac{\gamma-1}{\gamma+1}\,\rho_{1}\,,$$ (3) $$(u_{0}-u_{\rm s})=\frac{\gamma+1}{\gamma-1}\,(u_{1}-u_{\rm s})\,,$$ (4) $$P_{1}=\frac{2\rho_{0}u_{0}^{2}}{\gamma+1}\,,$$ (5) $$T_{1}=\frac{\mu}{k_{B}N_{a}}\frac{P_{1}}{\rho_{1}}=\frac{\mu}{k_{B}N_{a}}\frac{2(\gamma-1)}{(\gamma+1)^{2}}u_{0}^{2}\,,$$ (6) where $u$ stands for radial velocity, $u_{\rm s}$ is the shock velocity, $N_{a}$ is Avogadro's number, ${N_{a}}/{\mu}$ is the average number of particles per unit mass, and $k_{B}$ is Boltzmann's constant. According to standard shock theory, the post-shock gas is always sub-sonic (in the frame of reference of the moving shock) because of the increase of the sound velocity behind the shock. This gas is thus capable of providing the necessary pressure to support the shock against the gravitational pull inwards applied by the self-gravity of the gas and the dark-matter halo as well as the pressure applied by the infalling matter at the shock front. The criterion for gravitational stability of this post-shock gas in the adiabatic case is the standard Jeans stability criterion: $\gamma>4/3$ (e.g., Cox, 1980, Chapter 8). If the post-shock gas is gravitationally unstable, it falls into the galaxy centre on a dynamical timescale and can no longer support the shock. As a result, the shock weakens and it is swept inwards. 
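The strong-shock relations (3)-(6) translate directly into code. The following Python sketch (function name is ours, not from the paper) evaluates the post-shock state in CGS units for a shock momentarily at rest ($u_{\rm s}=0$), the case used later in the stability analysis:

```python
def strong_shock_jump(rho0, u0, mu, gamma=5.0 / 3.0):
    """Strong-shock jump conditions, eqs. (3)-(6), for a shock at rest
    (u_s = 0). CGS units; u0 < 0 is the pre-shock infall velocity and
    mu the mean molecular weight. Illustrative sketch only."""
    k_B = 1.380649e-16    # Boltzmann constant [erg/K]
    N_a = 6.02214076e23   # Avogadro's number [1/mol]
    rho1 = (gamma + 1.0) / (gamma - 1.0) * rho0                 # eq. (3)
    u1 = (gamma - 1.0) / (gamma + 1.0) * u0                     # eq. (4)
    P1 = 2.0 * rho0 * u0**2 / (gamma + 1.0)                     # eq. (5)
    T1 = mu / (k_B * N_a) * 2.0 * (gamma - 1.0) / (gamma + 1.0)**2 * u0**2  # eq. (6)
    return rho1, u1, P1, T1
```

For $\gamma=5/3$ this reproduces the familiar factors: the density jumps by 4, the velocity drops by 4, and the temperature coefficient becomes $3/16$, as in eq. (30) below.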
The Jeans criterion can be qualitatively understood in terms of the following heuristic derivation. For a shell of radius $r$, we compare the gravitational pull inwards, $a_{\rm g}=GM/r^{2}$ (where $M$ is the mass interior to $r$), to the pressure pushing outwards, $a_{\rm p}=\rho^{-1}\nabla P$. We assume an isentrope, $P\propto\rho^{\gamma}$. We also assume homology, such that the local density scales like the mean density in the sphere interior to $r$, $\rho\propto M/r^{3}$. Then $\nabla P$ can be replaced by $\sim P/r$ and we obtain $$\frac{a_{\rm p}}{a_{\rm g}}\propto\rho^{\gamma-4/3}\,.$$ (7) If $\gamma<4/3$, we have an unstable configuration. Starting in hydrostatic equilibrium, $a_{\rm p}/a_{\rm g}=1$, a perturbation involving contraction is associated with a larger $\rho$, and therefore $a_{\rm p}/a_{\rm g}<1$ by eq. (7), implying that the pressure cannot prevent collapse. If $\gamma>4/3$, the pressure force increases until it balances the increased gravitational pull. We note that even this simple derivation of the Jeans criterion had to assume homology — an assumption that we will have to adopt also in our analysis of the radiative case below. 2.2 Shock stability under radiative cooling We wish to replace the adiabatic Jeans criterion by a more general stability condition that will be valid also in the radiative case. This criterion must depend on the cooling rate and should therefore be naturally expressed in terms of time derivatives. We generalize the adiabatic $\gamma$ of eq. (2) by an effective $\gamma$ following a comoving volume element along its Lagrangian path: $$\gamma_{\rm eff}\equiv{d\ln P/dt\over d\ln\rho/dt}\,.$$ (8) We expect that the system would be stable when $\gamma_{\rm eff}$ is larger than a certain critical value, the analog of the requirement $\gamma>4/3$ in the adiabatic case. In our Lagrangian analysis all the quantities ($r$, $u$, $\rho$, $P$, etc.) 
refer to comoving shells; they are all functions of the gas mass $m$ interior to radius $r$ and time $t$. Derivatives with respect to time following a comoving volume element will be denoted by an upper dot, and derivatives with respect to $m$ will be denoted by a prime. The effective gamma can be related to its adiabatic analog given the cooling rate and other post-shock gas quantities. The time derivative of eq. (1) yields: $$\dot{P}=(\gamma-1)(\dot{e}\rho+e\dot{\rho})\,.$$ (9) Energy conservation in the presence of radiative losses can be expressed by $$\dot{e}=-P\dot{V}-q=\frac{P\dot{\rho}}{\rho^{2}}-q\,,$$ (10) where $q$ is the radiative cooling rate [to be discussed below, e.g., eq. (20)] and $V=\rho^{-1}$ is the specific volume. Substituting $\dot{e}$ from eq. (10) into eq. (9), and using it in eq. (8), we obtain $$\gamma_{\rm eff}=\gamma-\frac{\rho}{\dot{\rho}}\frac{q}{e}\,.$$ (11) Note that in the limit $q/e\ll\dot{\rho}/\rho$ we reproduce the adiabatic case; the process is nearly adiabatic when the cooling timescale is long compared to the contraction timescale. We assume that in the region close behind the shock the pattern of the velocity field is homologous. By this we mean that at any given time the (radial) velocity is proportional to the radius (as in a Hubble flow), namely $$u/r=u_{1}/r_{\rm s}\,,$$ (12) thus providing a boundary condition for the post-shock gas. The homology is shown to be a valid approximation in the simulations discussed below, where the post-shock shell trajectories are roughly parallel to each other in the $\log r-t$ plane, at any given time close enough to shock crossing. The time evolution of the density can then be evaluated via the continuity equation in Lagrangian form for the spherically symmetric case, $$\frac{\dot{\rho}}{\rho}=-{\bf\nabla}\cdot{\bf u}=-\frac{1}{r^{2}}\frac{\partial}{\partial r}(r^{2}u)=-\frac{3u_{1}}{r_{\rm s}}\,,$$ (13) where the last equality results from the assumed homology, eq. (12). 
The homology thus implies that $\dot{\rho}/\rho$ at a given $t$ is a constant in $m$ throughout the post-shock region. Eq. (11) can then be simplified: $$\gamma_{\rm eff}=\gamma+\frac{r_{\rm s}}{3u_{1}}\frac{q}{e}\,.$$ (14) We start with a hypothetical unperturbed state for the post-shock gas, where we assume that the net force vanishes, $\ddot{r}=0$. The system adjusts itself to this state on a timescale associated with the speed of sound $c_{\rm s}$, provided that it is much higher than the infall velocity $u$. This is expected to be the case in the sub-sonic post-shock medium, where $c_{\rm s}$ becomes high and $u$ becomes low. The unperturbed equation of motion in Lagrangian form is then $$\ddot{r}=-4\pi r^{2}P^{\prime}-\frac{GM}{r^{2}}=0\,,$$ (15) where $M$ is the total mass interior to radius $r$. We then introduce a perturbation due to a homologous infall velocity $u$. Over a short time interval $\delta t$, it introduces a small displacement inwards, $\delta r=u\,\delta t$. In order to distinguish between stability and instability we wish to determine whether the induced acceleration, $\ddot{\delta r}$, is positive or negative, tending to decrease or increase the velocity respectively. Note that under homology, eq. (12), the relative displacement is $${\delta r}/{r}={u_{1}\,\delta t}/{r_{\rm s}}\,.$$ (16) Writing the equation of motion, eq. (15), but for the perturbed quantities $P+\delta P$ and $r+\delta r$, and subtracting the unperturbed eq. (15), we obtain to first order $$\ddot{\delta r}=-4\pi r^{2}(\delta P)^{\prime}+\frac{4GM\,\delta r}{r^{3}}\,.$$ (17) We next manipulate the right-hand side of eq. (17) to obtain a simple expression involving $\gamma_{\rm eff}$. In the second term we use the homology, eq. (16), and then the unperturbed equation of motion, eq. (15), to obtain $$\frac{4GM\,\delta r}{r^{3}}=-\frac{16\pi r^{2}u_{1}\delta t}{r_{\rm s}}P^{\prime}\,.$$ (18) The manipulation of the first term is somewhat more elaborate. 
We use the definition of $\gamma_{\rm eff}$, eq. (8), to write $$\delta P=(\dot{\rho}/\rho)P\gamma_{\rm eff}\delta t=-(3u_{1}/r_{\rm s})P\gamma_{\rm eff}\delta t\,,$$ (19) where the second equality is due to eq. (13). Note that the $m$ dependence in this term is only in the product $P\gamma_{\rm eff}$. We now express $\gamma_{\rm eff}$ in terms of the cooling rate $q$ as in eq. (14), and need to take the derivative $(Pq/e)^{\prime}$. We make here the standard assumption that the radiative cooling rate is proportional to density, $$q=\rho\Lambda(T)\,,$$ (20) with $\Lambda(T)$ the macroscopic cooling function and $T$ the post-shock temperature. The immediate post-shock medium is assumed to be isothermal, reflecting via the jump conditions an assumed approximate uniformity of the pre-shock gas over a short time interval. Using eq. (1) we have $P/e=(\gamma-1)\rho$, and together with eq. (20) it becomes $$Pq/e=(\gamma-1)\Lambda\rho^{2}\,.$$ (21) In the computation of $(Pq/e)^{\prime}$, we first replace $\rho^{\prime}=(d\rho/dP)P^{\prime}$, then use eq. (8) to write $d\rho/dP=\gamma_{\rm eff}^{-1}\rho/P$, use eq. (21) backwards to replace $(\gamma-1)\Lambda\rho^{2}/P$ by $q/e$, and finally use eq. (14) to obtain $(Pq/e)^{\prime}=(3u_{1}/r_{\rm s})[-2\gamma_{\rm eff}^{-1}(\gamma-\gamma_{\rm eff})]P^{\prime}$. We thus have in the first term of the rhs of eq. (17) $$-(\delta P)^{\prime}=\frac{3u_{1}\delta t}{r_{\rm s}}P^{\prime}[\gamma-2\gamma_{\rm eff}^{-1}(\gamma-\gamma_{\rm eff})]\,.$$ (22) With the right-hand side of eq. (17) given by eq. (22) and eq. (18), the first-order equation finally becomes, $$\ddot{\delta r}=\frac{12\pi r^{2}u_{1}\delta tP^{\prime}}{r_{\rm s}}\left[\gamma-2\gamma_{\rm eff}^{-1}(\gamma-\gamma_{\rm eff})-\frac{4}{3}\right]\,.$$ (23) Since $u_{1}$ and $P^{\prime}$ are both always negative, the desired sign of $\ddot{\delta r}$ is determined by the sign of the expression inside the square brackets. 
Note that in the adiabatic case, $q=0$, we have $\gamma_{\rm eff}=\gamma$, so we recover the standard stability criterion, $\gamma>4/3$. In the radiative case, $\gamma_{\rm eff}\neq\gamma$, we finally obtain the generalized stability criterion: $$\gamma_{\rm eff}>\frac{2\gamma}{\gamma+{2}/{3}}\equiv\gamma_{\rm crit}\,.$$ (24) For a monoatomic gas, where the adiabatic value is $\gamma=5/3$, the threshold for stability is $\gamma_{\rm crit}=10/7=1.43$, which is close but not identical to the adiabatic threshold $4/3$. 2.3 Stability in terms of pre-shock quantities Next, we wish to express $\gamma_{\rm eff}$ and the stability criterion in terms of the properties of the pre-shock gas; the infall velocity $u_{0}$ and the gas density $\rho_{0}$ at $r_{\rm s}$. We use the jump conditions, eq. (3) through eq. (6), in eq. (14). In eq. (4) we assume $u_{\rm s}=0$, namely that the shock is temporarily at rest, which should be valid when the shock is marginally stable (or unstable). This is because a stable shock is pushed outwards by the post-shock gas, while cooling reduces the pressure, slows the outward motion, and eventually causes it to halt and then be swept inwards by the infalling matter and gravitational pull. The transition from stability to instability can thus be associated with a transition from expansion to contraction of the shocked volume. According to eq. (1) and eq. (5) we have $$e_{1}=\frac{1}{(\gamma-1)}\frac{P_{1}}{\rho_{1}}=\frac{2u_{0}^{2}}{(\gamma+1)^{2}}\,.$$ (25) According to eq. (13) and eq. (4) with $u_{\rm s}=0$ we have $$\frac{\dot{\rho}}{\rho}=-\frac{3u_{1}}{r_{\rm s}}=-3\frac{(\gamma-1)}{(\gamma+1)}\frac{u_{0}}{r_{\rm s}}\,.$$ (26) With these and eq. 
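The step from the bracket in eq. (23) to the threshold in eq. (24) is elementary algebra and can be checked numerically; a minimal sketch (function names are ours):

```python
def bracket(gamma, gamma_eff):
    """Square-bracket expression of eq. (23); its sign decides stability."""
    return gamma - 2.0 * (gamma - gamma_eff) / gamma_eff - 4.0 / 3.0

def gamma_crit(gamma):
    """Root of bracket(gamma, x) = 0 in x, i.e. eq. (24):
    gamma_crit = 2*gamma / (gamma + 2/3)."""
    return 2.0 * gamma / (gamma + 2.0 / 3.0)
```

For $\gamma=5/3$ this gives $\gamma_{\rm crit}=10/7\approx 1.43$; `bracket` is positive for any $\gamma_{\rm eff}>\gamma_{\rm crit}$ (stable) and negative below it (unstable).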
(20) we obtain the desired expression for the effective $\gamma$ of the post-shock gas in terms of the pre-shock conditions: $$\gamma_{\rm eff}=\gamma-\frac{(\gamma+1)^{4}}{6(\gamma-1)^{2}}\frac{\rho_{0}r_{\rm s}\Lambda(T_{1})}{|u_{0}|^{3}}\,.$$ (27) For a monoatomic gas, $\gamma=5/3$, we obtain $$\gamma_{\rm eff}=5/3-18.96\frac{\rho_{0}r_{\rm s}\Lambda(T_{1})}{|u_{0}|^{3}}\,.$$ (28) Based on eq. (24), the criterion for stability of a $\gamma=5/3$ gas finally becomes $$\frac{\rho_{0}r_{\rm s}\Lambda(T_{1})}{|u_{0}|^{3}}<0.0126\,.$$ (29) The post-shock temperature is related to the pre-shock infall velocity using the jump condition, eq. (6), which for $\gamma=5/3$ gives $$T_{1}=\frac{3}{16}\frac{\mu}{k_{B}N_{a}}u_{0}^{2}\,.$$ (30) For a given cooling function $\Lambda(T)$, eq. (29) is a simple criterion for determining whether a stable shock can form at some radius $r_{\rm s}$ of the halo. It is in a form that can be directly tested against hydrodynamic simulations (§3), and can serve for evaluating shock stability under realistic conditions in cosmological haloes (§5). Under the simplifying assumption that the gas is unclumped, the cooling rate is given by eq. (20). The macroscopic cooling function $\Lambda(T)$ is related to the microscopic $\Lambda_{\rm mic}(T)$, the energy-loss rate of a particle, via $\Lambda(T)=(N_{a}^{2}\chi^{2}/\mu^{2})\Lambda_{\rm mic}(T)$, where $\chi$ is the number of electrons per particle. We assume a helium atomic fraction of 0.1 for $\mu$ and $\chi$. The microscopic cooling function is shown in Fig. 1 for three different values of mean metallicity $Z$. The cooling at temperatures below $10^{4}$ K is very slow because the main available cooling agent is molecular hydrogen, which is very inefficient. At temperatures slightly above $10^{4}$ K the cooling function peaks due to Ly$\alpha$ emission from atomic hydrogen. At very low metallicities, a second peak arises near $10^{5}$ K due to recombination of atomic helium. 
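Equations (27)-(29) amount to a one-line stability check once the pre-shock state is known. A sketch in CGS units, with hypothetical function names (the coefficient $(\gamma+1)^{4}/[6(\gamma-1)^{2}]$ evaluates to the 18.96 quoted in eq. (28)):

```python
def gamma_eff_preshock(rho0, r_s, Lam_T1, u0, gamma=5.0 / 3.0):
    """Effective gamma of the post-shock gas, eq. (27). u0 < 0 is the
    pre-shock infall velocity at the shock radius r_s, and Lam_T1 the
    macroscopic cooling function evaluated at T1 from eq. (30)."""
    coeff = (gamma + 1.0)**4 / (6.0 * (gamma - 1.0)**2)  # ~18.96 for gamma=5/3
    return gamma - coeff * rho0 * r_s * Lam_T1 / abs(u0)**3

def shock_is_stable(rho0, r_s, Lam_T1, u0, gamma=5.0 / 3.0):
    """Stability criterion eq. (24); for gamma = 5/3 it reduces to eq. (29),
    rho0 * r_s * Lambda(T1) / |u0|**3 < 0.0126."""
    gamma_crit = 2.0 * gamma / (gamma + 2.0 / 3.0)
    return gamma_eff_preshock(rho0, r_s, Lam_T1, u0, gamma) > gamma_crit
```

The threshold implied by these two functions, $(\gamma-\gamma_{\rm crit})/18.96\approx 0.0126$, matches eq. (29).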
Metals give rise to a higher peak at $\sim 10^{5}$K and slightly above, due to line emission from the heavier atoms. At $\sim 10^{6}$K and above, the cooling is dominated by bremsstrahlung, and the cooling function increases slowly. We use the cooling function as derived by Sutherland & Dopita (1993), and presented in their table, in the manner described in Somerville & Primack (1999). 3 The spherical hydro code We test the validity of the shock stability criterion using numerical simulations based on a spherical hydrodynamics code which follows the evolution of shells of dark matter and gas. Since the problem we intend to examine is of global spherical symmetry, and since we need to follow the cooling and the shock with high precision, we use a one-dimensional code. Most of the simulations presented here were run using $2000$ gas shells and $10,000$ dark-matter shells. A comparable resolution in a three-dimensional code would require on the order of $10^{10}$ and $10^{12}$ particles respectively, which is impractical. We use no smoothing in the dark-matter shell-crossing scheme, we introduce small-scale smoothing at the halo centre to avoid an artificial singularity there, and we include small artificial viscosity in the hydrodynamics. Tests of the code performance are described in Appendix A. 3.1 Dark matter The dark-matter particles are represented by infinitely thin spherical shells of constant mass and of radii $r$ that vary in time. The shell of current radius $r$ obeys the equation of motion $$\frac{d^{2}r}{dt^{2}}=-\frac{G(M+m)}{(r+a)^{2}}+\frac{j^{2}}{r^{3}}\,,$$ (31) where $M$ and $m$ refer respectively to the mass of dark matter and gas within the sphere of radius $r$. The last term is a centrifugal acceleration, determined by the specific angular momentum $j$ of the particle represented by the shell. This $j$ is assigned to each shell at the initial conditions and is assumed to be preserved during the simulation. 
The parameter $a$ is the smoothing length that becomes effective only near the centre; it has been set to be $50$ pc throughout this work. The dark-matter shells are allowed to cross each other (and the gas shells). The dark-matter mass is evaluated by $$M(r)=\sum_{r_{i}<r}\Delta M_{i}+\frac{1}{2}\sum_{j}\delta(r=r_{j})\Delta M_{j}\,,$$ (32) where the shell radii and masses are denoted by $r_{i}$ and $\Delta M_{i}$ respectively, $i=1,...,\,n_{\rm d}$. The second term adds half the mass of a shell when $r$ coincides with one of the shells. Generally, this summation requires $n_{\rm d}^{2}$ calculations for the dark matter alone. The particles are kept sorted by radius. When two shells cross each other, we re-sort the array by exchanging pairs which violate the order. This kind of sorting algorithm, termed 'Shell's Method' in Numerical Recipes (Press, 1997), is natural in cases where only a few shells cross each other in each timestep. When two shells cross, they exchange an energy of $G\,\Delta M_{i}\,\Delta M_{j}/r$. In order to conserve energy, the radius at which the shells cross must be known with great precision. We therefore reduce the timestep to a small value, $t_{\rm sc}$, when two shells are about to cross each other (see below). 3.2 Gas The hydrodynamic part of the code is based on Lagrangian finite elements in the form of spherical shells. The basic equations governing the dynamics of each shell are $$\frac{d^{2}r}{dt^{2}}=-\frac{1}{\rho}\nabla(P+\sigma)-\frac{G(M+m)}{(r+a)^{2}}+\frac{j^{2}}{r^{3}}\,,$$ (33) $$\frac{de}{dt}=\frac{P+\sigma}{\rho^{2}}\frac{d\rho}{dt}-q\,,$$ (34) $$\rho=\frac{dm}{4\pi r^{2}dr}\,,$$ (35) $$P=(\gamma-1)\,e\,\rho\,.$$ (36) An artificial viscosity term, $\sigma$, is added to the pressure for numerical purposes, as explained below. The smoothing length effective at the centre, $a$, is the same as for the dark matter, eq. (31). 
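The dark-matter bookkeeping described above, the enclosed-mass sum of eq. (32) and the re-sort of a nearly sorted shell array, can be sketched as follows (illustrative only; names and list-based data layout are ours):

```python
def enclosed_dark_mass(r, radii, masses):
    """Dark-matter mass interior to r, eq. (32): full mass of shells strictly
    inside r, plus half the mass of any shell sitting exactly at r."""
    M = 0.0
    for r_i, dM in zip(radii, masses):
        if r_i < r:
            M += dM
        elif r_i == r:
            M += 0.5 * dM
    return M

def resort_shells(radii, masses):
    """Insertion-style re-sort by radius; cheap when only a few neighbouring
    shells crossed during the last timestep, since the array is near-sorted."""
    for i in range(1, len(radii)):
        j = i
        while j > 0 and radii[j - 1] > radii[j]:
            radii[j - 1], radii[j] = radii[j], radii[j - 1]
            masses[j - 1], masses[j] = masses[j], masses[j - 1]
            j -= 1
    return radii, masses
```

Keeping the array sorted is what reduces the mass sum to a prefix sum in practice, instead of the $n_{\rm d}^{2}$ pairwise evaluation.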
As in the model described before, the loss of internal energy due to radiative cooling is represented by the cooling rate $q$. The gas is divided into discrete shells. The mass enclosed within a shell, $\Delta m$, is assumed constant in time, while the inner and outer shell boundaries move independently in time. Each boundary is characterized by a time-dependent position $r$, velocity $v$ and specific angular momentum $j$. The acceleration [eq. (33)] is evaluated at the boundary position. The variables $\rho$, $P$, $e$, $q$, and $T$ for each shell are evaluated within the shell between the boundaries. In particular, the pressure term in eq. (33) is evaluated at the outer boundary $r_{i}$ of shell $i$ using eq. (35): $$-\frac{1}{\rho}\nabla(P+\sigma)=\frac{4\pi r_{i}^{2}}{\Delta m}[(P+\sigma)_{i}-(P+\sigma)_{i+1}]\,,$$ (37) where $\Delta m=(\Delta m_{i}+\Delta m_{i+1})/2$. The boundary conditions for the outer boundary of the system are $P=\sigma=0$, and zero mass beyond the outer boundary. Since gas shells cannot cross each other, the gas mass in the sphere interior to each gas shell is constant throughout the simulation: $$m(r_{i})=\sum_{j=1}^{i}\Delta m_{j}\,.$$ (38) For the evolution of the dark-matter shell at $r$, we evaluate the gas mass that appears in eq. (31) using $$m(r)=\sum_{j=1}^{i-1}\Delta m_{j}+\frac{r^{3}-r_{i-1}^{3}}{r_{i}^{3}-r_{i-1}^{3}}\Delta m_{i}\,,$$ (39) where $i$ refers to the gas shell for which $r_{i-1}\leq r<r_{i}$. 3.3 Integration and Timestep The discrete integration of $r$ and $v$ is performed by a Runge-Kutta fourth-order scheme (Press, 1997). The state of the system at the beginning of each timestep is kept in memory until the timestep is completed, such that it is possible to return to the beginning of the timestep and retry with a smaller timestep if the convergence criteria are not met. The timesteps are set such that the position $r$ and velocity $v$ do not change by too much during a single timestep. 
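The discretized pressure-gradient term of eq. (37) amounts to a few lines. A sketch (function name is ours), with the sign convention that a higher interior pressure accelerates the boundary outward:

```python
import math

def pressure_accel(r_i, P_sig_i, P_sig_ip1, dm_i, dm_ip1):
    """Pressure + viscosity acceleration -(1/rho) grad(P + sigma) at the
    outer boundary r_i of shell i, discretized as in eq. (37), with Delta m
    taken as the mean mass of the two adjacent shells."""
    dm = 0.5 * (dm_i + dm_ip1)
    return 4.0 * math.pi * r_i**2 * (P_sig_i - P_sig_ip1) / dm
```

With a pressure profile decreasing outward ($P_{i}>P_{i+1}$) the result is positive, i.e. the boundary is pushed outward, as required of a hydrostatic pressure force against gravity.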
For a given accuracy parameter $\epsilon_{\rm rk}$, we demand that the difference between the fourth-order displacement $\Delta r_{4}$ and the analogous first-order displacement $\Delta r_{1}$ obeys $|\Delta r_{4}-\Delta r_{1}|/r<\epsilon_{\rm rk}$, both for the dark matter and the gas. A similar requirement is applied to the change in velocity over a timestep. If this condition is not fulfilled, we reduce the timestep by a certain factor and repeat the calculation over this timestep. We use here as our default $\epsilon_{\rm rk}=0.1$. In addition, we make sure the timestep for each shell does not violate the Courant condition, for an accuracy parameter $\epsilon_{\rm c}$. This implies ${c_{\rm s}\Delta t}/{\Delta r}<\epsilon_{\rm c}$, where $c_{\rm s}^{2}=(dP/d\rho)_{s}=\gamma P/\rho$ is the speed of sound. We use here as our default $\epsilon_{\rm c}=0.3$. A third limitation on the timestep comes from the desire to conserve energy when shells cross. When two shells are about to cross each other within the current timestep $dt$, we set the timestep to $\min(dt,t_{\rm sc})$, and keep it small until they actually cross. We use here as our default $t_{\rm sc}=10^{-4}$ Gyr. The values for $\epsilon_{\rm c},\epsilon_{\rm rk}$ and $t_{\rm sc}$ were chosen empirically such that energy is conserved and the dynamics converges to our satisfaction, in the sense that it does not change by much when smaller parameters are used. We demonstrate in Appendix A how well these requirements are met. Once we have computed the new radii and velocities of the shells at the end of the timestep, we correct the energy of the gas for the $-PdV$ work term using the states of the system at the beginning and at the end of the timestep. The cooling is explicitly subtracted from the internal energy after the hydrodynamic timestep is completed. Once the final state of the system is ready, it is copied onto the memory array of the initial state, and the simulation is ready to execute a new timestep. 
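The retry logic described above can be sketched as a shrink-until-accepted loop. This is a hypothetical skeleton (the real code applies the same test to velocities and also enforces the Courant and shell-crossing limits); the stepper callbacks are assumed to advance a saved copy of the state and return the new radii of all shells:

```python
def advance_with_retry(radii, step_rk4, step_euler, dt,
                       eps_rk=0.1, shrink=0.5, dt_min=1e-12):
    """Shrink dt until the fourth-order and first-order displacements agree
    to within eps_rk relative to the current radius of every shell."""
    while dt > dt_min:
        r4 = step_rk4(radii, dt)    # fourth-order positions after dt
        r1 = step_euler(radii, dt)  # first-order positions after dt
        err = max(abs(a - b) / r0 for a, b, r0 in zip(r4, r1, radii))
        if err < eps_rk:
            return r4, dt           # accept the step
        dt *= shrink                # reject; retry from the saved state
    raise RuntimeError("timestep underflow")
```

Because the comparison is against a first-order step, the error estimate is dominated by the leading truncation term, so halving the timestep reduces it rapidly and the loop terminates after a few retries in practice.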
3.4 Initial conditions The simulation starts at high redshift, $z=100$, with a small spherical density perturbation. The initial density fluctuation profile is set to be proportional to the linear correlation function of the assumed cosmological model, representing the typical perturbation under the assumption that the random fluctuation field is Gaussian (see Dekel, 1981, and Appendix C). The amplitude of the density fluctuation at the initial time, averaged over a given mass, determines the time of collapse, as desired. The initial velocity field is assumed to follow a quiet Hubble flow and the radial peculiar velocities build up in time. We assume the standard $\Lambda$CDM cosmology with $\Omega_{\rm m}=0.3$, $\Omega_{\Lambda}=0.7$, $h=0.7$ and $\sigma_{8}=1$. 3.5 Angular momentum We assume that in a real system the orbits of dark-matter particles, and the initial orbits of the gas particles, are quite elongated. Cosmological N-body simulations show that the velocity distribution tends to be more radial than tangential (Ghigna et al., 1998; Safran & Dekel, 2003), and even for an isotropic distribution the eccentricities are about 1:6. The processes we study in this paper occur away from the galactic disc at a radius on the order of the virial radius, namely in a regime where the centrifugal force can be expected to be negligible compared to the gravitational force and the gas pressure force. The prescribed angular momentum for the shells is thus mainly for numerical purposes, to avoid divergent densities of gas or dark matter shells when they pass through the halo centre. Our results concerning the virial shock are insensitive to the actual way by which we assign angular momentum to each shell. In the current study we in practice assume that the dark-matter particles are almost on radial orbits. 
The angular momentum of the gas is prescribed such that the shells, once they have lost their energy by radiation, would settle into an exponential disc with pure circular motions and a characteristic radius of a few kpc, smaller than the inner characteristic radius of the halo. Our spherical 'disc' thus contains gas that is cold and dense compared to the shocked gas. 3.6 Artificial viscosity It is impossible to follow the discontinuous behavior across the shock using the conventional continuity equation for the density and standard conservation of energy and momentum. The jump conditions can be calculated explicitly, as in eq. (3) to (6) (termed 'the characteristic method' or 'Godunov's method'). Alternatively, as proposed by von Neumann, one can slightly smear the discontinuities and then solve them within the framework of the standard hydrodynamic equations. By adding an artificial pressure term in a few shells around the shock, the differential equations become solvable and one can continue the calculation without affecting the energy and the dynamics of the shock (while its internal structure naturally changes). Artificial viscosity is applied when the inner and outer shell boundaries at $r_{1}$ and $r_{2}$ approach each other, $\Delta v=v_{2}-v_{1}<0$, and when the volume of the shell decreases, ${dV}/{dt}=4\pi(r_{2}^{2}v_{2}-r_{1}^{2}v_{1})<0$. The artificial viscosity then takes the form $$\sigma=a_{2}\rho(\Delta v)^{2}+a_{1}\rho c_{s}|\Delta v|\,.$$ (40) The quadratic term, the common form of artificial viscosity, smears discontinuities over about 3 shells. The linear term affects a slightly larger range, and is usually added with a smaller coefficient $a_{1}$. The coefficients $a_{1}$ and $a_{2}$ are varied for different shells in the course of the simulation in order to overcome a specific numerical problem in the cold 'disc', where the gravitational and centrifugal forces balance each other and the pressure force is negligible. 
In this case the gas is not a standard hydrodynamic gas because the pressure does not regulate large discontinuities, and information is not transported because of the low sound speed. When a 'disc' shell vibrates, it is artificially heated by the artificial viscosity in every contraction until its pressure grows and stops the process. If we are not careful to properly tune the artificial viscosity, we may end up with one 'disc' shell that has been heated to $10^{7}$ K while the rest of the 'disc' is at $10^{4}$ K. This imposes an undesired drastic decrease in the corresponding timestep. In order to overcome this numerical problem, we gradually turn off the quadratic term ($a_{2}$) of the artificial viscosity inside the 'disc'. We define a 'disc' radius $R_{\rm disc}$ to be the largest radius for which the difference between the gravitational and centrifugal forces is less than 1/4 of the gravitational force. Once at $r<0.6R_{\rm disc}$, we continuously decrease the parameter $a_{2}$ in eq. (40) by the factor $(r-0.3R_{\rm disc})/(0.3R_{\rm disc})$ and make it completely vanish at $r<0.3R_{\rm disc}$. This prescription was found by trial-and-error to properly solve the numerical problem in most cases. The linear term of the viscosity, being proportional to the speed of sound, is anyway very small in the cold 'disc', so effectively no artificial viscosity is applied in the inner 'disc'. Appendix A provides tests and examples of the hydrodynamic simulations in some detail. 4 Virial shock in the simulations 4.1 Existence of a virial shock We now investigate the formation of a virial shock using the spherical hydrodynamical simulations described above. We wish to test in particular the validity of the analytic stability criterion developed in §2. 
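The viscosity of eq. (40) and the disc taper described above can be sketched as follows (names are ours; the taper is read as a multiplicative factor on the default quadratic coefficient):

```python
def artificial_viscosity(rho, c_s, dv, dV_dt, a1, a2):
    """Quadratic + linear artificial viscosity, eq. (40), applied only when
    the shell is being compressed: boundaries approaching (dv < 0) and
    shell volume decreasing (dV/dt < 0)."""
    if dv < 0.0 and dV_dt < 0.0:
        return a2 * rho * dv**2 + a1 * rho * c_s * abs(dv)
    return 0.0

def a2_disc_taper(r, R_disc, a2_default):
    """Quadratic coefficient inside the 'disc': full value for r > 0.6*R_disc,
    a linear ramp down to zero between 0.6*R_disc and 0.3*R_disc, and zero
    inside 0.3*R_disc."""
    if r >= 0.6 * R_disc:
        return a2_default
    if r <= 0.3 * R_disc:
        return 0.0
    return a2_default * (r - 0.3 * R_disc) / (0.3 * R_disc)
```

Note that the ramp factor equals 1 exactly at $r=0.6R_{\rm disc}$, so the coefficient is continuous where the taper switches on.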
In order to mimic a typical perturbation in a random Gaussian field (Dekel 1981, Appendix B), the initial density-fluctuation profile was set to be proportional to the correlation function, normalized such that the mean density fluctuation in a sphere enclosing $M_{\rm i}=10^{11}M_{\odot}$ was $\delta_{\rm i}=0.09$ at $z=100$. For example, the shell encompassing $M\sim 3\times 10^{10}M_{\odot}$ is expected to collapse at $z=3$, and $M\sim 10^{12}M_{\odot}$ is expected to collapse at $z=0$. Fig. 2 shows the time evolution of the radii of Lagrangian shells in a simulation of the adiabatic case, with the cooling turned off. We find that a shock exists at all times. It appears as a sharp break in the flow lines, associated with a discontinuous decrease in infall velocity [eq. (4)]. Shown in the figure is the shock radius, defined by the outermost shell for which the inner and outer shell boundaries approach each other and the volume of the shell decreases [the same conditions that have been used for turning on the artificial viscosity in eq. (40)]. The shock gradually propagates outwards, encompassing more gas mass and dark matter in time. The gas below the shock is pressure supported and at quasi-static equilibrium. Not shown here are the dark-matter shells, which collapse, oscillate and tend to increase the gravitational attraction exerted on the gas shells. Shown in comparison is the evolution of the virial radius, computed from the simulation density as the radius within which the mean overdensity is $\Delta_{\rm v}$ times the mean cosmological background density. The virial overdensity $\Delta_{\rm v}$ is provided by the dissipationless spherical top-hat collapse model; it is a function of the cosmological model, and it may vary with time. 
For the Einstein-de Sitter cosmology, the familiar value is $\Delta_{\rm v}\simeq 178$ at all times. (This can be derived from the top-hat formalism of Appendix B, once the final radius is assumed to be fixed at half the maximum-expansion radius but the overdensity is evaluated at the time when the top-hat sphere would have collapsed to a singularity.) For the family of flat cosmologies ($\Omega_{\rm m}+\Omega_{\Lambda}=1$), the value of $\Delta_{\rm v}$ can be approximated by the formula of Bryan & Norman (1998), $$\Delta_{\rm v}\simeq(18\pi^{2}+82x-39x^{2})/(1+x)\,,$$ (41) where $x\equiv\Omega_{\rm m}(z)-1$, and $\Omega_{\rm m}(z)$ is the ratio of the mean matter density to the critical density at redshift $z$. For example, in the $\Lambda$CDM cosmological model that serves as the basis for our analysis in this paper ($\Omega_{\rm m}=0.3$, $\Omega_{\Lambda}=0.7$), the value at $z=0$ is $\Delta_{\rm v}\simeq 340$. We see in Fig. 2 that the shock radius almost coincides with the virial radius at all times. This is hardly surprising, as the shock is likely to appear at the outermost radius at which shell crossing first occurs, which is near the virial radius (to be demonstrated in Fig. 8 below). Fig. 3 shows the result of a similar simulation, but now with realistic radiative cooling for $Z=0$. We see that a stable shock does not exist in this case before $t=3.9$Gyr. During this period, the cooling makes the gas lose its pressure support and lets it collapse freely under gravity into the halo centre. The collapse is halted by the assumed angular momentum, in a ‘disc’ whose radius can be identified at the bottom of the plot by the abrupt change of the infalling flow lines into horizontal lines. The matter in the ‘disc’ is angular-momentum supported. As is visible in the figure, a shock, in the sense of a discontinuity in velocity and density, is present at the edge of the disc. Once the stability criterion is met, a shock forms and abruptly propagates outward.
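The virial-overdensity fit of eq. (41) above is straightforward to evaluate directly; a minimal sketch (the function name is ours) that recovers $18\pi^{2}$ for $\Omega_{\rm m}=1$ and $\simeq 340$ for $\Omega_{\rm m}=0.3$:

```python
import math

def delta_vir(omega_m_z):
    """Bryan & Norman (1998) fit, eq. (41): virial overdensity relative
    to the mean background matter density, for flat cosmologies."""
    x = omega_m_z - 1.0
    return (18.0 * math.pi**2 + 82.0 * x - 39.0 * x**2) / (1.0 + x)
```

The division by $(1+x)=\Omega_{\rm m}(z)$ converts the overdensity from a value relative to the critical density into one relative to the mean matter density, which is the convention used for the virial radius here.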
The propagation of the shock causes it to re-enter a regime for which $\gamma_{\rm eff}$ is below the critical value. Consequently, the post-shock gas becomes non-supportive again and falls. The oscillatory behavior of the shock continues with an increasing period until it stabilizes at the largest radius for which the stability criterion is met. The shock never expands beyond the virial radius because shells do not tend to cross there (Fig. 8 below). Fig. 4 shows the evolution of the total mass interior to the characteristic radii in the adiabatic and radiative simulations of Fig. 2 and Fig. 3. In the adiabatic case, the shock mass practically coincides with the virial mass, and the gas never forms a disc. In the radiative case, the shock is initially at the disc radius, and only after 3.9Gyr does it start propagating outwards. 4.2 Testing the stability criterion We first test the assumption made in §2.2 that the post-shock gas has a homologous velocity profile in the vicinity of the shock. In Figures 2 and 3 we see that the flow lines beneath the shock tend to be nearly parallel in the $\log r-t$ plane. This means that $d\log r/dt=u/r\simeq$const., namely homology, eq. (12). A similar behavior has been found in all the simulations that we have performed. Next, we use the simulations to test the validity of the stability criterion derived in §2, eq. (29), where $\gamma_{\rm eff}$ is given by eq. (27). Recall that $\gamma_{\rm eff}$ is expressed in terms of the pre-shock quantities. In order to map the value of $\gamma_{\rm eff}$ in the different regions of the free-falling gas in the forming halo, we ran the same simulation as in the previous section except that the cooling rate was set to be very high, such that the virial shock never develops. The top panel of Fig. 5 shows the flow lines in this case, overlaid with four contours of equal $\gamma_{\rm eff}$ values, evaluated via eq. (27) with eq. (30) and the cooling rate from Sutherland & Dopita (1993).
As shells fall into the halo, their $\gamma_{\rm eff}$ gradually increases. Also, as time progresses, the value of $\gamma_{\rm eff}$ at a given radius increases. By following the value of $\gamma_{\rm eff}$ just above the ‘disc’ radius (shown as the break at the bottom of the plot), in comparison with the critical value for stability $\gamma_{\rm crit}$, we can therefore use our model to predict when the virial shock should form. This is shown in the middle panel of Fig. 5. We see that at early times (and smaller masses) we have $\gamma_{\rm eff}<\gamma_{\rm crit}$, predicting no stable shock. The system is predicted to enter the stable-shock regime at about $t=3.9$Gyr, where $\gamma_{\rm eff}$ becomes larger than $\gamma_{\rm crit}$. A comparison with the realistic radiative simulation described in Fig. 3 shows that this model prediction is very accurate: the shock indeed starts forming at $t=3.9$Gyr, and is globally stable thereafter. Fig. 6 shows the actual evolution of $\gamma_{\rm eff}$ at either the ‘disc’ radius or the shock radius, whichever is larger, as computed directly from the pre-shock quantities in the simulation with realistic cooling. Shown at the same times are the characteristic masses, already shown in Fig. 4. The virial shock is first generated at time $t=3.9$Gyr, when the virial mass is $10^{11}M_{\odot}$. Starting at this time, the shock propagates outwards very rapidly. As a result of this fast expansion, the $\gamma_{\rm eff}$ of the pre-shock infalling matter at the shock, which decreases with $r$, drops below the threshold. The shock then loses its pressure support, becomes temporarily unstable, and its expansion slows down until it is eventually swept back on a dynamical time scale. The associated drop in total mass behind the shock, seen around $t=5.8$Gyr, is due to the fact that the dark matter is not swept back with the gas.
Once the shock has shrunk to a sufficiently small radius, $\gamma_{\rm eff}$ rises again above $\gamma_{\rm crit}$; the shock becomes stable again and resumes its expansion towards the virial radius. After the conditions for shock stability are first met, the shock is visible most of the time. In the rest of this paper, we treat haloes in this state as ones containing a virial shock. Fig. 7 presents results similar to Fig. 6 for two other simulations with different initial overdensities, and therefore different masses collapsing at different times. The small difference seen in one case between the $\gamma_{\rm eff}$ at which the shock actually forms and the predicted $\gamma_{\rm crit}$ may be entirely due to numerical inaccuracies in the simulation. Such inaccuracies may occur when a dark-matter shell crosses a gaseous shell, which, near the threshold, may lead to a slightly premature shock formation. This is seen in the convergence test of our code described in Appendix A, when we compare the shock formation times in Table 2. Thus, the model stability criterion is found to be valid within the accuracy of the simulations in all the cases studied, indicating that the model is not limited to a special range of masses and collapse times. 5 Shock stability in cosmology The analysis of §2 thus provides a successful criterion for shock stability, eq. (29), as a function of the pre-shock properties of the infalling gas at radius $r$: the density, velocity and metallicity. In order to apply this criterion to a given protogalaxy in a cosmological background, we wish to evaluate the gas density and velocity just before it hits the disc, for a gas shell initially encompassing a total mass $M$ that virializes at redshift $z_{\rm v}$. In this calculation we assume an Einstein-de Sitter cosmology, as a sensible approximation at $z>2$ (where $\Omega_{\rm m}>0.9$).
We assume a given universal baryonic fraction $f_{\rm b}$, and a global spin parameter $\lambda$ that determines the ratio of disc to virial radius. The initial mean density perturbation profile, $\bar{\delta}_{\rm i}(M)$, is given at some fiducial time in the linear regime; it is the average profile derived from the power spectrum of initial density fluctuations, as described in Appendix C. In the cosmological toy model used here we approximate the power spectrum as a power law, $P_{k}\propto k^{n}$, with $n\simeq-2.4$ to mimic the $\Lambda$CDM power spectrum on galactic scales. We follow gas shells from the initial perturbation until they approach the disc using a two-stage model. During the expansion, turn-around, and until an assumed virialization at half the maximum-expansion radius, we assume no shell crossing, so the total mass interior to the shell remains constant in time, and we follow the radius, density and velocity of the shell via the spherical top-hat model (see Appendix B). From the virial radius inwards we assume that the gas shells, which do not cross each other, contract inside the fixed potential well of an isothermal dark-matter halo. This idealized model involves several crude approximations, such as the instantaneous transition at the virial radius and the neglect of the angular momenta of the individual gas particles at small radii, but we show using spherical simulations that it predicts the minimum halo mass for which a stable shock first appears to an accuracy better than 25%. This allows us to use the model for exploring the critical mass as a function of parameters such as galaxy formation time, metallicity, spin parameter, fluctuation power spectrum, and baryonic fraction. 5.1 Toy model until virialization For a given shell $M$ and initial mean perturbation profile $\delta(M)$ [standing for the $\bar{\delta}_{\rm i}(M)$ of Appendix B], the top-hat model [eq. (70) and eq.
(64)] yields the implicit solution $$r(M,\eta)=r_{\rm v}(M)\,(1-\cos\eta)\,,$$ (42) $$t(M,\eta)=t_{\rm v}(M)\,(\eta-\sin\eta)\,,$$ (43) where the mass dependence enters via the virial quantities $$r_{\rm v}=C_{\rm i}\,M^{1/3}\delta(M)^{-1}\,,$$ (44) $$t_{\rm v}=r_{\rm v}/v_{\rm v}=C_{\rm i}^{3/2}G^{-1/2}\,\delta(M)^{-3/2}\,,$$ (45) with $v_{\rm v}^{2}=GM/r_{\rm v}$. The coefficient $C_{\rm i}=(6/\pi)^{1/3}(0.15/\rho_{\rm ui})^{1/3}$ is determined by $\rho_{\rm ui}$, the cosmological density at the initial time when $\delta(M)$ is given, independent of $M$. The velocity of the shell $M$ is $$u=\frac{\partial r/\partial\eta}{\partial t/\partial\eta}=v_{\rm v}\frac{\sin\eta}{(1-\cos\eta)}\,.$$ (46) At virialization, $\eta=3\pi/2$, it is simply $u=-v_{\rm v}$. In order to evaluate the local density, we follow the radii of two adjacent shells, encompassing masses $M$ and $M+dM$ respectively, at a given time $t$, e.g., the time when shell $M$ virializes (at half its maximum-expansion radius). Let $\eta$ correspond to shell $M$ at that time, and $\eta+d\eta$ to shell $M+dM$. In order to express $d\eta$ in terms of $dM$ we use the fact that the time $t$ is the same for the two shells: $0=dt=(\partial t/\partial M)dM+(\partial t/\partial\eta)d\eta$. Using eq. (43) this gives $$d\eta=\frac{3}{2}\frac{(\eta-\sin\eta)}{(1-\cos\eta)}\frac{\delta^{\prime}}{\delta}dM\,,$$ (47) where we denote $\delta^{\prime}\equiv d\delta/dM$. Expressing $dr$ in terms of $dM$ and $d\eta$ based on eq. (42), using eq. (47) and after some algebra we obtain $$\frac{dr}{r}=\frac{1}{3}\frac{dM}{M}\left(1-\frac{3M\delta^{\prime}}{\delta}\left[1-\frac{3}{2}\frac{\sin\eta(\eta-\sin\eta)}{(1-\cos\eta)^{2}}\right]\right)\,.$$ (48) At virialization of shell $M$, $\eta=3\pi/2$, the quantity in square brackets equals $(10+9\pi)/4$. Not surprisingly, if the initial perturbation is of uniform density, $\delta^{\prime}=0$, we are left with $dr/r=(1/3)(dM/M)$, the straightforward result of $M\propto r^{3}$.
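The parametric top-hat relations, eqs. (42), (43) and (46), and the square bracket of eq. (48) are easy to check numerically. A short sketch (function names are ours) confirming that at virialization, $\eta=3\pi/2$, the shell sits at $r=r_{\rm v}$ with $u=-v_{\rm v}$, and that the bracket then equals $(10+9\pi)/4$:

```python
import math

def tophat_r(eta, r_v):
    """Shell radius along the top-hat trajectory, eq. (42)."""
    return r_v * (1.0 - math.cos(eta))

def tophat_t(eta, t_v):
    """Cosmic time along the top-hat trajectory, eq. (43)."""
    return t_v * (eta - math.sin(eta))

def tophat_u(eta, v_v):
    """Shell velocity u = (dr/d eta)/(dt/d eta), eq. (46)."""
    return v_v * math.sin(eta) / (1.0 - math.cos(eta))

def bracket(eta):
    """The quantity in square brackets in eq. (48)."""
    return 1.0 - 1.5 * math.sin(eta) * (eta - math.sin(eta)) \
        / (1.0 - math.cos(eta))**2
```

Evaluating `bracket` at $\eta=3\pi/2$ gives $1+\tfrac{3}{2}(3\pi/2+1)=(10+9\pi)/4\simeq 9.57$, in agreement with the text.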
Recall that the virial radius $r_{\rm v}$ of shell $M$ can be obtained either from the universal density at the time of virialization using eq. (66), or from the initial perturbation using eq. (70). The desired local density $\rho$ can be obtained from eq. (48) via $dM=4\pi r^{2}\rho dr$. If the initial perturbation profile is a power law, $\delta(r)\propto r^{-(n+3)}$, then using $M\propto r^{3}$ we have $3M\delta^{\prime}/\delta=-(n+3)$. So finally $$\frac{dr}{r_{\rm v}}=\frac{1}{3}\frac{dM}{M}\left(1+(n+3)\frac{(10+9\pi)}{4}\right)\,.$$ (49) Eq. (48) [or eq. (49) in the power-law case] allows us to compute the desired radii of the two adjacent shells at the time of virialization of shell $M$. 5.2 Toy model after virialization Given the radii of the two adjacent shells at $r_{\rm v}$, we enter the shell-crossing regime and continue to follow the shells down to the disc radius by numerical integration. The shell radius $r$ and velocity $u$ are related via energy conservation. Assuming that the gas shells contract without crossing each other inside a dark-matter halo that is a fixed isothermal sphere, the total mass interior to the shell that originally encompassed a total mass $M$ is $$M(r)=f_{\rm b}M+(1-f_{\rm b})\frac{M}{r_{\rm v}}r\,,$$ (50) and the gravitational potential at $r$ is $$\phi(r)=-\frac{GM(r)}{r}-(1-f_{\rm b})\frac{GM}{r_{\rm v}}\ln\left(\frac{r_{\rm v}}{r}\right)\,.$$ (51) The integration is performed by advancing $r$ according to the velocity $u$ and then recalculating $u$ according to energy conservation: $$(1/2)u^{2}+\phi(r)=(1/2)v_{\rm v}^{2}+\phi(r_{\rm v})={\rm const.}$$ (52) We follow shell $M$ for the time it falls from $r=r_{\rm v}$ to the disc radius $r=\lambda r_{\rm v}$, and shell $M+dM$ for the same time interval.
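The post-virialization integration of eqs. (50)-(52) reduces to a few lines. Below is a minimal sketch in units with $G=M=r_{\rm v}=1$ (so $v_{\rm v}=1$); the values of $f_{\rm b}$, $\lambda$ and the step size are illustrative choices of ours:

```python
import math

# Infall of a gas shell inside a fixed isothermal dark-matter halo,
# after eqs. (50)-(52).  Units with G = M = r_v = 1, so v_v = 1.
G = M = r_v = 1.0
f_b = 0.13                      # illustrative baryonic fraction
v_v = math.sqrt(G * M / r_v)

def M_interior(r):
    """Total mass interior to the gas shell, eq. (50)."""
    return f_b * M + (1.0 - f_b) * M * r / r_v

def phi(r):
    """Gravitational potential of the isothermal halo, eq. (51)."""
    return -G * M_interior(r) / r \
        - (1.0 - f_b) * G * M / r_v * math.log(r_v / r)

def fall_time(lam=0.05, dt=1e-4):
    """Advance r with the velocity u recomputed from energy
    conservation, eq. (52), from r_v down to the disc radius
    lam * r_v; return the elapsed time."""
    E = 0.5 * v_v**2 + phi(r_v)
    r, t = r_v, 0.0
    while r > lam * r_v:
        u = -math.sqrt(2.0 * (E - phi(r)))
        r += u * dt
        t += dt
    return t
```

Because $u$ is recomputed from the conserved energy at every step rather than integrated, the scheme cannot drift away from eq. (52), exactly as described in the text.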
Denoting the separation between the shells at the end of this time interval by $dr$, we compute the desired gas density by $$\rho=\frac{f_{\rm b}}{4\pi r^{2}}\frac{dM}{dr}\,.$$ (53) The resultant values of $r$, $u$ and $\rho$ are inserted into eq. (28) in order to obtain an approximation for $\gamma_{\rm eff}$ and then to evaluate stability by eq. (24). This allows us to check stability for the case where mass $M$ virializes at redshift $z_{\rm v}$, with metallicity $Z$, spin parameter $\lambda$, baryonic fraction $f_{\rm b}$, and a given power spectrum. 5.3 Model versus simulations In Fig. 8 and Fig. 9 we compare the evolution of the quantities of a given gas shell according to the toy model described in the previous subsections and according to the spherical hydro simulation described in the earlier sections. We follow a specific shell that hits the disc at about $t\simeq 3.8$Gyr, just before the shock starts propagating into the halo [see Fig. 3]. The quantities shown as a function of radius $r$ are the total mass $M$ interior to $r$, the radial velocity $u$, the gas density $\rho$, and the corresponding value of $\gamma_{\rm eff}$. For $M$ and $u$ the evolution starts at the top-left corner and ends at the bottom left, while for $\rho$ the upper part of the curve corresponds to the expansion phase and the lower part to the contraction phase. The evolution of $\gamma_{\rm eff}$ is followed only during part of the contraction phase. In Fig. 8 we calibrate the toy model to match the simulation at the maximum-expansion radius. We see that while the mass interior to the shell is reproduced by the model only to a limited accuracy in the last stages of the collapse, the velocity, density and the resulting value of $\gamma_{\rm eff}$ are recovered very well by the model. This allows us to predict quite accurately the point where $\gamma_{\rm eff}$ exceeds $\gamma_{\rm crit}$.
Since we wish to use the toy model without an exact knowledge of the conditions at maximum expansion, we normalize the model evolution in Fig. 9 based on $M$ and $z_{\rm v}$. The slight deviations in $u$ and in $\rho$ now translate into a larger error in $\gamma_{\rm eff}$. The error in the toy model originates mostly from the slight ambiguity in the definition of the virial radius. On one hand we assume it to equal half the maximum-expansion radius, and on the other hand we assume it to represent an overdensity of $\Delta_{\rm v}$ as in eq. (41). These two assumptions are not fully consistent with the actual behavior of the virializing system in the simulation. Nevertheless, we see below that our approximate model allows us to estimate the critical halo mass below which the shock does not form to an accuracy of better than 25%, which is quite satisfactory for our purpose here. 5.4 Critical mass for shock formation Fig. 10 shows for several different cases the critical halo mass, below which a shock does not propagate into the halo, versus the redshift at which this critical mass virializes. For each case we compare the model prediction to the shock formation as actually seen in the simulation. The cases differ by the mean metallicity, $Z=0$ and $Z=0.05$ for the lower and upper sets of points respectively, and by the amplitude of the initial perturbation, corresponding to a range of shock-formation redshifts at every given $Z$. The assumed baryonic fraction is always $f_{\rm b}=0.13$, but the assumed spin parameter may be different for the different shells in a given simulation because we set it for each shell such that the final disc has an exponential surface density profile. However, the $\lambda$ values vary in the range $0.02$ to $0.05$, compatible with the distribution of spin parameter in cosmological simulations (Bullock et al. 2001). 
We see that the model predicts the critical mass with an accuracy better than 25%, such that we can use it for mapping the parameter space in more detail. Fig. 11 shows, for several different choices of parameters, the critical halo mass for shock formation versus the halo virialization redshift as predicted by the model. A virial shock does not form in haloes of masses below the line. The lines are not always monotonic due to the non-monotonic features in the cooling curves (Fig. 1). Shown in comparison is $M_{\star}$, the characteristic mass for haloes forming at $z$ according to the Press-Schechter approximation (Lacey & Cole, 1993). The default values of the parameters, used unless specified otherwise, are $Z=0$, $\lambda=0.05$, $f_{\rm b}=0.13$ and $n=-2.4$. The upper panel has the metallicity varying from $Z=0$ to $Z=0.3$. The critical mass tends to be higher at higher redshifts (especially for $Z=0$) because the higher density implies more efficient cooling. It is striking that even for the case of zero metallicity, for which the cooling is not at its maximum efficiency, an $M_{\star}$ halo cannot produce a shock until a relatively late redshift, $z\simeq 2.1$. The addition of a small amount of metals, $Z=0.05$, increases the cooling rate significantly (see Fig. 1), such that $M_{\star}$ haloes start producing virial shocks only after $z\simeq 1.6$. The second panel has $\lambda$ varying as marked. The shock forms slightly earlier if the disc is smaller (lower $\lambda$), because the conditions become more favorable for shock formation closer to the centre. At high redshifts the increase in infall velocity happens to balance out the increase in density, temperature and cooling rate. The post-shock temperature there is a few $10^{6}$K. The bottom line is that the critical mass is not too sensitive to $\lambda$. The third panel has $f_{\rm b}$ varying as marked.
The critical mass is monotonic with the baryonic fraction because the cooling rate is monotonic with gas density. The parameter $f_{\rm b}$ can be interpreted as the fraction of the baryons that actually take part in the shock formation. This can be smaller than the universal baryonic fraction if some of the gas falls into the halo in the form of dense clumps. Even with $f_{\rm b}$ as low as $0.05$, meaning that most of the gas is not participating in the cooling, an $M_{\star}$ halo would not produce a shock until $z\simeq 2.4$. The conclusion is that the critical mass is not too sensitive to $f_{\rm b}$ either. The bottom panel explores three values of the initial power index $n$ approximating the power spectrum of $\Lambda$CDM on galactic scales. The dependence of the critical mass on $n$ in this regime is weak. 6 Discussion The heating of the gas behind a virial shock in haloes has been a basic component of galaxy formation theory (Rees & Ostriker, 1977). We studied the conditions for the existence of such a virial shock in spherical haloes. We first pursued an analytic stability analysis in the presence of cooling, and then demonstrated its validity using high-resolution spherical hydrodynamical simulations. The obtained criterion for shock stability in terms of the post-shock quantities is $$\gamma_{\rm eff}\equiv\frac{d\ln P/dt}{d\ln\rho/dt}>10/7\,.$$ (54) In terms of the pre-shock gas properties, this condition reads $$\frac{\rho_{0}r_{\rm s}\Lambda(T_{1})}{|u_{0}|^{3}}<0.0126\,,$$ (55) where $\rho_{0}$ and $u_{0}$ are the gas density and infall velocity in front of the shock, $r_{\rm s}$ is the shock radius, $\Lambda(T)$ is the cooling function, which depends on the metallicity $Z$, and $T_{1}\propto u_{0}^{2}$ is the post-shock temperature as a function of the pre-shock infall velocity. Based on this criterion, we find that a virial shock forms only in massive haloes forming at late redshifts.
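The pre-shock criterion of eq. (55) is a one-line check once the four quantities are known in a consistent unit system; a minimal sketch (the function name is ours, and the numbers in the test below are dimensionless illustrations, not physical values):

```python
def shock_is_stable(rho0, r_s, Lambda_T1, u0):
    """Pre-shock form of the stability criterion, eq. (55).

    rho0: pre-shock gas density; r_s: shock radius; Lambda_T1: cooling
    function evaluated at the post-shock temperature T1; u0: pre-shock
    infall velocity.  All inputs must use one consistent unit system
    (e.g. cgs)."""
    return rho0 * r_s * Lambda_T1 / abs(u0)**3 < 0.0126
```

For fixed density, radius and cooling rate, faster infall (larger $|u_{0}|$) drives the combination below the threshold and thus favors a stable shock, in line with the balance between infall velocity and cooling discussed above.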
A virial shock does not form in smaller haloes forming early, where the cooling behind the shock efficiently removes its pressure support. For example, we find that most galactic haloes that have collapsed and virialized by $z\sim 2$ did not produce a virial shock. Haloes less massive than $\sim 10^{11}M_{\odot}$ never produce a shock, even if the gas is of zero metallicity. If the metallicity is non-negligible (e.g. $Z\sim 0.05$), this lower bound to shock formation rises to $\sim 7\times 10^{11}M_{\odot}$. When a shock does not exist, the gas is not heated to the halo virial temperature until it falls all the way to the disc at the inner halo. Forcada-Miro & White (1997), in an unpublished work, have independently pursued a numerical analysis along similar lines, involving a more detailed treatment of the cooling processes involved. They also find that the virial shock radius is significantly reduced due to the cooling in haloes of small masses, $M<10^{11}M_{\odot}$. In their case the shock never completely disappears because of a different feature in their numerical scheme; they put all the cooled post-shock gas in one central ‘shell’ to avoid numerical difficulties at the centre. This makes the inner boundary of the system follow the shock quite closely in cases where there is efficient cooling behind the shock, and allows the presence of a small-radius shock even in such cases. Overall, our numerical results are in encouraging agreement, and our analytic model provides a natural explanation for their numerical results as well. The most severe uncertainty when attempting to apply our results to real galaxies arises from the assumed spherical symmetry in both the model and the numerical simulations. The validity of this approximation for the asymmetric halo configurations in the hierarchical clustering scenario is an open question to be addressed in future work. Nevertheless, we notice that Katz et al.  (2001) and Fardal et al. 
(2001) observe in their cosmological simulations that a large fraction of the mass accreted onto haloes indeed remains cold and is never heated to the virial temperature. Toft et al.  (2002) find in their simulations, using a treeSPH code similar to that used by Katz et al.  (2001), that the soft X-ray radiation is mainly emitted from within the innermost 20 kpc of their haloes, well inside the virial radius, in encouraging agreement with our results. On the other hand, it is not obvious that the resolution in these simulations is adequate for studying the shock physics involved; our estimates indicate that three-dimensional simulations with proper resolution are not practical at present (Appendix A). Another complication may arise from radiative effects. Even when there is no virial shock, the kinetic energy of the gas eventually turns into radiation when the gas infall motion is brought to a halt at the disc. At such densities, the width of the shock front is much smaller than the width of the cooling front behind the shock, $\sim 10^{-2}$pc versus $\sim 10^{2}$pc. Thus, the gas in a thin shell behind the shock is heated to a temperature corresponding to its kinetic energy, and it cools by radiating soft X-rays. The X-ray radiation is expected to generate an ionized $\rm H\,II$ bubble, in which the ionization rate balances the recombination rate. The Strömgren radius of this bubble is relatively small, of the order of a few kiloparsecs, because the high gas density implies a high recombination rate. The recombination process then generates a flux of Ly$\alpha$ radiation, emitted at the inner few kpc of the halo. A naive inspection of cross sections might indicate that the Ly$\alpha$ radiation would be trapped inside the halo. This could in principle affect the shock stability in three different ways: by increasing the radiation pressure, by heating up the infalling matter, and by slowing down the radiative cooling responsible for the shock instability.
It has been argued by Rees & Ostriker (1977) that the radiation pressure at these low temperatures must be insignificant compared to the gas pressure, even if all the internal energy were drained from the baryons into the radiation field. One might add that since the radiation pressure behaves like a $\gamma=4/3$ gas, it could at most make the system marginally stable. When work is done on the radiation field, any leakage of radiation outwards would turn the energy into cooling rather than $PdV$ work, and would thus reduce the effective $\gamma$, making the system unstable. Partial heating of the infalling gas should not affect our analysis as long as the temperature of the infalling matter is significantly below the virial (post-shock) temperature, such that the strong-shock approximation remains valid. The effect of the reduced cooling rate is yet to be investigated. In practice, we do not expect the radiation trapping to be very efficient, because the effective opacity is reduced by thermal broadening and by the systematic blue shift due to the gas infall motion. When the opacity is high, the radiation heats up the gas, which enhances the thermal broadening and the collisional ionization rate. This reduces the opacity and allows for radiation escape. The system is likely to reach a steady state in which it gradually cools. This process is under current investigation. Feedback effects may further complicate the picture and affect shock stability. The energy fed back to the gas from stars, supernovae and AGNs may heat the halo gas and expel part of it. Merging substructures may have additional complicated effects. These effects cannot be captured by our idealized spherical analysis, and a proper study would probably require high-resolution three-dimensional hydrodynamical simulations.
While observations and certain theoretical considerations indicate that feedback effects are likely to be important in galaxies as large as $\sim 10^{11}M_{\odot}$ and may thus affect the shock stability (e.g. Dekel & Silk 1986; Dekel & Woo 2003, and references therein), it has proven difficult for numerical simulations to reproduce such effects in any but small dwarf galaxies, indicating that feedback effects may not be so crucial for the understanding of shock stability. Until the dust settles on the role of feedback effects, our preliminary conclusions based on the spherical analysis should be taken with a grain of salt. The general absence of a virial shock might have three direct implications, which we study in associated papers. First, as explained above, when the gas is heated at the disc rather than near the halo virial radius, the generated X-ray radiation serves to ionize the gas and is not emitted outwards. The result would be a suppression of the X-ray emission in the range $5\times 10^{5}$ to $2\times 10^{6}$K. This may help explain the missing X-ray problem pointed out by Pen (1999) and Benson et al.  (2000). Pen (1999) argues that there is an order-of-magnitude discrepancy between the soft X-ray flux as observed by Cui et al.  (1996), after subtracting the contribution of quasars, and the predicted flux from haloes constructed by a Press-Schechter hierarchical model under the assumption of shock heating to the virial temperature. Second, the infall energy, via the ionizing X-rays, is efficiently transformed into Ly$\alpha$ radiation at the inner few kpc of the halo. A related increase in the Ly$\alpha$ flux has indeed been seen in the cosmological simulations of Fardal et al.  (2001). This may explain the observed high flux of Ly$\alpha$ emitters at high redshift (e.g. Pentericci et al. , 2000, 2001; De Breuck et al. , 2000). Based on the high observed flux and the assumption that the Ly$\alpha$ is emitted from stars, Pentericci et al. 
(2000) estimate large masses for the Ly$\alpha$ emitters, but the much higher flux per unit mass predicted by our model may lead to significantly lower mass estimates. Based on our analysis, most of the Ly$\alpha$ flux is expected to be emitted from the inner few kpc of the halo, where the gas is at $\sim 10^{4}$K. Neglecting line shifts and broadening, the halo might be opaque to Ly$\alpha$, thus eventually emitting its energy from an outer photosphere where the halo becomes transparent. However, a careful study of the thermal broadening and the systematic redshifts within the halo is required in order to determine whether the system is opaque or transparent to the Ly$\alpha$ photons. This is the subject of an ongoing investigation. Finally, the direct collapse of cold gas into the disc may have interesting theoretical consequences to be worked out. It may induce an efficient starburst, in analogy to the burst originating in the shock between two colliding gas clouds. In turn, the strong inward flow of gas may prevent efficient gas removal by supernova-driven winds. In particular, current cosmological semi-analytic models (SAMs) of galaxy formation (Kauffmann et al. , 1993; Kauffmann et al. , 1999; Cole et al. , 1994; Somerville & Primack, 1999; Maller et al. , 2001, and related works) use the standard picture of heating behind a virial shock in their modeling. This has strong effects on the disc formation rate, star formation rate, feedback, etc. Other semi-analytic models (Efstathiou, 2000; White & Frenk, 1991) also appeal to the slow gas infall rate as a mechanism that regulates the gas input into the disc. Since the cooling time for a $10^{11}M_{\odot}$ halo is relatively short, the SAM predictions for such haloes may be only slightly affected by the inhibition of heating.
However, given some metal enrichment, no heating is expected for haloes as massive as $\sim 7\times 10^{11}M_{\odot}$, for which the cooling time is longer, and the effect on the SAM predictions may be more severe. Shocks, when present, are also expected to alter the properties of the gas, for example by destroying dust particles. These effects can change SAMs that incorporate dust extinction. Acknowledgments We acknowledge advice from Z. Barkat, E. Livne and J. Ostriker, and stimulating discussions with S. Balberg, E. Bertschinger, T. Broadhurst, D. Gazit, Y. Hoffman, W. Mathews, A. Nusser, N. Shaviv, and S.D.M. White. This research has been supported by the Israel Science Foundation grant 213/02, by the German-Israel Science Foundation grant I-629-62.14/1999, and by NASA ATP grant NAG5-8218. References Bardeen et al.  (1986) Bardeen J. M., Bond J. R., Kaiser N., Szalay A. S., 1986, ApJ, 304, 15 Benson et al.  (2000) Benson A.J., Bower R.G., Frenk C.S., White S.D.M., 2000, MNRAS, 314, 557 Bertschinger (1985a) Bertschinger E., 1985, ApJS, 58, 1 Bertschinger (1985b) Bertschinger E., 1985, ApJS, 58, 39 Breuck et al.  (2000) De Breuck C., Rottgering H., Miley G., van Breugel W., Best P., 2000, A&A, 362, 519 Bryan & Norman (1998) Bryan G.L., Norman M.L., 1998, ApJ, 495, 80 Cole et al.  (1994) Cole S., Aragón-Salamanca A., Frenk C., Navarro J., Zepf S., 1994, MNRAS, 271, 781 Cui et al.  (1996) Cui W., Sanders W.T., McCammon D., Snowden S.L., Womble D.S., 1996, ApJ, 486, 117 Cox (1980) Cox J.P., 1980, Theory of Stellar Pulsation (Princeton University Press) Dekel (1981) Dekel A., 1981, A&A, 101, 79 Dekel & Silk (1986) Dekel A., Silk J., 1986, ApJ, 303, 39 Dekel & Woo (2002) Dekel A., Woo J., 2002, submitted (astro-ph/0210454) Efstathiou (2000) Efstathiou G., 2000, MNRAS, 317, 697 Efstathiou et al.  (1992) Efstathiou G., Bond J. R., White S. D. M., 1992, MNRAS, 258, 1P Fardal et al.  (2001) Fardal M. A., Katz N., Gardner J. P., Hernquist L., Weinberg D. 
H., Davé R., 2001, ApJ, 562, 605 Forcada-Miro & White (1997) Forcada-Miro M. I., White S. D. M., 1997, astro-ph/9712204 (unpublished) Ghigna et al.  (1998) Ghigna S., Moore B., Governato F., Lake G., Quinn T., Stadel J., 1998, MNRAS, 300, 146 Katz et al.  (2001) Katz N., Keres D., Davé R., Weinberg D.H., 2001, astro-ph/209279 Kauffmann et al.  (1993) Kauffmann G., White S.D.M., Guiderdoni B., 1993, MNRAS, 264, 201 Kauffmann et al.  (1999) Kauffmann G., Colberg J. M., Diaferio A., White S. D. M., 1999, MNRAS, 303, 188 Lacey & Cole (1993) Lacey C., Cole S., 1993, MNRAS, 262, 627 Maller et al.  (2001) Maller A. H., Prochaska J. X., Somerville R. S., Primack J. R., 2001, MNRAS, 326, 1475 Peebles (1993) Peebles P.J.E., 1993, Principles of Physical Cosmology (Princeton University Press) Pen (1999) Pen U. L., 1999, ApJL, 510, L1 Pentericci et al.  (2000) Pentericci L., Kurk J.D., Rottgering H.J.A., Miley G.K., van Breugel W., Carilli C.L., Ford H., Heckman T., McCarthy P., Moorwood A., 2000, A&A, 361, L25 Pentericci et al.  (2001) Pentericci L., Kurk J.D., Rottgering H.J.A., Miley G.K., Venemans B.P., 2001, ASP Conference Series, astro-ph/0110223 Press (1997) Press W. H., Teukolsky S. A., Vetterling W. T., Flannery B. P., 1997, Numerical Recipes in Fortran 77 (Cambridge University Press) Rees & Ostriker (1977) Rees M.J., Ostriker J.P., 1977, MNRAS, 179, 541 Safran & Dekel (2003) Safran M., Dekel A., 2003, in preparation Somerville & Primack (1999) Somerville R.S., Primack J.R., 1999, MNRAS, 310, 1087 Sutherland & Dopita (1993) Sutherland R., Dopita M., 1993, ApJS, 88, 253 Toft et al.  (2002) Toft S., Rasmussen J., Sommer-Larsen J., Pedersen K., 2002, MNRAS, 335, 799 White & Frenk (1991) White S.D.M., Frenk C., 1991, ApJ, 379, 52 Zel’dovich & Raizer (1966) Zel’dovich Ya. B., Raizer Yu. 
P., 1966, Physics of Shock Waves and High-Temperature Hydrodynamic Phenomena (Academic Press) Appendix A Testing the hydro code The numerical code, Hydra, has been developed specifically for simulating the evolution of a single spherical halo through collapse and feedback processes. A proper computation of the cooling and shock formation requires high precision. In this appendix we describe a few of the tests performed in order to verify that the code works properly. In the following three subsections we test for energy conservation, spatial convergence, and the performance of the code in a self-similar case. A.1 Energy Conservation Our numerical scheme does not use the total energy equation in the integration of the partial differential equations. Furthermore, the total energy of the system is not a straightforward sum of other variables that are involved in the calculation. The requirement of energy conservation is therefore an independent test for the accuracy of the numerical scheme. Energy conservation is harder to achieve than spatial convergence for several reasons. First, the error in total energy is systematic, in the sense that when dark-matter shells cross each other the energy tends to increase. Second, since our system is only marginally bound, the total energy is a small difference between two large quantities. We notice that energy conservation is simpler to achieve when there is no cooling, or when dark matter is absent (and thus there is no shell crossing). The total energy of the system at time $t$ is the sum of terms: $$E=K_{\rm d}+T_{\rm d}+K_{\rm g}+T_{\rm g}+U+Q\,,$$ (56) where subscripts d and g refer to dark matter and gas respectively, $K$ stands for kinetic energy, $T$ stands for potential energy, $U$ is the gas internal energy, and $Q$ is the thermal energy lost to radiation by time $t$. 
For the dark matter, these are straightforward sums over the discrete dark-matter shells: $$K_{\rm d}=\frac{1}{2}\sum_{i=1}^{n_{\rm d}}\Delta M\left(v_{i}^{2}+\frac{j_{i}^{2}}{r_{i}^{2}}\right)\,,$$ (57) $$T_{\rm d}=-\sum_{i=1}^{n_{\rm d}}\frac{G\,\Delta M\,{{\cal M}}_{i}}{r_{i}+a}\,,$$ (58) where ${{\cal M}}_{i}$ is the total mass interior to dark-matter shell $i$, as defined in §3. For the gas shells, recall that the quantities $r$, $v$ and $j$ are given at the inner and outer shell boundaries, $i-1$ and $i$ respectively, so we compute the shell energies by averaging over the two boundary values: $$K_{\rm g}=\frac{1}{2}\sum_{i=1}^{n_{\rm g}}\Delta m_{i}\left(\frac{v_{i}r_{i}^{3}+v_{i-1}r_{i-1}^{3}}{r_{i}^{3}+r_{i-1}^{3}}\right)^{2}+\frac{1}{2}\sum_{i=1}^{n_{\rm g}}\Delta m_{i}\left(\frac{j_{i}r_{i}^{2}+j_{i-1}r_{i-1}^{2}}{r_{i}^{3}+r_{i-1}^{3}}\right)^{2}\,,$$ (59) $$T_{\rm g}=-\sum_{i=1}^{n_{\rm g}}\frac{G\,\Delta m_{i}\,({{\cal M}}_{i}+{{\cal M}}_{i-1})/2}{[(r_{i}^{3}+r_{i-1}^{3})/2]^{1/3}+a}\,.$$ (60) The internal energy is a straightforward sum $$U=\sum_{i=1}^{n_{\rm g}}e_{i}\Delta m_{i}\,.$$ (61) The energy radiated away, $Q=\int dt\int q\,dm$, is computed by $$Q=\sum_{j=1}^{n_{\rm t}}\Delta t^{j}\sum_{i=1}^{n_{\rm g}}q^{j}_{i}\Delta m_{i}\,,$$ (62) where $\Delta t^{j}$ is the length of timestep $j$, and $q^{j}_{i}$ is the cooling rate in shell $i$ at timestep $j$ (in units of ${\rm erg}\mbox{ }{\rm g}^{-1}{\rm s}^{-1}$). In a run with 10,000 dark-matter shells and 2,000 gas shells, we require and obtain energy conservation at the level of 1% in a Hubble time, using a typical Runge–Kutta timestep of about $5\times 10^{-6}\,\mbox{Gyr}$. (Such a run takes about 10 hours on an Alpha-6 DEC processor.) We check the conservation first by varying the accuracy parameters presented in §3.3, and then by varying the number of shells. 
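The discrete sums (57)–(58) can be exercised on a synthetic configuration. The sketch below (plain Python with illustrative units; not code from the Hydra code itself) checks that for equal-mass shells on circular orbits with zero softening the sums satisfy the virial relation $2K_{\rm d}+T_{\rm d}=0$ exactly:

```python
import numpy as np

# Sanity check of the discrete dark-matter sums (57)-(58) on a toy
# configuration.  For shells on circular orbits with zero softening,
# a = 0, the sums must satisfy the virial relation 2 K_d + T_d = 0.
G = 4.30e-6                  # kpc (km/s)^2 / Msun (illustrative units)

def energy_sums(r, v, j, dM, Mint, a=0.0):
    """Eqs. (57)-(58): kinetic and potential energy of equal-mass shells."""
    K = 0.5 * np.sum(dM * (v**2 + (j / r) ** 2))
    T = -np.sum(G * dM * Mint / (r + a))
    return K, T

n = 1000
r = np.linspace(1.0, 100.0, n)            # shell radii, kpc
dM = 1.0e8                                 # equal shell mass, Msun
Mint = dM * np.arange(1, n + 1)            # mass interior to shell i
v = np.zeros(n)                            # no radial motion
j = np.sqrt(G * Mint * r)                  # circular-orbit angular momentum
K, T = energy_sums(r, v, j, dM, Mint)
print(2.0 * K + T)                         # tiny compared with |T|
```

For circular orbits $j_{i}^{2}/r_{i}^{2}=G{\cal M}_{i}/r_{i}$, so the identity holds term by term, making this a cheap regression test for the assembly of the energy budget.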
The three cases presented in Table 1 demonstrate that the results converge when the accuracy is increased. The simulations shown in this table are of the standard case with realistic cooling shown in Fig. 3. When cooling is shut off, energy conservation is much better. With the nominal choice of accuracy parameters the final energy is $0.9999$ of the initial energy. A.2 Spatial Convergence: 3D versus 1D A proper treatment of the competition between the pressure increase due to contraction and the pressure decrease due to cooling requires high temporal and spatial resolution. In particular, when the spatial resolution is increased, the shock appears earlier. Table 2 shows results from simulations of the case with realistic cooling (Fig. 3), all with the same accuracy criteria ($\epsilon_{\rm c}$, $\epsilon_{\rm rk}$, and $t_{sc}$), but with different spatial resolutions. The average distance between gas shells near the center ranges from about $80$pc to $2$kpc. With the poorest resolution of 125 gas shells the virial shock appears almost immediately after the virialization of the first shells of the simulation. The energy changed by about 75% during this simulation. Even if we assume that the precision of a 3D calculation is as good as that of an analogous 1D calculation (actually SPH codes converge slower than finite element schemes for problems involving shocks), we still need to cube the number of particles or grid points in order to achieve the same resolution. A three-dimensional simulation with $2\times 10^{6}$ gas particles and $2.5\times 10^{8}$ dark-matter particles, which is close to the limit of what is computationally feasible today, would correspond to the unsatisfactory case with the lowest spatial resolution in Table 2. 
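The cube-root correspondence invoked above can be made explicit. The gas number matches the 125-shell run exactly; the implied dark-matter shell count ($\approx 630$) is our back-of-the-envelope inference, not a number stated in the text:

```python
# Matching an n-shell 1D run in 3D requires roughly n^3 particles per
# component.  125 gas shells correspond to ~2e6 gas particles, as quoted;
# the equivalent dark-matter shell count for 2.5e8 particles (~630) is
# our inference, not a value given in the text.
gas_shells = 125
gas_particles_3d = gas_shells**3                     # ~2e6
dm_particles_3d = 2.5e8
dm_shells_equiv = round(dm_particles_3d ** (1.0 / 3.0))
print(gas_particles_3d, dm_shells_equiv)
```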
A.3 A self-similar case When the initial conditions are scale free (unlike the initial conditions assumed in the body of this paper, motivated by $\Lambda$CDM), and when the cooling function is also scale free (unlike the realistic cooling function used above), the results should be self similar. This can provide a test for the accuracy of our numerical code. We follow Bertschinger (1985a, b) in using an initial perturbation consisting of a point-mass embedded in a uniform-density background. Far from the point mass, the system should be self similar. We ran a simulation of such a case using our code with gas only ($f_{\rm b}=1$) and no cooling, starting at $z=200$ with an overdensity of 10 inside the innermost $2$kpc. The upper panel of Fig. 12 shows the density profile at different times. As expected, a shock appears at every time as a density jump by a factor of 4 [$=(\gamma+1)/(\gamma-1)$ for $\gamma=5/3$], and the post-shock gas settles to a complete rest after it is shocked. (The slope of the post-shock density profile is somewhat different from Bertschinger (1985b), because our calculation assumes a $\Lambda$CDM cosmology rather than the Einstein-deSitter assumed by Bertschinger.) The lower panel shows the same profiles after they were scaled to the same time (5Gyr) according to the scaling relation of Bertschinger (1985b): $r\propto t^{8/9}$. We see that our simulations recover the expected scaling relation almost perfectly. Fig. 13 shows an analogous test for the case where both gas and dark matter are present, with $f_{\rm b}=0.13$. The results are similar except for the somewhat higher noise level caused by the dark-matter component. Appendix B Top-Hat Model Consider a bound spherical perturbation encompassing mass $M$, whose mean density fluctuation profile at some fiducial initial time in the linear regime is $\bar{\delta}_{\rm i}(M)$, embedded in an Einstein-deSitter (EdS) cosmological background when the universal expansion factor is $a_{\rm i}$. 
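The rescaling used for the lower panel of Fig. 12 can be sketched as follows. Radii sampled at different epochs are rescaled to a common reference time with the Bertschinger (1985b) relation $r\propto t^{8/9}$, after which self-similar profiles collapse onto one curve. The profile shape below is an arbitrary power-law stand-in, not actual simulation output:

```python
import numpy as np

# Toy self-similar profile rho(r, t) = F(r / t^(8/9)); after rescaling
# radii to a common reference time t_ref, all epochs collapse onto the
# same curve.  F is an arbitrary stand-in shape.
def rho(r, t):
    lam = r / t ** (8.0 / 9.0)        # self-similar variable
    return lam ** (-2.25)             # illustrative power-law shape F(lam)

t_ref = 5.0                            # Gyr, reference epoch of Fig. 12
r = np.logspace(-1.0, 2.0, 50)
collapsed = []
for t in (1.0, 2.5, 5.0, 10.0):
    r_scaled = r * (t_ref / t) ** (8.0 / 9.0)   # rescale radii to t_ref
    collapsed.append((r_scaled, rho(r, t)))
```

By construction, each rescaled profile coincides with the profile evaluated directly at $t_{\rm ref}$, which is the collapse seen in the figure.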
We wish to express the shell radius $r$ as a function of time in terms of $M$. The implicit solution for a closed “mini-universe”, via a conformal time parameter $\eta$ specific to this perturbation, is $$r=r_{\rm v}(1-\cos\eta)\,,$$ (63) $$t=\frac{r_{\rm v}}{v_{\rm v}}(\eta-\sin\eta)\,.$$ (64) Maximum expansion occurs when $\eta=\pi$, and then a collapse to half this radius, which we identify with virialization, is obtained at $\eta=3\pi/2$, with virial radius $r_{\rm v}$ and corresponding virial velocity $v_{\rm v}^{2}={GM}/{r_{\rm v}}$. We normalize the universal expansion factor $a$ by identifying it at the initial time with the shell radius $r$. Assuming $\eta_{\rm i}\ll 1$, we have $r_{\rm i}\simeq(1/2)r_{\rm v}\eta_{\rm i}^{2}$ and $t_{\rm i}\simeq(1/6)(r_{\rm v}/v_{\rm v})\eta_{\rm i}^{3}$, so $a_{\rm i}=r_{\rm i}$ yields $a_{\rm i}=(9r_{\rm v}v_{\rm v}^{2}/2)^{1/3}t_{\rm i}^{2/3}$. The EdS expansion factor, $a\propto t^{2/3}$, can now be related using eq. (64) to the perturbation’s $\eta$ at any time: $$a=(9r_{\rm v}v_{\rm v}^{2}/2)^{1/3}t^{2/3}=(9/2)^{1/3}r_{\rm v}(\eta-\sin\eta)^{2/3}\,.$$ (65) The mean density within the perturbation relative to the universal density at the same time becomes a straightforward function of $\eta$: $$\frac{\bar{\rho}}{\rho_{\rm u}}=\frac{a^{3}}{r^{3}}=\frac{9}{2}\frac{(\eta-\sin\eta)^{2}}{(1-\cos\eta)^{3}}\,.$$ (66) This is a standard result of the top-hat model. In order to relate the density to the small initial perturbation at $\eta_{\rm i}\ll 1$, we obtain from eq. (66) by a proper Taylor expansion to the first non-vanishing order: $$\bar{\delta}_{\rm i}\simeq 0.15\,\eta_{\rm i}^{2}\,,$$ (67) where $\bar{\delta}\equiv\bar{\rho}/\rho_{\rm u}-1$ is the mean fluctuation. Using this in the leading term of eq. (63) we obtain $$a_{\rm i}=(1/2)r_{\rm v}\eta_{\rm i}^{2}=(1/0.3)r_{\rm v}\bar{\delta}_{\rm i}\,.$$ (68) This allows us to write the mean density at any $\eta$, using eq. 
(63), as $$\frac{\bar{\rho}}{\rho_{\rm ui}}=\frac{a_{\rm i}^{3}}{r^{3}}=\frac{\bar{\delta}_{\rm i}^{3}}{0.3^{3}\,(1-\cos\eta)^{3}}\,.$$ (69) Recalling that $\bar{\rho}=M/[(4\pi/3)r^{3}]$ we finally obtain at any $\eta$ $$r=C_{\rm i}\frac{M^{1/3}}{\bar{\delta}_{\rm i}(M)}(1-\cos\eta)\,,$$ (70) where $$C_{\rm i}\equiv\left(\frac{6}{\pi}\right)^{1/3}\frac{0.15}{\rho_{\rm ui}^{1/3}}\,.$$ (71) In particular, at $\eta=3\pi/2$, we obtain for the virial radius $r_{\rm v}=C_{\rm i}{M^{1/3}}/{\bar{\delta}_{\rm i}(M)}$. The constant $C_{\rm i}$ is independent of $M$; the universal density $\rho_{\rm ui}$ [$=(1+z_{\rm i})^{3}\rho_{\rm u0}$] is determined by the choice of the fiducial redshift $z_{\rm i}$ at which $\bar{\delta}_{\rm i}(M)$ is given. Appendix C Initial Profile We adopt in the linear regime the typical density fluctuation profile for the assumed power spectrum of fluctuations. For a Gaussian random field, this profile is proportional to the two-point correlation function (Dekel, 1981): $$\delta_{\rm i}(r)=\delta_{\rm 0i}\frac{\xi(r)}{\xi(0)}\,,$$ (72) where $\delta_{\rm 0i}$ specifies the amplitude normalization. For a given power spectrum $P(k)$, the correlation function is given by (Peebles, 1993, eq. 21.40) $$\xi(r)=4\pi\int_{0}^{\infty}k^{2}dkP(k)\frac{\sin kr}{kr}\,,$$ (73) and the local variance is $$\xi(0)=4\pi\int_{0}^{\infty}k^{2}dkP(k)\,.$$ (74) The mean density fluctuation interior to radius $r$, containing mass $M=(4\pi/3)\rho_{\rm ui}r^{3}$ when the fluctuation is small, is $$\bar{\delta}_{\rm i}(r)=\frac{3}{r^{3}}\int_{0}^{r}r^{2}dr\,\delta_{\rm i}(r)\,.$$ (75) This involves the integral (Peebles, 1993, eq. 
21.62) $$J_{3}(r)\equiv\int_{0}^{r}r^{2}dr\,\xi(r)=\frac{4\pi r^{3}}{3}\int_{0}^{\infty}k^{2}dkP(k)\tilde{W}_{s}(kr)\,,$$ (76) where $\tilde{W}_{s}(kr)$ is the Fourier transform of the top-hat window, $$\tilde{W}_{s}(kr)=3\left[\frac{\sin kr}{(kr)^{3}}-\frac{\cos kr}{(kr)^{2}}\right]=\frac{3}{kr}j_{1}(kr)\,,$$ (77) with $j_{1}$ the spherical Bessel function. We use in the simulations of this paper the $\Lambda$CDM power spectrum from the fitting formula of Bardeen et al.  (1986), $$P(k)=AkT^{2}(k),$$ (78) $$T(k)=\left\{1+\left[ak/\Gamma+(bk/\Gamma)^{3/2}+(ck/\Gamma)^{2}\right]^{\nu}\right\}^{-1/\nu}$$ (79) with $a=6.4h^{-1}$Mpc, $b=3.0h^{-1}$Mpc, $c=1.7h^{-1}$Mpc, $\nu=1.13$ and $\Gamma=0.21$ (the $\tau$CDM model of Efstathiou et al. , 1992). The normalization is such that $\sigma_{8}=1$. In the cosmological toy model we approximate the $\Lambda$CDM spectrum by a power-law power spectrum $P(k)\propto k^{n}$, for which the two-point correlation function is also a power law, $\xi(r)\propto r^{-(n+3)}$, and then $$\bar{\delta}_{\rm i}(r)=\left({r\over r_{\rm 1}}\right)^{-(n+3)},$$ (80) where $r_{\rm 1}$ provides the normalization. In terms of mass we obtain $$\bar{\delta}_{\rm i}(M)=\left({M\over(4\pi/3)\rho_{\rm ui}r_{\rm 1}^{3}}\right)^{-(n+3)/3}\,.$$ (81) This serves as the input to eq. (70), or eq. (48). We normalize the initial perturbation such that a specific mass $M_{1}$ reaches virialization at some cosmological epoch $a_{\rm v}=1/(1+z_{\rm v})$. Using eq. (65) and eq. (68) we obtain the linearly extrapolated analog of the nonlinear fluctuation growth: $$\bar{\delta}(\eta)=\bar{\delta}_{\rm i}\,\frac{a(\eta)}{a_{\rm i}}=0.3\left(\frac{9}{2}\right)^{1/3}(\eta-\sin\eta)^{2/3}\,.$$ (82) At virialization, this gives $\delta_{\rm v}\equiv\bar{\delta}(\eta=3\pi/2)\simeq 1.58$. 
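The two top-hat numbers quoted above are easy to reproduce numerically with the $\eta=3\pi/2$ virialization convention used here (the more common $\eta=2\pi$ convention gives the familiar $18\pi^{2}\approx 178$):

```python
import numpy as np

# Top-hat quantities at virialisation, eta = 3*pi/2: the linearly
# extrapolated threshold delta_v (eq. 82) and the actual mean
# overdensity relative to the background (eq. 66).
eta_v = 1.5 * np.pi
growth = eta_v - np.sin(eta_v)                       # = 3*pi/2 + 1
delta_v = 0.3 * (9.0 / 2.0) ** (1.0 / 3.0) * growth ** (2.0 / 3.0)
overdensity = 4.5 * growth**2 / (1.0 - np.cos(eta_v)) ** 3
print(delta_v, overdensity)
```

This recovers $\delta_{\rm v}\simeq 1.58$ and a mean overdensity of about $147$ at virialization.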
Then: $$\bar{\delta}_{\rm i}(M_{1})=\delta_{\rm v}\left({a_{\rm i}\over a_{\rm v}}\right).$$ (83) The normalization parameter $\delta_{\rm 0i}$ (or $r_{\rm 1}$) at $a_{\rm i}$ is obtained by equating this with eq. (75) [or eq. (81)] at $M=M_{1}$.
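The normalization step in eqs. (81) and (83) can be sketched as follows. All numerical values here ($z_{\rm i}$, $z_{\rm v}$, $n$, $M_{1}$) are illustrative placeholders, not values used in the paper:

```python
# Fix the mass M1 that virialises at epoch a_v, set
# delta_i(M1) = delta_v * a_i / a_v (eq. 83), and propagate to other
# masses with the power-law profile (eq. 81).  All inputs are
# illustrative placeholders.
delta_v = 1.58
a_i = 1.0 / 201.0                        # fiducial z_i = 200 (assumed)
a_v = 1.0 / 3.0                          # virialisation at z_v = 2 (assumed)
n = -2.0                                 # power-law spectral index (assumed)
M1 = 1.0e12                              # Msun (assumed)

delta_i_M1 = delta_v * a_i / a_v         # eq. (83)

def delta_i(M):
    """Eq. (81): mean initial fluctuation at mass M, normalised at M1."""
    return delta_i_M1 * (M / M1) ** (-(n + 3.0) / 3.0)

print(delta_i_M1, delta_i(1.0e11))
```

Smaller masses carry larger initial fluctuations (for $n>-3$) and hence virialize earlier, which is the hierarchical ordering assumed throughout.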
Implicit–explicit timestepping with finite element approximation of reaction–diffusion systems on evolving domains Omar Lakkis Omar Lakkis, Department of Mathematics, University of Sussex, Brighton, England, GB-BN1 9QH. [email protected] http://www.maths.sussex.ac.uk/~omar ,  Anotida Madzvamuse Anotida Madzvamuse, Department of Mathematics, University of Sussex, Brighton, England, GB-BN1 9QH. [email protected]  and  Chandrasekhar Venkataraman Chandrasekhar Venkataraman, Department of Mathematics, University of Sussex, Brighton, England, GB-BN1 9QH. [email protected] (Date: January 14, 2021) Abstract. We present and analyse an implicit–explicit timestepping procedure with finite element spatial approximation for semilinear reaction–diffusion systems on evolving domains arising from biological models, such as Schnakenberg’s (1979). We employ a Lagrangian formulation of the model equations, which permits the error analysis of parabolic equations on a fixed domain but introduces technical difficulties, foremost the space-time dependent conductivity and diffusion coefficients. We prove optimal-order error estimates in the $\operatorname{L}_{\infty}(0,T;\operatorname{L}_{2}(\varOmega))$ and $\operatorname{L}_{2}(0,T;\operatorname{H}^{1}(\varOmega))$ norms, and a pointwise stability result. We remark that these estimates apply to Eulerian solutions as well. Details on the implementation of the Lagrangian and the Eulerian schemes are provided. We also report on a numerical experiment for an application to pattern formation on an evolving domain. AM would like to acknowledge partial financial support from the British Council through its UK-US New Partnership Fund (PMI2), the London Mathematical Society (R4P2) and the EPSRC small grant scheme: EP/H020349/1. The research of CV has been supported by the British Engineering and Physical Sciences Research Council (EPSRC), Grant EP/G010404. All authors acknowledge the EPSRC DTA Fellowship. 1. 
Introduction Since the seminal paper of Turing, time-dependent reaction–diffusion systems (RDSs) have been studied as models for pattern formation in natural-process driven morphogenesis and developmental biology (see Murray for details). An important generalisation of these models consists in considering RDSs posed on evolving domains. This stems from the now relatively well-known observation that in many cases the growth of organisms plays a pivotal role in the emergence of patterns and in their evolution during development [34, 23]. RDSs on evolving domains have a wider scope of application, e.g., competing species of micro-organisms in environmental biology, the chemistry of materials and corrosion processes, and the spread of pollutants. Numerical simulations of RDSs on time-evolving domains reproducing the empirically observed pattern formation processes are commonly used [23, 5, 32, 4, 44, 14]. It is essential for scientists to computationally approximate and appreciate the error between simulations and exact solutions of such RDSs. Galerkin finite elements [41] are among the methods of choice to approximate such systems. In spite of their widespread use, to the best of our knowledge, no complete error analysis of approximating finite element schemes for nonlinear reaction–diffusion systems on evolving domains is available in the literature, thus motivating this work. This is a sibling paper to [45], where we analysed the well-posedness of (exact) RDSs on evolving domains. In most practical applications, the evolving domain is usually a surface embedded in three-dimensional Euclidean space, but for simplicity we restrict our discussion to the case where both the reference domain and the evolving domain are flat, thus deferring the analysis of RDSs on evolving curved surfaces. 1.1. 
Problem (RDS on a time-dependent evolving domain) We study an RDS, also considered in [10, 27], which models a system of chemicals that interact through the reaction terms only and diffuse in the domain independently of each other. Given an integer $m\geq 1$, the vector $\boldsymbol{u}\left(\boldsymbol{x},t\right)\in\mathbb{R}^{m}$, denoting the concentrations of the chemical species $i=1,\dotsc,m$, at a spatial point $\boldsymbol{x}\in{\Omega_{t}}\subset\mathbb{R}^{d}$, $d=1,2,3$, at time $t\in[0,T]$, $T>0$, satisfies the following initial–boundary value problem (1.1) $$\begin{gathered}\displaystyle\partial_{t}{u}_{i}(\boldsymbol{x},t)-D_{i}\Updelta{u}_{i}(\boldsymbol{x},t)+\nabla\cdot\left[\boldsymbol{a}u_{i}\right](\boldsymbol{x},t)=f_{i}\left(\boldsymbol{u}(\boldsymbol{x},t)\right),\quad\boldsymbol{x}\in{\Omega_{t}},t\in(0,T],\\ \displaystyle[{\boldsymbol{\nu}}\cdot\nabla{u}_{i}](\boldsymbol{x},t)=0,\quad\boldsymbol{x}\in\partial{\Omega_{t}},t>0,\\ \displaystyle u_{i}(\boldsymbol{x},0)=u_{i}^{0}(\boldsymbol{x}),\quad\boldsymbol{x}\in{\Omega}_{0},\end{gathered}$$ where $\Omega_{t}$, detailed in §2.2, is a simply connected Lipschitz domain evolving continuously with respect to $t\in[0,T]$, and $\boldsymbol{D}:=({D}_{1},\dotsc,{D}_{m})^{\mathsf{T}}$ is a vector of strictly positive diffusion coefficients. Detailed assumptions on the nonlinear reaction vector field $\boldsymbol{f}:=({f}_{1},\dotsc,{f}_{m})^{\mathsf{T}}$ are given in §2.5. The convection $\boldsymbol{a}=(a_{1},\dotsc,a_{d})^{\mathsf{T}}$ is induced by the material deformation due to the evolution of the domain. The initial datum $\boldsymbol{u}^{0}$ is a bounded field with positive entries. Since we are primarily interested in pattern formation phenomena that arise as a result of self-organisation within a domain without outside-world communication, we consider homogeneous Neumann boundary conditions, but other types of boundary conditions could be studied within our framework as well. 1.2. 
Main results The core result in this paper is Theorem 5.1, where we prove optimal convergence rates of the discrete solution in $\operatorname{L}_{\infty}(0,T;\operatorname{L}_{2}({\hat{\Omega}}))^{m}$ and $\operatorname{L}_{2}(0,T;\operatorname{H}^{1}({\hat{\Omega}})^{m})$ (where ${\hat{\Omega}}$ is a transformed version of ${\Omega_{t}}$ to be described next). Our theoretical results are illustrated by numerical experiments, aimed mainly at quantifying the pattern formation phenomena related to the type of growth in the domains. 1.3. A Lagrangian approach We employ, both for the analysis and the implementation of the computational method, a Lagrangian formulation of Problem 1.1, in the sense employed in fluid dynamics, i.e., where the evolving domain, ${\Omega_{t}}\subset\mathbb{R}^{d}$, is the image of a reference domain ${\hat{\Omega}}\subset\mathbb{R}^{d}$ under a time-dependent family of diffeomorphisms $\boldsymbol{\boldsymbol{\mathcal{A}}}_{t}$. The $m$ parabolic equations with constant diffusion coefficients constituting the RDS on ${\Omega_{t}}$ are thus pulled back into equations on a fixed domain, albeit with space-time dependent coefficients. The fixed-domain setting permits us to use the standard Bochner space machinery needed for evolution equations of parabolic type. On the other hand, we are thus left to deal (computationally and analytically) with three interacting difficulties: (1) a system of $m$ coupled equations, (2) the nature of the nonlinearity $\boldsymbol{f}$ coupling the equations and (3) the non-constant diffusion and velocity coefficients, especially as functions of time. Our approach in tackling the nonlinearity consists in constructing a suitable globally Lipschitz extension of the nonlinear reaction field that coincides with it in a neighbourhood of the exact solution, and then proving that both the exact solution and the numerical solution are confined to the domain of the original (non-extended) nonlinearity. 
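A concrete instance of the globally Lipschitz extension strategy just described, here for Schnakenberg kinetics: project the arguments onto a box on which the kinetics are Lipschitz, so the extended field coincides with the original inside the box and is globally Lipschitz outside it. The constants and the projection below are illustrative; the paper's actual construction may differ in detail:

```python
import numpy as np

# Clamped (globally Lipschitz) extension of the Schnakenberg reaction
# field.  Inside the box [0, R]^2 the two fields agree; outside, the
# clamped field grows at most linearly, giving a global Lipschitz bound.
GAMMA, A, B, R = 1.0, 0.1, 0.9, 10.0     # illustrative constants

def schnakenberg(u, v):
    """f(u, v) for the Schnakenberg model (activator u, substrate v)."""
    return GAMMA * (A - u + u * u * v), GAMMA * (B - u * u * v)

def schnakenberg_ext(u, v):
    """Globally Lipschitz extension: evaluate after clamping to the box."""
    uc, vc = np.clip(u, 0.0, R), np.clip(v, 0.0, R)
    return schnakenberg(uc, vc)
```

Any solution that remains inside the box solves the original system, which is exactly the confinement argument the analysis then supplies.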
We use mainly parabolic energy techniques, but must have some pointwise control in order to bound the nonlinearities. Our treatment of the nonlinear reaction functions is based on the approach of [42]; see also [11, 35]. An alternative approach to ours would be to construct schemes where an invariant region for the continuous solution [8] is preserved under discretisation; for work in this direction we refer to [13, 29, 22]. Although all our error estimates are derived for the Lagrangian formulation, given that the domain evolution is prescribed, they carry over in a straightforward manner to the Eulerian framework. The situation would be more delicate if the domain evolution were itself an unknown, as a geometric motion coupled with the RDS, but this is outside the scope of this study. The smooth prescribed evolution case we deal with in this study is of relevance in many applications (see for example [34]), including but not limited to skin pigment pattern formation during development. We note that in many important applications, such as morphogen controlled growth where the evolution of the domain is governed by the solution to the RDS [3], or cell motility [14] and tumour growth [4] where the deformation of the cell membrane is governed by a geometric evolution law, the domain itself is an unknown which must be approximated. This more challenging setting warrants further investigation. The transformation to the reference domain, which we make use of in our Lagrangian analysis, would then depend on the solution of the RDSs and/or on the geometric properties of the domain, leading to the consideration of quasilinear or fully nonlinear RDSs on fixed domains. 1.4. Implicit-explicit schemes The fully discrete method that we analyse is a fully practical method, implemented in the ALBERTA toolbox (code available upon request), using an implicit-explicit backward Euler scheme for the time discretisation [28]. On fixed domains, Zhang et al. 
[49] analyse a second order implicit-explicit finite element scheme for the Gray-Scott model and Garvie and Trenchea [18] analyse a first order scheme for an RDS that models predator–prey dynamics. In [7] the authors propose and briefly analyse a numerical method based on an IMEX time discretisation and spherical harmonics for the spatial approximation of an RDS posed on the surface of a stationary sphere. The a posteriori analysis of finite element methods for RDSs is treated in, for example, the book [15], where systems of coupled parabolic differential equations (reaction-diffusion) and ordinary differential equations are considered, and [33], where 1D scalar quasilinear RDSs are considered. An adaptive finite element method for semilinear RDSs on evolving domains and surfaces is presented in [46]. Another approach is the moving finite element method, where nodal movement is regarded as an unknown (even on fixed-domain problems) and at each timestep nodes are moved, usually with the goal of controlling the error [30, 31, 2]; for the analysis of the moving finite element method we refer to [12]. In [48] the authors describe an adaptive moving mesh FEM to approximate solutions of the Gray-Scott RDS on a fixed domain. Recently, Mackenzie and Madzvamuse [26] analysed a finite difference scheme approximating the solution of a linear RDS on a domain with continuous spatially linear isotropic evolution. Our study is novel in that we propose and analyse a finite element method to approximate RDSs on a domain with continuous (possibly nonlinear) evolution. This creates space-time-dependent coefficients in the diffusion and time-derivative terms, which complicates the fully discrete scheme’s analysis and requires a careful treatment of the timestep, depending on the rate of domain evolution. 
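A minimal illustration of the IMEX (implicit diffusion, explicit reaction) Euler step, on a stationary 1D interval with P1 finite elements. This is a simplified stand-in for the scheme under discussion, which additionally carries the time-dependent coefficients arising from the domain evolution; kinetics and parameter values are illustrative:

```python
import numpy as np

# One IMEX backward Euler step for a two-species reaction-diffusion
# system: P1 finite elements on a uniform 1D mesh, natural (Neumann)
# boundaries, diffusion implicit, reaction explicit.  Stationary-domain
# sketch only; the evolving-domain scheme also involves J_t and B_t.
def p1_matrices(n, h):
    """Assemble mass (M) and stiffness (K) matrices for P1 elements."""
    M, K = np.zeros((n, n)), np.zeros((n, n))
    for e in range(n - 1):
        idx = np.ix_([e, e + 1], [e, e + 1])
        M[idx] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        K[idx] += 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return M, K

def imex_step(U, V, M, K, tau, Du, Dv, f, g):
    """Solve (M + tau*D*K) W_new = M (W_old + tau * reaction(W_old))."""
    U_new = np.linalg.solve(M + tau * Du * K, M @ (U + tau * f(U, V)))
    V_new = np.linalg.solve(M + tau * Dv * K, M @ (V + tau * g(U, V)))
    return U_new, V_new

n, h, tau = 41, 1.0 / 40.0, 1.0e-3
M, K = p1_matrices(n, h)
f = lambda u, v: 0.1 - u + u * u * v      # Schnakenberg kinetics
g = lambda u, v: 0.9 - u * u * v
U, V = np.full(n, 1.0), np.full(n, 0.9)   # spatially uniform steady state
U, V = imex_step(U, V, M, K, tau, 1.0, 10.0, f, g)
```

With uniform data at the kinetics' steady state $(u,v)=(a+b,\,b/(a+b)^{2})=(1,0.9)$, the step reproduces the state exactly, which is a quick sanity check of the assembly.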
In spite of it being only first order in time, the proposed implicit-explicit method is robust for the applications we have in mind, where long time integration is essential and the problems are often posed on complex geometries such as the surface of an organism. 1.5. Outline The structure of this paper is as follows: in §2 we introduce the notation employed throughout this article, we state our model problem together with the assumptions that we make on the problem data and the domain evolution. We present the weak formulation of the continuous problem and define a modified nonlinear reaction function which we introduce for the analysis. In §3 we present the semidiscrete (space-discrete) and the fully discrete finite element schemes with some remarks regarding implementation, allowing the practically minded reader to skip over the analysis through to §6. We then analyse the semidiscrete scheme in §4 and the fully discrete scheme in §5 proving optimal rate error bounds as well as a maximum-norm stability result, whereby the stabilising effect of domain growth observed in the continuous case is preserved at the discrete level and in the numerical schemes. In §6 we provide a concrete implementation of the finite element scheme with a set of reaction kinetics commonly encountered in developmental biology, considering domains with spatially linear and nonlinear evolution. In §7 we present computational experiments to illustrate our theoretical results. 2. Notation and Setup In this section we define most of the basic notation for the rest of the paper, introduce the evolving domain framework, set the detailed blanket assumptions and introduce a pulled-back version of Problem 1.1. 2.1. 
Calculus and function spaces Given an open and bounded stationary domain $\Pi\subset{\mathbb{R}}^{d}$ and a function $\boldsymbol{\eta}\in{C}^{1}(\Pi;{\mathbb{R}}^{m})$, we denote by $\nabla\boldsymbol{\eta}$ the Jacobian matrix of $\boldsymbol{\eta}$ with components $\left[\nabla\boldsymbol{\eta}(\boldsymbol{x})\right]_{ij}=\partial_{x_{i}}\eta_{j}$. For $\boldsymbol{\eta}\in{C}^{1}(\Pi;{\mathbb{R}}^{d})$ we denote by $\nabla\cdot\boldsymbol{\eta}$ the divergence of $\boldsymbol{\eta}$. In an effort to compress notation for spatial derivatives, we introduce the convention used above, that if the variable with respect to which we differentiate is omitted, it should be understood as the spatial argument of the function. We denote by $\operatorname{L}_{p}{(\Pi)}$, $\operatorname{W}^{p,k}{(\Pi)}$ and $\operatorname{H}^{k}{(\Pi)}$ the Lebesgue, Sobolev and Hilbert spaces respectively, equipped with the usual norms and seminorms [16]. For vector valued functions $\boldsymbol{\eta},\boldsymbol{\mu}:\Pi\to{\mathbb{R}}^{m}$, we denote (2.1) $$\left\langle\boldsymbol{\eta},\boldsymbol{\mu}\right\rangle_{\Pi^{m}}:=\sum_{i=1}^{m}\int_{\Pi}\eta_{i}(\boldsymbol{x})\mu_{i}(\boldsymbol{x})\dif x,$$ with the corresponding modifications to the norms and seminorms. 2.2. Evolving domain Let ${\hat{\Omega}}\subset\mathbb{R}^{d}$ be a simply connected, convex domain with Lipschitz boundary; we will call it the reference domain. 
We define the evolving domain as a time-parametrised family of domains (2.2) $$\{{\Omega_{t}}:=\boldsymbol{\boldsymbol{\mathcal{A}}}_{t}({\hat{\Omega}})\}_{0\leq t\leq T}\text{ where }\boldsymbol{\boldsymbol{\mathcal{A}}}_{t}:{\hat{\Omega}}\rightarrow\Omega_{t}\text{ is a $\operatorname{C}^{1}$-diffeomorphism for each fixed }t\in[0,T].$$ The Jacobian matrix of $\boldsymbol{\boldsymbol{\mathcal{A}}}_{t}(\cdot)$, its determinant and its inverse will be respectively denoted by (2.3) $$\boldsymbol{J}_{t}(\boldsymbol{\xi}):=\nabla\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),\quad J_{t}(\boldsymbol{\xi}):=\det\boldsymbol{J}_{t}(\boldsymbol{\xi})\text{ and }\boldsymbol{K}_{t}(\boldsymbol{\xi}):=[\nabla\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi})]^{-1}$$ for each $(\boldsymbol{\xi},t)\in{\hat{\Omega}}\times[0,T]$. We will also use the evolution-induced convection on the evolving domain (2.4) $$\boldsymbol{a}(\boldsymbol{x},t):=\partial_{t}\boldsymbol{\boldsymbol{\mathcal{A}}}_{t}(\boldsymbol{\boldsymbol{\mathcal{A}}}_{t}^{-1}(\boldsymbol{x}))\text{ for }\boldsymbol{x}\in{\Omega}_{t}\mbox{ and }t\in[0,T].$$ From classical results [1] we have the following expression (2.5) $$\partial_{t}{J}\left(\boldsymbol{\xi},t\right)=J_{t}(\boldsymbol{\xi})\nabla\cdot\boldsymbol{a}\left(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),t\right)\text{ for }(\boldsymbol{\xi},t)\in{\hat{\Omega}}\times[0,T],$$ and the Reynolds transport theorem [1], which reads: for a function $g\in C^{1}({\Omega_{t}},[0,T])$, (2.6) $$\frac{\dif}{\dif t}\int_{{\Omega_{t}}}g=\int_{{\Omega_{t}}}\partial_{t}{g}+\nabla\cdot\left(\boldsymbol{a}g\right).$$ To aid the exposition we define $\mathcal{Q}$ to be the topologically cylindrical space-time domain: (2.7) $$\mathcal{Q}:=\left\{(\boldsymbol{x},t):\boldsymbol{x}\in{\Omega_{t}},t\in[0,T]\right\}.$$ We now introduce notation to relate functions defined on the evolving domain to functions defined on the reference domain. 
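The Jacobian evolution identity (2.5) is easy to verify numerically for the simplest conceivable evolution, a uniform dilation $\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi})=\rho(t)\boldsymbol{\xi}$ in $d$ dimensions (an illustrative choice; any smooth family of diffeomorphisms would do):

```python
# Finite-difference check of (2.5), dJ_t/dt = J_t * div(a), for a
# uniform dilation A_t(xi) = rho(t) * xi: here J_t = rho^d and
# a(x, t) = (rho'(t)/rho(t)) x, so div(a) = d * rho'/rho.
d = 3
drho = 0.5                                   # rho'(t), constant here
rho = lambda t: 1.0 + drho * t               # dilation factor
J = lambda t: rho(t) ** d                    # det of Jacobian rho * I
div_a = lambda t: d * drho / rho(t)

t, dt = 0.7, 1.0e-6
dJdt = (J(t + dt) - J(t - dt)) / (2.0 * dt)  # centred difference
print(dJdt, J(t) * div_a(t))                 # agree to O(dt^2)
```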
Given a function $g:\mathcal{Q}\rightarrow\mathbb{R}$ we denote by $\hat{g}:{\hat{\Omega}}\times[0,T]\rightarrow\mathbb{R}$ its pullback on the reference domain, defined by the following relationship (2.8) $$\displaystyle\hat{g}(\boldsymbol{\xi},t):=g\left(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),t\right)\quad(\boldsymbol{\xi},t)\in{\hat{\Omega}}\times[0,T].$$ Assuming sufficient smoothness on the function $g$, using (2.8) and the chain rule we may relate time-differentiation on the reference and evolving domains (to avoid confusion, as in (2.9), we denote by $\partial_{i}f$ the partial derivative with respect to the $i$-th argument of the function $f$, for a positive integer $i$; when there is no risk of confusion we write $\partial_{t}f$ for the time derivative of a time-dependent function $f$ even when such a variable is not explicitly written in the arguments): (2.9) $$\displaystyle\partial_{t}\hat{g}(\boldsymbol{\xi},t)=\partial_{2}{g}\left(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),t\right)+\left[\boldsymbol{a}\cdot\nabla{g}\right]\left(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),t\right),\quad(\boldsymbol{\xi},t)\in{\hat{\Omega}}\times[0,T].$$ The right hand side of (2.9) is commonly known as the material derivative of $g$ with respect to the velocity $\boldsymbol{a}$.
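Identities (2.5) and (2.9) can be verified symbolically in one space dimension; the map $\mathcal{A}_{t}(\xi)=(1+t)\xi$ and the sample function below are arbitrary illustrative choices, not taken from the text:

```python
import sympy as sp

# Symbolic 1-D sanity check of (2.5) and (2.9) for the illustrative map
# A_t(xi) = (1+t)*xi; g is an arbitrary smooth sample function.
xi, x, t = sp.symbols('xi x t')
A = (1 + t) * xi                            # evolution map A_t(xi)
J = sp.diff(A, xi)                          # Jacobian (scalar in 1-D): 1 + t
a = sp.diff(A, t).subs(xi, x / (1 + t))     # velocity a(x,t) = x/(1+t)

# (2.5): d/dt J equals J times the spatial divergence of a, evaluated at x = A_t(xi)
lhs_25 = sp.diff(J, t)
rhs_25 = J * sp.diff(a, x).subs(x, A)
assert sp.simplify(lhs_25 - rhs_25) == 0

# (2.9): the time derivative of the pullback equals the material derivative
g = sp.sin(x) * sp.exp(-t)                  # sample g(x, t)
ghat = g.subs(x, A)                         # pullback (2.8)
material = (sp.diff(g, t) + a * sp.diff(g, x)).subs(x, A)
assert sp.simplify(sp.diff(ghat, t) - material) == 0
```

For this isotropic map the check is immediate by hand as well: $J=1+t$, $\partial_{t}J=1$, and $J\,\partial_{x}a = (1+t)\cdot\tfrac{1}{1+t}=1$.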
The following result relates the norm of a function $g:\mathcal{Q}\rightarrow{\mathbb{R}}$ on the evolving domain with its pullback $\hat{g}$ on the reference domain: (2.10) $$\displaystyle\|g\|_{\operatorname{L}_{2}{({\Omega_{t}})}}^{2}=\left\langle J_{t}\hat{g},\hat{g}\right\rangle_{{\hat{\Omega}}}=:\|\hat{g}\|_{J_{t}}^{2}.$$ For the gradient of a sufficiently smooth function $g:\mathcal{Q}\rightarrow{\mathbb{R}}$, we have (2.11) $$\displaystyle\|\nabla{g}\|_{\operatorname{L}_{2}{({\Omega_{t}})}}^{2}=\left\langle J_{t}\boldsymbol{K}_{t}\nabla\hat{g},\boldsymbol{K}_{t}\nabla\hat{g}\right\rangle_{{\hat{\Omega}}}=\left\langle\boldsymbol{B}_{t}\nabla\hat{g},\nabla\hat{g}\right\rangle_{{\hat{\Omega}}}=:|\hat{g}(\cdot,t)|_{\boldsymbol{B}_{t}}^{2},$$ where $\boldsymbol{B}:=J\boldsymbol{K}^{\mathsf{T}}\boldsymbol{K}$. For $t\in[0,T]$ we define the bilinear form (2.12) $$b_{t}\left({\hat{v}},{\hat{w}}\right):=\langle\boldsymbol{B}_{t}\nabla\hat{v},\nabla\hat{w}\rangle_{{\hat{\Omega}}}\text{ for }\hat{v},\hat{w}\in\operatorname{H}^{1}({\hat{\Omega}}).$$ Assumption 2.3 implies that there exist $\mu,\bar{\mu}\in{\mathbb{R}}_{+}$ such that for all $\hat{v}\in\operatorname{H}^{1}({\hat{\Omega}})$, (2.13) $$\mu\|\nabla\hat{v}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\leq b_{t}\left({\hat{v}},{\hat{v}}\right)\leq\|\boldsymbol{B}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\|\nabla\hat{v}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}=\bar{\mu}\|\nabla\hat{v}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}.$$ 2.3. Assumption (Regularity of the mapping) It will sometimes be convenient to denote the family ${\left\{{\boldsymbol{\mathcal{A}}_{t}}\right\}}_{{t\in\left[0,T\right]}}$, introduced in 2.2, as a single map $\boldsymbol{\mathcal{A}}:{\hat{\Omega}}\times[0,T]\to\mathbb{R}^{d}$, $(\boldsymbol{\xi},t)\mapsto\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi})$.
We assume the following regularity: (2.14) $$\boldsymbol{\mathcal{A}}\in\operatorname{C}^{1}({\hat{\Omega}}\times[0,T])\quad\text{ and }\quad\boldsymbol{\mathcal{A}}_{t}\in\operatorname{C}^{k+1}({\hat{\Omega}})\text{ for each }t\in[0,T],$$ where $k$ will be taken equal to the degree of the basis functions of the finite element space defined in the following section. To ensure the mapping is invertible we assume the determinant of the Jacobian $J$ of the mapping $\boldsymbol{\mathcal{A}}$ (cf. (2.3)) satisfies (2.15) $$J>0\mbox{ in }{\hat{\Omega}}\times[0,T].$$ 2.4. The RDS reformulated on the reference domain Using (2.8)—(2.11) and a change of variable in the divergence, we obtain the following equivalent formulation of Problem 1.1 on a reference domain. Denote by $\hat{\boldsymbol{u}}:{\hat{\Omega}}\times[0,T]\to{\mathbb{R}}^{m}$ the function that satisfies for $i=1,\dotsc,m$, (2.16) $$\begin{gathered}\displaystyle\partial_{t}\hat{u}_{i}-\frac{D_{i}}{J}\nabla\cdot(\boldsymbol{B}\nabla\hat{u}_{i})+\hat{u}_{i}\nabla\cdot\hat{\boldsymbol{a}}=f_{i}\left(\hat{\boldsymbol{u}}\right),\text{ on }{\hat{\Omega}}\times(0,T],\\ \displaystyle\hat{{\boldsymbol{\nu}}}\cdot\boldsymbol{B}\nabla{\hat{u}}_{i}=0,\text{ on }\partial{\hat{\Omega}}\times(0,T],\\ \displaystyle\hat{u}_{i}(\boldsymbol{\xi},0)=\hat{u}_{i}^{0}(\boldsymbol{\xi}),\quad\boldsymbol{\xi}\in{\hat{\Omega}},\end{gathered}$$ where $\hat{\boldsymbol{a}}$ denotes the pullback of the velocity $\boldsymbol{a}$, (2.17) $$\hat{\boldsymbol{a}}(\boldsymbol{\xi},t):=\boldsymbol{a}(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),t)=\partial_{t}\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi})\text{ for }(\boldsymbol{\xi},t)\in{\hat{\Omega}}\times[0,T].$$ 2.5.
Assumption (Nonlinear reaction vector field) We assume throughout that $\boldsymbol{f}$ is of the form (2.18) $${f}_{i}(\boldsymbol{z})={z}_{i}F_{i}(\boldsymbol{z}),\text{ for all }\boldsymbol{z}\in\operatorname{Dom}\boldsymbol{f}=:I\text{ and each }i=1,\dotsc,m,$$ for some vector field $\boldsymbol{F}\in\operatorname{C}^{1}(I)$ and some open set $I\subset{\mathbb{R}}^{m}$. As a result, $\boldsymbol{f}\in\operatorname{C}^{1}(I)$ and hence locally Lipschitz. In §6 we provide an example of a widely studied set of reaction kinetics that satisfy the structural assumptions we make on the nonlinear reaction vector field. 2.6. Assumption (Existence and regularity) We assume the global existence of a solution $\hat{\boldsymbol{u}}$ to Problem 2.4. Furthermore we assume that, uniformly in $t\in[0,T]$, $\hat{\boldsymbol{u}}$ is in $H^{\ell+1}({\hat{\Omega}})^{m}$ with $\partial_{t}\hat{\boldsymbol{u}}$ in $H^{\ell+1}({\hat{\Omega}})^{m}$, where $\ell$ is the polynomial degree of the finite element space defined in the following section. 2.7. Remark (Applicability of Assumption 2.6) In [45], we proved the global existence of positive classical solutions to Problem 1.1 for a class of RDSs with positive initial data on domains with bounded spatially linear isotropic evolution. 2.8.
Weak formulation To construct a finite element discretisation, we introduce a weak solution of Problem 2.4, denoted by $\hat{u}_{i}\in\operatorname{L}_{2}{\big{(}0,T;\operatorname{H}^{1}{({\hat{\Omega}})}\big{)}}$ with $\partial_{t}\hat{u}_{i}\in\operatorname{L}_{2}{\big{(}0,T;\operatorname{H}^{-1}{({\hat{\Omega}})}\big{)}}$ such that (2.19) $$\left\langle J\left({\partial_{t}\hat{u}_{i}+\hat{u}_{i}\nabla\cdot\boldsymbol{a}\left(\boldsymbol{\mathcal{A}}_{t}(\cdot),t\right)}\right),\hat{\chi}\right\rangle_{{\hat{\Omega}}}+D_{i}b_{t}\left({\hat{u}_{i}},{\hat{\chi}}\right)=\left\langle J{f}_{i}(\hat{\boldsymbol{u}}),\hat{\chi}\right\rangle_{{\hat{\Omega}}}\quad\>\forall\>\hat{\chi}\in\operatorname{H}^{1}{({\hat{\Omega}})}.$$ Using the expression for the time-derivative of the determinant of the Jacobian (2.5), we have (2.20) $$\begin{split}\displaystyle\left\langle\partial_{t}(J\hat{u}_{i}),\hat{\chi}\right\rangle_{{\hat{\Omega}}}+{D}_{i}b_{t}\left({\hat{u}_{i}},{\hat{\chi}}\right)&\displaystyle=\left\langle Jf_{i}(\hat{\boldsymbol{u}}),\hat{\chi}\right\rangle_{{\hat{\Omega}}}\quad\>\forall\>\hat{\chi}\in\operatorname{H}^{1}{({\hat{\Omega}})}.\end{split}$$ We shall use (2.20) to construct a finite element scheme to approximate the solution to Problem 1.1 on the reference domain. 2.9. Extended nonlinear reaction function In general the techniques used to show that Assumptions 2.5 and 2.6 hold utilise the maximum principle [40, 45]. In the discrete case, since the maximum principle cannot be applied [41, p. 83], we show, under suitable assumptions, maximum-norm bounds on the discrete solution in (5.26) that guarantee the solution remains in the region ${I}$ defined in (2.18). We introduce a modified globally Lipschitz nonlinear reaction in order to derive the error bounds, but this extension is never needed in practice and hence need not be computed.
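To make both structural ingredients concrete before the formal definition, the sketch below pairs a reaction field of the factored form (2.18) (a Lotka–Volterra-type choice with arbitrary coefficients, purely illustrative) with a smooth radial cutoff; multiplying $\boldsymbol{F}$ by the cutoff is a crude stand-in for the Whitney-extension argument, and yields a modification with bounded derivative that agrees with $\boldsymbol{F}$ wherever the cutoff equals one:

```python
import numpy as np

def F(z, alpha=1.0, beta=0.5, gamma=0.4, delta=1.2):
    """Illustrative Lotka-Volterra-type field, so f_i(z) = z_i * F_i(z) as in (2.18)."""
    u, v = z
    return np.array([alpha - beta * v, gamma * u - delta])

def bump(r, r0=2.0, r1=4.0):
    """Smooth cutoff: equals 1 for r <= r0, 0 for r >= r1, C^infinity in between."""
    if r <= r0:
        return 1.0
    if r >= r1:
        return 0.0
    s = (r - r0) / (r1 - r0)
    e0, e1 = np.exp(-1.0 / s), np.exp(-1.0 / (1.0 - s))
    return e1 / (e0 + e1)

def f_tilde(z):
    """Modified reaction: unchanged on the ball |z| <= r0, identically zero far away."""
    z = np.asarray(z, dtype=float)
    return z * (bump(np.linalg.norm(z)) * F(z))
```

If the region $I$ containing the solution values sits inside the ball where the cutoff is one, `f_tilde` agrees with the original reaction there, mirroring (2.22).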
Recalling Assumption 2.5, we define $\widetilde{\boldsymbol{F}}\in{C}^{1}(\mathbb{R}^{m})$ such that (2.21) $$\begin{gathered}\displaystyle\begin{cases}\begin{aligned} \displaystyle\widetilde{\boldsymbol{F}}(\boldsymbol{z})&\displaystyle=\boldsymbol{F}(\boldsymbol{z}),\quad\mbox{for }\boldsymbol{z}\in{I},\\ \displaystyle\left|\widetilde{\boldsymbol{F}}^{\prime}(\boldsymbol{z})\right|&\displaystyle<\widetilde{C},\quad\mbox{for }\boldsymbol{z}\in\mathbb{R}^{m},\end{aligned}\end{cases}\qquad\text{ and }\widetilde{{f}_{i}}(\boldsymbol{z}):=z_{i}{\widetilde{F}_{i}}(\boldsymbol{z}),\quad\mbox{for }\boldsymbol{z}\in\mathbb{R}^{m}.\end{gathered}$$ The function $\widetilde{\boldsymbol{F}}$ is guaranteed to exist due to Assumptions 2.5, 2.6, the Whitney Extension Theorem [17, Th. 1, §6.5] and the use of an appropriate cut-off factor. If $\boldsymbol{u}$ is a solution of (1.1), then (2.22) $$\begin{split}\displaystyle\widetilde{\boldsymbol{f}}(\boldsymbol{u})&\displaystyle={\boldsymbol{f}}(\boldsymbol{u}).\end{split}$$ Thus, we may without restriction replace $\boldsymbol{f}$ with $\widetilde{\boldsymbol{f}}$ in (1.1). 3. Finite element method In this section we design the finite element method, first by discretising Problem 2.4 in space only, discussing some properties of the semidiscrete scheme and then passing to the fully discrete scheme. 3.1. Spatial discretisation set-up We shall split the spatial and temporal discretisation of Problem 1.1 into separate steps. For the spatial approximation, we employ a conforming finite element method. To this end, we define a triangulation $\hat{{\mathscr{T}}}$ of the reference domain. We shall consistently denote by $\hat{h}:=\max_{{{s}}\in\hat{{\mathscr{T}}}}\operatorname{diam}({{s}})$ the mesh-size of $\hat{\mathscr{T}}$. We assume the triangulation $\hat{{\mathscr{T}}}$ is conforming and that there is no error due to boundary approximation.
Furthermore given $\{\hat{{\mathscr{T}}}_{i}\}_{i=1}^{\infty}$, a sequence of conforming triangulations, we assume the quasi-uniformity of the sequence holds, for details see for example [39]. Note the assumption of quasi-uniformity implies that the family of triangulations is shape-regular [39, p. 159]. Given the triangulation $\hat{{\mathscr{T}}}$, we now define a finite element space on the reference configuration: (3.1) $$\hat{\mathbb{V}}:=\left\{\hat{\Phi}\in H^{1}({\hat{\Omega}}):\hat{\Phi}|_{{{s}}}\mbox{ is a polynomial of degree $\ell$ for each }{{s}}\in\hat{{\mathscr{T}}}\right\}.$$ We utilise the following known results about the accuracy of the finite element space $\hat{\mathbb{V}}$. By the definition of $\hat{\mathbb{V}}$, we have for $\hat{v}\in H^{\ell+1}({\hat{\Omega}})$ (see, for example, Brenner and Scott or Thomée), (3.2) $$\begin{split}\displaystyle\inf_{\hat{\Phi}\in\hat{\mathbb{V}}}\left\{\|\hat{v}-\hat{\Phi}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}+\hat{h}\|\nabla(\hat{v}-\hat{\Phi})\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\right\}\leq C\hat{h}^{\ell+1}\left|\hat{v}\right|_{H^{\ell+1}({\hat{\Omega}})}.\end{split}$$ Let the degree of the finite element space satisfy $\ell+1>\frac{d}{2}$, where $d$ is the spatial dimension. In the analysis we shall make use of the fact that (3.2) is satisfied by taking the Lagrange interpolation operator ${\Lambda^{h}}:\operatorname{H}^{\ell+1}{({\hat{\Omega}})}\to\hat{\mathbb{V}}$ in place of $\hat{\Phi}$ (note that $\ell+1>{d}/{2}$ implies $\operatorname{H}^{\ell+1}({\hat{\Omega}})\hookrightarrow\operatorname{C}^{0}({\hat{\Omega}})$ so the Lagrange interpolant is well defined). Let $\mathcal{I}^{h}:\operatorname{C}^{0}({\hat{\Omega}})\rightarrow\hat{\mathbb{V}}$ be a Clément type interpolant [9].
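The rate in (3.2) can be observed empirically. The sketch below interpolates an arbitrary smooth function with piecewise-linear elements ($\ell=1$) on $(0,1)$ and estimates the order from two mesh sizes; halving $\hat{h}$ should divide the $\operatorname{L}_{2}$ error by about $2^{\ell+1}=4$:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule (kept explicit to stay self-contained)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def interp_error(n):
    """L2 error of the piecewise-linear Lagrange interpolant of sin(3x) on n elements."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    fine = np.linspace(0.0, 1.0, 40 * n + 1)          # quadrature grid
    vh = np.interp(fine, nodes, np.sin(3.0 * nodes))  # Lagrange interpolant
    return np.sqrt(trap((np.sin(3.0 * fine) - vh) ** 2, fine))

rate = np.log2(interp_error(16) / interp_error(32))   # observed order, about ell + 1 = 2
```

The test function and mesh sizes are arbitrary; any function with bounded second derivative exhibits the same second-order decay.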
The following bound holds (3.3) $$\begin{split}\displaystyle\|\hat{v}-\mathcal{I}^{h}\hat{v}\|_{\operatorname{L}_{\infty}({\hat{\Omega}})}\leq{C}\hat{h}^{\ell+1-d/2}\left|\hat{v}\right|_{H^{\ell+1}({\hat{\Omega}})}.\end{split}$$ We shall make use of the following inverse estimate, valid on quasi-uniform sequences of triangulations: (3.4) $$\begin{split}\displaystyle\|\hat{\Phi}\|_{\operatorname{L}_{\infty}({\hat{\Omega}})}\leq C\hat{h}^{-d/2}\|\hat{\Phi}\|_{\operatorname{L}_{2}({\hat{\Omega}})}\quad\forall\hat{\Phi}\in\hat{\mathbb{V}}.\end{split}$$ 3.2. Semidiscrete approximation We define the spatially semidiscrete approximation of the solution of Problem 1.1 to be a function $\hat{u}^{h}_{i}:[0,T]\rightarrow\hat{\mathbb{V}}$, such that for $i=1,\dotsc,m$, (3.5) $$\begin{cases}\begin{aligned} \displaystyle\left\langle\partial_{t}(J\hat{u}^{h}_{i}),\hat{\Phi}\right\rangle_{{\hat{\Omega}}}+\left\langle{D}_{i}\boldsymbol{B}{\nabla}\hat{u}^{h}_{i},\nabla\hat{\Phi}\right\rangle_{{\hat{\Omega}}}&\displaystyle=\left\langle J\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{h}),\hat{\Phi}\right\rangle_{{\hat{\Omega}}}\quad\>\forall\>\hat{\Phi}\in\hat{\mathbb{V}},\\ \displaystyle\hat{u}^{h}_{i}(0)&\displaystyle=\Lambda^{h}\hat{u}_{i}^{0},\end{aligned}\end{cases}$$ where ${\Lambda^{h}}$ is the Lagrange interpolant. 3.3. Proposition (Solvability of the semidiscrete scheme) Let Assumptions 2.6 and 2.3 hold. Then, the semidiscrete scheme (3.5) possesses a unique solution $\hat{\boldsymbol{u}}^{h}\in\operatorname{L}_{\infty}{(0,T;\hat{\mathbb{V}})}^{m}$. Proof. If in (3.5) we write $\hat{u}^{h}_{i}(t)$ as $\sum_{j=1}^{\dim(\hat{\mathbb{V}})}\alpha_{j}(t)\hat{\Phi}_{j}$, we obtain a system of $\dim(\hat{\mathbb{V}})$ ordinary differential equations for each $i$. By assumption the initial data for each ODE is bounded.
From Assumption 2.3 and the construction of $\widetilde{\boldsymbol{f}}$ (2.21), we have that $J$, $\widetilde{\boldsymbol{f}}$ and their product are continuous globally Lipschitz functions. From ODE theory (for example [37]) we conclude that (3.5) possesses a unique bounded solution. ∎ 3.4. The effect of domain evolution on the semidiscrete solution We now examine the stability of (3.5) and show that domain growth has a diluting or stabilising effect on the semidiscrete solution, mirroring results for the continuous problem [24]. Taking $\hat{\Phi}=\hat{u}^{h}_{i}$ in (3.5) gives for $i=1,\dotsc,m$, (3.6) $$\left\langle\partial_{t}(J\hat{u}^{h}_{i}),\hat{u}^{h}_{i}\right\rangle_{{\hat% {\Omega}}}+{D}_{i}b_{t}\left({{\nabla}\hat{u}^{h}_{i}},{\nabla\hat{u}^{h}_{i}}% \right)=\left\langle J\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{h}),\hat{u}^{h}_% {i}\right\rangle_{{\hat{\Omega}}}.$$ For the first term on the left of (3.6) we have (3.7) $$\left\langle\partial_{t}(J\hat{u}^{h}_{i}),\hat{u}^{h}_{i}\right\rangle_{{\hat% {\Omega}}}=\frac{\dif}{\dif t}\left\langle J\hat{u}^{h}_{i},\hat{u}^{h}_{i}% \right\rangle_{{\hat{\Omega}}}-\left\langle J\hat{u}^{h}_{i},\partial_{t}\hat{% u}^{h}_{i}\right\rangle_{{\hat{\Omega}}}.$$ Application of Reynold’s transport theorem (2.6) gives (3.8) $$\left\langle\partial_{t}(J\hat{u}^{h}_{i}),\hat{u}^{h}_{i}\right\rangle_{{\hat% {\Omega}}}=\frac{1}{2}\left(\frac{\dif}{\dif t}\|\hat{u}^{h}_{i}\|_{J}^{2}+% \left\langle J{u}^{h}_{i},{u}^{h}_{i}\nabla\cdot\boldsymbol{a}(\boldsymbol{% \mathcal{A}}_{t}(\boldsymbol{\xi}),t)\right\rangle_{{\hat{\Omega}}}\right).$$ Dealing with the right hand side of (3.6), using (2.21) and the mean-value theorem (MVT) we have with $\widetilde{C}$ from (2.21) (3.9) $$\begin{split}\displaystyle\left|\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{h})% \right|&\displaystyle\leq\left|\widetilde{f}_{i}(\boldsymbol{0})\right|+\left|% \widetilde{f}_{i}(\hat{\boldsymbol{u}}^{h})-\widetilde{f}_{i}(\boldsymbol{0})% 
\right|\leq\left|\widetilde{f}_{i}(\boldsymbol{0})\right|+\widetilde{C}\sum_{j% =1}^{m}\left|\hat{u}_{j}^{h}\right|.\end{split}$$ Therefore we have (3.10) $$\left|\left\langle J\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{h}),\hat{u}^{h}_{i% }\right\rangle_{{\hat{\Omega}}}\right|\leq\widetilde{C}\left\langle J\sum_{j=1% }^{m}\left|\hat{u}^{h}_{j}\right|,\left|\hat{u}^{h}_{i}\right|\right\rangle_{{% \hat{\Omega}}}+\left|\left\langle J\widetilde{f}_{i}(0),\hat{u}^{h}_{i}\right% \rangle_{{\hat{\Omega}}}\right|.$$ Applying Young’s inequality gives (3.11) $$\begin{split}\displaystyle\left|\left\langle J\widetilde{f}_{i}(\hat{% \boldsymbol{u}}^{h}),\hat{u}^{h}_{i}\right\rangle_{{\hat{\Omega}}}\right|\leq&% \displaystyle\widetilde{C}\left(\frac{1}{2}\sum_{j\not=i}\|\hat{u}^{h}_{j}\|_{% J}^{2}+\frac{m+1}{2}\|\hat{u}^{h}_{i}\|_{J}^{2}\right)+\frac{1}{2}\|\hat{u}^{h% }_{i}\|_{J}^{2}+C_{\widetilde{f}_{i}(\boldsymbol{0})},\end{split}$$ where $C_{\widetilde{f}_{i}(\boldsymbol{0})}\in{\mathbb{R}}^{+}$ depends on $\left|\widetilde{f}_{i}(\boldsymbol{0})\right|$. 
Summing over $i$ we have (3.12) $$\begin{split}\displaystyle\sum_{i=1}^{m}\left|\left\langle J\widetilde{f}_{i}(% \hat{\boldsymbol{u}}^{h}),\hat{u}^{h}_{i}\right\rangle_{{\hat{\Omega}}}\right|% &\displaystyle\leq\left(\widetilde{C}m+\frac{1}{2}\right)\|\hat{\boldsymbol{u}% }^{h}\|_{J^{m}}^{2}+C_{\widetilde{\boldsymbol{f}}(\boldsymbol{0})}.\end{split}$$ Using (2.11), (3.8) and (3.12) in (3.6) gives (3.13) $$\begin{split}\displaystyle\frac{\dif}{\dif t}\|\hat{\boldsymbol{u}}^{h}\|_{J^{% m}}^{2}+2\sum_{i=1}^{m}D_{i}|\hat{u}^{h}_{i}|_{\boldsymbol{B}}^{2}\leq&% \displaystyle\left\langle J\left(2\widetilde{C}m+1-\nabla\cdot\boldsymbol{a}% \left(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),t\right)\right)\hat{% \boldsymbol{u}}^{h},\hat{\boldsymbol{u}}^{h}\right\rangle_{{\hat{\Omega}}^{m}}% +2C_{\widetilde{\boldsymbol{f}}(\boldsymbol{0})}.\end{split}$$ Finally, integrating in time and applying Gronwall’s lemma we have (3.14) $$\|\hat{\boldsymbol{u}}^{h}(t)\|_{J^{m}}^{2}\leq\left(\|\hat{\boldsymbol{u}}^{h% }(0)\|_{J^{m}}^{2}+2tC_{\widetilde{\boldsymbol{f}}(\boldsymbol{0})}\right)\exp% \left(\sup_{{\hat{\Omega}}\times[0,T]}\left\{2\widetilde{C}m+1-\nabla\cdot% \boldsymbol{a}\left(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),t\right)% \right\}t\right).$$ From (2.5), the dilution term $\nabla\cdot\boldsymbol{a}$ has the same sign as $\partial_{t}{J}$ and is therefore positive (or negative) if the domain is growing (or contracting). Thus, domain growth has a diluting effect on the $\operatorname{L}_{2}{({\Omega_{t}})^{m}}$ norm (c.f., (2.10)) of the solution. 3.5. Fully discrete scheme We divide the time interval $[0,T]$ into $N$ subintervals, $0=t_{0}<\dotsb<t_{N}=T$ and denote by $\tau_{n}:=t_{n}-t_{n-1}$ the (possibly nonuniform) time step and $\tau=\max_{n}{\tau_{n}}$. 
We consistently use the following shorthand for a function of time: $f^{n}:=f(t_{n})$, and we denote $\bar{\partial}f^{n}:={\tau_{n}}^{-1}\left(f^{n}-f^{n-1}\right).$ For the approximation in time we use a modified implicit Euler method where linear reaction terms and the diffusive term are treated implicitly, while the nonlinear reaction terms are treated semi-implicitly using values from the previous timestep (the first step of a Picard iteration). Our choice of timestepping scheme stems from the numerical investigation conducted by Madzvamuse. The fully discrete scheme we employ to approximate the solution of Problem 1.1 is thus: find $\hat{U}_{i}^{n}\in\hat{\mathbb{V}}$, for $n=1,\dotsc,N$, such that for $i=1,\dotsc,m$, we have (3.15) $$\begin{cases}\begin{split}\displaystyle\left\langle\bar{\partial}\left[J\hat{U}_{i}\right]^{n},\hat{\Phi}\right\rangle_{{\hat{\Omega}}}+{D}_{i}&\displaystyle\left\langle[\boldsymbol{B}\nabla\hat{U}_{i}]^{n},\nabla\hat{\Phi}\right\rangle_{{\hat{\Omega}}}=\left\langle J^{n}\hat{U}_{i}^{n}\widetilde{F}_{i}(\hat{\boldsymbol{U}}^{n-1}),\hat{\Phi}\right\rangle_{{\hat{\Omega}}},\quad\forall\hat{\Phi}\in\hat{\mathbb{V}},\\ \displaystyle\hat{U}_{i}^{0}&\displaystyle=\Lambda^{h}\hat{u}_{i}^{0},\end{split}\end{cases}$$ where ${\Lambda^{h}}$ is the Lagrange interpolant and $\widetilde{F}_{i}$ is as defined in (2.21). 3.6. Physical domain formulation In a more physically intuitive way, we may look to approximate the solution to (1.1) on a conforming subspace of the evolving domain. To this end we define a family of finite dimensional spaces ${\mathbb{V}^{n}}$, $n=0,\dotsc,N$, such that (3.16) $${\mathbb{V}^{n}}:=\left\{\hat{\Phi}(\boldsymbol{\mathcal{A}}_{t^{n}}^{-1}(\cdot)):\hat{\Phi}\in\hat{\mathbb{V}}\right\},$$ which also defines the triangulation ${\mathscr{T}}^{n}$, $n=0,\dotsc,N$, on the evolving domain.
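A heavily simplified sketch of the time loop in (3.15): one species on the reference interval $(0,1)$, linear elements with a lumped mass matrix, the hypothetical isotropic map $\mathcal{A}_{t}(\xi)=(1+t)\xi$ (so $J^{n}=1+t_{n}$ and $B^{n}=1/(1+t_{n})$), and logistic kinetics $F(u)=r(1-u)$; none of these modelling choices come from the text, they only exercise the scheme's structure, with diffusion implicit and $F$ frozen at the previous step:

```python
import numpy as np

def solve(nx=40, nt=200, T=1.0, D=0.01, r=2.0):
    h, tau = 1.0 / nx, T / nt
    M = np.full(nx + 1, h)                    # lumped mass matrix (diagonal)
    M[0] = M[-1] = h / 2.0
    A = np.zeros((nx + 1, nx + 1))            # stiffness matrix on the reference mesh
    for e in range(nx):
        A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    U = 0.1 + 0.05 * np.cos(np.pi * np.linspace(0.0, 1.0, nx + 1))  # initial data
    for n in range(1, nt + 1):
        Jn, Jp = 1.0 + n * tau, 1.0 + (n - 1) * tau   # J^n and J^{n-1}
        Bn = 1.0 / Jn                                  # scalar B^n for this map
        # (J^n M + tau D B^n A - tau J^n M F(U^{n-1})) U^n = J^{n-1} M U^{n-1}
        lhs = np.diag(Jn * M - tau * Jn * M * r * (1.0 - U)) + tau * D * Bn * A
        U = np.linalg.solve(lhs, Jp * M * U)
    return U

U_final = solve()   # nodal values on the reference mesh at t = T
```

The difference of $J$-weighted mass terms on the left carries the dilution effect of domain growth automatically; the zero-flux condition is natural in this weak form, so no boundary rows need modification.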
Using (3.15) and (3.16) we have the following equivalent finite element formulation on the evolving domain: Find ${U}_{i}^{n}\in{\mathbb{V}^{n}}$, for $n=1,\dotsc,N$, such that for $i=1,\dotsc,m$, (3.17) $$\begin{cases}\begin{aligned} \displaystyle\bar{\partial}\left[\left\langle{U}_{i},{\Phi}\right\rangle_{{\Omega_{t}}}\right]^{n}+{D}_{i}\left\langle\nabla{U}_{i}^{n},\nabla{\Phi}^{n}\right\rangle_{{\Omega_{t^{n}}}}&\displaystyle=\left\langle{U}_{i}^{n}\widetilde{F}_{i}({\boldsymbol{U}}^{n-1}),{\Phi}^{n}\right\rangle_{{\Omega_{t^{n}}}}\quad\>\forall\>\Phi^{n}\in{\mathbb{V}^{n}},\\ \displaystyle{U}_{i}^{0}&\displaystyle=\Lambda^{h}{u}_{i}^{0},\end{aligned}\end{cases}$$ where $\Lambda^{h}$ is the Lagrange interpolant. 4. Analysis of the semidiscrete scheme We now prove that the semidiscrete solution converges to the exact one with optimal order in the $\operatorname{L}_{\infty}{(0,T;\operatorname{L}_{2}{({\hat{\Omega}})^{m}})}$ norm and the $\operatorname{L}_{2}{(0,T;\operatorname{H}^{1}{({\hat{\Omega}})^{m}})}$ seminorm. 4.1. A time-dependent Ritz projection A central role in the analysis is played by the Ritz, or elliptic, projector, defined, as in Wheeler, for each $t\in[0,T]$, by ${R}_{t}:H^{1}({\hat{\Omega}})\to\hat{\mathbb{V}}$ such that for each ${\hat{v}}\in\operatorname{H}^{1}{({\hat{\Omega}})}$ (4.1) $$\displaystyle b_{t}\left({\hat{v}},{\hat{\Phi}}\right)=b_{t}\left({{R}_{t}\hat{v}},{\hat{\Phi}}\right)\quad\>\forall\>\hat{\Phi}\in\hat{\mathbb{V}},$$ (4.2) $$\displaystyle\text{ and }\int_{{\hat{\Omega}}}\left[{R}_{t}\hat{v}-\hat{v}\right]=0.$$ The constraint (4.2) ensures ${R}_{t}$ is well defined.
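The well-posedness mechanism behind (4.1) and (4.2) can be sketched in one dimension with $\boldsymbol{B}=I$: the stiffness matrix alone is singular (constants lie in its kernel), and the zero-mean constraint is enforced through a Lagrange multiplier in a bordered system. Since nodal data already represents a function in $\hat{\mathbb{V}}$, the projection below must reduce to the identity, which serves as the check; all numerical details are illustrative:

```python
import numpy as np

def ritz_project(v, nx):
    """Ritz projection with the zero-mean constraint (4.2), 1-D linear elements."""
    h = 1.0 / nx
    n = nx + 1
    A = np.zeros((n, n))                      # stiffness matrix; kernel = constants
    for e in range(nx):
        A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    w = np.full(n, h)                         # integrals of the hat basis functions
    w[0] = w[-1] = h / 2.0
    S = np.zeros((n + 1, n + 1))              # bordered system [[A, w], [w^T, 0]]
    S[:n, :n] = A
    S[:n, n] = w
    S[n, :n] = w
    rhs = np.concatenate([A @ v, [w @ v]])    # enforce (4.1) rows plus (4.2)
    return np.linalg.solve(S, rhs)[:n]

nodes = np.linspace(0.0, 1.0, 9)
v = np.sin(2.0 * np.pi * nodes)
Rv = ritz_project(v, 8)                       # data already in the FE space
```

The bordered matrix is invertible because the constraint weights are not orthogonal to the constants, which is exactly the role (4.2) plays for $R_{t}$.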
Differentiation in time in (4.1) with $\hat{v}=\hat{u}_{i}$ yields (4.3) $$b_{t}\left({\partial_{t}(\hat{u}_{i}-{R}_{t}\hat{u}_{i})},{\hat{\Phi}}\right)+\langle(\partial_{t}\boldsymbol{B})\nabla(\hat{u}_{i}-{R}_{t}\hat{u}_{i}),\nabla\hat{\Phi}\rangle_{{\hat{\Omega}}}=0\quad\>\forall\>\hat{\Phi}\in\hat{\mathbb{V}}.$$ To obtain optimal error estimates, we now decompose the error into an elliptic error (the error between the Ritz projection and the exact solution) and a parabolic error (the error between the semidiscrete solution and the Ritz projection): (4.4) $$\begin{split}\displaystyle\hat{\boldsymbol{u}}^{h}-\hat{\boldsymbol{u}}&\displaystyle=(\hat{\boldsymbol{u}}^{h}-{R}_{t}\hat{\boldsymbol{u}})+({R}_{t}\hat{\boldsymbol{u}}-\hat{\boldsymbol{u}})=:\hat{\boldsymbol{\rho}}^{h}+\hat{\boldsymbol{\varepsilon}},\end{split}$$ where the equality defines $\hat{\boldsymbol{\rho}}^{h}=(\hat{\rho}^{h}_{1},\dotsc,\hat{\rho}^{h}_{m})^{\mathsf{T}}$ and $\hat{\boldsymbol{\varepsilon}}=(\hat{\varepsilon}_{1},\dotsc,\hat{\varepsilon}_{m})^{\mathsf{T}}$. 4.2. Lemma (Ritz projection error estimate) Suppose Assumptions 2.6 and 2.3 (with $k=\ell$) hold and let ${R}$ be the Ritz projection defined in (4.1).
Then the following estimates hold: (4.5) $$\displaystyle\begin{split}\displaystyle\sup_{t\in[0,T]}\Bigg{\{}&\displaystyle\|{R}_{t}\hat{\boldsymbol{u}}(t)-\hat{\boldsymbol{u}}(t)\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\hat{h}^{2}\sum_{i=1}^{m}\|\nabla\left({R}_{t}\hat{{u}}_{i}(t)-\hat{{u}}_{i}(t)\right)\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\Bigg{\}}\leq{C(\boldsymbol{\mathcal{A}},\hat{u})}\hat{h}^{2(\ell+1)},\end{split}$$ (4.6) $$\displaystyle\begin{split}\displaystyle\sup_{t\in[0,T]}\Bigg{\{}&\displaystyle\|\partial_{t}\left({R}_{t}\hat{\boldsymbol{u}}(t)-\hat{\boldsymbol{u}}(t)\right)\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\hat{h}^{2}\sum_{i=1}^{m}\|\nabla\partial_{t}\left({{R}_{t}\hat{{u}}_{i}(t)-\hat{{u}}_{i}(t)}\right)\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\Bigg{\}}\leq C(\boldsymbol{\mathcal{A}},\hat{u})\hat{h}^{2(\ell+1)}.\end{split}$$ Proof. Using (2.13) and (4.1) we have for $i=1,\dotsc,m$ (4.7) $$\begin{split}\displaystyle\mu\|\nabla\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}&\displaystyle\leq b_{t}\left({\hat{\varepsilon}_{i}},{\hat{\Phi}-\hat{u}_{i}}\right)\quad\>\forall\>\hat{\Phi}\in\hat{\mathbb{V}}\\ &\displaystyle\leq\bar{\mu}\|\nabla\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\|\nabla({\Lambda^{h}}\hat{u}_{i}-\hat{u}_{i})\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\leq C\hat{h}^{\ell}\|\nabla\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\left|\hat{u}_{i}\right|_{\operatorname{H}^{\ell+1}({\hat{\Omega}})},\end{split}$$ which shows the energy norm bound of (4.5). To show the $\operatorname{L}_{2}$ estimate we use duality.
Fix a $t\in(0,T]$ and consider the solution $\hat{\psi}$ of following elliptic problem (4.8) $$-\nabla\cdot(\boldsymbol{B}_{t}\nabla\hat{\psi})=\hat{\phi}\mbox{ in }{\hat{% \Omega}},\quad\boldsymbol{B}_{t}\nabla\hat{\psi}\cdot\hat{{\boldsymbol{\nu}}}=% 0\mbox{ on }\partial{\hat{\Omega}},\quad\int_{\hat{\Omega}}\hat{\psi}=0.$$ Note that $\|\hat{\psi}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\leq C\left|\hat{\psi}% \right|_{\operatorname{H}^{1}({\hat{\Omega}})}$ as for any $\hat{v}$ (4.9) $$\inf_{r\in{\mathbb{R}}}\|\hat{v}-r\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}=% \Big{\|}\hat{v}-\frac{1}{|{\hat{\Omega}}|}\int_{{\hat{\Omega}}}\hat{v}\Big{\|}% _{\operatorname{L}_{2}{({\hat{\Omega}})}}\leq C\left|\hat{v}\right|_{% \operatorname{H}^{1}({\hat{\Omega}})}.$$ We therefore have (4.10) $$\mu\|\nabla\hat{\psi}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\leq b_{t}% \left({\hat{\psi}},{\hat{\psi}}\right)=\left\langle\hat{\phi},\hat{\psi}\right% \rangle_{{\hat{\Omega}}}\leq C\|\hat{\phi}\|_{\operatorname{L}_{2}{({\hat{% \Omega}})}}\|\nabla\hat{\psi}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}.$$ Furthermore we have the estimate (4.11) $$\left|\hat{\psi}\right|_{\operatorname{H}^{2}({\hat{\Omega}})}\leq C\|\Updelta% \hat{\psi}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\leq C\|\boldsymbol{B}% \Updelta\hat{\psi}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}=C\|\hat{\phi}+% \nabla\cdot\boldsymbol{B}\cdot\nabla\hat{\psi}\|_{\operatorname{L}_{2}{({\hat{% \Omega}})}}\leq C\|\hat{\phi}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}.$$ Here we have introduced the notation that the divergence of the tensor $\boldsymbol{B}$ is a vector defined such that for $i=1,\dotsc,d$, $(\nabla\cdot\boldsymbol{B})_{i}=\sum_{j=1}^{d}\partial_{x_{j}}\boldsymbol{B}_{% i,j}.$ Thus testing (4.8) with $\hat{\varepsilon}_{i}$ and using (4.1) we have (4.12) $$\begin{split}\displaystyle\left\langle\hat{\varepsilon}_{i},\hat{\phi}\right% 
\rangle_{{\hat{\Omega}}}&\displaystyle=b_{t}\left({\hat{\varepsilon}_{i}},{% \hat{\psi}-\hat{\Phi}}\right)\quad\>\forall\>\hat{\Phi}\in\hat{\mathbb{V}}\\ &\displaystyle\leq\bar{\mu}\|\nabla\hat{\varepsilon}_{i}\|_{\operatorname{L}_{% 2}{({\hat{\Omega}})}}\|\nabla(\hat{\psi}-{\Lambda^{h}}\hat{\psi})\|_{% \operatorname{L}_{2}{({\hat{\Omega}})}}\leq C\hat{h}^{\ell+1}\left|\hat{u}% \right|_{\operatorname{H}^{\ell+1}({\hat{\Omega}})}\left|\psi\right|_{% \operatorname{H}^{2}({\hat{\Omega}})}\leq C\hat{h}^{\ell+1},\end{split}$$ which completes the proof of (4.5). For the proof of (4.6) using (4.3) and the fact that the gradient commutes with the time derivative (as we work on the reference domain) we have that for $i=1,\dotsc,m$, and for each $\hat{\Phi}\in\hat{\mathbb{V}}$, (4.13) $$\begin{split}\displaystyle\mu&\displaystyle\|\nabla\partial_{t}\hat{% \varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\leq b_{t}\left% ({\partial_{t}\hat{\varepsilon}_{i}},{\partial_{t}\hat{\varepsilon}_{i}}\right% )=b_{t}\left({\partial_{t}\hat{\varepsilon}_{i}},{\hat{\Phi}-\partial_{t}\hat{% u}_{i}}\right)+b_{t}\left({\partial_{t}\hat{\varepsilon}_{i}},{\partial_{t}{R}% _{t}\hat{u}_{i}-\hat{\Phi}}\right)\\ &\displaystyle=b_{t}\left({\partial_{t}\hat{\varepsilon}_{i}},{\hat{\Phi}-% \partial_{t}\hat{u}_{i}}\right)+\left\langle\partial_{t}\boldsymbol{B}\nabla% \hat{\varepsilon}_{i},\nabla(\hat{\Phi}-\partial_{t}{R}_{t}\hat{u}_{i})\right% \rangle_{{\hat{\Omega}}}.\end{split}$$ Taking $\hat{\Phi}={\Lambda^{h}}\partial_{t}\hat{u}_{i}$ in (4.13) gives (4.14) $$\begin{split}\displaystyle\mu\|\nabla\partial_{t}\hat{\varepsilon}_{i}\|_{% \operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\leq&\displaystyle C\hat{h}^{\ell}% \left|\partial_{t}\hat{u}_{i}\right|_{\operatorname{H}^{\ell+1}({\hat{\Omega}}% )}\|\nabla\partial_{t}\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{% \Omega}})}}\\ &\displaystyle+\|\partial_{t}\boldsymbol{B}\|_{\operatorname{L}_{\infty}{({% 
\hat{\Omega}})}}\|\nabla\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\bigg{(}\|\nabla\partial_{t}\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}+\big{\|}{\nabla\left({\Lambda^{h}}\partial_{t}\hat{u}_{i}-\partial_{t}\hat{u}_{i}\right)}\big{\|}_{\operatorname{L}_{2}({\hat{\Omega}})}\bigg{)}\\ \displaystyle\leq&\displaystyle\frac{\mu}{2}\|{\nabla\partial_{t}\hat{\varepsilon}_{i}}\|_{\operatorname{L}_{2}({\hat{\Omega}})}^{2}+C(\|\nabla\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}+\hat{h}^{2\ell}\left|\partial_{t}\hat{u}_{i}\right|_{\operatorname{H}^{\ell+1}({\hat{\Omega}})}^{2}),\end{split}$$ where we have used Young’s inequality in the final step. The previous estimate (4.5) completes the proof of the energy norm bound in (4.6). For the $\operatorname{L}_{2}$ estimate we once again use duality. Testing problem (4.8) with $\partial_{t}\hat{\varepsilon}_{i}$ and using (4.3), we have for $i=1,\dotsc,m$, and any $\hat{\Phi}\in\hat{\mathbb{V}}$ (4.15) $$\begin{split}\displaystyle\left\langle\partial_{t}\hat{\varepsilon}_{i},\hat{\phi}\right\rangle_{{\hat{\Omega}}}&\displaystyle=b_{t}\left({\partial_{t}\hat{\varepsilon}_{i}},{\hat{\psi}-\hat{\Phi}}\right)-\left\langle(\partial_{t}\boldsymbol{B})\nabla\hat{\varepsilon}_{i},\nabla\hat{\Phi}\right\rangle_{{\hat{\Omega}}}\\ &\displaystyle=b_{t}\left({\partial_{t}\hat{\varepsilon}_{i}},{\hat{\psi}-\hat{\Phi}}\right)+\left\langle(\partial_{t}\boldsymbol{B})\nabla\hat{\varepsilon}_{i},\nabla\left(\hat{\psi}-\hat{\Phi}\right)\right\rangle_{{\hat{\Omega}}}-\left\langle(\partial_{t}\boldsymbol{B})\nabla\hat{\varepsilon}_{i},\nabla\hat{\psi}\right\rangle_{{\hat{\Omega}}}.\end{split}$$ Taking $\hat{\Phi}={\Lambda^{h}}\hat{\psi}$ in (4.15) gives (4.16) $$\begin{split}&\displaystyle\left|\left\langle\partial_{t}\hat{\varepsilon}_{i},\hat{\phi}\right\rangle_{{\hat{\Omega}}}\right|\leq C\left|\hat{\psi}\right|_
{\operatorname{H}^{2}({\hat{\Omega}})}\Bigg{(}\hat{h}\bar{\mu}\|\nabla\partial% _{t}\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}+\hat{h}\|% \partial_{t}\boldsymbol{B}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\|% \nabla\hat{\varepsilon}_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\\ &\displaystyle\quad+\|\partial_{t}\boldsymbol{B}\|_{\operatorname{L}_{\infty}{% {\hat{\Omega}}}}\|\varepsilon_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}% \Bigg{)}+\|\nabla\hat{\psi}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}\|\nabla% \partial_{t}\boldsymbol{B}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\|% \varepsilon_{i}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}},\end{split}$$ where we have used integration by parts to estimate the last term in (4.15). The previous estimates and Assumption 2.3 complete the proof. ∎ 4.3. Theorem (A priori estimate for the semidiscrete scheme) Suppose Assumptions 2.5 and 2.6 hold. Furthermore, let Assumption 2.3 hold (with $k=\ell$). Finally let $\hat{\boldsymbol{u}}^{h}$ be the solution to Problem (3.5). Then, the following optimal a priori error estimate holds for the error in the semidiscrete scheme: (4.17) $$\begin{split}\displaystyle\sup_{t\in[0,T]}\left\{\|\hat{\boldsymbol{u}}^{h}(t)% -\hat{\boldsymbol{u}}(t)\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}% \right\}+\sum_{i=1}^{m}\int_{0}^{T}\hat{h}^{2}\|\nabla(\hat{u}_{i}^{h}(t)-\hat% {u}_{i}(t))\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\dif t\leq{C}% \left(\boldsymbol{\mathcal{A}},\hat{u},\widetilde{C}\right)\hat{h}^{2(\ell+1)}% ,\end{split}$$ Proof. Using the decomposition (4.4) and Lemma 4.2 we have a bound on the elliptic error and it simply remains to estimate the parabolic error $\hat{\boldsymbol{\rho}}^{h}$. To this end, we use (3.5) to construct a PDE for $\hat{\rho}^{h}_{i}$ by inserting $\hat{\rho}^{h}_{i}$ in place of $\hat{u}^{h}_{i}$ and taking $\hat{\Phi}=\hat{\rho}^{h}_{i}$. 
Using (2.11) we obtain for $i=1,\dotsc,m$, (4.18) $$\left\langle\partial_{t}\left(J\hat{\rho}^{h}_{i}\right),\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}+D_{i}|\nabla\hat{\rho}^{h}_{i}|_{B}^{2}=\left\langle\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{h}),J\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}-\left\langle\partial_{t}\left(J{R}_{t}\hat{u}_{i}\right),\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}-b_{t}\left({{R}_{t}\hat{u}_{i}},{\hat{\rho}^{h}_{i}}\right).$$ Using (2.20), (2.22) and (4.1) gives (4.19) $$\left\langle\partial_{t}\left(J\hat{\rho}^{h}_{i}\right),\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}+D_{i}|\nabla\hat{\rho}^{h}_{i}|_{B}^{2}=\left\langle\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{h})-\widetilde{f}_{i}(\hat{\boldsymbol{u}}),J\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}-\left\langle\partial_{t}\left(J\hat{\varepsilon}_{i}\right),\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}.$$ Dealing with the first term on the left of (4.19) as in (3.8): (4.20) $$\left\langle\partial_{t}\left(J\hat{\rho}^{h}_{i}\right),\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}=\frac{1}{2}\left(\frac{\dif}{\dif t}\|\hat{\rho}_{i}^{h}\|_{J}^{2}+\left\langle J\hat{\rho}^{h}_{i}\nabla\cdot\boldsymbol{a}(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi})),\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}\right).$$ Dealing with the first term on the right of (4.19) using (4.4) and the mean value theorem (MVT) we have (4.21) $$\left|\left\langle\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{h})-\widetilde{f}_{i}(\hat{\boldsymbol{u}}),J\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}\right|\leq\widetilde{C}\left(\left\langle\sum_{j=1}^{m}\left(\left|\hat{\varepsilon}_{j}\right|+\left|\hat{\rho}^{h}_{j}\right|\right),J\left|\hat{\rho}^{h}_{i}\right|\right\rangle_{{\hat{\Omega}}}\right).$$ Applying Young’s inequality: (4.22) $$\begin{split}\displaystyle\left|\left\langle\widetilde{f}_{i}(\hat{\boldsymbol
{u}}^{h})-\widetilde{f}_{i}(\hat{\boldsymbol{u}}),J\hat{\rho}^{h}_{i}\right% \rangle_{{\hat{\Omega}}}\right|\leq\widetilde{C}\Bigg{(}\left(m+\frac{1}{2}% \right)\|\hat{\rho}^{h}_{i}\|_{J}^{2}+\sum_{j\not=i}\frac{1}{2}\|\hat{\rho}^{h% }_{j}\|_{J}^{2}+\frac{1}{2}\|\boldsymbol{\hat{\varepsilon}}\|_{J^{m}}^{2}\Bigg% {)}.\end{split}$$ Summing over $i$ we have (4.23) $$\begin{split}\displaystyle\sum_{i=1}^{m}\left|\left\langle\widetilde{f}_{i}(% \hat{\boldsymbol{u}}^{h})-\widetilde{f}_{i}(\hat{\boldsymbol{u}}),J\hat{\rho}^% {h}_{i}\right\rangle_{{\hat{\Omega}}}\right|\leq\widetilde{C}\Bigg{(}\frac{3m}% {2}\|\hat{\boldsymbol{\rho}}^{h}\|_{J^{m}}^{2}+\frac{m}{2}\|\boldsymbol{\hat{% \varepsilon}}\|_{J^{m}}^{2}\Bigg{)}.\end{split}$$ Dealing with the second term on the right of (4.19): (4.24) $$\begin{split}\displaystyle\left|\left\langle\partial_{t}\left(J\hat{% \varepsilon}_{i}\right),\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}\right% |\leq&\displaystyle\left|\left\langle J\partial_{t}\hat{\varepsilon}_{i},\hat{% \rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}\right|+\left|\left\langle\partial_% {t}\left(J\right)\hat{\varepsilon}_{i},\hat{\rho}^{h}_{i}\right\rangle_{{\hat{% \Omega}}}\right|\\ \displaystyle\leq&\displaystyle\frac{1}{2}\Bigg{(}\|\hat{\rho}^{h}\|_{J}^{2}+% \left\langle{J}\partial_{t}\hat{\varepsilon}_{i},\partial_{t}\hat{\varepsilon}% _{i}\right\rangle_{{\hat{\Omega}}}+\left\langle\left|\partial_{t}({J})\right|% \hat{\rho}^{h}_{i},\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}+\left% \langle\left|\partial_{t}({J})\right|\hat{\varepsilon}_{i},\hat{\varepsilon}_{% i}\right\rangle_{{\hat{\Omega}}}\Bigg{)},\end{split}$$ where we have used Young’s inequality for the second step. 
Now using (2.5) and summing over $i$ we have (4.25) $$\begin{split}\displaystyle\sum_{i=1}^{m}\left|\left\langle\partial_{t}\left(J% \hat{\varepsilon}_{i}\right),\hat{\rho}^{h}_{i}\right\rangle_{{\hat{\Omega}}}% \right|\leq\frac{1}{2}\Bigg{(}&\displaystyle\|\hat{\boldsymbol{\rho}}^{h}\|_{J% ^{m}}^{2}+\left\langle J\hat{\boldsymbol{\rho}^{h}}\left|\nabla\cdot% \boldsymbol{a}\left(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}),t\right)% \right|,\hat{\boldsymbol{\rho}}^{h}\right\rangle_{{\hat{\Omega}}^{m}}\\ &\displaystyle+\|\partial_{t}\boldsymbol{\hat{\varepsilon}}\|_{J^{m}}^{2}+\|% \partial_{t}{J}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}}\times[0,T])}}\|% \boldsymbol{\hat{\varepsilon}}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^% {2}\Bigg{)}.\end{split}$$ Combining (4.20), (4.23), (4.25) (4.26) $$\begin{split}\displaystyle\frac{\dif}{\dif t}\|\hat{\boldsymbol{\rho}}^{h}\|_{% J^{m}}^{2}+2\sum_{i=1}^{m}D_{i}|\nabla\hat{\rho}^{h}_{i}|_{B}^{2}\leq{C}\Bigg{% (}\|\hat{\boldsymbol{\rho}}^{h}\|_{J^{m}}^{2}+\|\boldsymbol{\hat{\varepsilon}}% \|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\|\partial_{t}\boldsymbol{% \hat{\varepsilon}}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\Bigg{)},% \end{split}$$ where we have used the fact that Assumption 2.3 implies $J,\partial_{t}{J}\in\operatorname{L}_{\infty}{\left(\smash{{\hat{\Omega}}}% \times[0,T]\right)}$. 
Integrating in time, using Lemma 4.2 and applying Gronwall’s Lemma we have (4.27) $$\|\hat{\boldsymbol{\rho}}^{h}(t)\|_{J^{m}}^{2}+2\sum_{i=1}^{m}D_{i}\int_{0}^{T}|\nabla\hat{\rho}^{h}_{i}|_{B}^{2}\leq{C}\left(\|\hat{\boldsymbol{\rho}}^{h}(0)\|_{J^{m}}^{2}+\hat{h}^{2(\ell+1)}\right).$$ To estimate $\hat{\boldsymbol{\rho}}^{h}(0)$, we note (4.28) $$\begin{split}\displaystyle\|\hat{\boldsymbol{\rho}}^{h}(0)\|_{J^{m}}\leq\|\hat{\boldsymbol{u}}(0)-{\Lambda^{h}}\hat{\boldsymbol{u}}(0)\|_{J^{m}}+\|\hat{\boldsymbol{\varepsilon}}^{h}\|_{J^{m}}\leq{C}\hat{h}^{\ell+1},\end{split}$$ where we have used (3.2), the assumption on the regularity of the exact solution and Lemma 4.2 in the last step. Assumption 2.3 and the equivalence of norms (2.10) complete the proof. ∎ 5. Error analysis of the fully discrete approximation In this section we provide the convergence result for the fully discrete scheme (3.15). The main result of this paper is Theorem 5.1, whose proof is given in detail below. We follow that up with a convergence result in the $\operatorname{L}_{\infty}({\hat{\Omega}})$ norm which allows the use of the original $\boldsymbol{f}$ (without extending to $\widetilde{\boldsymbol{f}}$ in the numerical method). 5.1. Theorem (A priori estimate for the fully discrete scheme) Suppose Assumptions 2.5 and 2.6 hold. Suppose Assumption 2.3 (with $k=\ell$) holds. Let $\hat{\boldsymbol{U}}$ be the solution to (3.15). Suppose the timestep satisfies the stability condition defined in (5.11).
Then, the following optimal a priori estimate holds for the error in the fully discrete scheme: (5.1) $$\begin{split}\displaystyle\|\hat{\boldsymbol{U}}^{n}-\hat{\boldsymbol{u}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+&\displaystyle\tau\hat{h}^{2}\sum_{i=1}^{m}{D}_{i}\|\nabla\left(\hat{{U}}_{i}^{n}-\hat{{u}}_{i}^{n}\right)\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\\ &\displaystyle\leq{C}\left(\boldsymbol{\mathcal{A}},\hat{u},\widetilde{C}\right)\left(\hat{h}^{2(\ell+1)}+\tau^{2}\right),\quad\mbox{for }n\in[0,\dotsc,N],\end{split}$$ with $\widetilde{C}$ as defined in (2.21). 5.2. Remark (Error estimate for the evolving domain scheme) The schemes (3.15) and (3.17) are equivalent. Thus Theorem 5.1 also provides an error estimate for the evolving domain based scheme (3.17). Proof of Theorem 5.1.  Decomposing the error as in (4.4) we have (5.2) $$\begin{split}\displaystyle\|\hat{\boldsymbol{U}}^{n}-\hat{\boldsymbol{u}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}&\displaystyle\leq 2\|{R}_{t}\hat{\boldsymbol{u}}^{n}-\hat{\boldsymbol{u}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+2\|\hat{\boldsymbol{U}}^{n}-{R}_{t}\hat{\boldsymbol{u}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}=2\|\hat{\boldsymbol{\varepsilon}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+2\|\hat{\boldsymbol{\rho}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}.\end{split}$$ From Lemma 4.2 we have the following bound on the elliptic error: (5.3) $$\|\hat{\boldsymbol{\varepsilon}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\leq{C}\hat{h}^{2(\ell+1)}\quad\text{ for }n\in[0,\dotsc,N].$$ Therefore it only remains to estimate $\hat{\boldsymbol{\rho}}^{n}$.
Constructing an expression for $\hat{\boldsymbol{\rho}}^{n}$ as in (4.18), using (3.15) and (4.1) we obtain for $i=1,\dotsc,m$, (5.4) $$\begin{split}&\displaystyle\left\langle\bar{\partial}[J\hat{\rho}_{i}]^{n},% \hat{\rho}_{i}^{n}\right\rangle_{{\hat{\Omega}}}+D_{i}|\nabla\hat{\rho}^{n}_{i% }|_{B}^{2}=\left\langle\hat{U}_{i}^{n}\widetilde{F}_{i}(\hat{\boldsymbol{U}}^{% n-1}),[J\hat{\rho}_{i}]^{n}\right\rangle_{{\hat{\Omega}}}\\ &\displaystyle\qquad-\left\langle\bar{\partial}[J{R}_{t}\hat{u}_{i}]^{n},\hat{% \rho}_{i}^{n}\right\rangle_{{\hat{\Omega}}}-D_{i}\left\langle[J\boldsymbol{K}% \nabla\hat{u}_{i}]^{n},[\boldsymbol{K}\nabla\hat{\rho}_{i}]^{n}\right\rangle_{% {\hat{\Omega}}}\\ &\displaystyle\quad=\left\langle\hat{U}_{i}^{n}\widetilde{F}_{i}(\hat{% \boldsymbol{U}}^{n-1})-\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{n}),[J\hat{\rho% }_{i}]^{n}\right\rangle_{{\hat{\Omega}}}-\left\langle\bar{\partial}[J\hat{% \varepsilon}_{i}]^{n},\hat{\rho}_{i}^{n}\right\rangle_{{\hat{\Omega}}}+\left% \langle\left(\bar{\partial}-\partial_{t}\right)[J\hat{u}_{i}]^{n},\hat{\rho}_{% i}^{n}\right\rangle_{{\hat{\Omega}}},\end{split}$$ where we have used (2.20) for the second step and $\boldsymbol{\widetilde{F}}$ is as defined in (2.21). Using Young’s inequality for the first term on the left hand side of (5.4) gives (5.5) $$\begin{split}\displaystyle\left\langle\bar{\partial}[J\hat{\rho}_{i}]^{n},\hat% {\rho}_{i}^{n}\right\rangle_{{\hat{\Omega}}}\geq\frac{1}{\tau_{n}}\Bigg{(}&% \displaystyle\|\hat{\rho}^{n}_{i}\|_{J}^{2}-\frac{1}{2}\left(\left\langle J^{n% -1}\hat{\rho}_{i}^{n},\hat{\rho}_{i}^{n}\right\rangle_{{\hat{\Omega}}}+\left% \langle J^{n-1}\hat{\rho}_{i}^{n-1},\hat{\rho}_{i}^{n-1}\right\rangle_{{\hat{% \Omega}}}\right)\Bigg{)},\end{split}$$ where we have used (2.10). 
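Inequality (5.5) is, at each point, Young's inequality applied to the cross term $J^{n-1}\hat{\rho}^{n-1}_{i}\hat{\rho}^{n}_{i}$: the slack is exactly $\tfrac{1}{2}J^{n-1}(\hat{\rho}^{n}_{i}-\hat{\rho}^{n-1}_{i})^{2}\geq 0$. A scalar sanity check of this step (our own illustration; the variable names are arbitrary):

```python
import random

# Scalar analogue of (5.5): for J_old, J_new > 0,
# (J_new*r_new - J_old*r_old)*r_new >= J_new*r_new**2 - (J_old/2)*(r_new**2 + r_old**2),
# with slack exactly (J_old/2)*(r_new - r_old)**2.
random.seed(0)
for _ in range(1000):
    J_new, J_old = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    r_new, r_old = random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)
    lhs = (J_new * r_new - J_old * r_old) * r_new
    rhs = J_new * r_new**2 - 0.5 * J_old * (r_new**2 + r_old**2)
    slack = 0.5 * J_old * (r_new - r_old) ** 2
    assert abs((lhs - rhs) - slack) < 1e-9 and lhs >= rhs - 1e-12
```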
Summing over $i$ we have (5.6) $$\begin{split}\displaystyle\sum_{i=1}^{m}\left\langle\bar{\partial}[J\hat{\rho}% _{i}]^{n},\hat{\rho}_{i}^{n}\right\rangle_{{\hat{\Omega}}}\geq\frac{1}{\tau_{n% }}\left(1-\frac{1}{2}\|\frac{J^{n-1}}{J^{n}}\|_{\operatorname{L}_{\infty}{({% \hat{\Omega}})}}\right)\|\hat{\boldsymbol{\rho}}^{n}\|_{J^{m}}^{2}-\frac{1}{2% \tau_{n}}\|\hat{\boldsymbol{{\rho}}}^{n-1}\|_{J^{m}}^{2}.\end{split}$$ Using 5.2 and the MVT for the first term on the right hand side of (5.4) gives (5.7) $$\begin{split}\displaystyle\Bigg{|}&\displaystyle\left\langle\hat{U}_{i}^{n}% \widetilde{F}_{i}(\hat{\boldsymbol{U}}^{n-1})-\widetilde{f}_{i}(\hat{% \boldsymbol{u}}^{n}),[J\hat{\rho}_{i}]^{n}\right\rangle_{{\hat{\Omega}}}\Bigg{% |}\\ &\displaystyle\quad\leq\widetilde{C}\sum_{j=1}^{m}\left\langle\left|\hat{% \varepsilon}^{n-1}_{j}\right|+\left|\hat{\rho}^{n-1}_{j}\right|+\left|\tau_{n}% \bar{\partial}\hat{u}^{n}_{j}\right|+\left|\hat{\varepsilon}^{n}_{i}\right|+% \left|\hat{\rho}^{n}_{i}\right|,J^{n}\left|\hat{\rho}_{i}^{n}\right|\right% \rangle_{{\hat{\Omega}}}\\ &\displaystyle\quad\leq{C}\widetilde{C}\Bigg{(}\|\hat{\rho}_{i}^{n}\|_{J}^{2}+% \|\frac{J^{n}}{J^{n-1}}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\|\hat{% \boldsymbol{\rho}}^{n-1}\|_{J^{m}}^{2}\\ &\displaystyle\qquad+\|J^{n}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}% \bigg{(}\|\hat{{\varepsilon}}_{i}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})% }}^{2}+\|\hat{\boldsymbol{\varepsilon}}^{n-1}\|_{\operatorname{L}_{2}{({\hat{% \Omega}})^{m}}}^{2}+\|\tau_{n}\bar{\partial}\hat{\boldsymbol{u}}^{n}\|_{% \operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\bigg{)}\Bigg{)}.\end{split}$$ where we have used Young’s inequality for the second step. 
Summing over $i$ we have (5.8) $$\begin{split}\displaystyle\sum_{i=1}^{m}&\displaystyle\left|\left\langle\hat{U}_{i}^{n}\widetilde{F}_{i}(\hat{\boldsymbol{U}}^{n-1})-\widetilde{f}_{i}(\hat{\boldsymbol{u}}^{n}),[J\hat{\rho}_{i}]^{n}\right\rangle_{{\hat{\Omega}}}\right|\leq{C}\widetilde{C}\Bigg{(}\|\hat{\boldsymbol{\rho}}^{n}\|_{J^{m}}^{2}+\|\frac{J^{n}}{J^{n-1}}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\|\hat{\boldsymbol{\rho}}^{n-1}\|_{J^{m}}^{2}\\ &\displaystyle+\|J^{n}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\bigg{(}\|\hat{\boldsymbol{\varepsilon}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\|\hat{\boldsymbol{\varepsilon}}^{n-1}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\|\tau_{n}\bar{\partial}\hat{\boldsymbol{u}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\bigg{)}\Bigg{)}.\end{split}$$ Applying Young’s inequality to the second and third terms on the right of (5.4) gives (5.9) $$\begin{split}\displaystyle\left|\left\langle\bar{\partial}[J\hat{\varepsilon}_{i}]^{n},\hat{\rho}_{i}^{n}\right\rangle_{{\hat{\Omega}}}\right|&\displaystyle+\left|\left\langle\left(\bar{\partial}-\partial_{t}\right)[J\hat{u}_{i}]^{n},\hat{\rho}_{i}^{n}\right\rangle_{{\hat{\Omega}}}\right|\\ \displaystyle\leq\|\hat{\rho}^{n}_{i}\|_{J}^{2}&\displaystyle+\frac{1}{2}\|\frac{1}{J^{n}}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\left(\|\bar{\partial}[J\hat{\varepsilon}_{i}]^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}+\|\left(\bar{\partial}-\partial_{t}\right)[J\hat{u}_{i}]^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\right).\end{split}$$ Using (5.6), (5.8) and (5.9) in (5.4) gives (5.10) $$\begin{split}\displaystyle\frac{1}{\tau_{n}}&\displaystyle\left(1-\frac{1}{2}\|\frac{J^{n-1}}{J^{n}}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}-C\widetilde{C}\tau_{n}\right)\|\hat{\boldsymbol{\rho}}^{n}\|_{J^{m}}^{2}+\sum_{i=1}^{m}D_{i}|\nabla\hat{\rho}^{n}_{i}|_{B}^{2}\\
&\displaystyle\leq\left(\frac{1}{2\tau_{n}}+C\widetilde{C}\|\frac{J^{n}}{J^{n-% 1}}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\right)\|\hat{\boldsymbol{{% \rho}}}^{n-1}\|_{J^{m}}^{2}+{C}\widetilde{C}\|J^{n}\|_{\operatorname{L}_{% \infty}{({\hat{\Omega}})}}\Big{(}\|\hat{\boldsymbol{\varepsilon}}^{n}\|_{% \operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\|\hat{\boldsymbol{\varepsilon% }}^{n-1}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\\ &\displaystyle+\|\tau_{n}\bar{\partial}\hat{\boldsymbol{u}}^{n}\|_{% \operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\Big{)}+\frac{1}{2}\|\frac{1}{J% ^{n}}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\Big{(}\|\bar{\partial}[J% \hat{\boldsymbol{\varepsilon}}]^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{% m}}}^{2}+\|\left(\bar{\partial}-\partial_{t}\right)[J\hat{\boldsymbol{u}}]^{n}% \|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\Big{)}.\end{split}$$ Let $\tau^{\prime}>0$ be such that, for $\tau<\tau^{\prime}$ and for $n=1,\dotsc,N$, (5.11) $$1-\frac{1}{2}\|\frac{J^{n-1}}{J^{n}}\|_{\operatorname{L}_{\infty}{({\hat{% \Omega}})}}-C\widetilde{C}\tau>0.$$ Such a $\tau^{\prime}$ exists since (5.12) $$\lim_{\tau\to 0}\left\{\frac{1}{2}\|\frac{J^{n-1}}{J^{n}}\|_{\operatorname{L}_% {\infty}{({\hat{\Omega}})}}+C\widetilde{C}\tau\right\}=\frac{1}{2}.$$ For $\tau<\tau^{\prime}$, we have (5.13) $$\|\hat{\boldsymbol{\rho}}^{n}\|_{J^{m}}^{2}+\sum_{i=1}^{m}C\tau{D}_{i}|\nabla% \hat{\rho}^{n}_{i}|_{B}^{2}\leq{C}\left(\bar{C}^{n}\|\hat{\boldsymbol{{\rho}}}% ^{n-1}\|_{J^{m}}^{2}+\tau\mathcal{R}^{n}\right),$$ where $\bar{C}^{n}=1+\tau\widetilde{C}\|\frac{J^{n}}{J^{n-1}}\|_{\operatorname{L}_{% \infty}{({\hat{\Omega}})}}$ and (5.14) $$\begin{split}\displaystyle\mathcal{R}^{n}:=\widetilde{C}&\displaystyle\|J^{n}% \|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\Big{(}\|\hat{\boldsymbol{% \varepsilon}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\|\hat{% 
\boldsymbol{\varepsilon}}^{n-1}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}% ^{2}+\|\tau\bar{\partial}\hat{\boldsymbol{u}}^{n}\|_{\operatorname{L}_{2}{({% \hat{\Omega}})^{m}}}^{2}\Big{)}\\ \displaystyle+\frac{1}{2}&\displaystyle\|\frac{1}{J^{n}}\|_{\operatorname{L}_{% \infty}{({\hat{\Omega}})}}\Big{(}\|\bar{\partial}[J\hat{\boldsymbol{% \varepsilon}}]^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\|\left(% \bar{\partial}-\partial_{t}\right)[J\hat{\boldsymbol{u}}]^{n}\|_{\operatorname% {L}_{2}{({\hat{\Omega}})^{m}}}^{2}\Big{)}.\end{split}$$ Therefore, for $n=1,\dotsc,N$, (5.15) $$\begin{split}\displaystyle\|\hat{\boldsymbol{\rho}}^{n}\|_{J^{m}}^{2}+\sum_{i=% 1}^{m}C\tau{D}_{i}|\nabla\hat{\rho}^{n}_{i}|_{B}^{2}\leq{C}\Biggr{(}\prod_{k=1% }^{n}\bar{C}^{k}\|\hat{\boldsymbol{{\rho}}}^{0}\|_{J^{m}}^{2}+\tau\sum_{j=1}^{% n}\prod_{i=j}^{n}\bar{C}^{i}\mathcal{R}^{j}\Biggr{)}.\end{split}$$ For $n=1,\dotsc,N$, we have (5.16) $$\overline{C}^{n}=1+\tau\widetilde{C}\left\|\frac{J^{n}}{J^{n-1}}\right\|_{L^{% \infty}(\hat{\Omega})}\leq 1+\tau\widetilde{C}\left\|{J^{n}}\right\|_{L^{% \infty}(\hat{\Omega})}\left\|\frac{1}{J^{n-1}}\right\|_{L^{\infty}(\hat{\Omega% })}\leq 1+\tau\widetilde{C}C,$$ where the last line follows by Assumption 2.3. Thus $0<\Pi_{i=j}^{n}\overline{C}^{i}\leq\Pi_{k=1}^{n}\overline{C}^{k}\leq\left(1+% \tau\widetilde{C}C\right)^{n}$. 
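Since $1+x\leq e^{x}$, the product of amplification factors satisfies $\prod_{k=1}^{n}\overline{C}^{k}\leq(1+\tau\widetilde{C}C)^{n}\leq e^{\widetilde{C}CT}$ whenever $n\tau\leq T$, uniformly in $\tau$. A small numerical illustration of this uniform bound (the constants are arbitrary choices of ours):

```python
import math

def amplification_product(c: float, T: float, tau: float) -> float:
    """(1 + tau*c)**n with n = T/tau steps: the bound on the product of C-bar^k."""
    n = round(T / tau)
    return (1.0 + tau * c) ** n

c, T = 2.0, 1.0
for tau in (1e-1, 1e-2, 1e-3):
    # bounded by exp(c*T) uniformly as the timestep is refined
    assert amplification_product(c, T, tau) <= math.exp(c * T)
```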
Considering the first two terms on the right of (5.14), we have for $n=1,\dotsc,N$ (5.17) $$\begin{split}\displaystyle\widetilde{C}\|J^{n}\|_{\operatorname{L}_{\infty}{({% \hat{\Omega}})}}\Big{(}\|\hat{\boldsymbol{\varepsilon}}^{n}\|_{\operatorname{L% }_{2}{({\hat{\Omega}})^{m}}}^{2}+\|\hat{\boldsymbol{\varepsilon}}^{n-1}\|_{% \operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\Big{)}\leq 2\widetilde{C}\sup_% {s\in[0,\dotsc,N]}\|J^{s}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\|% \hat{\boldsymbol{\varepsilon}}^{s}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m% }}}^{2}\leq\widetilde{C}{C}\hat{h}^{2(\ell+1)},\end{split}$$ where we have used Assumption 2.3 and Lemma 4.2. Dealing with the third term on the right of (5.14), we have (5.18) $$\begin{split}\displaystyle\widetilde{C}\|J^{n}\|_{\operatorname{L}_{\infty}{({% \hat{\Omega}})}}\|\tau\bar{\partial}\hat{\boldsymbol{u}}^{n}\|_{\operatorname{% L}_{2}{({\hat{\Omega}})^{m}}}^{2}=\widetilde{C}\|J^{n}\|_{\operatorname{L}_{% \infty}{({\hat{\Omega}})}}\|\int_{t^{n-1}}^{t^{n}}\partial_{t}\hat{\boldsymbol% {u}}^{s}\dif s\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\leq% \widetilde{C}{C}\tau^{2},\end{split}$$ where we have used Assumptions 2.6 and 2.3. 
For the fourth term on the right of (5.14) we have (5.19) $$\begin{split}\displaystyle\frac{1}{2}\|\frac{1}{J^{n}}\|_{\operatorname{L}_{% \infty}{({\hat{\Omega}})}}\|\bar{\partial}[J\hat{\boldsymbol{\varepsilon}}]^{n% }\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}&\displaystyle\leq\frac{1}% {2}\|\frac{1}{J^{n}}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\|\frac{1}% {\tau_{n}}{\int_{t^{n-1}}^{t^{n}}}\partial_{t}[J\hat{\boldsymbol{\varepsilon}}% ]^{s}\dif s\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\\ &\displaystyle\leq{C}\sup_{s\in[t^{n-1},t^{n}]}\|\hat{\boldsymbol{\varepsilon}% }^{s}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}\leq{C}\hat{h}^{2(\ell% +1)},\end{split}$$ where we have used Assumption 2.3 for the second step and Lemma 4.2 for the final step. Finally, for the fifth term on the right of (5.14) we have (5.20) $$\begin{split}\displaystyle\|\frac{1}{J^{n}}\|_{\operatorname{L}_{\infty}{({% \hat{\Omega}})}}&\displaystyle\|\left(\bar{\partial}-\partial_{t}\right)[J\hat% {\boldsymbol{u}}]^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}=\|% \frac{1}{J^{n}}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}\|\frac{1}{\tau% _{n}}\int^{t^{n}}_{t^{n-1}}\left(s-t^{n-1}\right)\partial_{tt}[J\hat{% \boldsymbol{u}}]^{s}\dif s\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}% \\ &\displaystyle\leq{C}\tau^{2}\sup_{s\in[t^{n-1},t^{n}]}\left(\|\partial_{t}% \hat{\boldsymbol{u}}^{s}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+\|% \hat{\boldsymbol{u}}^{s}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}% \right),\end{split}$$ where we have used Assumption 2.3 for the second step and Assumption 2.6 for the final step. 
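The integral identity in (5.20) is the standard Taylor-remainder form of the backward difference quotient, giving first-order consistency $|\bar{\partial}f^{n}-\partial_{t}f^{n}|\leq\tfrac{\tau}{2}\sup|\partial_{tt}f|$. A quick check (our own illustration, with $f=\sin$ so that $\sup|f''|=1$):

```python
import math

f, df, sup_d2f = math.sin, math.cos, 1.0
t = 1.3
for tau in (1e-1, 1e-2, 1e-3):
    backward_diff = (f(t) - f(t - tau)) / tau
    # first-order consistency: error bounded by (tau/2)*sup|f''|
    assert abs(backward_diff - df(t)) <= 0.5 * tau * sup_d2f
```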
Combining (5.17), (5.18), (5.19) and (5.20) we have (5.21) $$\mathcal{R}^{n}\leq{C}\left(\hat{h}^{2(\ell+1)}+\tau^{2}\right)\text{ for }n=1,\dotsc,N.$$ Using (4.28) we have (5.22) $$\|\hat{\boldsymbol{\rho}}^{0}\|_{J^{m}}^{2}=\|\hat{\boldsymbol{\rho}}^{h}(0)\|_{J^{m}}^{2}\leq{C}\hat{h}^{2(\ell+1)}.$$ Applying estimates (5.21) and (5.22) in (5.15), together with the bound on $\bar{C}^{n}$ in (5.16), completes the proof of Theorem 5.1. ∎ 5.3. Remark (Stability of the fully discrete scheme) The timestep restriction (5.11) is composed of a term arising from domain growth (the term involving the determinant $J$ of the diffeomorphism $\boldsymbol{\mathcal{A}}$) and a term arising from the nonlinear reaction kinetics (the term containing $\widetilde{C}$). It is worth noting that for a given set of reaction kinetics, i.e., a given $\widetilde{C}$, larger timesteps are admissible on growing domains (as we have $\|J^{n-1}/J^{n}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})}}<1$ for all $n=1,\dotsc,N$). If we consider for illustrative purposes the heat equation, i.e., the case $\widetilde{C}=0$, we recover unconditional stability on growing domains, whereas for contracting domains (5.11) implies a stability condition on the timestep dependent on the growth rate. In practice only qualitative a priori estimates are generally available for the exact solution and the region ${I}$ defined in Assumption 2.5 is not explicitly known. To this end, we show a maximum-norm bound on the discrete solution to circumvent the construction of $\widetilde{\boldsymbol{f}}$. We wish to invoke estimate (3.3) with a positive power of $\hat{h}$ and thus we require the degree of the finite element space to satisfy $\ell>\frac{d}{2}-1,$ where $d$ is the spatial dimension. For any physically relevant domain $(d<4)$, piecewise linear or higher basis functions suffice. 5.4.
Remark (Maximum-norm bound of the discrete solution) Let the assumptions in Theorem 5.1 be valid and let the degree of the finite element space satisfy $\ell>\frac{d}{2}-1,$ where $d$ is the spatial dimension. Then (5.23) $$\|\hat{\boldsymbol{u}}^{n}-\hat{\boldsymbol{U}}^{n}\|_{\operatorname{L}_{% \infty}{({\hat{\Omega}})^{m}}}\leq{C}\hat{h}^{\ell+1-\frac{d}{2}},$$ and for sufficiently small mesh-size $\hat{h}$ the discrete solution $\hat{\boldsymbol{U}}^{n}$ to Problem (3.15) is in the region ${I}$, defined in Assumption 2.5, for all $n\in[0,\dotsc,N]$. Thus, we may replace $\widetilde{\boldsymbol{F}}$ in (3.15) by $\boldsymbol{F}$. Indeed, for $n\in[0,\dotsc,N]$ we have for ${\mathcal{I}^{h}}$ the Clément interpolant (5.24) $$\displaystyle\|\hat{\boldsymbol{u}}^{n}-\hat{\boldsymbol{U}}^{n}\|_{% \operatorname{L}_{\infty}{({\hat{\Omega}})^{m}}}$$ $$\displaystyle\leq\|{\mathcal{I}^{h}}\hat{\boldsymbol{u}}^{n}-\hat{\boldsymbol{% U}}^{n}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})^{m}}}+\|\hat{\boldsymbol% {u}}^{n}-{\mathcal{I}^{h}}\hat{\boldsymbol{u}}^{n}\|_{\operatorname{L}_{\infty% }{({\hat{\Omega}})^{m}}}.$$ Using (3.3) and (3.4) gives (5.25) $$\begin{split}\displaystyle\|\hat{\boldsymbol{u}}^{n}-\hat{\boldsymbol{U}}^{n}% \|_{\operatorname{L}_{\infty}{({\hat{\Omega}})^{m}}}&\displaystyle\leq{C}\Bigg% {(}\hat{h}^{-d/2}\bigg{(}\|{\mathcal{I}^{h}}\hat{\boldsymbol{u}}^{n}-\hat{% \boldsymbol{u}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}+\|\hat{% \boldsymbol{u}}^{n}-\hat{\boldsymbol{U}}^{n}\|_{\operatorname{L}_{2}{({\hat{% \Omega}})^{m}}}\bigg{)}\\ &\displaystyle\phantom{\leq C\Bigg{(}}+\hat{h}^{\ell+1-d/2}|\hat{\boldsymbol{u% }}^{n}|_{\operatorname{H}^{\ell+1}{({\hat{\Omega}})^{m}}}\Bigg{)}.\end{split}$$ Error bound (5.23) now follows from (3.2) and Theorem 5.1. 
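The condition $\ell>\frac{d}{2}-1$ is exactly what makes the exponent $\ell+1-\frac{d}{2}$ in (5.23) positive, so that the maximum-norm error actually decays as the mesh is refined. A trivial check (the helper name is our own):

```python
def linf_rate(ell: int, d: int) -> float:
    """Exponent of h-hat in the maximum-norm bound (5.23)."""
    return ell + 1 - d / 2

# piecewise linears (ell = 1) give a positive rate for every dimension d < 4,
# while ell = 0 already yields a vanishing rate in 2D
for d in (1, 2, 3):
    assert linf_rate(1, d) > 0
assert linf_rate(0, 2) == 0
```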
Thus, for any $\delta\in\mathbb{R}^{+}$, if $\hat{h}$ is taken sufficiently small we have (5.26) $$\sup_{n={0},\dotsc,{N}}\|\hat{\boldsymbol{u}}^{n}-\hat{\boldsymbol{U}}^{n}\|_{\operatorname{L}_{\infty}{({\hat{\Omega}})^{m}}}\leq\delta.$$ Therefore, $\hat{\boldsymbol{U}}^{n}\in{I}$ for all $n\in[0,\dotsc,N]$ and thus $\widetilde{\boldsymbol{f}}(\hat{\boldsymbol{U}})=\boldsymbol{f}(\hat{\boldsymbol{U}})$. The following corollary follows immediately. 5.5. Corollary (Convergence of a practical finite element method) Let the assumptions in Theorem 5.1 be valid and let the degree of the finite element space satisfy $\ell>\frac{d}{2}-1$, where $d$ is the spatial dimension. Then, for a sufficiently small mesh-size $\hat{h}$ the scheme (3.15), with $\tilde{\boldsymbol{F}}$ replaced by $\boldsymbol{F}$, possesses a unique solution $\left({\smash{\hat{\boldsymbol{U}}}^{n}}\right)_{{n={0},\dotsc,{N}}}$. It satisfies the following optimal-rate a priori error estimate: (5.27) $$\begin{split}\displaystyle\|\hat{\boldsymbol{U}}^{n}-\hat{\boldsymbol{u}}^{n}\|_{\operatorname{L}_{2}{({\hat{\Omega}})^{m}}}^{2}+&\displaystyle\tau\hat{h}^{2}\sum_{i=1}^{m}{D}_{i}\|\nabla\left(\hat{{U}}_{i}^{n}-\hat{{u}}_{i}^{n}\right)\|_{\operatorname{L}_{2}{({\hat{\Omega}})}}^{2}\\ &\displaystyle\leq{C}\left(\boldsymbol{\mathcal{A}},\hat{u},\widetilde{C}\right)\left(\hat{h}^{2(\ell+1)}+\tau^{2}\right),\quad\mbox{for }n\in[0,\dotsc,N],\end{split}$$ with $\widetilde{C}$ as defined in (2.21). 5.6. Remark (How small must the mesh-size be?) One can verify that the mesh-size is “sufficiently small” in the sense of Corollary 5.5 by checking a posteriori that the computed solution remains in the region $I$ defined in Assumption 2.5. 6. Implementation In this section we illustrate the implementation of the finite element scheme with explicit nonlinear reaction functions. We consider the following widely studied set of reaction kinetics. 6.1.
Definition (Schnakenberg’s “activator-depleted substrate” model [38, 19, 25]) We consider the following activator-depleted substrate model, also known as the Brusselator model in nondimensional form: (6.1) $$\begin{split}\displaystyle f_{1}\left(u_{1},u_{2}\right)=\gamma\left(a-u_{1}+u_{1}^{2}u_{2}\right)\text{ and }f_{2}\left(u_{1},u_{2}\right)=\gamma\left(b-u_{1}^{2}u_{2}\right),\end{split}$$ where $0<a,b,\gamma<\infty$. 6.2. Remark (Applicability of Assumption 2.5) The Schnakenberg reaction kinetics satisfy the structural assumptions on the nonlinear reaction vector field, as (6.2) $$\begin{split}\displaystyle f_{1}\left(u_{1},u_{2}\right)=\gamma\left(a+u_{1}F_{1}\left(u_{1},u_{2}\right)\right)\text{ and }f_{2}\left(u_{1},u_{2}\right)=\gamma\left(b+u_{2}F_{2}\left(u_{1},u_{2}\right)\right),\end{split}$$ where (6.3) $$\begin{split}\displaystyle F_{1}\left(u_{1},u_{2}\right)=u_{1}u_{2}-1\text{ and }F_{2}\left(u_{1},u_{2}\right)=-u_{1}^{2}.\end{split}$$ Clearly $\boldsymbol{f},\boldsymbol{F}\in C^{1}(\mathbb{R}^{2})$, and thus Assumption 2.5 holds for the Schnakenberg kinetics.
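The decomposition in (6.2)–(6.3) can be checked mechanically; the following snippet (our own, with arbitrary illustrative parameter values) also verifies that the kinetics vanish at the well-known spatially homogeneous steady state $(u_{1}^{*},u_{2}^{*})=(a+b,\,b/(a+b)^{2})$:

```python
GAMMA, A, B = 1.0, 0.1, 0.9  # arbitrary illustrative values with 0 < a, b, gamma

def f1(u1, u2): return GAMMA * (A - u1 + u1**2 * u2)
def f2(u1, u2): return GAMMA * (B - u1**2 * u2)

def F1(u1, u2): return u1 * u2 - 1.0   # so f1 = gamma*(a + u1*F1)
def F2(u1, u2): return -u1**2          # so f2 = gamma*(b + u2*F2)

# the decomposition (6.2)-(6.3) holds identically
u1, u2 = 0.7, 1.3
assert abs(f1(u1, u2) - GAMMA * (A + u1 * F1(u1, u2))) < 1e-12
assert abs(f2(u1, u2) - GAMMA * (B + u2 * F2(u1, u2))) < 1e-12

# homogeneous steady state of the Schnakenberg kinetics
u1s, u2s = A + B, B / (A + B) ** 2
assert abs(f1(u1s, u2s)) < 1e-12 and abs(f2(u1s, u2s)) < 1e-12
```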
In matrix–vector form, scheme (3.15) equipped with kinetics (6.1) and appropriate initial approximations ${\boldsymbol{W}}^{0}_{1},{\boldsymbol{W}}^{0}_{2}$ reads: for $n=1,\dotsc,N$, solve for ${\boldsymbol{W}}^{n}_{1},{\boldsymbol{W}}^{n}_{2}$ the linear systems (6.4) $$\begin{cases}\left(\frac{1}{\tau_{n}}\hat{\boldsymbol{M}}^{n}+D_{1}\hat{\boldsymbol{S}}^{n}+\gamma\hat{\boldsymbol{N}}_{1}^{n}\right)\hat{\boldsymbol{W}}^{n}_{1}&=\frac{1}{\tau_{n}}\hat{\boldsymbol{M}}^{n-1}\hat{\boldsymbol{W}}^{n-1}_{1}+\gamma{a}\hat{\boldsymbol{F}}^{n}\\ \left(\frac{1}{\tau_{n}}\hat{\boldsymbol{M}}^{n}+D_{2}\hat{\boldsymbol{S}}^{n}+\gamma\hat{\boldsymbol{N}}_{2}^{n}\right)\hat{\boldsymbol{W}}^{n}_{2}&=\frac{1}{\tau_{n}}\hat{\boldsymbol{M}}^{n-1}\hat{\boldsymbol{W}}^{n-1}_{2}+\gamma{b}\hat{\boldsymbol{F}}^{n},\end{cases}$$ where $\boldsymbol{W}_{1}$ and $\boldsymbol{W}_{2}$ represent the nodal values of the discrete solutions corresponding to $\hat{u}_{1}$ and $\hat{u}_{2}$, respectively, and the equations are nondimensionalised such that either $D_{1}$ or $D_{2}$ is equal to 1.
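To make the structure of (6.4) concrete, here is a deliberately simplified one-step sketch in 1D on a fixed domain (so $J\equiv 1$ and $\boldsymbol{K}\equiv I$), with lumped mass and diagonal Picard matrices $\hat{\boldsymbol{N}}_{i}$ built from the lagged values $-F_{i}(\boldsymbol{W}^{n-1})$. The uniform grid, the lumping, and all names are our simplifications for illustration, not the paper's ALBERTA implementation:

```python
import numpy as np

def imex_step(W1, W2, h, tau, D1, D2, gamma, a, b):
    """One step of the IMEX scheme (6.4), 1D sketch: fixed uniform grid (J = 1),
    lumped mass matrix, Picard matrices N_i = diag(-F_i(W^{n-1}))."""
    n = len(W1)
    M = h * np.eye(n)                        # lumped P1 mass matrix (boundary weights simplified)
    main = 2.0 * np.ones(n); main[0] = main[-1] = 1.0
    S = (np.diag(main) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h  # P1 stiffness, homogeneous Neumann BCs
    F = h * np.ones(n)                       # load vector
    N1 = h * np.diag(1.0 - W1 * W2)          # -F1(W^{n-1}) = 1 - u1*u2, lagged
    N2 = h * np.diag(W1 ** 2)                # -F2(W^{n-1}) = u1**2, lagged
    A1 = M / tau + D1 * S + gamma * N1
    A2 = M / tau + D2 * S + gamma * N2
    W1_new = np.linalg.solve(A1, M @ W1 / tau + gamma * a * F)
    W2_new = np.linalg.solve(A2, M @ W2 / tau + gamma * b * F)
    return W1_new, W2_new
```

A pleasant consequence of treating $u_{i}$ implicitly with lagged $F_{i}$ is that, in this sketch, the constant steady state $(a+b,\,b/(a+b)^{2})$ is reproduced exactly by each step, which makes a convenient smoke test.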
The components of the weighted mass matrix $\hat{\boldsymbol{M}}$, the weighted stiffness matrix $\hat{\boldsymbol{S}}$ and the load vector $\hat{\boldsymbol{F}}$ on the reference frame are given by (6.5) $$\hat{M}^{n}_{\alpha\beta}:=\int_{{\hat{\Omega}}}J^{n}\hat{\Phi}_{\alpha}\hat{\Phi}_{\beta},\quad\hat{S}^{n}_{\alpha\beta}:=\int_{{\hat{\Omega}}}[J\boldsymbol{K}]^{n}\nabla\hat{\Phi}_{\alpha}\cdot\boldsymbol{K}^{n}\nabla\hat{\Phi}_{\beta}\quad\text{and}\quad\hat{F}^{n}_{\alpha}:=\int_{{\hat{\Omega}}}J^{n}\hat{\Phi}_{\alpha}.$$ For reaction kinetics (6.1) the components of the matrices arising from the Picard linearisation $\hat{\boldsymbol{N}}_{1}$ are given by (6.6) $$\left(\hat{N}_{1}\right)_{\alpha\beta}:=\sum_{\eta=1}^{\dim(\hat{\mathbb{V}})}\sum_{\vartheta=1}^{\dim(\hat{\mathbb{V}})}\left[(W_{2})_{\eta}(W_{2})_{\vartheta}\right]^{n-1}\int_{{\hat{\Omega}}}J^{n}\hat{\Phi}_{\alpha}\hat{\Phi}_{\beta}\hat{\Phi}_{\eta}\hat{\Phi}_{\vartheta},$$ with $\hat{\boldsymbol{N}}_{2}$ treated similarly. Formulation (6.4) gives rise to the following linear algebra problem: Solve for vectors $\boldsymbol{b}_{i}^{n},i=1,\dotsc,m,$ such that (6.7) $$\boldsymbol{A}^{n}\boldsymbol{b}_{i}^{n}=\boldsymbol{c}^{n-1}_{i},\text{ for }n=1,\dotsc,N.$$ The matrix $\boldsymbol{A}^{n}$ is symmetric, sparse, and positive definite. We therefore use the conjugate gradient (CG) algorithm [21] to compute the solution to the linear systems. 7. Numerical experiments We now provide numerical evidence to back up the estimate of Theorem 5.1. We use as a test problem the Schnakenberg kinetics, although any other reaction kinetics that fulfil our assumptions could have been used. For the implementation we make use of the toolbox ALBERTA [36]. The graphics were generated with ParaView [20]. 7.1. Numerical verification of the a priori convergence rate We examine the experimental order of convergence (EOC) of scheme (3.15).
The EOC is a numerical measure of the rate of convergence of the scheme as $\hat{h}_{i}\rightarrow 0$. For a series of uniform refinements of a triangulation $\big{\{}\hat{{\mathscr{T}}}_{i}\big{\}}_{i=0,\dotsc,N}$ we denote by $\{\boldsymbol{e}_{i}\}_{i=0,\dotsc,N}$ the errors and by $\hat{h}_{i}$ the maximum mesh-size of $\hat{{\mathscr{T}}}_{i}$. The EOC is given by (7.1) $$\operatorname{EOC}_{i}(\boldsymbol{e}_{i,i+1},\hat{h}_{i,i+1})=\ln({\boldsymbol{e}_{i+1}}/{\boldsymbol{e}_{i}})/\ln({\hat{h}_{i+1}}/{\hat{h}_{i}}).$$ We consider the EOC in approximating the solution to (1.1), with $\mathbb{P}^{1}$, $\mathbb{P}^{2}$ and $\mathbb{P}^{3}$ basis functions and uniform timestep $\tau\approx\hat{h}^{2}$, $\tau\approx\hat{h}^{3}$ and $\tau\approx\hat{h}^{4}$ respectively (since the scheme is first order in time). We also consider two different forms of domain evolution. • Spatially linear periodic evolution: (7.2) $$\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi})=\boldsymbol{\xi}\left(1+\kappa\sin\left({\pi{t}}/{T}\right)\right).$$ • Spatially nonlinear periodic evolution: (7.3) $$(\boldsymbol{\mathcal{A}}_{t}(\boldsymbol{\xi}))_{i}={\xi}_{i}\left(1+\kappa\sin\left({\pi{t}}/{T}\right){\xi}_{i}\right)\text{ for }i=1,\dotsc,d.$$ In both cases we take a time interval of $[0,1]$, the initial domain as the unit square and the parameter $\kappa=1$. We take the diffusion coefficients $\boldsymbol{D}=(0.01,1)^{\mathsf{T}}$ and the parameter $\gamma=1$. Problem 1.1 equipped with nonlinear reaction kinetics does not admit any closed-form solutions. In order to provide numerical verification of the convergence rate, we insert a source term such that the exact solution is (7.4) $$\begin{split}\displaystyle\hat{u}_{1}\left(\boldsymbol{\xi},t\right)=\sin(\pi{t})\cos(\pi x_{1})\cos(\pi x_{2}),\quad\hat{u}_{2}\left(\boldsymbol{\xi},t\right)=-\sin(\pi{t})\cos(\pi x_{1})\cos(\pi x_{2}).\end{split}$$ Tables 1 and 2 show the EOCs for the two benchmark examples.
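Formula (7.1) is straightforward to implement; a minimal helper (our own) together with a self-check that errors behaving like $C\hat{h}^{2}$ report an EOC of 2 under uniform refinement:

```python
import math

def eoc(errors, mesh_sizes):
    """Experimental orders of convergence per (7.1):
    EOC_i = ln(e_{i+1}/e_i) / ln(h_{i+1}/h_i)."""
    return [math.log(errors[i + 1] / errors[i]) / math.log(mesh_sizes[i + 1] / mesh_sizes[i])
            for i in range(len(errors) - 1)]

h = [0.1, 0.05, 0.025]
e = [3.0 * hi ** 2 for hi in h]  # errors decaying like C*h^2
assert all(abs(rate - 2.0) < 1e-12 for rate in eoc(e, h))
```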
In both examples we observe that the error converges at the expected rate, providing numerical evidence for the estimate of Theorem 5.1. 7.2. Remark (Existence of solutions to Problem 1.1 with spatially linear isotropic evolution) In [45], we showed that Problem 1.1, equipped with the Schnakenberg reaction kinetics and posed on a $C^{2}$ domain ${\Omega_{t}}$, is well posed under any bounded spatially linear isotropic evolution of the domain. If we assume this result holds on polygonal domains, we have sufficient regularity on the continuous problem to apply Theorem 5.1 and thus conclude that scheme (3.15) with $\mathbb{P}^{1}$ finite elements converges with optimal order. Thus, to illustrate a concrete application for which our theory holds, we present results for the Schnakenberg kinetics with a domain growth function of the form (7.2); initial conditions are taken as small perturbations around the spatially homogeneous steady state, and the numerical and reaction-kinetic parameter values are as given in Table 3. We take the unit square as the initial domain, with the domain growing from a square of side length 1 to a square of side length 5 at $t=1000$ before contracting back to a square of side length 1 at the final time. Figure 1 shows snapshots of the discrete activator ($W_{1}$) profiles. The substrate profiles ($W_{2}$) have been omitted as they are $180^{\circ}$ out of phase with those of the activator. An initial half-spot pattern forms, which reorients as the domain grows into a single spot positioned in the centre of the domain. As the domain contracts this single spot disappears (via spot annihilation), with the final domain exhibiting no spatial patterning. References Acheson [1990] D. Acheson. Elementary fluid dynamics. Oxford University Press, USA, 1990. Baines [1994] M. Baines. Moving finite elements. Oxford University Press, 1994. Baker and Maini [2007] R. Baker and P. Maini. A mechanism for morphogen-controlled domain growth. Journal of Mathematical Biology, 54(5):597–622, 2007.
Barreira et al. [2011] R. Barreira, C. Elliott, and A. Madzvamuse. The surface finite element method for pattern formation on evolving biological surfaces. Journal of Mathematical Biology, pages 1–25, 2011. ISSN 0303-6812. Barrio et al. [2009] R. Barrio, R. Baker, B. Vaughan Jr, K. Tribuzy, M. de Carvalho, R. Bassanezi, and P. Maini. Modeling the skin pattern of fishes. Physical Review E, 79(3):31908, 2009. Brenner and Scott [2002] S. Brenner and L. Scott. The mathematical theory of finite element methods. Texts in Applied Mathematics, vol. 15, 2002. Chaplain et al. [2001] M. Chaplain, M. Ganesh, and I. Graham. Spatio-temporal pattern formation on spherical surfaces: numerical simulation and application to solid tumour growth. Journal of Mathematical Biology, 42(5):387–423, 2001. ISSN 0303-6812. Chueh et al. [1977] K. Chueh, C. Conley, and J. Smoller. Positively invariant regions for systems of nonlinear diffusion equations. Indiana Univ. Math. J, 26(2):373–392, 1977. Clément [1975] P. Clément. Approximation by finite element functions using local regularization. RAIRO, Rouge, Anal. Numér., 9(R-2):77–84, 1975. Crampin et al. [1999] E. Crampin, E. Gaffney, and P. Maini. Reaction and diffusion on growing domains: Scenarios for robust pattern formation. Bulletin of Mathematical Biology, 61(6):1093–1120, 1999. Crouzeix and Thomée [1987] M. Crouzeix and V. Thomée. The stability in $l_{p}$ and $w_{p}^{1}$ of the $l_{2}$-projection onto finite element function spaces. Mathematics of Computation, 48(178):pp. 521–532, 1987. ISSN 00255718. URL http://www.jstor.org/stable/2007825. Dupont [1982] T. Dupont. Mesh modification of evolution equations. Math. Comput., 39(159):85–107, 1982. Elliott and Stuart [1993] C. Elliott and A. Stuart. The global dynamics of discrete semilinear parabolic equations. SIAM Journal on Numerical Analysis, 30(6):1622–1663, 1993. doi: 10.1137/0730084. URL http://epubs.siam.org/doi/abs/10.1137/0730084. Elliott et al. [2012] C. M. Elliott, B. 
Stinner, and C. Venkataraman. Modelling cell motility and chemotaxis with evolving surface finite elements. Journal of The Royal Society Interface, 2012. doi: 10.1098/rsif.2012.0276. URL http://rsif.royalsocietypublishing.org/content/early/2012/05/29/rsif.2012.0276.abstract. Estep et al. [2000] D. Estep, M. Larson, and R. Williams. Estimating the error of numerical solutions of systems of reaction-diffusion equations. Amer Mathematical Society, 2000. ISBN 0821820729. Evans [2009] L. Evans. Partial Differential Equations (Graduate Studies in Mathematics, Vol. 19). Dover, 2009. Evans and Gariepy [1992] L. C. Evans and R. F. Gariepy. Measure Theory and Fine Properties of Functions. CRC Press, Boca Raton, FL, 1992. ISBN 0-8493-7157-0. Garvie and Trenchea [2007] M. Garvie and C. Trenchea. Finite element approximation of spatially extended predator–prey interactions with the Holling type II functional response. Numerische Mathematik, 107(4):641–667, 2007. Gierer and Meinhardt [1972] A. Gierer and H. Meinhardt. A theory of biological pattern formation. Biological Cybernetics, 12(1):30–39, 1972. Henderson et al. [2004] A. Henderson, J. Ahrens, and C. Law. The ParaView Guide. Kitware Clifton Park, NY, 2004. Hestenes and Stiefel [1952] M. Hestenes and E. Stiefel. Methods of Conjugate Gradients for Solving Linear Systems. Journal of Research of the National Bureau of Standards, 49(6), 1952. Hoff [1978] D. Hoff. Stability and convergence of finite difference methods for systems of nonlinear reaction-diffusion equations. SIAM Journal on Numerical Analysis, 15(6):pp. 1161–1177, 1978. ISSN 00361429. URL http://www.jstor.org/stable/2156733. Kondo and Asai [1995] S. Kondo and R. Asai. A reaction–diffusion wave on the skin of the marine angelfish Pomacanthus. Nature, 376(6543):765–768, 1995. Labadie [2008] M. Labadie. The stabilizing effect of growth on pattern formation. Preprint, 2008. Lefever and Prigogine [1968] R. Lefever and I. Prigogine.
Symmetry-breaking instabilities in dissipative systems II. J. chem. Phys, 48:1695–1700, 1968. Mackenzie and Madzvamuse [2011] J. Mackenzie and A. Madzvamuse. Analysis of stability and convergence of finite-difference methods for a reaction–diffusion problem on a one-dimensional growing domain. IMA Journal of Numerical Analysis, 31(1):212, 2011. ISSN 0272-4979. Madzvamuse [2000] A. Madzvamuse. A Numerical Approach to the Study of Spatial Pattern Formation. PhD thesis, University of Oxford, 2000. Madzvamuse [2006] A. Madzvamuse. Time-stepping schemes for moving grid finite elements applied to reaction-diffusion systems on fixed and growing domains. J. Comput. Phys., 214(1):239–263, 2006. ISSN 0021-9991. McKenna and Reichel [2007] P. McKenna and W. Reichel. Gidas–Ni–Nirenberg results for finite difference equations: Estimates of approximate symmetry. Journal of Mathematical Analysis and Applications, 334(1):206–222, 2007. Miller [1981] K. Miller. Moving finite elements. ii. SIAM Journal on Numerical Analysis, 18(6):1033–1057, 1981. Miller and Miller [1981] K. Miller and R. N. Miller. Moving finite elements. i. SIAM Journal on Numerical Analysis, 18(6):1019–1032, 1981. Miura et al. [2006] T. Miura, K. Shiota, G. Morriss-Kay, and P. Maini. Mixed-mode pattern in Doublefoot mutant mouse limb–Turing reaction-diffusion model on a growing domain during limb development. Journal of theoretical biology, 240(4):562–573, 2006. Moore [1994] P. Moore. A posteriori error estimation with finite element semi-and fully discrete methods for nonlinear parabolic equations in one space dimension. SIAM journal on numerical analysis, 31(1):149–169, 1994. Murray [2003] J. Murray. Mathematical biology. Springer Verlag, 2003. Schatz et al. [1980] A. Schatz, V. Thomée, and L. Wahlbin. Maximum norm stability and error estimates in parabolic finite element equations. Communications on Pure and Applied Mathematics, 33(3):265–304, 1980. Schmidt and Siebert [2005] A. Schmidt and K. Siebert. 
Design of adaptive finite element software: The finite element toolbox ALBERTA. Springer Verlag, 2005. Schmitt and Thompson [1998] K. Schmitt and R. Thompson. Nonlinear analysis and differential equations: An introduction. Lecture Notes, University of Utah, Department of Mathematics, 1998. Schnakenberg [1979] J. Schnakenberg. Simple chemical reaction systems with limit cycle behaviour. Journal of theoretical biology, 81(3):389, 1979. Schwab [1998] C. Schwab. p-and hp-finite element methods: Theory and applications in solid and fluid mechanics. Oxford University Press, USA, 1998. ISBN 0198503903. Smoller [1994] J. Smoller. Shock waves and reaction-diffusion equations. Springer, 1994. Thomée [2006] V. Thomée. Galerkin finite element methods for parabolic problems, volume 25 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, second edition, 2006. ISBN 978-3-540-33121-6; 3-540-33121-2. Thomée and Wahlbin [1975] V. Thomée and L. Wahlbin. On galerkin methods in semilinear parabolic problems. SIAM Journal on Numerical Analysis, 12(3):378–389, 1975. Turing [1952] A. Turing. The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 237(641):37–72, 1952. Venkataraman et al. [2011] C. Venkataraman, T. Sekimura, E. Gaffney, P. Maini, and A. Madzvamuse. Modeling parr-mark pattern formation during the early development of amago trout. Phys. Rev. E, 84:041923, Oct 2011. doi: 10.1103/PhysRevE.84.041923. URL http://link.aps.org/doi/10.1103/PhysRevE.84.041923. Venkataraman et al. [2012] C. Venkataraman, O. Lakkis, and A. Madzvamuse. Global existence for semilinear reaction–diffusion systems on evolving domains. Journal of Mathematical Biology, 64:41–67, 2012. ISSN 0303-6812. URL http://dx.doi.org/10.1007/s00285-011-0404-x. 10.1007/s00285-011-0404-x. Venkataraman et al. [2013] C. Venkataraman, O. Lakkis, and A. Madzvamuse. 
Adaptive finite elements for semilinear reaction-diffusion systems on growing domains. Numerical Mathematics and Advanced Applications 2011, page 71, 2013. Wheeler [1973] M. Wheeler. A priori $L^{2}$ error estimates for galerkin approximations to parabolic partial differential equations. SIAM Journal on Numerical Analysis, 10(4):723–759, 1973. ISSN 0036-1429. Zegeling and Kok [2004] P. A. Zegeling and H. P. Kok. Adaptive moving mesh computations for reaction–diffusion systems. J. Comput. Appl. Math., 168(1-2):519–528, 2004. ISSN 0377-0427. doi: 10.1016/j.cam.2003.06.013. Zhang et al. [2008] K. Zhang, J. Wong, and R. Zhang. Second-order implicit-explicit scheme for the Gray-Scott model. Journal of Computational and Applied Mathematics, 213(2):559–581, 2008.
Nature of Lieb’s “hole” excitations and two-phonon states of a Bose gas Maksim Tomchenko Bogolyubov Institute for Theoretical Physics 14b, Metrolohichna Str., Kyiv 03143, Ukraine E-mail: [email protected] It is generally accepted that the “hole” and “particle” excitations are two independent types of excitations of a one-dimensional system of point bosons. We show that Lieb’s “hole” with the momentum $p=j2\pi/L$ is $j$ identical interacting phonons with the momentum $2\pi/L$ (here, $L$ is the size of the system, and $\hbar=1$). We rigorously prove this assertion for $j=1,2$ by comparing solutions for a system of point bosons with solutions for a system of nonpoint bosons (in the limit of point interaction). The Lieb–Liniger equations in Gaudin’s form imply that our conclusion holds also for $j>2$. Thus, the holes are not a physically independent type of quasiparticles. Moreover, we find the solution for two interacting phonons in a Bose system with an interatomic potential of general form at weak coupling and any dimension (1, 2, or 3). It is also shown that the maximum possible number of phonons in a Bose system is equal to the number of atoms $N$. Finally, we discuss the solitonic properties of holes. Keywords: point bosons, interaction of phonons, hole-like excitations. 1 Introduction This work is devoted to two main problems: the determination of the wave function and the energy of two interacting phonons in a Bose gas with a potential of general form, and the study of the nature of Lieb’s “holes”. To our knowledge, the first problem had not been solved previously, and its solution can help one to solve the second problem. The elementary excitations of a one-dimensional (1D) system of point bosons are usually separated into two types: particle-like (“particles”) and hole-like (“holes”) [1, 2, 3, 4, 5, 6, 7].
At weak coupling, the dispersion law of “particles” coincides with the Bogolyubov law [8, 9] and agrees with Feynman’s solutions [10, 11, 12] and later models [13, 14, 15, 16, 17, 18, 19, 20, 21] (other references can be found in reviews [22, 23]). Therefore, it is natural to consider that the particles correspond to Bogolyubov–Feynman quasiparticles. The dispersion law of holes was found only in the approach based on the Bethe ansatz [1]. In this connection, Lieb criticized the Bogolyubov and Feynman approaches and proposed some arguments in favor of the holes being an independent type of elementary excitations [1, 2]. This point of view became traditional. Later on, it was found that the dispersion law of holes is close to that for the soliton solution of the 1D Gross–Pitaevskii equation [24, 25]. This became the main argument in favor of the holes being a particular, independent type of quasiparticles. However, such a point of view does not agree with the models [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. It is important that the models [9, 10, 11, 16, 17, 18, 19, 20, 21] work in 1D, since they do not use a condensate (we note that the Bogolyubov method also works in 1D at small $\gamma$ and $T$, if $N$ is finite [26]). If the holes were a separate type of quasiparticles, this would imply a significant shortcoming in the models [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] and in close ones. In addition, if the holes are an independent type of excitations, then they give a separate contribution to thermodynamic quantities (since holes interact with particles and, therefore, participate in the thermal equilibrium). This analysis indicates that the question about the nature of holes is of fundamental importance. A one-dimensional system differs qualitatively from a three-dimensional (3D) one in that an atom in a 1D system cannot get around another atom. The former can only pass through the latter.
Despite this circumstance, Lieb believed that 1D and 3D systems are qualitatively similar [1]. Therefore, he concluded [1] that holes can exist also in 3D systems, at least at some values of the parameters. In what follows, we will study the structure of the wave functions of “particles” and holes and will rigorously show that the hole is a collection of interacting “particles” (in this case, the hole can be a soliton). It was noted in the literature that the holes are not an independent type of excitations [6, 27, 28]. This conclusion was based on the Lieb–Liniger equations. However, these equations are not enough to clarify the physical nature of holes. Let us consider what the Lieb–Liniger equations can say about the nature of holes. These equations describe a periodic 1D system of point bosons [29]. Gaudin wrote them in the form [4, 30] $$\displaystyle Lk_{i}=2\pi n_{i}+2\sum\limits_{j=1}^{N}\arctan{\frac{c}{k_{i}-k_{j}}}|_{j\neq i},\ i=1,\ldots,N,$$ (1) where $N$ is the number of bosons, $L$ is the size of the system, and $n_{i}=0,\pm 1,\pm 2,\ldots$. In the literature, the point bosons are usually described by the Lieb–Liniger equations in the Yang–Yang form [3]: $$\displaystyle Lk_{i}=2\pi I_{i}-2\sum\limits_{j=1}^{N}\arctan{\frac{k_{i}-k_{j}}{c}},\quad i=1,\ldots,N.$$ (2) Equations (1) and (2) are equivalent [4, 30]: the formula $$\arctan{\alpha}=(\pi/2)\,\mathrm{sgn}(\alpha)-\arctan{(1/\alpha)},\quad\alpha\neq 0,$$ (3) allows one to rewrite Eqs. (2) in the form (1).
In this case, $$I_{i}=n_{i}+i-\frac{N+1}{2}.$$ (4) The ground state of the system corresponds to the quantum numbers $\{I_{i}\}=(1-\frac{N+1}{2},2-\frac{N+1}{2},\ldots,N-\frac{N+1}{2})$, the particle-like excitation with the momentum $p=2\pi j/L$ corresponds to $\{I_{i}\}=(1-\frac{N+1}{2},\ldots,N-1-\frac{N+1}{2},N-\frac{N+1}{2}+j)$, and a hole with the momentum $p=2\pi l/L$ ($l>0$) corresponds to the quantum numbers $I_{i\leq N-l}=i-\frac{N+1}{2}$, $I_{i>N-l}=1+i-\frac{N+1}{2}$ (we set $\hbar=2m=1$ in this section). In the language of Eqs. (1), those states correspond to the following collections of quantum numbers $\{n_{i}\}=(n_{1},\ldots,n_{N})$: $(0,\ldots,0)$, $(0,\ldots,0,j)$, and $(0,\ldots,0,1,\ldots,1)$, where $1$ is repeated $l$ times. In this case, the state $(0,\ldots,0,1)$ is particular: it can be considered both as a particle and as a hole. In the latter case, any state $(n_{1},\ldots,n_{N})$ can be considered as a collection of interacting holes. If the state $(0,\ldots,0,1)$ is a particle, then any state $(n_{1},\ldots,n_{N})$ can be considered as a collection of interacting particles. Therefore, the physical nature of the state $(0,\ldots,0,1)$ is the key point. On physical grounds, we may expect that the state $(0,\ldots,0,1)$ corresponds to a phonon with the wavelength $\lambda=L$ (indeed, if the state $(0,\ldots,0,1)$ corresponded to a hole, then the phonon with $\lambda=L$ would be absent in the system, which is strange). In this case, each state $(n_{1},\ldots,n_{N})$ can be considered as a collection of interacting phonons. In particular, the state $(0,\ldots,0,j)$ should correspond to one phonon with the momentum $p=2\pi j/L$. As for the state with $n_{j\leq N-l}=0,\,n_{j\geq N-l+1}=1$, it should correspond to $l$ interacting phonons, each of which has the wavelength $\lambda=L$ and momentum $2\pi/L$.
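The equivalence of the two forms of the Bethe equations rests on identity (3) and the index shift (4); both are easy to check numerically. Below is a small sketch of ours (the helper name `I_from_n` is our own), verifying (3) on random arguments and reproducing the $\{I_i\}$ assignments for the ground state and a hole:

```python
import numpy as np

rng = np.random.default_rng(0)

# Identity (3): arctan(a) = (pi/2)*sgn(a) - arctan(1/a), valid for a != 0.
a = rng.uniform(-10, 10, size=1000)
a = a[np.abs(a) > 1e-6]                 # exclude the singular point a = 0
lhs = np.arctan(a)
rhs = (np.pi / 2) * np.sign(a) - np.arctan(1.0 / a)
print(np.max(np.abs(lhs - rhs)))        # ~1e-16, i.e. machine precision

# Index shift (4): I_i = n_i + i - (N+1)/2, i = 1, ..., N.
def I_from_n(n):
    n = np.asarray(n, dtype=float)
    N = len(n)
    return n + np.arange(1, N + 1) - (N + 1) / 2

N = 6
ground = I_from_n(np.zeros(N))          # {n_i} = (0,...,0)
hole_l2 = I_from_n([0, 0, 0, 0, 1, 1])  # hole with p = 2*pi*l/L, l = 2
print(ground)   # [-2.5 -1.5 -0.5  0.5  1.5  2.5]
print(hole_l2)  # [-2.5 -1.5 -0.5  0.5  2.5  3.5]
```

The hole is thus visible in the $\{I_i\}$ language as the last $l$ quantum numbers shifted up by one, exactly as described above.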
However, according to Lieb’s classification [1], the state with the quantum numbers $n_{j\leq N-l}=0,\,n_{j\geq N-l+1}=1$ corresponds to a hole with the momentum $p=2\pi l/L$. Therefore, the hole with the momentum $p=2\pi l/L$ ($l>0$) should coincide with $l$ interacting phonons, each of which has the momentum $2\pi/L$. This possibility is also seen from the analysis by Lieb [1]. To ascertain the nature of a hole, it is necessary to study the structure of the $N$-boson wave functions of a hole and a particle. In what follows, we will prove that the state $(0,\ldots,0,1)$ corresponds to a phonon, and that the hole with the momentum $p=4\pi/L$ coincides with two interacting phonons $(0,\ldots,0,1)$ and $(0,\ldots,0,1,0)$. In addition, we will determine the largest number of quasiparticles in a Bose gas and discuss the interconnection between holes and solitons. 2 Phonon with the quantum numbers $\{n_{i}\}=(0,\ldots,0,1)$. One can investigate the structure of the wave functions of a “particle” and a hole in two ways: based on the wave functions of point bosons [4, 29] or on the wave functions of nonpoint bosons (i.e., bosons with a nonzero interaction radius) [9, 10, 11, 17, 21, 31, 32, 33, 34], passing to a point potential in the latter case. Let us consider the second way. The transition from the solutions for nonpoint bosons to the solutions for point ones, based on the Bethe ansatz, has not been studied in the literature in detail. Consider a periodic system of $N$ bosons with an interatomic potential of general form $U(\textbf{r}_{j}-\textbf{r}_{l})$. The dimensionality can be equal to $1$, $2$, or $3$.
The ground state of a gas is described by the wave function [34] $$\displaystyle\Psi_{0}(\textbf{r}_{1},\ldots,\textbf{r}_{N})=Ae^{S(\textbf{r}_{1},\ldots,\textbf{r}_{N})},$$ (5) $$\displaystyle S$$ $$\displaystyle=$$ $$\displaystyle\sum\limits_{j=1}^{N-1}\frac{1}{(j+1)!}\sum\limits_{\textbf{q}_{1}\neq 0}\ldots\sum\limits_{\textbf{q}_{j}\neq 0}^{\textbf{q}_{1}+\ldots+\textbf{q}_{j}\neq 0}\frac{a_{j+1}(\textbf{q}_{1},\ldots,\textbf{q}_{j})}{N^{(j-1)/2}}\rho_{\textbf{q}_{1}}\ldots\rho_{\textbf{q}_{j}}\rho_{-\textbf{q}_{1}-\ldots-\textbf{q}_{j}}=$$ (6) $$\displaystyle=$$ $$\displaystyle\sum\limits_{\textbf{q}_{1}\neq 0}\frac{a_{2}(\textbf{q}_{1})}{2!}\rho_{\textbf{q}_{1}}\rho_{-\textbf{q}_{1}}+\sum\limits_{\textbf{q}_{1},\textbf{q}_{2}\neq 0}^{\textbf{q}_{1}+\textbf{q}_{2}\not=0}\frac{a_{3}(\textbf{q}_{1},\textbf{q}_{2})}{3!N^{1/2}}\rho_{\textbf{q}_{1}}\rho_{\textbf{q}_{2}}\rho_{-\textbf{q}_{1}-\textbf{q}_{2}}+\ldots+$$ $$\displaystyle+$$ $$\displaystyle\sum\limits_{\textbf{q}_{1},\ldots,\textbf{q}_{N-1}\neq 0}^{\textbf{q}_{1}+\ldots+\textbf{q}_{N-1}\not=0}\frac{a_{N}(\textbf{q}_{1},\ldots,\textbf{q}_{N-1})}{N!N^{(N-2)/2}}\rho_{\textbf{q}_{1}}\ldots\rho_{\textbf{q}_{N-1}}\rho_{-\textbf{q}_{1}-\ldots-\textbf{q}_{N-1}},$$ and the wave function of a one-phonon state reads [17] $$\Psi_{\textbf{p}}(\textbf{r}_{1},\ldots,\textbf{r}_{N})=\psi_{\textbf{p}}\Psi_{0},$$ (7) $$\displaystyle\psi_{\textbf{p}}$$ $$\displaystyle=$$ $$\displaystyle\sum\limits_{j=0}^{N-1}\frac{1}{(j+1)!}\sum\limits_{\textbf{q}_{1}\neq 0}\ldots\sum\limits_{\textbf{q}_{j}\neq 0}^{\textbf{q}_{1}+\ldots+\textbf{q}_{j}+\textbf{p}\neq 0}\frac{b_{j+1}(\textbf{q}_{1},\ldots,\textbf{q}_{j};\textbf{p})}{N^{j/2}}\rho_{\textbf{q}_{1}}\ldots\rho_{\textbf{q}_{j}}\rho_{-\textbf{q}_{1}-\ldots-\textbf{q}_{j}-\textbf{p}}=$$ (8) $$\displaystyle=$$ $$\displaystyle b_{1}(\textbf{p})\rho_{-\textbf{p}}+\sum\limits_{\textbf{q}_{1}\neq 0}^{\textbf{q}_{1}+\textbf{p}\neq 0}\frac{b_{2}(\textbf{q}_{1};\textbf{p})}{2!N^{1/2}}\rho_{\textbf{q}_{1}}\rho_{-\textbf{q}_{1}-\textbf{p}}+\sum\limits_{\textbf{q}_{1},\textbf{q}_{2}\neq 0}^{\textbf{q}_{1}+\textbf{q}_{2}+\textbf{p}\not=0}\frac{b_{3}(\textbf{q}_{1},\textbf{q}_{2};\textbf{p})}{3!N}\rho_{\textbf{q}_{1}}\rho_{\textbf{q}_{2}}\rho_{-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p}}+$$ $$\displaystyle+$$ $$\displaystyle\ldots+\sum\limits_{\textbf{q}_{1},\ldots,\textbf{q}_{N-1}\neq 0}^{\textbf{q}_{1}+\ldots+\textbf{q}_{N-1}+\textbf{p}\not=0}\frac{b_{N}(\textbf{q}_{1},\ldots,\textbf{q}_{N-1};\textbf{p})}{N!N^{(N-1)/2}}\rho_{\textbf{q}_{1}}\ldots\rho_{\textbf{q}_{N-1}}\rho_{-\textbf{q}_{1}-\ldots-\textbf{q}_{N-1}-\textbf{p}}.$$ Here, $N$ is the total number of atoms, $\textbf{r}_{j}$ are the coordinates of the atoms, and $\rho_{\textbf{q}}$ are the collective variables $$\rho_{\textbf{q}}=\frac{1}{\sqrt{N}}\sum\limits_{j=1}^{N}e^{-i\textbf{q}\textbf{r}_{j}},$$ (9) and all wave vectors $\textbf{q}_{l}$, $\textbf{p}_{l}$, $\textbf{p}$ are quantized in the 3D case by the rule $$\textbf{q}=2\pi\left(\frac{j_{x}}{L_{x}},\frac{j_{y}}{L_{y}},\frac{j_{z}}{L_{z}}\right),$$ (10) where $j_{x},j_{y},j_{z}$ are integers, and $L_{x},L_{y},L_{z}$ are the sizes of the system. The approximate solutions for the functions $\Psi_{0}$ and $\Psi_{\textbf{p}}$ were first obtained by Feynman [10, 11], Bogolyubov and Zubarev [9], and Jastrow [31]. These methods were then developed in many works (see [14, 15, 16, 17, 18, 19, 20, 21, 32, 33, 34] and reviews [22, 23]). We will rely on the method of collective variables by Vakarchuk and Yukhnovskii [17, 34]. It allows one to obtain two exact chains of equations for the functions $a_{j}$ and $b_{j}$ at $N=\infty$. The first equations from those chains are given in the Appendix.
For weak coupling ($\gamma\ll 1$), we can set $a_{j\geq 3}=0$, $b_{j\geq 2}=0$ (this is the zero approximation; here, $\gamma=\rho^{1/3}cm/\hbar^{2}$ (for 3D), $cm/\hbar^{2}$ (2D), $2cm/(\rho\hbar^{2})$ (1D), $\rho$ is the particle number density, and $c=\nu(0)/2$, see (13)). The coefficient $b_{1}(\textbf{p})$ is considered to be normalizing: we set $b_{1}(\textbf{p})=1$. Then the equations in the Appendix yield [17, 34] $$a_{2}(\textbf{p})\equiv a_{2}(p)=\frac{1-\alpha_{\textbf{p}}}{2},\quad\alpha_{\textbf{p}}=\sqrt{1+\frac{2\rho\nu(p)}{\hbar^{2}p^{2}/(2m)}},$$ (11) $$E(\textbf{p})=\frac{\hbar^{2}p^{2}}{2m}(1-2a_{2}(\textbf{p}))=\sqrt{\left(\frac{\hbar^{2}p^{2}}{2m}\right)^{2}+2\rho\nu(p)\left(\frac{\hbar^{2}p^{2}}{2m}\right)}\equiv E_{B}(\textbf{p}),$$ (12) $$\nu(\textbf{p})=\int\limits_{-L_{x}}^{L_{x}}dx\int\limits_{-L_{y}}^{L_{y}}dy\int\limits_{-L_{z}}^{L_{z}}dz\,U(r)e^{-i\textbf{p}\textbf{r}}.$$ (13) We have obtained the Bogolyubov dispersion law $E_{B}(p)$. In this approximation, formula (49) from the Appendix gives the known Bogolyubov solution for the ground-state energy $E_{0}$ [8]. In the zero approximation, the sound velocity is $v_{s}=\sqrt{\frac{\rho\nu(0)}{m}}\equiv v_{s}^{(0)}$. In the next approximation, the solution is as follows [17]: $$v_{s}=v_{s}^{(0)}(1+\delta_{s}),\quad\delta_{s}=-\frac{\hbar^{2}}{32m^{2}(v_{s}^{(0)})^{2}}\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}\frac{q^{2}}{\alpha_{\textbf{q}}^{3}}\left(\frac{2\rho\nu(q)}{\hbar^{2}q^{2}/(2m)}\right)^{2}.$$ (14) For a 1D system, the energy of a phonon with the momentum $\hbar p_{1}=\hbar\cdot 2\pi/L$ is $E(p_{1})=\hbar p_{1}v_{s}$. In this case, for a finite system we should set $v_{s}^{(0)}=\sqrt{\frac{\rho\nu(p_{1})}{m}+\frac{\hbar^{2}p_{1}^{2}}{4m^{2}}}$. Consider a finite 1D system of point bosons ($U(r)=2c\delta(r)$, $\nu(p)=2c$) and set $\hbar=2m=1$, $\gamma=c/\rho$.
The above-presented formulae give the energy of a phonon with the momentum $p_{1}=2\pi/L$: $$E(p_{1})=\sqrt{p_{1}^{4}+4\rho^{2}\gamma p_{1}^{2}}\cdot(1+\delta_{s})=\frac{4\pi\rho\sqrt{\gamma}}{L}\sqrt{1+\frac{\pi^{2}}{\gamma N^{2}}}\cdot(1+\delta_{s}),$$ (15) $$\delta_{s}=-\frac{1}{4N}\frac{1}{1+\pi^{2}/(\gamma N^{2})}\sum\limits_{j=1,2,\ldots,\infty}\frac{1}{1+\pi^{2}j^{2}/(\gamma N^{2})}\frac{1}{\sqrt{1+\gamma N^{2}/(\pi^{2}j^{2})}}.$$ (16) These formulae are valid for $N^{-2}\ll\gamma\ll 1$. Our task is to clarify the nature of the particle $(0,\ldots,0,1)$. It is known that the energy $E_{L}(p)$ of Lieb’s particle for small $p$ is close to the Bogolyubov energy $E_{B}(p)$ (12). The small deviation of the particle energy from $E_{B}(p)$ contains the information about the nature of the particle. Let us represent the energy of the particle with the momentum $p_{1}=2\pi/L$ in the form (15): $$E_{L}(p_{1})=\frac{4\pi\rho\sqrt{\gamma}}{L}\sqrt{1+\frac{\pi^{2}}{\gamma N^{2}}}\cdot(1+\delta_{sL}).$$ (17) The energy and momentum of the particle are given by the known formulae $$E_{L}(p)=\sum\limits_{i=1}^{N}(\acute{k}^{2}_{i}-k^{2}_{i}),$$ (18) $$p=\sum\limits_{i=1}^{N}(\acute{k}_{i}-k_{i})=\frac{2\pi}{L}\sum\limits_{i=1}^{N}(\acute{n}_{i}-n_{i}).$$ (19) In our case, the collections $\{\acute{k}_{i}\}$ and $\{k_{i}\}$ are the solutions of Gaudin’s equations (1) for a state with one particle ($\{\acute{n}_{i}\}=(0,\ldots,0,1)$) and for the ground state ($\{n_{i}\}=(0,\ldots,0,0)$), respectively. The quasimomenta $\{\acute{k}_{i}\}$ and $\{k_{i}\}$ can be obtained numerically from Eqs. (1) by Newton’s method (the Yang–Yang equations (2) give the same solution). It is seen from Fig. 1 that the small quantity $\delta_{sL}$ obtained from Eqs. (17)–(19), (1) coincides to high accuracy with $\delta_{s}$ (16). The difference between $\delta_{sL}$ and $\delta_{s}$ is about $1\%$ for $\gamma=0.0001$–$0.1$.
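The numerical procedure just described is straightforward to reproduce. The sketch below is our own illustration (the function name `bethe_roots` and the parameter values $N=200$, $\gamma=0.1$ are our choices, not taken from the paper): it solves the Bethe equations in the Yang–Yang form (2) for the ground state and the one-particle state, and compares $\delta_{sL}$ from (17)–(19) with the truncated sum $\delta_{s}$ from (16).

```python
import numpy as np
from scipy.optimize import fsolve

def bethe_roots(I, L, c):
    """Solve the Yang-Yang form (2): L*k_i = 2*pi*I_i - 2*sum_j arctan((k_i - k_j)/c),
    starting Newton-type iterations from the free-fermion guess k_i = 2*pi*I_i/L."""
    I = np.asarray(I, dtype=float)
    def F(k):
        return L * k - 2 * np.pi * I + 2 * np.arctan((k[:, None] - k[None, :]) / c).sum(axis=1)
    return fsolve(F, 2 * np.pi * I / L, xtol=1e-13)

N, gamma = 200, 0.1          # must satisfy 1/N^2 << gamma << 1
rho = 1.0
L, c = N / rho, gamma * rho  # units with hbar = 2m = 1
i = np.arange(1, N + 1)

I0 = i - (N + 1) / 2         # ground state, {n_i} = (0,...,0)
I1 = I0.copy()
I1[-1] += 1                  # particle {n_i} = (0,...,0,1), momentum p_1 = 2*pi/L
k0 = bethe_roots(I0, L, c)
k1 = bethe_roots(I1, L, c)

p1 = 2 * np.pi / L
E_L = np.sum(k1**2) - np.sum(k0**2)                        # formula (18)
E_lead = (4 * np.pi * rho * np.sqrt(gamma) / L) * np.sqrt(1 + np.pi**2 / (gamma * N**2))
delta_sL = E_L / E_lead - 1                                # formula (17)

j = np.arange(1, 10**6 + 1)                                # truncation of the sum in (16)
terms = 1 / ((1 + np.pi**2 * j**2 / (gamma * N**2)) * np.sqrt(1 + gamma * N**2 / (np.pi**2 * j**2)))
delta_s = -terms.sum() / (4 * N * (1 + np.pi**2 / (gamma * N**2)))

print(delta_sL, delta_s)  # both small, negative, and close to each other
```

A useful correctness check: summing Eqs. (2) over $i$ makes the arctangent terms cancel pairwise, so the total momentum of any Bethe state is exactly $(2\pi/L)\sum_i I_i$; the solver must reproduce $\sum_i(\acute{k}_i-k_i)=2\pi/L$ to within roundoff.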
Since the function $\psi_{\textbf{p}}=\rho_{-\textbf{p}}$ for small $p$ describes a phonon in the interacting Bose gas [9, 11, 17], we conclude that Lieb’s particle $\{n_{i}\}=(0,\ldots,0,1)$ (i.e., $\{I_{i}\}=(-\frac{N-1}{2},-\frac{N-3}{2},\ldots,\frac{N-5}{2},\frac{N-3}{2},1+\frac{N-1}{2})$) is a phonon. In this case, Gaudin’s equations (1) imply that the hole with the momentum $p=2\pi l/L$ ($l>1$) should coincide with $l$ interacting phonons with the momentum $2\pi/L$. Let us verify this directly for $l=2$. 3 Two interacting phonons vs a hole with the quantum numbers $\{n_{i}\}=(0,\ldots,0,1,1)$. In the language of the Lieb–Liniger equations (2), the hole with the momentum $p=4\pi/L$ is characterized by the quantum numbers $\{I_{i}\}=(-\frac{N-1}{2},-\frac{N-3}{2},\ldots,\frac{N-5}{2},1+\frac{N-3}{2},1+\frac{N-1}{2})$. In the language of the Lieb–Liniger equations in Gaudin’s form (1), such a hole is described by the quantum numbers $\{n_{i}\}=(0,\ldots,0,1,1)$. In the previous section, we proved that the state $\{n_{i}\}=(0,\ldots,0,1)$ describes a phonon with the momentum $p=2\pi/L$. The state $\{n_{i}\}=(0,\ldots,0,1,0)$ is equivalent to $\{n_{i}\}=(0,\ldots,0,1)$. Therefore, it is natural to expect that the state $\{n_{i}\}=(0,\ldots,0,1,1)$ is two interacting phonons with the momentum $p=2\pi/L$. We now verify this assumption independently, using the method of collective variables. Consider a Bose gas with weak coupling and dimensionality $1$, $2$, or $3$. Let us find the wave function and the energy of two interacting phonons with wave vectors $\textbf{p}_{1}$ and $\textbf{p}_{2}$. Feynman noticed that the energy of interaction ($\delta E$) of two phonons should be $\sim N$ times smaller than the energy of one phonon [10]. However, the solutions for the wave function and for $\delta E$ were not found.
The ground state is described by the wave function (5), (6) satisfying the Schrödinger equation $$-\frac{\hbar^{2}}{2m}\sum\limits_{j}\triangle_{j}\Psi+\frac{1}{2}\sum\limits_{ij}^{i\not=j}U(|\textbf{r}_{i}-\textbf{r}_{j}|)\Psi=E\Psi$$ (20) with the energy $E=E_{0}$. The equations for $E_{0}$ and the functions $a_{j}$ from (6) are given in the Appendix. If the system contains one phonon, then the wave function is $\psi_{\textbf{p}}\Psi_{0}$, where $\psi_{\textbf{p}}$ is given by formula (8), and the solutions for the functions $b_{j}$ and the energy of a quasiparticle are given in the previous section. If two phonons with wave vectors $\textbf{p}_{1}$ and $\textbf{p}_{2}$ are present, then the system is described by the wave function $\psi_{\textbf{p}_{1}\textbf{p}_{2}}\Psi_{0}$. We substitute this function into the Schrödinger equation and take into account that $\Psi_{0}=Ae^{S}$ satisfies this equation with the energy $E_{0}$. As a result, we obtain the equation for the function $\psi_{\textbf{p}_{1}\textbf{p}_{2}}$: $$-\frac{\hbar^{2}}{2m}\sum\limits_{j}\left[\triangle_{j}\psi_{\textbf{p}_{1}\textbf{p}_{2}}+2(\nabla_{j}S)(\nabla_{j}\psi_{\textbf{p}_{1}\textbf{p}_{2}})\right]=E_{\textbf{p}_{1}\textbf{p}_{2}}\psi_{\textbf{p}_{1}\textbf{p}_{2}},$$ (21) where $E_{\textbf{p}_{1}\textbf{p}_{2}}=E-E_{0}$ is the energy of two interacting phonons. Since the interaction of two phonons should be weak, we seek $\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ in the form $$\psi_{\textbf{p}_{1}\textbf{p}_{2}}=\psi_{\textbf{p}_{1}}\psi_{\textbf{p}_{2}}+\frac{\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}}{\sqrt{N}},$$ (22) where $\psi_{\textbf{p}_{1}}$ and $\psi_{\textbf{p}_{2}}$ are the one-phonon solutions. We substitute $\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ (22) into Eq. (21) and take into account that the one-phonon functions $\psi_{\textbf{p}_{1}}$ and $\psi_{\textbf{p}_{2}}$ satisfy Eq. (21) with the energies $E(\textbf{p}_{1})$ and $E(\textbf{p}_{2})$, respectively.
In this way we get the following equation for $\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}$: $$\displaystyle-\frac{\hbar^{2}}{2m}\sum\limits_{j}\left[2(\nabla_{j}\psi_{\textbf{p}_{1}})(\nabla_{j}\psi_{\textbf{p}_{2}})+\triangle_{j}\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}+2(\nabla_{j}S)(\nabla_{j}\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}})\right]=$$ $$\displaystyle=[E(\textbf{p}_{1})+E(\textbf{p}_{2})+\delta E]\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}+\delta E\,\psi_{\textbf{p}_{1}}\psi_{\textbf{p}_{2}},$$ (23) $$E_{\textbf{p}_{1}\textbf{p}_{2}}=E(\textbf{p}_{1})+E(\textbf{p}_{2})+\delta E.$$ (24) Here, the energy $E_{\textbf{p}_{1}\textbf{p}_{2}}$ of two interacting phonons is represented as the sum of the energies $E(\textbf{p}_{1})$ and $E(\textbf{p}_{2})$ of free phonons and the correction $\delta E$. The solution for the function $\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ should have the form $\psi_{\textbf{p}}$ (8) with $\textbf{p}=\textbf{p}_{1}+\textbf{p}_{2}$, since formula (8) describes a state with any number of quasiparticles possessing the total momentum $\hbar\textbf{p}$: $$\displaystyle\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}$$ $$\displaystyle=$$ $$\displaystyle B_{1}(\textbf{p}_{1},\textbf{p}_{2})\rho_{-\textbf{p}}+\sum\limits_{\textbf{q}\neq 0}^{\textbf{q}+\textbf{p}\neq 0}\frac{B_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})}{2!N^{1/2}}\rho_{\textbf{q}}\rho_{-\textbf{q}-\textbf{p}}$$ (25) $$\displaystyle+$$ $$\displaystyle\sum\limits_{\textbf{q}_{1},\textbf{q}_{2}\neq 0}^{\textbf{q}_{1}+\textbf{q}_{2}+\textbf{p}\not=0}\frac{B_{3}(\textbf{q}_{1},\textbf{q}_{2};\textbf{p}_{1},\textbf{p}_{2})}{3!N}\rho_{\textbf{q}_{1}}\rho_{\textbf{q}_{2}}\rho_{-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p}}+\ldots$$ $$\displaystyle+$$ $$\displaystyle\sum\limits_{\textbf{q}_{1},\ldots,\textbf{q}_{N-1}\neq 0}^{\textbf{q}_{1}+\ldots+\textbf{q}_{N-1}+\textbf{p}\not=0}\frac{B_{N}(\textbf{q}_{1},\ldots,\textbf{q}_{N-1};\textbf{p}_{1},\textbf{p}_{2})}{N!N^{(N-1)/2}}\rho_{\textbf{q}_{1}}\ldots\rho_{\textbf{q}_{N-1}}\rho_{-\textbf{q}_{1}-\ldots-\textbf{q}_{N-1}-\textbf{p}},$$ where $\textbf{p}=\textbf{p}_{1}+\textbf{p}_{2}$. We substitute $\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ (25) into (23). The result is reduced to the form $$\displaystyle 0$$ $$\displaystyle=$$ $$\displaystyle C_{1}(\textbf{p}_{1},\textbf{p}_{2})\rho_{-\textbf{p}}+\sum\limits_{\textbf{q}\neq 0}^{\textbf{q}+\textbf{p}\neq 0}\frac{C_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})}{N^{1/2}}\rho_{\textbf{q}}\rho_{-\textbf{q}-\textbf{p}}+\ldots+$$ (26) $$\displaystyle+$$ $$\displaystyle\sum\limits_{\textbf{q}_{1},\ldots,\textbf{q}_{N-1}\neq 0}^{\textbf{q}_{1}+\ldots+\textbf{q}_{N-1}+\textbf{p}\not=0}\frac{C_{N}(\textbf{q}_{1},\ldots,\textbf{q}_{N-1};\textbf{p}_{1},\textbf{p}_{2})}{N^{(N-1)/2}}\rho_{\textbf{q}_{1}}\ldots\rho_{\textbf{q}_{N-1}}\rho_{-\textbf{q}_{1}-\ldots-\textbf{q}_{N-1}-\textbf{p}}$$ ($\textbf{p}=\textbf{p}_{1}+\textbf{p}_{2}$). Since $\rho_{-\textbf{p}}$, $\rho_{\textbf{q}}\rho_{-\textbf{q}-\textbf{p}}$, $\rho_{\textbf{q}_{1}}\rho_{\textbf{q}_{2}}\rho_{-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p}},\ldots$ are independent functions of the variables $\textbf{r}_{1},\ldots,\textbf{r}_{N}$ [34], Eq. (26) is equivalent to the system of $N$ equations $$\displaystyle C_{j}(\textbf{q}_{1},\ldots,\textbf{q}_{j-1};\textbf{p}_{1},\textbf{p}_{2})=0,\quad j=1,\ldots,N.$$ (27) At weak coupling, it is sufficient to consider the equations $C_{1}=0$ and $C_{2}=0$.
They have the form $$\displaystyle B_{1}(\textbf{p}_{1},\textbf{p}_{2})\frac{2m}{\hbar^{2}}[E(% \textbf{p}_{1})+E(\textbf{p}_{2})+\delta E-E_{1}(\textbf{p}_{1}+\textbf{p}_{2}% )]=$$ (28) $$\displaystyle=$$ $$\displaystyle 2\left[b_{1}(\textbf{p}_{1})b_{1}(\textbf{p}_{2})\textbf{p}_{1}% \textbf{p}_{2}-p_{1}^{2}b_{1}(\textbf{p}_{1})b_{2}(\textbf{p}_{1};\textbf{p}_{% 2})-p_{2}^{2}b_{1}(\textbf{p}_{2})b_{2}(\textbf{p}_{2};\textbf{p}_{1})\right]-$$ $$\displaystyle-$$ $$\displaystyle\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}^{\textbf{q}+\textbf{p}% \neq 0}B_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})\textbf{q}(\textbf{q}+% \textbf{p})-\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}B_{3}(\textbf{q},-\textbf% {q};\textbf{p}_{1},\textbf{p}_{2})\textbf{q}^{2},$$ $$\displaystyle B_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})\frac{2m}{\hbar^{% 2}}[E(\textbf{p}_{1})+E(\textbf{p}_{2})+\delta E-E_{1}(\textbf{q})-E_{1}(% \textbf{q}+\textbf{p})]=$$ $$\displaystyle=$$ $$\displaystyle-b_{2}(\textbf{q};\textbf{p}_{1})b_{2}(\textbf{q}+\textbf{p}_{1};% \textbf{p}_{2})(\textbf{q}+\textbf{p}_{1})^{2}-b_{2}(-\textbf{q}-\textbf{p};% \textbf{p}_{1})b_{2}(-\textbf{q}-\textbf{p}_{2};\textbf{p}_{2})(\textbf{q}+% \textbf{p}_{2})^{2}-$$ $$\displaystyle-$$ $$\displaystyle b_{2}(\textbf{q};\textbf{p}_{2})b_{2}(\textbf{q}+\textbf{p}_{2};% \textbf{p}_{1})(\textbf{q}+\textbf{p}_{2})^{2}-b_{2}(-\textbf{q}-\textbf{p};% \textbf{p}_{2})b_{2}(-\textbf{q}-\textbf{p}_{1};\textbf{p}_{1})(\textbf{q}+% \textbf{p}_{1})^{2}+$$ $$\displaystyle+$$ $$\displaystyle[b_{2}(\textbf{q};\textbf{p}_{1})+b_{2}(-\textbf{q}-\textbf{p}_{1% };\textbf{p}_{1})]b_{1}(\textbf{p}_{2})\textbf{p}_{2}(\textbf{q}+\textbf{p}_{1% })+$$ $$\displaystyle+$$ $$\displaystyle[b_{2}(\textbf{q};\textbf{p}_{2})+b_{2}(-\textbf{q}-\textbf{p}_{2% };\textbf{p}_{2})]b_{1}(\textbf{p}_{1})\textbf{p}_{1}(\textbf{q}+\textbf{p}_{2% })-$$ $$\displaystyle-$$ $$\displaystyle[b_{2}(-\textbf{q}-\textbf{p};\textbf{p}_{1})+b_{2}(\textbf{q}+% 
\textbf{p}_{2};\textbf{p}_{1})]b_{1}(\textbf{p}_{2})\textbf{p}_{2}(\textbf{q}+% \textbf{p}_{2})-$$ $$\displaystyle-$$ $$\displaystyle[b_{2}(-\textbf{q}-\textbf{p};\textbf{p}_{2})+b_{2}(\textbf{q}+% \textbf{p}_{1};\textbf{p}_{2})]b_{1}(\textbf{p}_{1})\textbf{p}_{1}(\textbf{q}+% \textbf{p}_{1})-$$ $$\displaystyle-$$ $$\displaystyle 2p_{1}^{2}b_{1}(\textbf{p}_{1})b_{3}(\textbf{q},\textbf{p}_{1};% \textbf{p}_{2})-2p_{2}^{2}b_{1}(\textbf{p}_{2})b_{3}(\textbf{q},\textbf{p}_{2}% ;\textbf{p}_{1})-N\delta Eb_{1}(\textbf{p}_{1})b_{1}(\textbf{p}_{2})\frac{2m}{% \hbar^{2}}(\delta_{\textbf{q},-\textbf{p}_{1}}+\delta_{\textbf{q},-\textbf{p}_% {2}})$$ $$\displaystyle+$$ $$\displaystyle 2B_{1}(\textbf{p}_{1},\textbf{p}_{2})\textbf{p}[\textbf{q}a_{2}(% \textbf{q})-(\textbf{p}+\textbf{q})a_{2}(\textbf{p}+\textbf{q})-\textbf{p}a_{3% }(\textbf{p},\textbf{q})]-$$ $$\displaystyle-$$ $$\displaystyle\frac{1}{N}\sum\limits_{\textbf{q}_{1}\neq 0}B_{3}(\textbf{q}_{1}% ,-\textbf{q}-\textbf{q}_{1}-\textbf{p};\textbf{p}_{1},\textbf{p}_{2})\textbf{q% }_{1}(\textbf{q}+\textbf{q}_{1}+\textbf{p})+$$ $$\displaystyle+$$ $$\displaystyle\frac{1}{N}\sum\limits_{\textbf{q}_{1}\neq 0}B_{3}(\textbf{q}_{1}% ,\textbf{q}-\textbf{q}_{1};\textbf{p}_{1},\textbf{p}_{2})\textbf{q}_{1}(% \textbf{q}-\textbf{q}_{1})-\frac{1}{N}\sum\limits_{\textbf{q}_{1}\neq 0}B_{4}(% \textbf{q}_{1},-\textbf{q}_{1},\textbf{q};\textbf{p}_{1},\textbf{p}_{2})q^{2}_% {1},$$ where $\textbf{p}=\textbf{p}_{1}+\textbf{p}_{2}$, $E_{1}(\textbf{q})=\frac{\hbar^{2}q^{2}}{2m}(1-2a_{2}(\textbf{q}))$, and $\delta_{\textbf{q},-\textbf{p}}$ is the Kronecker delta. In this case, $B_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})=B_{2}(-\textbf{q}-\textbf{p};% \textbf{p}_{1},\textbf{p}_{2})$. Let us present the functions $\psi_{\textbf{p}_{1}}$, $\psi_{\textbf{p}_{2}},$ and $\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ in (22) in the form of expansions (8) and (25). 
Then the “leading” term in the expansion of $\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ is $A\rho_{-\textbf{p}_{1}}\rho_{-\textbf{p}_{2}}$. It is convenient to treat the constant $A$ as a normalizing factor. Let us write the functions $\psi_{\textbf{p}_{1}}$, $\psi_{\textbf{p}_{2}}$ in the form $b_{1}(\textbf{p}_{1})\tilde{\psi}_{\textbf{p}_{1}}$, $b_{1}(\textbf{p}_{2})\tilde{\psi}_{\textbf{p}_{2}}$. Then $\psi_{\textbf{p}_{1}}\psi_{\textbf{p}_{2}}$ can be presented as a series whose first term is $b_{1}(\textbf{p}_{1})b_{1}(\textbf{p}_{2})\rho_{-\textbf{p}_{1}}\rho_{-\textbf{p}_{2}}$. The corresponding terms in the expansion of $\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ (25) have the form $\frac{B_{2}(-\textbf{p}_{1};\textbf{p}_{1},\textbf{p}_{2})+B_{2}(-\textbf{p}_{2};\textbf{p}_{1},\textbf{p}_{2})}{2N^{1/2}}\rho_{-\textbf{p}_{1}}\rho_{-\textbf{p}_{2}}$. Hence, the coefficient of $\rho_{-\textbf{p}_{1}}\rho_{-\textbf{p}_{2}}$ in the expansion of the function $\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ (22) is $A=b_{1}(\textbf{p}_{1})b_{1}(\textbf{p}_{2})+\frac{B_{2}(-\textbf{p}_{1};\textbf{p}_{1},\textbf{p}_{2})+B_{2}(-\textbf{p}_{2};\textbf{p}_{1},\textbf{p}_{2})}{2N}$. Let us represent the function $\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ (22) in the form $\psi_{\textbf{p}_{1}\textbf{p}_{2}}=A\tilde{\psi}_{\textbf{p}_{1}\textbf{p}_{2}}$, where $\tilde{\psi}_{\textbf{p}_{1}\textbf{p}_{2}}=\frac{b_{1}(\textbf{p}_{1})b_{1}(\textbf{p}_{2})}{A}\tilde{\psi}_{\textbf{p}_{1}}\tilde{\psi}_{\textbf{p}_{2}}+\frac{\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}}{A\sqrt{N}}$. Since the interaction of phonons is very weak, the term $\frac{B_{2}(-\textbf{p}_{1};\textbf{p}_{1},\textbf{p}_{2})+B_{2}(-\textbf{p}_{2};\textbf{p}_{1},\textbf{p}_{2})}{2N}$ in $A$ should be smaller than $b_{1}(\textbf{p}_{1})b_{1}(\textbf{p}_{2})$ by a factor of $\sqrt{N}$ or even $N$. Therefore, $\frac{b_{1}(\textbf{p}_{1})b_{1}(\textbf{p}_{2})}{A}\approx 1$.
As a result, $\tilde{\psi}_{\textbf{p}_{1}\textbf{p}_{2}}=\tilde{\psi}_{\textbf{p}_{1}}\tilde{\psi}_{\textbf{p}_{2}}+\frac{\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}}{A\sqrt{N}}$. Here, $\tilde{\psi}_{\textbf{p}}$ is a one-phonon function (8) with $b_{1}=1$. In this case, $b_{j\geq 2}$ satisfy the equations of the Appendix with $b_{1}=1$. Let us represent the term $\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}/A$ in the form (25). Then we treat the factor $A$ as normalizing and set $A=1$. These transformations make it necessary to set $b_{1}(\textbf{p}_{1})=b_{1}(\textbf{p}_{2})=1$ and $B_{2}(-\textbf{p}_{1};\textbf{p}_{1},\textbf{p}_{2})=B_{2}(-\textbf{p}_{2};\textbf{p}_{1},\textbf{p}_{2})=0$ in Eqs. (28), (3) and the equations of the Appendix. We consider the coupling to be weak: $\gamma\ll 1$, but $\gamma\gg N^{-2}$ (the latter is necessary for the linearity of the dispersion law at small $p$). In this case, we can seek $\delta E$ and $\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ in the zeroth approximation. This means [17, 34] that all sums in the chain of equations for $B_{j}$ and $\delta E$ should be neglected (if we find $B_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})$ from (3) by neglecting sums of the form $\sum_{\textbf{q}_{1}\neq 0}B_{3}$, we can verify that the sum $\frac{1}{N}\sum_{\textbf{q}\neq 0}^{\textbf{q}+\textbf{p}\neq 0}B_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})\textbf{q}(\textbf{q}+\textbf{p})$ is negligible relative to $2\textbf{p}_{1}\textbf{p}_{2}$ in Eq. (28)). As a result, Eq. (28) takes the form $$\displaystyle B_{1}(\textbf{p}_{1},\textbf{p}_{2})=\frac{\hbar^{2}}{m}\frac{\textbf{p}_{1}\textbf{p}_{2}-p_{1}^{2}b_{2}(\textbf{p}_{1};\textbf{p}_{2})-p_{2}^{2}b_{2}(\textbf{p}_{2};\textbf{p}_{1})}{E(\textbf{p}_{1})+E(\textbf{p}_{2})+\delta E-E_{1}(\textbf{p}_{1}+\textbf{p}_{2})}.$$ (30) Let us set $\textbf{q}=-\textbf{p}_{1}$ in (3). Then Eq.
(3) reads $$\displaystyle 0=-b_{2}(-\textbf{p}_{2};\textbf{p}_{1})b_{2}(\textbf{p}_{1}-\textbf{p}_{2};\textbf{p}_{2})(\textbf{p}_{2}-\textbf{p}_{1})^{2}-b_{2}(-\textbf{p}_{1};\textbf{p}_{2})b_{2}(\textbf{p}_{2}-\textbf{p}_{1};\textbf{p}_{1})(\textbf{p}_{2}-\textbf{p}_{1})^{2}$$ (31) $$\displaystyle-$$ $$\displaystyle[b_{2}(-\textbf{p}_{1};\textbf{p}_{2})+b_{2}(\textbf{p}_{1}-\textbf{p}_{2};\textbf{p}_{2})]\textbf{p}_{1}(\textbf{p}_{1}-\textbf{p}_{2})-[b_{2}(-\textbf{p}_{2};\textbf{p}_{1})+b_{2}(\textbf{p}_{2}-\textbf{p}_{1};\textbf{p}_{1})]\textbf{p}_{2}(\textbf{p}_{2}-\textbf{p}_{1})$$ $$\displaystyle-$$ $$\displaystyle 2p_{1}^{2}b_{3}(-\textbf{p}_{1},\textbf{p}_{1};\textbf{p}_{2})-2p_{2}^{2}b_{3}(-\textbf{p}_{1},\textbf{p}_{2};\textbf{p}_{1})-N\delta E\frac{2m}{\hbar^{2}}(1+\delta_{\textbf{p}_{2},\textbf{p}_{1}})$$ $$\displaystyle+$$ $$\displaystyle 2B_{1}(\textbf{p}_{1},\textbf{p}_{2})\textbf{p}[-\textbf{p}_{1}a_{2}(-\textbf{p}_{1})-\textbf{p}_{2}a_{2}(\textbf{p}_{2})-\textbf{p}a_{3}(\textbf{p},-\textbf{p}_{1})].$$ Equation (3) for $\textbf{q}=-\textbf{p}_{2}$ also reduces to (31) (to see this, one needs to use the relations $a_{2}(-\textbf{p})=a_{2}(\textbf{p})$, $a_{3}(\textbf{p},-\textbf{p}_{1})=a_{3}(\textbf{p},\textbf{p}_{1}-\textbf{p})$, and $b_{3}(\textbf{p}_{1},\textbf{p}_{2};\textbf{p}_{3})=b_{3}(-\textbf{p}_{1}-\textbf{p}_{2}-\textbf{p}_{3},\textbf{p}_{2};\textbf{p}_{3})$). Equations (30) and (31) allow us to find $B_{1}(\textbf{p}_{1},\textbf{p}_{2})$ and $\delta E$. From Eq. (3) at $\textbf{q}\neq-\textbf{p}_{1},-\textbf{p}_{2}$ we can determine $B_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})$. Consider the case $\textbf{p}_{2}=\textbf{p}_{1}$. According to (8), $\textbf{q}$ in $b_{2}(\textbf{q};\textbf{p})$ must be nonzero. Therefore, if (31) includes the term $b_{2}(0;\textbf{p})$, this term should be dropped.
Then relations (30), (31) yield $$\displaystyle B_{1}(\textbf{p}_{1},\textbf{p}_{1})=\frac{p_{1}^{2}[2-4b_{2}(\textbf{p}_{1};\textbf{p}_{1})]}{(2m/\hbar^{2})[2E(\textbf{p}_{1})+\delta E-E_{1}(2\textbf{p}_{1})]},$$ (32) $$\displaystyle B_{1}(\textbf{p}_{1},\textbf{p}_{1})=-\frac{2p_{1}^{2}b_{3}(-\textbf{p}_{1},\textbf{p}_{1};\textbf{p}_{1})+(2m/\hbar^{2})N\delta E}{4p_{1}^{2}[a_{2}(\textbf{p}_{1})+a_{3}(2\textbf{p}_{1},-\textbf{p}_{1})]}.$$ (33) Equations (32) and (33) give a quadratic equation for $\delta E$ with the roots $$\delta E_{\pm}=-\tilde{E}_{+}\pm\sqrt{\tilde{E}_{-}^{2}-8N^{-1}[\hbar^{2}p_{1}^{2}/(2m)]^{2}[1-2b_{2}(\textbf{p}_{1};\textbf{p}_{1})][a_{2}(\textbf{p}_{1})+a_{3}(2\textbf{p}_{1},-\textbf{p}_{1})]},$$ (34) where $$\tilde{E}_{\pm}=E(p_{1})-\frac{E_{1}(2p_{1})}{2}\pm b_{3}(-\textbf{p}_{1},\textbf{p}_{1};\textbf{p}_{1})\frac{\hbar^{2}p_{1}^{2}}{2mN}.$$ (35) At $p_{1}\rightarrow 0$ and $\gamma\ll 1$, the formulae in the Appendix yield $$a_{3}(2\textbf{p}_{1},-\textbf{p}_{1})=a_{3}(\textbf{p}_{1},\textbf{p}_{1})\approx-a_{2}(\textbf{p}_{1})/4,\quad b_{2}(\textbf{p}_{1};\textbf{p}_{1})\approx 1/8.$$ (36) Using the relation $a_{4}(-\textbf{p}_{1},\textbf{p}_{1},\textbf{p}_{1})\approx 7a_{2}(\textbf{p}_{1})/16$ [17], we get $b_{3}(-\textbf{p}_{1},\textbf{p}_{1};\textbf{p}_{1})\approx-5/32$. Therefore, relations (34), (35) reduce to $$\delta E_{\pm}=-\tilde{E}_{+}\pm\sqrt{\tilde{E}_{-}^{2}-\frac{9}{2N}\left(\frac{\hbar^{2}p_{1}^{2}}{2m}\right)^{2}a_{2}(\textbf{p}_{1})},$$ (37) $$\tilde{E}_{\pm}=E(p_{1})-\frac{E_{1}(2p_{1})}{2}\mp\frac{5}{32N}\frac{\hbar^{2}p_{1}^{2}}{2m},$$ (38) where $E(p_{1})=\hbar p_{1}v_{s}$, $a_{2}(\textbf{p}_{1})\approx-\alpha_{\textbf{p}_{1}}/2\approx-\frac{\sqrt{m\rho\nu(p_{1})}}{\hbar p_{1}}$, and $E_{1}(2p_{1})$, $v_{s}$ are determined by formulae (12), (14).
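The quadratic structure behind (34), (35) can be checked symbolically. A minimal sketch (Python with sympy; the abbreviated symbol names are our assumptions, not the paper's notation, with units $\hbar=2m=1$ so that $\hbar^{2}p_{1}^{2}/2m=p_{1}^{2}$): equating (32) and (33) and clearing denominators gives a quadratic in $\delta E$, and substituting the roots (34), (35) must give zero. The free limit $a_{j}=b_{j\geq 2}=\delta E=0$ should also reproduce $B_{1}(p_{1},p_{1})=-1$ from (32), as used in Section 4.

```python
import sympy as sp

# Units hbar = 2m = 1. Abbreviations (assumptions, not the paper's notation):
#   E = E(p1), E1 = E1(2 p1), b2 = b2(p1; p1), b3 = b3(-p1, p1; p1),
#   a23 = a2(p1) + a3(2 p1, -p1), u stands for delta E.
p, N = sp.symbols('p N', positive=True)
E, E1, b2, b3, a23, u = sp.symbols('E E1 b2 b3 a23 u')

# Equating (32) and (33) and clearing denominators gives a quadratic in u:
quadratic = 4*p**4*(2 - 4*b2)*a23 + (2*p**2*b3 + N*u)*(2*E + u - E1)

# Roots in the form (34), (35):
Et_p = E - E1/2 + b3*p**2/N
Et_m = E - E1/2 - b3*p**2/N
R = 8*p**4*(1 - 2*b2)*a23/N
roots = [-Et_p + sp.sqrt(Et_m**2 - R), -Et_p - sp.sqrt(Et_m**2 - R)]

# Both roots must annihilate the quadratic identically:
residues = [sp.simplify(sp.expand(quadratic.subs(u, r))) for r in roots]
assert all(r == 0 for r in residues)

# Free limit: b2 = 0, delta E = 0, E = p^2, E1(2p) = 4 p^2 in (32):
B1_free = 2*p**2/(2*p**2 - 4*p**2)
assert sp.simplify(B1_free) == -1
```

The residues vanish identically, confirming that (34), (35) are exactly the roots of the quadratic obtained from (32), (33).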
At $N\gg 1$, $\gamma\lesssim N^{-1}$, the corrections $\frac{9}{2N}\left(\frac{\hbar^{2}p_{1}^{2}}{2m}\right)^{2}a_{2}(\textbf{p}_{1})$ and $\frac{5}{32N}\frac{\hbar^{2}p_{1}^{2}}{2m}$ in (37), (38) are negligible, and solutions (37), (38) take the simple form $$\delta E_{+}\approx 2|\tilde{E}|,\quad\delta E_{-}\approx-\frac{9E(p_{1})}{8N}\frac{\hbar^{2}p_{1}^{2}}{2m|\tilde{E}|},$$ (39) $$\tilde{E}\approx E(p_{1})-\frac{E_{B}(2p_{1})}{2}.$$ (40) Since $\delta E_{+}>\delta E_{-}$, it is the solution $\delta E_{-}$ that should be realized in nature. Thus, we have found the interaction energy $\delta E$ of two phonons with the same momentum $\hbar p_{1}$ at $p_{1}\rightarrow 0$ and weak coupling ($N^{-2}\ll\gamma\ll 1$). This result is new. For the considered parameters of the system we have $|\tilde{E}|\sim\frac{\hbar^{2}p_{1}^{2}}{2m}$. Therefore, $\delta E_{-}\sim-E(p_{1})/N$. In this case, relations (33), (36) yield $B_{1}(\textbf{p}_{1},\textbf{p}_{1})\sim-1$. It is natural to expect that $|B_{1}(\textbf{p}_{1},\textbf{p}_{2})|\sim 1$ also at $\textbf{p}_{2}\neq\textbf{p}_{1}$. In this case, Eq. (3) yields $|B_{2}(\textbf{q};\textbf{p}_{1},\textbf{p}_{2})|\sim 1$. That is, the term $\delta\psi_{\textbf{p}_{1}\textbf{p}_{2}}/\sqrt{N}$ in formula (22) is smaller than the main term $\psi_{\textbf{p}_{1}}\psi_{\textbf{p}_{2}}$ by a factor of $\sim N$. These estimates show that the interaction of two phonons is indeed very weak. Let us return to the question of the nature of a hole. In the equations above, we pass to a 1D point potential.
Let us compare $\delta E_{-}$ with the quantity $$\delta E_{h}=E_{h}(p=4\pi/L)-2E_{p}(p=2\pi/L)$$ (41) equal to the difference between the energy of a hole with the quantum numbers $\{I_{i}\}=(-\frac{N-1}{2},-\frac{N-3}{2},\ldots,\frac{N-5}{2},1+\frac{N-3}{2},1+\frac{N-1}{2})$ and twice the energy of a free “particle” (phonon) with the quantum numbers $\{I_{i}\}=(-\frac{N-1}{2},-\frac{N-3}{2},\ldots,\frac{N-5}{2},\frac{N-3}{2},1+\frac{N-1}{2})$. The quantities $p=4\pi/L$ and $p=2\pi/L$ in (41) are momenta. The values of $E_{h}(p=4\pi/L)$ and $E_{p}(p=2\pi/L)$ can be found numerically from the Yang–Yang equations (2) and formulae (18), (19). The value of $\delta E_{-}$ follows from Eqs. (37) and (38), where we set $\nu(p)=2c$, $\hbar=2m=1$, $c/\rho=\gamma$, and $p_{1}=2\pi/L$. It is seen from Fig. 2 that the energy of interaction of two phonons ($\delta E_{-}$) is close to $\delta E_{h}$ if $N^{-2}\ll\gamma\lesssim N^{-1}$. The very small value of $\delta E_{h}$ is an indicator of the nature of a hole. The closeness of the values of $\delta E_{-}$ and $\delta E_{h}$ proves that the hole $\{I_{i}\}=(-\frac{N-1}{2},-\frac{N-3}{2},\ldots,\frac{N-5}{2},1+\frac{N-3}{2},1+\frac{N-1}{2})$ coincides with two interacting phonons, each characterized by the collection $\{I_{i}\}=(-\frac{N-1}{2},-\frac{N-3}{2},\ldots,\frac{N-5}{2},\frac{N-3}{2},1+\frac{N-1}{2})$. This is the main result of the present work. In the region $N^{-1}\ll\gamma\ll 1$ the quantities $\delta E_{-}$ and $\delta E_{h}$ differ considerably, since we found a solution for $\delta E_{-}$ only in the zeroth approximation. The error of the numerical calculation of $\delta E_{h}$ should also be significant in this case. We note that, to obtain precisely a two-phonon solution, it is necessary first to fix the orders of magnitude of the quantities $B_{j}$ and $\delta E$.
Otherwise, we can arrive at another solution, since the function $\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ (22), (25) can describe any excited state with the total momentum $\hbar(\textbf{p}_{1}+\textbf{p}_{2})$ (see Section 5). We took the two-phonon nature of the state into account with the help of the condition $|[B_{2}(-\textbf{p}_{1};\textbf{p}_{1},\textbf{p}_{2})+B_{2}(-\textbf{p}_{2};\textbf{p}_{1},\textbf{p}_{2})]/(2N)|\ll|b_{1}(\textbf{p}_{1})b_{1}(\textbf{p}_{2})|$. The above two-phonon solution should be contained in Eqs. (49)–(9) of the Appendix, since any (not only one-phonon) excited state of the system with the total momentum p is described by the function $\psi_{\textbf{p}}\Psi_{0}$ (7). 4 Additional arguments. Consider a 1D Bose gas with point interaction. Let us find the limit $c\rightarrow 0$ of the Lieb–Liniger solutions [4, 29] $$\psi_{\{k\}}(x_{1},\ldots,x_{N})=\mathrm{const}\cdot\sum\limits_{P}a(P)e^{i\sum\limits_{l=1}^{N}k_{P_{l}}x_{l}},$$ (42) $$a(P)=\prod\limits_{j<l}\left(1+\frac{ic}{k_{P_{j}}-k_{P_{l}}}\right).$$ (43) For the state $\{n_{i}\}=(0,\ldots,0,1)$, at $c\rightarrow 0$ we get $\{k_{i}\}=(0,\ldots,0,2\pi/L)$. Relations (42) and (43) yield $a(P)=1$ and $$\psi_{\{k\}}\equiv\psi_{1}=c_{1}\rho_{-k_{N}},$$ (44) where $k_{N}=2\pi/L$. For the state $\{n_{i}\}=(0,\ldots,0,1,1)$, we get $\{k_{i}\}\approx(0,\ldots,0,2\pi/L,2\pi/L)$. Then relations (42), (43) yield $a(P)=1$ and $$\psi_{\{k\}}\equiv\psi_{11}=c_{11}\left(\rho_{-k_{N}}\rho_{-k_{N}}-\frac{\rho_{-2k_{N}}}{\sqrt{N}}\right).$$ (45) Here, while calculating $a(P)$, we take into account that $(k_{N}-k_{N-1})|_{c\rightarrow 0}\sim c^{1/2}$. Functions (44) and (45) coincide with the wave functions of a system of free bosons in which one or two (respectively) atoms have the momentum $2\pi/L$. The normalizing coefficients are $c_{1}=L^{-N/2}$, $c_{11}=\sqrt{\frac{N}{N-1}}c_{1}$ [34].
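The coincidence of (45) with the free-boson wave function can be checked directly for small $N$. A minimal numerical sketch (Python with numpy; $N=2$, $L=1$, and the convention $\rho_{-q}=N^{-1/2}\sum_{j}e^{iqx_{j}}$ are our assumptions): the combination $\rho_{-k_{N}}^{2}-\rho_{-2k_{N}}/\sqrt{N}$ reproduces, configuration by configuration, the plane wave of two free atoms that each carry the momentum $2\pi/L$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 2, 1.0
kN = 2*np.pi/L                              # k_N in (44), (45)
x = rng.uniform(0.0, L, size=(100, N))      # batch of random two-atom configurations

def rho(q):
    # rho_{-q} = N^{-1/2} * sum_j exp(i q x_j)  (assumed sign convention)
    return np.exp(1j*q*x).sum(axis=1)/np.sqrt(N)

form45 = rho(kN)**2 - rho(2*kN)/np.sqrt(N)  # psi_11 of (45), up to the constant c_11
free = np.exp(1j*kN*x.sum(axis=1))          # two free atoms, each with momentum 2*pi/L
assert np.allclose(form45, free)            # exact identity for N = 2
```

The equality is exact here, not merely approximate: for $N=2$ the cross term of $\rho_{-k_{N}}^{2}$ is the symmetric plane wave, and the subtracted $\rho_{-2k_{N}}/\sqrt{N}$ removes the diagonal terms.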
Since $\rho_{-k_{N}}\sim 1$ for the overwhelming majority of configurations $(x_{1},\ldots,x_{N})$, comparison of $\psi_{11}$ (45) and $\psi_{1}$ (44) shows that in the limit $c\rightarrow 0$ the hole $(0,\ldots,0,1,1)$ consists of two interacting particles $(0,\ldots,0,1)$, which agrees with the result of the previous section. At $c=0$ we have, of course, free atoms instead of quasiparticles. The one-phonon and two-phonon solutions (7) and (22) pass at $c=0$ into solutions (44) and (45). To demonstrate this with the formulae of Sections 2 and 3, one takes into account the relations $a_{j}=0$, $b_{j\geq 2}=0$, $B_{j\geq 2}=0$, and $\delta E=0$. Relation (32) then yields $B_{1}(p_{1},p_{1})=-1$. Thus, Eqs. (7), (8), and (22) describe free bosons at zero interaction and phonons at a nonzero one (when the interaction is switched on, the functions $\psi_{\textbf{p}_{1}}$, $\psi_{\textbf{p}_{1}\textbf{p}_{2}}$ vary negligibly, but the dispersion law $E(p)\sim p^{2}$ passes into $E(p)\approx v_{s}p$ due to a change of $\Psi_{0}$). It is clear that any Lieb–Liniger solution (42) can be presented in the form (7), (8). It would be of interest to obtain solutions (7), (8), and (22) directly from (42) at $c\neq 0$. This is a task for the future. Both in Gaudin's numbering and in the method of collective variables, each excited state of a 1D system is described by a collection of quantum numbers $\{n_{i}\}$ ($i=1,\ldots,N$) corresponding to a collection of quasiparticles with the momenta $p_{1},\ldots,p_{N}$, where $p_{j}=2\pi n_{j}/L$. That is, there is a one-to-one correspondence between solutions in the method of collective variables at $\nu(p)=2c$ and solutions in the Lieb–Liniger approach. However, the uniqueness of the solution for each collection $\{n_{i}\}$ has been proved only for the Lieb–Liniger approach [5].
The calculation of the partition function of a 1D system of point bosons at $N=\infty$ gives [35] $$F|_{T\rightarrow 0}=E_{0}+k_{B}T\sum\limits_{l=\pm 1,\pm 2,\ldots}\ln{\left(1-e^{-\frac{E_{p}(p_{l})}{k_{B}T}}\right)},$$ (46) where $E_{p}(p_{l})$ is the dispersion law of particles. The calculation [35] involves all states of the system (including the ground state, particles, and holes). Formula (46) is exact at $N=\infty$ and $T\rightarrow 0$. Equation (46) is the well-known formula for the free energy of an ensemble of noninteracting Bose quasiparticles. The verification in [28] indicates that formula (46) and the Yang–Yang approach [3] lead to identical thermodynamic quantities $F$ and $S$. If we formally consider the state $\{n_{i}\}=(0,\ldots,0,1)$ as a hole, then any excited state $(n_{1},\ldots,n_{N})$ can be approximately regarded as a collection of noninteracting holes. This leads again to formula (46) with the replacement of $E_{p}(p)$ by the dispersion law of holes $E_{h}(p)$. Such a dualism of holes and particles is interesting but illusory, since the state $\{n_{i}\}=(0,\ldots,0,1)$ is physically a phonon, not a hole. The analysis of Sections 1–4 clearly shows that a hole is simply a collection of identical interacting phonons with the momentum $2\pi/L$. This corresponds directly to Gaudin's numbering (see Eq. (1)). Therefore, the introduction of quasiparticles with the help of Gaudin's numbering [28, 35] is more physical. In this case, the curve of holes $E_{h}(p)$ describes the excited states with minimum energy for a given $p$. The Yang–Yang numbering (see Eq. (2)) is also useful: with it, it is easy to find the energy of quasiparticles at strong coupling. We note that, though at $\gamma\rightarrow\infty$ the energy of a particle is close to the energy of a free fermion (as is seen from Eq. (2)), the particle is described by Bose statistics, owing to the Bose symmetry of the wave function and the Bose formula (46).
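The low-temperature behavior encoded in (46) is easy to evaluate numerically. A minimal sketch (Python with numpy; $\hbar=k_{B}=1$, $E_{0}=0$, phonon dispersion $E_{p}(p_{l})=v_{s}|p_{l}|$, and the values $v_{s}=1$, $L=100$ with a finite momentum cutoff are illustrative assumptions): for $2\pi v_{s}/L\ll T\ll$ the cutoff energy, the sum approaches the continuum result $F-E_{0}\propto-T^{2}$ of a 1D phonon gas, and the entropy $S=-\partial F/\partial T$ is positive.

```python
import numpy as np

v_s, L = 1.0, 100.0                    # illustrative units, hbar = k_B = 1
l = np.arange(1, 2001)                 # momentum quantum numbers l = 1, 2, ...
E = v_s*2*np.pi*l/L                    # E_p(p_l) = v_s |p_l|, p_l = 2*pi*l/L

def F(T, E0=0.0):
    """Free energy (46); the factor 2 accounts for the branches l = +/-1, +/-2, ..."""
    return E0 + 2*T*np.log(1.0 - np.exp(-E/T)).sum()

ratio = F(2.0)/F(1.0)                  # close to 4 (F ~ -T^2), up to discreteness terms
S = -(F(1.01) - F(0.99))/0.02          # entropy S = -dF/dT at T = 1
```

The ratio deviates from 4 by a few percent because of the discreteness of the spectrum near $p\rightarrow 0$; the deviation shrinks as $L$ grows, consistent with (46) being exact at $N=\infty$.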
We also recall the arguments of Feynman [10, 11, 12]. According to them, only a single dispersion law, corresponding to phonons, should exist in the region of low $E$ and $p$. This conclusion agrees with our analysis. 5 States with the largest number of quasiparticles. Consider $N=10^{6}$ Bose atoms placed in a vessel. How many quasiparticles can exist in such a system? At first sight, the number of quasiparticles should not be bounded from above, since a quasiparticle is similar to a wave in the probability field. However, it turns out that the number of quasiparticles cannot exceed $N$. This can be proved in two ways. The simplest way is to use the Lieb–Liniger equations (1). In Gaudin's numbering, the creation of a quasiparticle is equivalent to a change of some $n_{j}$ from $n_{j}=0$ to any $n_{j}=l\neq 0$. In this case, a Bogolyubov–Feynman quasiparticle with the momentum $p=2\pi l/L$ is created. The largest number of quasiparticles is equal to the number of $n_{j}$ with different $j$, that is, to the number of equations in system (1), which equals the number of atoms $N$. In this case, a hole is several Bogolyubov–Feynman quasiparticles. These properties were noted in [28, 35]. For nonpoint bosons, we note that a wave function of the form (7), (8) describes not only a state with one quasiparticle but also states with any number of quasiparticles. Indeed, the wave function of any stationary excited state can be written in the form $f(\textbf{r}_{1},\ldots,\textbf{r}_{N})\Psi_{0}$. The periodic system has a definite momentum. The general form of the wave function of a state with the total momentum $\hbar\textbf{p}$ is set by formulae (7), (8). Therefore, the function $f(\textbf{r}_{1},\ldots,\textbf{r}_{N})$ should coincide with $\psi_{\textbf{p}}$ (8). In this case, the $b_{j}$ are different for different states. For the state with one phonon, $b_{j}\sim 1$ for all $j$.
For a state with two phonons with the momenta $\hbar\textbf{p}_{1}$ and $\hbar\textbf{p}_{2}$, we should set $\textbf{p}=\textbf{p}_{1}+\textbf{p}_{2}$ in (7), (8). In this case, $b_{j\geq 3}\sim 1$, $b_{1}(\textbf{p})\sim N^{-1/2}$, $b_{2}(\textbf{q}_{1};\textbf{p})\sim N^{-1/2}$ for $\textbf{q}_{1}\neq-\textbf{p}_{1},-\textbf{p}_{2}$, and $b_{2}(\textbf{q}_{1};\textbf{p})\sim N^{1/2}$ for $\textbf{q}_{1}=-\textbf{p}_{1},-\textbf{p}_{2}$. For a state with three phonons we have $\textbf{p}=\textbf{p}_{1}+\textbf{p}_{2}+\textbf{p}_{3}$. The lowest-order coefficients $b_{j}$ that are not small should be the coefficients $b_{3}(\textbf{q}_{1},\textbf{q}_{2};\textbf{p})$ with such $\textbf{q}_{1}$ and $\textbf{q}_{2}$ for which $\rho_{\textbf{q}_{1}}\rho_{\textbf{q}_{2}}\rho_{-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p}}=\rho_{-\textbf{p}_{1}}\rho_{-\textbf{p}_{2}}\rho_{-\textbf{p}_{3}}$. For a state with $N$ quasiparticles the relation $\textbf{p}=\textbf{p}_{1}+\ldots+\textbf{p}_{N}$ holds, and the coefficients $b_{j\leq N-1}$ are negligible: $b_{j\leq N-1}\sim N^{-a_{j}}$ ($a_{j}>0$). The coefficients $b_{N}(\textbf{q}_{1},\ldots,\textbf{q}_{N-1};\textbf{p})$ are not small at such $\textbf{q}_{1},\ldots,\textbf{q}_{N-1}$ for which $\rho_{\textbf{q}_{1}}\ldots\rho_{\textbf{q}_{N-1}}\rho_{-\textbf{q}_{1}-\ldots-\textbf{q}_{N-1}-\textbf{p}}=\rho_{-\textbf{p}_{1}}\ldots\rho_{-\textbf{p}_{N}}$. Formulae (7), (8) imply that the largest number of quasiparticles equals $N$, since series (8) contains the terms $\rho_{-\textbf{q}_{1}}\ldots\rho_{-\textbf{q}_{j}}$ with at most $N$ factors $\rho_{-\textbf{q}}$.
The last property follows from the fact that the functions $1,\rho_{-\textbf{q}_{1}}$, $\rho_{-\textbf{q}_{1}}\rho_{-\textbf{q}_{2}},\ldots,\rho_{-\textbf{q}_{1}}\ldots\rho_{-\textbf{q}_{N}}$ form a complete (nonorthogonal) collection of functions in which any Bose-symmetric function of the variables $\textbf{r}_{1},\ldots,\textbf{r}_{N}$ that can be presented as a Fourier series can be expanded [34]. Therefore, a product $\rho_{-\textbf{q}_{1}}\ldots\rho_{-\textbf{q}_{N}}\rho_{-\textbf{q}_{N+1}}\ldots\rho_{-\textbf{q}_{N+M}}$ containing more than $N$ factors $\rho_{-\textbf{q}}$ reduces to an expansion of the form $\psi_{\textbf{p}}$ (8) with $\textbf{p}=\textbf{q}_{1}+\ldots+\textbf{q}_{N+M}$. For example, for $N=2$ we obtain $$\rho_{\textbf{q}_{1}}\rho_{\textbf{q}_{2}}\rho_{\textbf{q}_{3}}=\frac{1}{\sqrt{N}}(\rho_{\textbf{q}_{1}+\textbf{q}_{2}}\rho_{\textbf{q}_{3}}+\rho_{\textbf{q}_{1}+\textbf{q}_{3}}\rho_{\textbf{q}_{2}}+\rho_{\textbf{q}_{2}+\textbf{q}_{3}}\rho_{\textbf{q}_{1}})-\frac{2}{N}\rho_{\textbf{q}_{1}+\textbf{q}_{2}+\textbf{q}_{3}}.$$ (47) Thus, the largest number of quasiparticles in a Bose gas in some pure state $\Psi_{p}$ is equal to $N$. According to quantum statistics, the equilibrium number of quasiparticles at a given temperature $T>0$ is $$\bar{N}_{Q}(T)=\frac{1}{Z}\int d\textbf{r}_{1}\ldots d\textbf{r}_{N}\sum\limits_{p}e^{-E_{p}/k_{B}T}\Psi^{*}_{p}N_{Qp}\Psi_{p},$$ (48) where $Z=\sum_{l}e^{-E_{l}/k_{B}T}$, $\{\Psi_{p}(x_{1},\ldots,x_{N})\}$ is the complete orthonormalized set of wave functions of a system with a fixed number of particles $N$, and $N_{Qp}$ is the number of quasiparticles in the state $\Psi_{p}$. According to the above analysis, the value of $N_{Qp}$ is determined by the structure of $\Psi_{p}(x_{1},\ldots,x_{N})$, and $N_{Qp}\leq N$ for any state. Therefore, $\bar{N}_{Q}(T)<N$. At low temperatures, the states with small $N_{Qp}$ make the main contribution to (48).
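Identity (47) can be verified numerically for $N=2$. A minimal sketch (Python with numpy; the random configurations, the illustrative momenta, and the convention $\rho_{q}=N^{-1/2}\sum_{j}e^{-iqx_{j}}$ are our assumptions — the identity is insensitive to the sign of the exponent):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2
x = rng.uniform(0.0, 1.0, size=(200, N))   # batch of random 2-atom configurations
q1, q2, q3 = 0.7, -1.3, 2.1                # arbitrary (illustrative) momenta

def rho(q):
    # rho_q = N^{-1/2} * sum_j exp(-i q x_j)  (assumed convention)
    return np.exp(-1j*q*x).sum(axis=1)/np.sqrt(N)

lhs = rho(q1)*rho(q2)*rho(q3)
rhs = ((rho(q1 + q2)*rho(q3) + rho(q1 + q3)*rho(q2) + rho(q2 + q3)*rho(q1))
       / np.sqrt(N) - 2.0/N*rho(q1 + q2 + q3))
assert np.allclose(lhs, rhs)               # identity (47) holds for every configuration
```

Expanding both sides over the $2^{3}$ products of single-atom exponentials shows that the agreement is exact, not statistical.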
Therefore, at low temperatures the average number of quasiparticles is small. In this case, $\bar{N}_{Q}(T)$ increases with $T$. It is clear that, as $T\rightarrow\infty$, we have $\bar{N}_{Q}(T)\rightarrow N$. Thus, in a gas at high temperature the number of quasiparticles is close to the number of atoms. This shows how a quantum Bose system transforms into a classical one. 6 Experiment In the experiment [36], one point of the dispersion law $E(p)$ of a 1D Bose system was obtained for different $\gamma$ by measuring the dynamical structure factor. The results were compared with the theory [27, 37, 38, 39, 40], in which the bosons were considered point-like (bosons with zero radius of interaction). At small $p$ the experimental value of $E(p)$ is close to $E_{p}(p)$ of particle-like excitations. At larger $p$ the experimental value of $E(p)$ is significantly lower than the theoretical one, $E_{p}(p)$, and the deviation increases with $p$. The authors concluded that this deviation is related to the contribution of holes, since the dispersion curve of holes $E_{h}(p)$ lies below $E_{p}(p)$. The experiment [36] is important, but the analysis in [36] is insufficient to draw a conclusion about the contribution of holes. Many states with the quantum numbers $(n_{1},\ldots,n_{N})$ contribute to the dynamical structure factor. One particle corresponds to states of the form $(0,\ldots,0,1)$, and one hole corresponds to states of the form $(0,\ldots,0,1,\ldots,1)$. But the majority of states (e.g., $(0,\ldots,0,1,2,3)$ or $(0,\ldots,0,2,2,2)$) can be divided into holes and particles in several (or many) different ways, irrespective of the nature of a hole. To determine the contribution of holes, we need to indicate, for each state $(n_{1},\ldots,n_{N})$, a rule separating the state into definite numbers of holes and particles. Such a rule was not given in [36].
Therefore, in our opinion, the results of that work do not allow one to ascertain whether the contribution of holes is large. We have shown above that a hole is a collection of phonons. Therefore, it is meaningless to consider holes as independent excitations. The difference between the experimental value of $E(p)$ and the theoretical one, $E_{p}(p)$, can be caused by the fact that real Bose atoms have a nonzero size. At $\gamma\lesssim 10$ the curve $E_{p}(p)$ is close to the Bogolyubov one, $E_{B}(p)$ (12) [1, 41]. At the passage to a nonpoint interaction, $E_{B}(p)$ decreases, since $\nu(p)<\nu(0)$ for potentials of a reasonable shape. For example, for a 1D semitransparent ball $\nu(p)=\nu(0)\frac{\sin{pd_{0}}}{pd_{0}}$, where $d_{0}$ is the ball diameter. At $\pi/L\ll p\lesssim\pi/d_{0}$ the value of $\nu(0)-\nu(p)$ is not small and increases with $p$. Therefore, $\sqrt{\left(\frac{\hbar^{2}p^{2}}{2m}\right)^{2}+2n\nu(0)\frac{\hbar^{2}p^{2}}{2m}}-\sqrt{\left(\frac{\hbar^{2}p^{2}}{2m}\right)^{2}+2n\nu(p)\frac{\hbar^{2}p^{2}}{2m}}$ also increases with $p$, which agrees qualitatively with the experiment [36]. 7 A hole and a soliton. Lieb's hole is a stationary solution of the $N$-body Schrödinger equation for a cyclic system: $\tilde{\Psi}(x_{1},\ldots,x_{N},t)=e^{-iE_{h}(p)t/\hbar}\Psi(x_{1},\ldots,x_{N})$. This solution is characterized by a constant density: $\rho(x,t)=const$ [42]. However, the quasiclassical dark soliton, as a solution of the 1D Gross–Pitaevskii equation, is a solitary running density wave of the form $\Psi(x,t)=\Psi(x-vt)$, $\rho(x,t)=\rho(x-vt)$ [24, 25].
At the same time, a wave packet of one-hole states shows the properties of a motionless soliton [42, 43, 44] (though the density profile $\rho(x,t)$ of such a packet spreads as $t$ increases, in contrast to a quasiclassical soliton [24, 25]). Moreover, the conditional probability density $\rho_{N}(x)$ in the hole state coincides with the stationary dark-soliton profile [45]. Note also that the analysis in [25] refers to an infinite noncyclic system. In this case, the classical and quantum momenta of the soliton are different. The dispersion curves of solitons and holes are close only for the classical definition of the soliton momentum [25]. If such properties hold for a cyclic system too, then a single hole is not a soliton (despite the results in [45]), since the quantum definition of the momentum is primary. On the whole, the connection between a hole and a soliton is not quite clear [42, 43, 44, 45]. We have shown above that a hole is a collection of identical interacting phonons with the momentum $p=2\pi/L$. Possibly, a collection of identical phonons with $p=4\pi/L$ (or $p=6\pi/L$, etc.) also reveals solitonic properties. Most probably, a hole has solitonic properties only at high momenta: in this case, the hole consists of a large number of identical phonons, and a collective effect is possible. The solitonic properties of holes are interesting and worth studying in more detail. In our opinion, it is better to use zero boundary conditions, because $\rho(x,t)\neq const$ in this case, and a density wave is possible. 8 Conclusion We have shown that the hole with the momentum $p=jp_{0}$, where $p_{0}=\pm\hbar 2\pi/L$, is a collection of $j$ identical interacting phonons with the momentum $p_{0}$. Therefore, a hole is a composite excitation. If $j\sim N$, the hole corresponds to a condensate of phonons. Thus, Lieb's excitations are quite consistent with the Bogolyubov and Feynman solutions.
The traditional point of view, according to which holes are an independent type of excitation, has survived for so long because the Lieb–Liniger wave functions were not compared with the wave functions of a system of nonpoint bosons. We think that fermionicity “penetrates” into the Bethe equations because at $c=\infty$ the bosons are impenetrable and, therefore, similar to fermions. We have also proved that the largest number of quasiparticles in a Bose gas is equal to the number of atoms $N$. The present work was partially supported by the National Academy of Sciences of Ukraine (project No. 0116U003191). 9 Appendix. The functions $a_{j}$ and $b_{j}$ from Eqs. (6) and (8) satisfy the Vakarchuk–Yukhnovskii equations [17, 34] $$E_{0}=\frac{N-1}{2}n\nu(0)-\sum\limits_{\textbf{q}\neq 0}\frac{n\nu(q)}{2}-\sum\limits_{\textbf{q}\neq 0}\frac{\hbar^{2}q^{2}}{2m}a_{2}(\textbf{q}),$$ (49) $$\frac{mn\nu(q)}{\hbar^{2}}+q^{2}a_{2}(\textbf{q})-q^{2}a^{2}_{2}(\textbf{q})-\frac{1}{N}\sum\limits_{\textbf{q}_{1}\neq 0}a_{3}(\textbf{q},\textbf{q}_{1})\textbf{q}_{1}(\textbf{q}+\textbf{q}_{1})-\frac{1}{2N}\sum\limits_{\textbf{q}_{1}\neq 0}a_{4}(\textbf{q},-\textbf{q}_{1},\textbf{q}_{1})q_{1}^{2}=0,$$ (50) $$\displaystyle a_{3}(\textbf{q}_{1},\textbf{q}_{2})[E_{1}(\textbf{q}_{1})+E_{1}(\textbf{q}_{2})+E_{1}(\textbf{q}_{1}+\textbf{q}_{2})]+2\textbf{q}_{1}\textbf{q}_{2}a_{2}(\textbf{q}_{1})a_{2}(\textbf{q}_{2})-$$ $$\displaystyle-2\textbf{q}_{1}(\textbf{q}_{1}+\textbf{q}_{2})a_{2}(\textbf{q}_{1})a_{2}(\textbf{q}_{1}+\textbf{q}_{2})-2\textbf{q}_{2}(\textbf{q}_{1}+\textbf{q}_{2})a_{2}(\textbf{q}_{2})a_{2}(\textbf{q}_{1}+\textbf{q}_{2})-$$ $$\displaystyle-\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}a_{5}(\textbf{q}_{1},\textbf{q}_{2},\textbf{q},-\textbf{q})q^{2}+\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}\left[a_{4}(\textbf{q}_{1}-\textbf{q},\textbf{q}_{2},\textbf{q})(\textbf{q}_{1}-\textbf{q})\textbf{q}+\right.$$ (51)
$$\displaystyle+\left.a_{4}(\textbf{q}_{1},\textbf{q}_{2}-\textbf{q},\textbf{q})(\textbf{q}_{2}-\textbf{q})\textbf{q}+a_{4}(\textbf{q}_{1},\textbf{q}_{2},-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{q})(-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{q})\textbf{q}\right]=0,$$ $$\displaystyle b_{1}(\textbf{p})E(\textbf{p})=b_{1}(\textbf{p})E_{1}(\textbf{p})-\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}b_{2}(\textbf{q};\textbf{p})\frac{\hbar^{2}}{2m}(\textbf{p}+\textbf{q})\textbf{q}-\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}b_{3}(\textbf{q},-\textbf{q};\textbf{p})\frac{\hbar^{2}q^{2}}{2m},$$ (52) $$\displaystyle b_{2}(\textbf{q};\textbf{p})\frac{2m}{\hbar^{2}}[E_{1}(\textbf{q})+E_{1}(\textbf{p}+\textbf{q})-E(\textbf{p})]+2b_{1}(\textbf{p})\textbf{p}\textbf{q}a_{2}(\textbf{q})-2b_{1}(\textbf{p})p^{2}a_{3}(\textbf{p},\textbf{q})-$$ $$\displaystyle-2b_{1}(\textbf{p})\textbf{p}(\textbf{p}+\textbf{q})a_{2}(\textbf{p}+\textbf{q})-\frac{1}{N}\sum\limits_{\textbf{q}_{1}\neq 0}q_{1}^{2}b_{4}(\textbf{q}_{1},-\textbf{q}_{1},\textbf{q};\textbf{p})+$$ (53) $$\displaystyle+\frac{1}{N}\sum\limits_{\textbf{q}_{1}\neq 0}\left[b_{3}(\textbf{q}_{1},\textbf{q}-\textbf{q}_{1};\textbf{p})\textbf{q}_{1}(\textbf{q}-\textbf{q}_{1})+b_{3}(\textbf{q}_{1},-\textbf{q}-\textbf{q}_{1}-\textbf{p};\textbf{p})\textbf{q}_{1}(-\textbf{q}_{1}-\textbf{q}-\textbf{p})\right]=0,$$ $$\displaystyle b_{3}(\textbf{q}_{1},\textbf{q}_{2};\textbf{p})\frac{2m}{\hbar^{2}}[E_{1}(\textbf{q}_{1})+E_{1}(\textbf{q}_{2})+E_{1}(\textbf{p}+\textbf{q}_{1}+\textbf{q}_{2})-E(\textbf{p})]-2b_{1}(\textbf{p})p^{2}a_{4}(\textbf{q}_{1},\textbf{q}_{2},\textbf{p})-$$ $$\displaystyle-$$ $$\displaystyle 2b_{1}(\textbf{p})[a_{3}(\textbf{q}_{1}+\textbf{p},\textbf{q}_{2})\textbf{p}(\textbf{q}_{1}+\textbf{p})+a_{3}(\textbf{q}_{2}+\textbf{p},\textbf{q}_{1})\textbf{p}(\textbf{q}_{2}+\textbf{p})-a_{3}(\textbf{q}_{1},\textbf{q}_{2})\textbf{p}(\textbf{q}_{1}+\textbf{q}_{2})]-$$ $$\displaystyle-$$ $$\displaystyle 2b_{2}(\textbf{q}_{1};\textbf{p})a_{3}(\textbf{q}_{1}+\textbf{p},\textbf{q}_{2})(\textbf{p}+\textbf{q}_{1})^{2}-2b_{2}(\textbf{q}_{2};\textbf{p})a_{3}(\textbf{q}_{2}+\textbf{p},\textbf{q}_{1})(\textbf{p}+\textbf{q}_{2})^{2}-$$ $$\displaystyle-$$ $$\displaystyle 2b_{2}(-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p};\textbf{p})a_{3}(\textbf{q}_{1},\textbf{q}_{2})(\textbf{q}_{1}+\textbf{q}_{2})^{2}-\frac{1}{N}\sum\limits_{\textbf{q}_{4}\neq 0}q_{4}^{2}b_{5}(\textbf{q}_{4},-\textbf{q}_{4},\textbf{q}_{1},\textbf{q}_{2};\textbf{p})-$$ $$\displaystyle-$$ $$\displaystyle 2a_{2}(\textbf{q}_{1})\textbf{q}_{1}[b_{2}(\textbf{q}_{2};\textbf{p})(-\textbf{q}_{2}-\textbf{p})+b_{2}(-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p};\textbf{p})(\textbf{q}_{1}+\textbf{q}_{2})]-$$ $$\displaystyle-$$ $$\displaystyle 2a_{2}(\textbf{q}_{2})\textbf{q}_{2}[b_{2}(\textbf{q}_{1};\textbf{p})(-\textbf{q}_{1}-\textbf{p})+b_{2}(-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p};\textbf{p})(\textbf{q}_{1}+\textbf{q}_{2})]-$$ $$\displaystyle-$$ $$\displaystyle 2a_{2}(\textbf{q}_{1}+\textbf{q}_{2}+\textbf{p})(\textbf{q}_{1}+\textbf{q}_{2}+\textbf{p})[b_{2}(\textbf{q}_{1};\textbf{p})(\textbf{q}_{1}+\textbf{p})+b_{2}(\textbf{q}_{2};\textbf{p})(\textbf{q}_{2}+\textbf{p})]+$$ $$\displaystyle+$$ $$\displaystyle\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}\textbf{q}(-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{q}-\textbf{p})b_{4}(\textbf{q}_{1},\textbf{q}_{2},-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{q}-\textbf{p};\textbf{p})+$$ $$\displaystyle+$$ $$\displaystyle\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}\textbf{q}(\textbf{q}_{1}-\textbf{q})b_{4}(\textbf{q}_{1}-\textbf{q},\textbf{q}_{2},-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p};\textbf{p})+$$ $$\displaystyle+$$ $$\displaystyle\frac{1}{N}\sum\limits_{\textbf{q}\neq 0}\textbf{q}(\textbf{q}_{2}-\textbf{q})b_{4}(\textbf{q}_{1},\textbf{q}_{2}-\textbf{q},-\textbf{q}_{1}-\textbf{q}_{2}-\textbf{p};\textbf{p})=0.$$ (54) Here,
$E_{1}(\textbf{q})=\frac{\hbar^{2}q^{2}}{2m}(1-2a_{2}(\textbf{q}))$. The equation for the function $a_{4}$ is given in [17, 34]. If one of the arguments of the functions $a_{j}$ or $b_{j}$ in (49)–(54) is zero, then the corresponding $a_{j}$ or $b_{j}$ should be set to zero. The functions $a_{j+1}(\textbf{q}_{1},\ldots,\textbf{q}_{j})$ and $b_{j+1}(\textbf{q}_{1},\ldots,\textbf{q}_{j};\textbf{p})$ are invariant under the permutation of any two arguments $\textbf{q}_{l}$, $\textbf{q}_{n}$. The functions $a_{j+1}(\textbf{q}_{1},\ldots,\textbf{q}_{j})$ are also invariant under the change $\textbf{q}_{l}\rightarrow-\textbf{q}_{1}-\textbf{q}_{2}-\ldots-\textbf{q}_{j}$ for any $j$ and $l=1,\ldots,j$. As for the functions $b_{j+1}(\textbf{q}_{1},\ldots,\textbf{q}_{j};\textbf{p})$, they are invariant under the change $\textbf{q}_{l}\rightarrow-\textbf{q}_{1}-\textbf{q}_{2}-\ldots-\textbf{q}_{j}-\textbf{p}$ for any $j\geq 1$, $l=1,\ldots,j$. In [17, 34], a one-phonon state was considered, and Eqs. (49)–(54) were deduced for $b_{1}(\textbf{p})=1$. We write these equations for arbitrary $b_{1}(\textbf{p})$, so that they can be used to describe states with any number of phonons $\geq 1$. Equations (49)–(54) are exact for an infinite system: $N,V=\infty$. For a finite system, the product $\rho_{-\textbf{q}_{1}}\ldots\rho_{-\textbf{q}_{N}}\rho_{-\textbf{q}_{N+1}}\ldots\rho_{-\textbf{q}_{N+M}}$ ($M=1,2,\ldots$) reduces to a sum of terms, each of which contains at most $N$ factors of the form $\rho_{-\textbf{q}}$ (see Section 5). One needs to take this property into account while deriving the equations for $a_{j}$ and $b_{j}$, which would cause the appearance of many additional terms in Eqs. (49)–(54). However, for weak coupling these terms should be negligible. Apparently, they are negligible for nonweak coupling as well.
Otherwise, the transition from the solutions for a very large finite system to the solutions for the infinite one would occur by a jump. However, we do not expect such a jump. One can verify that the solutions of the Lieb–Liniger equations ((1) or (2)) have no such jump. Those additional terms were not considered in the literature, and we omitted them in Sections 2 and 3. [1] E.H. Lieb, Phys. Rev. 130, 1616 (1963). [2] E.H. Lieb, The Bose fluid, in: Lectures in Theoretical Physics, vol. VII C, ed. by W.E. Brittin (Univ. of Colorado, Boulder, 1965), p. 175. [3] C.N. Yang, C.P. Yang, J. Math. Phys. (N.Y.) 10, 1115 (1969). [4] M. Gaudin, The Bethe Wavefunction (Cambridge University Press, Cambridge, 2014). [5] M. Takahashi, Thermodynamics of One-Dimensional Solvable Models (Cambridge University Press, Cambridge, 1999). [6] M.A. Cazalilla, R. Citro, T. Giamarchi, E. Orignac, M. Rigol, Rev. Mod. Phys. 83, 1405 (2011). [7] Y.-Z. Jiang, Y.-Y. Chen, X.-W. Guan, Chin. Phys. B 24, 050311 (2015). [8] N.N. Bogoliubov, J. Phys. USSR 11, 23 (1947). [9] N.N. Bogoliubov, D.N. Zubarev, Sov. Phys. JETP 1, 83 (1956). [10] R.P. Feynman, Phys. Rev. 94, 262 (1954). [11] R.P. Feynman, M. Cohen, Phys. Rev. 102, 1189 (1956). [12] R.P. Feynman, Statistical Mechanics: A Set of Lectures (W. A. Benjamin, Massachusetts, 1972). [13] K. Brueckner, Theory of Nuclear Structure (Methuen, London, 1959). [14] H.W. Jackson, E. Feenberg, Rev. Mod. Phys. 34, 686 (1962). [15] D.K. Lee, F.J. Lee, Phys. Rev. B 11, 4318 (1975). [16] C.C. Chang, C.E. Campbell, Phys. Rev. B 13, 3779 (1976). [17] I.A. Vakarchuk, I.R. Yukhnovskii, Theor. Math. Phys. 42, 73 (1980). [18] T. MacFarland, S.A. Vitiello, L. Reatto, G.V. Chester, M.H. Kalos, Phys. Rev. B 50, 13577 (1994). [19] L. Reatto, G.L. Masserini, S.A. Vitiello, Physica B 197, 189 (1994). [20] M.D. Tomchenko, Ukr. J. Phys. 50, 720 (2005). [21] C.E. Campbell, E. Krotscheck, T. Lichtenegger, Phys. Rev. B 91, 184510 (2015). [22] L. Reatto, J. Low Temp. Phys. 87, 375 (1992).
[23] M.D. Tomchenko, arXiv:0904.4434 [cond-mat.other]. [24] T. Tsuzuki, J. Low Temp. Phys. 4, 441 (1971). [25] M. Ishikawa, H. Takayama, J. Phys. Soc. Jpn. 49, 1242 (1980). [26] M. Tomchenko, Ukr. J. Phys. 64, 250 (2019). [27] J.-S. Caux, P. Calabrese, Phys. Rev. A 74, 031605(R) (2006). [28] M. Tomchenko, J. Low Temp. Phys. 187, 251 (2017). [29] E.H. Lieb, W. Liniger, Phys. Rev. 130, 1605 (1963). [30] M. Gaudin, Phys. Rev. A 4, 386 (1971). [31] R. Jastrow, Phys. Rev. 98, 1479 (1955). [32] C.-W. Woo, Phys. Rev. A 6, 2312 (1972). [33] E. Feenberg, Ann. Phys. 84, 128 (1974). [34] I.A. Vakarchuk, I.R. Yukhnovskii, Theor. Math. Phys. 40, 626 (1979). [35] M. Tomchenko, J. Phys. A: Math. Theor. 48, 365003 (2015). [36] F. Meinert, M. Panfil, M.J. Mark, K. Lauber, J.-S. Caux, H.-C. Nägerl, Phys. Rev. Lett. 115, 085301 (2015). [37] A. Yu. Cherny, J. Brand, Phys. Rev. A 73, 023612 (2006). [38] M. Khodas, M. Pustilnik, A. Kamenev, L.I. Glazman, Phys. Rev. Lett. 99, 110405 (2007). [39] M. Panfil, J.-S. Caux, Phys. Rev. A 89, 033605 (2014). [40] V.N. Golovach, A. Minguzzi, L.I. Glazman, Phys. Rev. A 80, 043611 (2009). [41] M. Tomchenko, arXiv:1705.10565 [cond-mat.quant-gas]. [42] J. Sato, R. Kanamoto, E. Kaminishi, T. Deguchi, arXiv:1204.3960 [cond-mat.quant-gas]. [43] J. Sato, R. Kanamoto, E. Kaminishi, T. Deguchi, New J. Phys. 18, 075008 (2016). [44] S.S. Shamailov, J. Brand, Phys. Rev. A 99, 043632 (2019). [45] A. Syrwid, K. Sacha, Phys. Rev. A 92, 032110 (2015).
Principal Trade-off Analysis  Alexander Strang Department of Statistics University of Chicago Chicago, IL 60637 [email protected] &David SeWell Lockheed AI Center (LAIC) Lockheed Martin Corp [email protected] &Alexander Kim Lockheed AI Center (LAIC) Lockheed Martin Corp [email protected] &Kevin Alcedo Lockheed AI Center (LAIC) Lockheed Martin Corp [email protected] &David Rosenbluth Lockheed AI Center (LAIC) Lockheed Martin Corp [email protected] Abstract This paper develops Principal Trade-off Analysis (PTA), a decomposition method, analogous to Principal Component Analysis (PCA), which represents any game as a weighted sum of disc games (continuous rock-paper-scissors games). Applying PTA to empirically generated tournament graphs produces a sequence of embeddings into orthogonal 2D feature planes representing independent strategic trade-offs. Each trade-off generates a mode of cyclic competition. Like PCA, PTA provides optimal low-rank estimates of the tournament graphs that can be truncated for approximation. The complexity of cyclic competition can be quantified by counting the number of significant cyclic modes. We illustrate PTA via application to a pair of games (Blotto, Pokemon). The resulting 2D disc game representations are well suited for visualization and are easily interpretable. In Blotto, PTA identifies game symmetries and specifies strategic trade-offs associated with distinct win conditions. For Pokemon, PTA embeddings produce clusters in the embedding space that naturally correspond to Pokemon types, a design feature of the game that produces cyclic trade-offs.
Keywords game theory  $\cdot$ principal component analysis  $\cdot$ Schur decomposition  $\cdot$ disc game  $\cdot$ Fourier  $\cdot$ data visualization 1 Introduction In recent years, algorithms have achieved superhuman performance in a number of complex games such as Chess, Go, Shogi, Poker, and Starcraft [27, 13, 21, 34]. Despite impressive game play, enhanced understanding of the game is typically achieved only by additional analysis of the algorithm's game play post facto [26]. Current work overemphasizes the “policy problem” of developing strong agents, despite growing demand for a task theory which addresses the “problem problem”, i.e., which games are worth study and play [22, 9]. A task theory requires a language that characterizes and categorizes games, namely, a toolset of measures and visualization techniques that evaluate and illustrate game structure. Summary visuals and measures are especially important for complex games where direct analysis is intractable. In practice, tournaments are used to sample the game and to empirically evaluate agents. The empirical analysis of tournaments has a long history in sports analytics [18, 6], ecology and animal behavior [16, 25], and biology [31, 28]. While the primary interest in these cases is typically in ranking agents/players, tournament graphs also reveal significant information about the nature of the game being played [32]. This paper describes mathematical techniques for extracting a wealth of information about the underlying game structure directly from tournament data. While these methods can be applied to the various contexts in which tournaments are already employed in machine learning (e.g., population-based training), they open up a range of new research questions regarding the characterization of natural games, the synthesis of artificial games (cf. [22]), the approximation of games with simplified dynamics, and the strategic perturbation of games.
Fine structural characteristics of a tournament graph can be represented by low-dimensional embeddings that map competitive relationships to embedded geometry. We review and expand on methods introduced by [3], who proposed a canonical series of maps that provide a complete description of a sample tournament in terms of a sum of simple games, namely, disc games. Our contributions follow. First, we compare PCA [23] to disc game embedding, and show that disc game embedding inherits the key algebraic properties responsible for the success of PCA. Based on this analogy, we propose PTA as a general technique for visualizing data arising from competitive tasks or pairwise choice tasks. Indeed, while we focus on games for their charisma, any data set representing a skew-symmetric comparison of objects is amenable to PTA. We show that PTA provides a much richer framework for analyzing trade-offs in games than previously demonstrated via a series of examples. Our examples exhibit a wide variety of strategic structures that can be clearly visualized with PTA. Unlike previous work, we focus on the relation between embedding coordinates, which represent performance relations, and underlying agent attributes in order to enumerate the principal trade-offs responsible for cyclic competition in each game. We consider the full information content of PTA by analyzing multiple leading disc games, and by studying the decay in their importance. Important strategic trade-offs can arise in later disc games, so previous empirical work's focus on the leading disc game is myopic. These examples also raise conceptual limitations not addressed in previous work, thus outlining future directions for development. 2 Related Work Our work builds directly on [3], which used the embedding approach to introduce a comprehensive agent evaluation scheme. Their scheme uses the real Schur form (PTA) in conjunction with the Hodge decomposition to overcome deficiencies in standard ranking models.
Our work also complements efforts to explore cyclic structures in competitive systems [7, 29], economics [19, 20], and, tangentially, multi-class classification problems [4, 14]. Cycles challenge traditional gradient methods and can slow training [22, 2]. Moreover, cyclic structures in games are often intricate and difficult to disentangle, particularly among intermediate competitors. Games of skill frequently exhibit this “spinning top geometry” [10]. By summarizing cyclic structures, PTA helps identify areas of the strategy space that cause difficulty during training, or that should be targeted for diverse team design [1, 11]. In particular, we show that PTA can identify fundamental trade-offs that summarize otherwise opaque cyclic structure. Trade-offs play an important role in decision tasks and evolutionary processes outside of games, so general tools that isolate and reify trade-offs are of generic utility [22, 32]. In that sense, our attempt to visualize game structure is in line with generic data visualization efforts, which aim to convert complicated data into elucidating graphics (cf. [12, 11]). 3 Background 3.1 Functional Form Games A two-player zero-sum functional form game is defined by an attribute space $\Omega\subseteq\mathbb{R}^{T}$ and an evaluation function $f$ that defines the advantage of one agent over another given their attributes. Agents in the game can be represented by their attribute vectors $x,y\in\Omega$, the entries of which could represent agent traits, strategic policies, weights in a neural net governing their actions, or, more generally, any attributes that influence competitive behavior. The function $f$ is of the form $f:\Omega\times\Omega\rightarrow\mathbb{R}$. The value $f(x,y)$ quantifies the advantage of agent $x$ over $y$ with a real number. The evaluation function must be fair; that is, the advantage of one competitor over another should not depend on the order in which they are listed.
Consequently, $f$ must be skew-symmetric, $f(x,y)=-f(y,x)$ [29]. If $f(x,y)>0$ we say that $x$ beats $y$, and the outcome is a tie if $f(x,y)=0$. The larger $|f(x,y)|$, the larger the advantage one competitor possesses over another. We do not specify how advantage is measured, since the appropriate definition may depend on the setting. Possible examples include expected return in a zero-sum game, probability of a win minus one half, or log odds of victory. Given a set of $N$ agents $X$, pairwise comparison of all agents gives an $N\times N$ evaluation matrix $F$ where $F_{ij}=f(x_{i},x_{j})$. This matrix can be analyzed to study the structure of the game among those competitors, i.e., the resulting tournament. 3.2 Disc Games The cyclic component of a tournament can be visualized using a combination of simple cyclic games [1, 3]. The simplest cyclic functional form game is a disc game, which acts as a continuous analog to rock-paper-scissors (RPS) in two-dimensional attribute spaces. The disc game evaluation function is the cross product between the competitors' embedded attributes, $$\text{disc}(x,y)=x\times y=x_{1}y_{2}-x_{2}y_{1}=x^{T}\begin{bmatrix}0&1\\ -1&0\end{bmatrix}y=x^{T}Ry$$ (1) where $R$ is the $2\times 2$ ninety-degree rotation matrix [3]. The cross product models a basic trade-off between the two attributes. Any evaluation matrix can be represented with a sum of pointwise embeddings into a sequence of disc games. The necessary construction is reviewed in the next section. 4 Principal Trade-off Analysis (PTA) Any real, $m\times m$, skew-symmetric matrix $A$ admits a Schur decomposition (real Schur form), $QUQ^{T}$. Here $Q$ is an orthonormal $m\times\text{rank}(A)$ matrix, $U$ is block diagonal with $\text{rank}(A)/2$ blocks of size $2\times 2$ and of the form $U^{(k)}=\omega_{k}R$, where $\omega_{k}\geq 0$ is a nonnegative scalar. Each pair of consecutive columns, $[q_{2k-1},q_{2k}]$, corresponds to the real and imaginary parts of an eigenvector of $A$ scaled by $\sqrt{2}$.
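Before turning to the decomposition itself, the disc-game evaluation in Eq. (1) can be checked directly. The sketch below is a minimal illustration (the agent coordinates are invented for the example, not taken from the paper): it verifies skew-symmetry and shows that three agents spaced 120 degrees apart on the unit circle reproduce a rock-paper-scissors cycle, with advantage flowing clockwise.

```python
import numpy as np

# The 2x2 ninety-degree rotation matrix R from Eq. (1).
R = np.array([[0.0, 1.0], [-1.0, 0.0]])

def disc(x, y):
    """Disc-game evaluation of Eq. (1): disc(x, y) = x1*y2 - x2*y1 = x^T R y."""
    return float(x @ R @ y)

# Three hypothetical agents spaced 120 degrees apart on the unit circle.
agents = [np.array([np.cos(t), np.sin(t)])
          for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]

# Each agent beats the agent 120 degrees counterclockwise of it,
# so advantage flows clockwise around the origin.
for i in range(3):
    assert disc(agents[i], agents[(i + 1) % 3]) > 0
```

Since disc(x, y) equals the product of the radii times the sine of the phase difference, any three points with distinct phases within a half-turn produce such a cycle; the equilateral placement just makes all three advantages equal.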
The scalars $\omega$ are the nonnegative imaginary parts of the corresponding eigenvalues, listed in decreasing order [35, 36]. A simple linear algebra exercise demonstrates that the columns of $Q$ are also proportional to the singular vectors of $A$, and the sequence of scalars $\omega$ matches the singular values of $A$; thus a truncated expansion of $A$ using only the first $r$ blocks is equivalent to the optimal rank $2r$ approximation to $A$ under the Frobenius norm. See Appendix A. When $A$ is replaced with the performance matrix $F$, each block in the Schur decomposition acts as a scaled version of a disc game where each competitor is assigned embedding coordinates via $Q$. The transitive component can be represented on a line via the ratings, so it does not require additional visualization. The cyclic component, $F_{c}$, is given by $F_{c}=F-F_{t}$ where $F_{t}$ is the transitive component. All three of these matrices are skew-symmetric, so $F_{c}$ admits a Schur decomposition. $$F_{c}=QUQ^{T}.$$ (2) As in PCA, we consider a low rank approximation of $F_{c}$ associated with expansion onto the first $k$ disc games, where $k$ is chosen large enough to satisfy a desired reconstruction accuracy. The optimal rank $2r$ approximation for $F_{c}$ is given by replacing $Q$ with $Q^{(1:2r)}$, and $U$ with $U^{(1:2r)}$ in Equation 2, where $Q^{(1:2r)}$ is the first $2r$ columns of $Q$, and $U^{(1:2r)}$ is the upper $2r$ by $2r$ minor of $U$. Optimality is measured using the Frobenius norm error. The matrix $Q^{(1:2r)}$ provides a set of basis vectors. Projection onto those basis vectors defines a new set of coordinates, thereby embedding the competitors. Specifically, let: $$\hat{Y}=Q^{(1:2r)^{T}}F_{c}=U^{(1:2r)}{Q^{(1:2r)^{T}}}.$$ (3) and scale each pair of embedding coordinates by the associated eigenvalue so that $\vec{y}_{k}(i)=[y_{2k-1,i},y_{2k,i}]=\omega_{k}^{-1/2}[\hat{y}_{2k-1,i},\hat{y}_{2k,i}]={\omega_{k}}^{1/2}[q_{2k-1,i},q_{2k,i}]$.
Then $\vec{y}_{k}(i)$ maps from competitor indices, $i$, to points in $\mathbb{R}^{2}$, and the set $Y=\{\vec{y}_{k}\}$ is a collection of planar embeddings, where $\vec{y}_{k}$ is given by projection onto a feature plane spanned by $q_{2k-1}$ and $q_{2k}$. Note that the Schur decomposition is only unique up to rotation within each feature plane, since complex conjugate pairs of eigenvectors of $F_{c}$ are only uniquely defined up to their complex phase. Thus we consider two embeddings equivalent if they agree up to rotation within each planar embedding. The evaluation $F_{c_{ij}}^{(2r)}$ between agent $i$ and agent $j$ equals a sum over each embedding $\vec{y}_{k}$ of the cross product $\vec{y}_{k}(i)\times\vec{y}_{k}(j)$ (Appendix B). That is: $$F_{c_{ij}}^{(2r)}=\sum_{k=1}^{r}\vec{y}_{k}(i)\times\vec{y}_{k}(j)=\sum_{k=1}^{r}\text{disc}(\vec{y}_{k}(i),\vec{y}_{k}(j)).$$ (4) Thus, restricted to each planar embedding, $F_{c}^{(2r)}$ is a disc game, and the optimal rank $2r$ approximation of $F_{c}$ is a linear combination of disc games applied to the sequence of planar embeddings $\{\vec{y}_{k}\}_{k=1}^{r}$. This decomposition is useful for two reasons. First, it depends on a spectral decomposition of $F_{c}$, so it inherits the key properties that account for the success of PCA. An equivalent construction is introduced in [8], where it is called the “blade-chest-inner” model. The construction in [8] is not based on a spectral decomposition, so it lacks orthogonality and low rank optimality. In PTA, the embeddings are projections onto orthogonal planes, so each embedding encodes independent information about cyclic competition. The planes act like feature vectors, and each is typically associated with some strategic trade-off (see Section 5). Therefore, as PCA identifies principal components, PTA identifies principal trade-offs: orthogonal planes associated with a sequence of fundamental cyclic modes.
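The construction in Eqs. (2)-(4) can be sketched numerically. The snippet below is a minimal illustration under assumptions the text does not fix: a random skew-symmetric matrix stands in for the evaluation matrix $F$, and the transitive part is taken from row-mean ratings. For a skew-symmetric matrix, the real Schur blocks can be recovered from the complex eigendecomposition, which is what the code uses; it then builds the planar embeddings $\vec{y}_k$ and checks the disc-game reconstruction of Eq. (4).

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
A = rng.standard_normal((m, m))
F = A - A.T                          # stand-in skew-symmetric evaluation matrix

# Transitive part from row-mean ratings (our simplification of the
# Hodge-style split; the text does not specify a rating model here).
r = F.mean(axis=1)
F_t = r[:, None] - r[None, :]
F_c = F - F_t                        # cyclic component, still skew-symmetric

# Eigenvalues of a skew-symmetric matrix come in conjugate pairs +/- i*omega;
# sort so eigenvectors with the largest positive imaginary part come first.
w, V = np.linalg.eig(F_c)
order = np.argsort(-w.imag)
w, V = w[order], V[:, order]

def embeddings(n_pairs):
    """Planar embeddings y_k = omega_k^{1/2} [q_{2k-1}, q_{2k}], where the
    q-columns are the real/imaginary parts of an eigenvector scaled by sqrt(2)."""
    Y = []
    for k in range(n_pairs):
        omega = max(w[k].imag, 0.0)  # clip round-off on near-zero pairs
        q1 = np.sqrt(2.0) * V[:, k].real
        q2 = np.sqrt(2.0) * V[:, k].imag
        Y.append(np.sqrt(omega) * np.stack([q1, q2], axis=1))
    return Y

def reconstruct(Y):
    """Sum of disc games, Eq. (4): F_ij ~ sum_k disc(y_k(i), y_k(j))."""
    out = np.zeros((Y[0].shape[0], Y[0].shape[0]))
    for yk in Y:
        out += np.outer(yk[:, 0], yk[:, 1]) - np.outer(yk[:, 1], yk[:, 0])
    return out
```

Retaining all $m/2$ pairs recovers $F_c$ up to round-off, while truncating to fewer pairs gives the optimal low-rank approximation, mirroring how PCA truncates a singular value expansion.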
The two decompositions differ since PCA uses the singular value decomposition, while PTA uses the real Schur form. Nevertheless, the sequence of embeddings forms optimal low rank approximations to $F_{c}$, where the importance of each embedding is quantified by the associated eigenvalue. Thus, the sequence of eigenvalues determines the number of disc game embeddings, $r$, required to achieve a sufficiently accurate approximation of $F_{c}$. The number of disc games is half the effective rank of $F_{c}$, and is a natural measure of the complexity of cyclic competition. The complexity is distinct from the overall intensity of cyclic competition or the game balance, which depend on $\|F_{c}\|$ instead of its rank [29]. Instead, it counts the number of distinct cyclic modes in the evaluation matrix. It is possible to have many distinct, yet weak, cyclic trade-offs, or one very strong cyclic trade-off. Second, the disc game construction encodes performance relations via geometry. Given coordinates $\vec{y}(i)$ and $\vec{y}(j)$, the advantage of competitor $i$ over competitor $j$, given by $\vec{y}(i)\times\vec{y}(j)$, equals twice the signed area of the triangle with vertices at the origin, $\vec{y}(i)$, and $\vec{y}(j)$. In polar coordinates, each point in a disc game has a radius and a phase. The cross product $\text{disc}(\vec{y}(i),\vec{y}(j))$ equals the product of the radii times the sine of the phase difference between $\vec{y}(i)$ and $\vec{y}(j)$. So, the farther a competitor is embedded from the origin, the more intensely they are involved in the associated cyclic mode. For a fixed radius, one competitor gains the most advantage when it is embedded ninety degrees clockwise from its opponent, and possesses an advantage as long as it is embedded clockwise of its opponent. Thus, advantage flows clockwise about the origin. We visualize this flow with a circulating vector field.
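The complexity measure just described — the number of disc games required to reach a target reconstruction accuracy — can be sketched as a short helper. This is our own illustration: it counts disc games by the retained fraction of $\sum_k \omega_k^2$, which tracks the squared Frobenius norm of $F_c$ (each $\omega_k$ contributes two equal singular values, so the common factor of two cancels in the fraction).

```python
import numpy as np

def complexity(omegas, threshold=0.95):
    """Smallest number of disc games whose cumulative spectral energy
    (fraction of sum(omega_k^2)) reaches `threshold`. The 0.95 default
    mirrors the 95% standard used in the text."""
    w = np.sort(np.asarray(omegas, dtype=float))[::-1]
    frac = np.cumsum(w ** 2) / np.sum(w ** 2)
    return int(np.searchsorted(frac, threshold) + 1)
```

For example, for weights [3, 1, 1, 0.1] the first disc game alone carries about 82% of the energy, so three games are needed to pass the 95% standard, even though the trailing modes are individually weak.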
These geometric properties allow the sequence of disc games to encode a variety of cyclic structures in interpretable visuals. Our subsequent analysis relies heavily on these properties. 5 Experiments Here, we illustrate the graphical power of PTA via Blotto and Pokemon. Both exhibit interesting cyclic structure. We emphasize the interpretation of each principal trade-off in terms of game strategy to show that PTA reveals diverse, fine-grained game structure based only on empirical game data. 5.1 Colonel Blotto Colonel Blotto is a zero-sum, simultaneous action, two player resource allocation game [15]. Each player possesses $N$ troops to distribute to $K$ zones. Each zone has an associated payout $Z_{k}$. A zone is conquered by a player if they allocate more troops to that zone than their opponent. The conquering player receives the payout. Ties result in both players receiving 0 payout. The player with the highest total payout wins the match. All allocations are revealed simultaneously. In the simplest case, the payouts are uniform across zones, so the player who conquers the most zones wins the game. Unweighted Blotto is a highly cyclic game since there is no dominating strategy. Every strategy admits a counter. Unless $K=N$ or $K\leq 2$, all allotments lose to some other allotment. To defeat an allotment, adopt the maxim “lose big, win small”: mimic the allotment, then redistribute all the units from the zone with the most units as uniformly as possible across the remaining zones. Then, unless all zones were allotted one unit, the exploiting strategy sacrifices a loss in one zone to win in more than one other zone. In general, the more an allotment commits to a single zone, the more easily it is defeated. Unweighted Blotto is also relatively complex, since the zones are indistinguishable. Thus unweighted Blotto admits a $K!$-fold symmetry with respect to the zone labels. Introducing nonuniform weights breaks the exchange symmetry of the zones.
This changes the set of possible win conditions, and subsequently the overall cyclicity, complexity, and strategic trade-off structures revealed by PTA. We consider each unique strategy as a separate “agent”, parameterized by the corresponding allotment strategy. We generate agents by randomly sampling over the strategy space using a Dirichlet distribution with support dimension equal to the number of zones. After sampling, we compare each pair of strategies in the population. Each matchup is deterministic and results in a win, loss, or tie, assigned scores (0.5, 0, -0.5). We construct the associated evaluation matrix by setting $F_{ij}$ to the score of strategy $i$ against strategy $j$. Blotto Example 1 Table 1 summarizes the principal trade-offs associated with each disc game. These trade-offs are the most important sources of cycles in the tournament, accounting for 80% of its structure. PTA allows elegant visualization of relevant game structure by reducing a game to a small set of key trade-offs. We start by looking at the $K$ = 3, $N$ = 75 Blotto game with uniform payouts. In general, the number of distinct allotments in a $K$ battlefield, $N$ troop Blotto game grows as $\mathcal{O}(N^{K-1})$, but the complexity, which reflects the underlying number of cyclic modes, converges to a constant value associated with a continuous Blotto game, where commanders can allocate an arbitrary fraction of their force to each zone. Unweighted $K$ = 3, $N$ = 75 Blotto admits 2926 allotments, but has a 3!-fold exchange symmetry under permutations of the battlefield labels, leaving roughly 488 distinct allotments. We require only 3 disc games to reconstruct the evaluation matrix to $\approx$ 80% accuracy, 6 at $\approx$ 90% accuracy, and 12 at $\geq$ 95% accuracy. Thus the game has complexity 12 at a 95% standard.
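The tournament-generation procedure described above can be sketched as follows. The score convention (0.5, 0, -0.5) and the Dirichlet sampling follow the text; the rounding of Dirichlet samples to integer allotments is our own simplification, since the text does not specify one.

```python
import numpy as np

rng = np.random.default_rng(1)

def blotto_score(a, b, payouts):
    """Score of allotment a against b: +0.5 win, 0 tie, -0.5 loss."""
    pa = sum(p for x, y, p in zip(a, b, payouts) if x > y)
    pb = sum(p for x, y, p in zip(a, b, payouts) if y > x)
    return 0.5 * float(np.sign(pa - pb))

def sample_allotments(n_agents, n_troops, n_zones):
    """Dirichlet samples over the simplex, floored to integer troop counts;
    rounding remainders are assigned to zone 0 (our simplification)."""
    X = rng.dirichlet(np.ones(n_zones), size=n_agents)
    A = np.floor(X * n_troops).astype(int)
    A[:, 0] += n_troops - A.sum(axis=1)
    return A

def evaluation_matrix(allotments, payouts):
    """Skew-symmetric matrix F with F_ij = score of strategy i vs strategy j."""
    n = len(allotments)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s = blotto_score(allotments[i], allotments[j], payouts)
            F[i, j], F[j, i] = s, -s
    return F
```

The counter-example quoted later in the text checks out under this scoring: with uniform payouts, [70, 0, 5] wins zones 1 and 3 against [38, 37, 0] while losing only zone 2, so its score is +0.5.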
Trade-offs 4 - 12 represent refinements of the trade-offs present in the first three disc games, so PTA really allows a reduction in complexity from 2926 allocations (absent prior knowledge regarding symmetries) to 3 fundamental cyclic modes. Thus, PTA can effectively separate the underlying complexity of a game from the size of its strategy space. See Appendix A for more discussion of complexity. The exchange symmetry of the zones is apparent in the sequence of eigenvalues, $\omega_{k}$, representing disc game importance. Exchanges introduce 6 permutations under which the evaluation matrix is invariant. Consequently, the $\omega_{k}$ come in sets of three, where each $\omega_{k}$ represents a pair of eigenvectors. Eigenvectors associated with identical eigenvalues are not uniquely defined. Instead, they are drawn from a subspace of dimension equal to the multiplicity of the repeated eigenvalue. Consequently, all of the eigenvectors $Q$ are chosen arbitrarily from six-dimensional spaces. When the evaluation matrix has repeated eigenvalues, the associated disc game embeddings are not uniquely defined. Any unitary transform of the set of eigenvectors sharing an eigenvalue defines a valid embedding. Thus, symmetry presents an unusual challenge: degeneracy. In our case, the disc games come in sets of three, each representing an arbitrary rotation of a six-dimensional object. Consequently, we consider multiple disc games simultaneously. This issue was not addressed in previous work, which largely considered only a single dominant disc game. Generic games should not exhibit such strong symmetries, so such degeneracy will be rare, and likely confined to toy examples. We analyze the three leading disc games to identify the most important allocation trade-offs. Figure 1 shows the first three disc games colored by rating, allocation to the three zones, and the mapping to phase and radius in each disc game as a function of allocation.
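One practical consequence of the degeneracy discussed above is that disc games sharing an eigenvalue should be inspected together rather than one at a time. A small helper (our own sketch; the tolerance is an illustrative choice, not from the text) groups near-equal $\omega_k$ so that embeddings defined only up to a unitary mix can be analyzed as a block.

```python
import numpy as np

def degenerate_groups(omegas, rtol=1e-3):
    """Group near-equal disc-game weights omega_k. Returns lists of indices
    into the descending-sorted weights; embeddings within a group are only
    defined up to a unitary mix, so they should be treated together."""
    w = np.sort(np.asarray(omegas, dtype=float))[::-1]
    if w.size == 0:
        return []
    groups, current = [], [0]
    for k in range(1, w.size):
        ref = w[current[0]]          # compare against the group's leading weight
        if abs(w[k] - ref) <= rtol * max(ref, 1e-30):
            current.append(k)
        else:
            groups.append(current)
            current = [k]
    groups.append(current)
    return groups
```

On a spectrum with the threefold repetition described above, such as weights (3, 3, 3, 1, 1, 0.5), the helper returns the groups {0, 1, 2}, {3, 4}, and {5}, flagging the first three disc games as a single six-dimensional degenerate object.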
Each shares the same eigenvalue, so all are equally important and could be mixed. Nevertheless, these three disc games represent distinct trade-offs in allocations that can be easily explained. The specific trade-offs can be identified directly from the disc games when colored by allocation. Consider the points labelled 1, 2, and 3 in the first disc game. Each maximizes the radius of the scatter cloud while moving along its boundary, so represents the allocations most involved in the cycle. The low-rated points at the bottom of the scatter allocate primarily to one zone (yellow in panels 2 - 4). Moving clockwise, the next extremum occurs at the top of the scatter. It is high-rated, and has nearly equal allocation across all three zones (colored green in panels 2 - 4). Uniform allocations are rated highly since they perform well against most randomly sampled allocations, particularly those lying along a line connecting a corner of the simplex to its center. This induces a transitive trend among the bulk of the allocations, moving from allocations that prioritize one zone to allocations that treat the zones equally. This transitive trend is represented by the general shift of the disc game leftward off the origin. This subset of allocations competes transitively, producing the regular gradient from purple to yellow in rating when moving clockwise from the bottom to the top of the scatter. Not all allocations satisfy this transitive trend. Allocations that prioritize two zones counter the uniform strategy, and are countered by allocations that prioritize a single zone. For example, allocation [70,0,5] defeats [38,37,0]. Thus, allocations lying on the midpoints of an edge of the simplex lose to allocations near either neighboring endpoint. These counters close the cycle, and are represented by the rightmost pair of corners labelled 2 in disc game 1.
Panels 2 - 4 show that each such corner receives an intermediate allocation in two zones (green), but little to none in the third (dark blue). Similar visual analysis identifies the RPS cycles among cyclic permutations of allocations [H,M,L] and [L,M,H] shown in disc games 2 and 3. For example, the leftmost corner of the scatter cloud shown in disc game 2 receives a high allocation in zone 1 (teal), an intermediate allocation in zone 2 (blue-green), and a low allocation in zone 3 (dark blue). Walking from R to P to S, the allocation pattern shifts cyclically. The same analysis applies to disc game 3, starting from [L,M,H]. Figure 2 shows the phase and radius assigned to each allocation in the simplex. Strikingly, subsequent disc games imitate the trade-offs of disc games 1-3, only at higher frequency and on a smaller spatial scale in allocation. This suggests that the disc games may act like Fourier modes, where early disc games capture low-frequency, global trade-offs, and later disc games capture high-frequency, local trade-offs. It also suggests that orthogonality may not be the appropriate notion of independence for trade-offs. A sharper notion of equivalency is needed. Methods like nonnegative matrix factorization, which address similar issues among PCA features [17], suggest an avenue for further development. An example that produces explicit sine series is discussed in the Appendix. Blotto Example 2 Next, consider a weighted three zone example with weights $[2,3,4]$. This weighting changes the win conditions. There are now two distinct types of win conditions: either win any two zones, or tie on one zone and win the higher-valued of the remaining two. The former win condition is the same as in the unweighted case, and applies to the majority of allocation pairs. The latter case disallows ties, and breaks the exchange symmetry central to the previous example.
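One way to encode the weighted win conditions above is the sign of the weighted zone-win differential: with weights $(2,3,4)$ this reproduces both conditions, winning any two zones and tying one zone while winning the higher-valued of the other two. A minimal sketch, assuming a small instance with $N=10$ units (our choice for illustration):

```python
import itertools
import numpy as np

WEIGHTS = (2, 3, 4)  # zone values from the weighted example
N = 10               # total units; a small assumed instance

allocs = [a for a in itertools.product(range(N + 1), repeat=3) if sum(a) == N]

def payoff(x, y, w=WEIGHTS):
    # Sign of the weighted zone-win differential. With weights (2,3,4) this
    # reproduces both win conditions: winning any two zones, or tying one
    # zone and winning the higher-valued of the remaining two.
    return np.sign(sum(wi * np.sign(xi - yi) for wi, xi, yi in zip(w, x, y)))

F = np.array([[payoff(x, y) for y in allocs] for x in allocs], dtype=float)
```

For instance, an allocation that wins only the most valuable zone still loses, while tying one zone and winning the higher-valued remaining zone wins.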
As a result, the eigenvalues no longer come in repeated triples (though they still cluster in groups of three and decay at essentially the same rate). Therefore, each disc game is uniquely defined. Figure 3 shows the first three disc games with the same coloring and phase plot structure as before. Note the cyclic structure of the disc games. Each scatter cloud forms an annulus around the origin, shifted so that it passes close to the origin on one side. In fact, the scatter patterns in the three disc games are roughly identical after rotation. The near symmetry of the first three disc games reflects the exchange symmetry in the outright win conditions for each zone. This demonstrates that PTA can discover fundamental game symmetries absent a priori knowledge. Each disc game represents a trade-off associated with allocation to a single zone. This is most visible when looking diagonally from bottom left to top right in columns 2-4. The phase about the annulus in the first disc game is closely correlated with allotment to the third zone, as illustrated by the monotonic change in color from purple to yellow about the annulus. Similarly, phase in the second disc game is associated with allotment to zone two, and phase in the third disc game is associated with allotment to zone one. The order of this mapping is also consistent with the payoff structure: zone three is more important than zone two, which is more important than zone one. The correlation of phase with allotment is apparent in the phase plots provided in the last column. Phase in the first disc game completes one full cycle moving down from the top corner of the simplex (exclusive allotment to zone 3) to the bottom edge (no allotment to zone 3). The same pattern repeats for the subsequent disc games, only with respect to a different zone. As before, each disc game admits a clear interpretation. Each annulus is shaped roughly like a capital “D”.
In the $k$th disc game, strategies that overallot to the $k$th zone appear at the clockwise-most corner of the “D”, so they are the most easily exploited. Moving clockwise, ratings increase with an approach to uniform allotment, then decrease again as underallotment to the associated zone prevents uniform allotment. This, then, is the underlying trade-off: overallotment ensures losses in the other zones, while underallotment ensures a loss in the focal zone. 5.2 Pokemon We conclude by analyzing Pokemon. Pokemon originated on the Nintendo Game Boy console, but has since been played on a variety of media, including playing cards [24]. Pokemon is of considerable interest from a game design perspective, since the creators must design trade-offs that keep the game balanced and engaging. The game is made up of creatures, called Pokemon, that come in many varieties. In particular, the design should reward players for collecting diverse teams: each Pokemon has a type, and each type has its own set of strengths and weaknesses. These types satisfy interlocking cyclic relationships. The data used in this analysis comes from an open-source Kaggle data set [5]. The original data has 800 Pokemon, but we removed the 65 “legendary” Pokemon to simplify the analysis. The data consists of battle outcomes and Pokemon attributes. Battle outcomes were converted into an evaluation matrix using logistic regression (see Appendix E). Here, we apply the Schur decomposition directly to $F$ to show that disc game embedding can successfully isolate a dominant transitive component. Figure 4 shows three of the first four disc games, chosen for their significance. The first disc game is the most important, and is clearly transitive, since all points fall on a curve that does not enclose the origin. Position along the curve is closely correlated with speed, so speed determines rating.
We query by attribute to interpret the remaining disc games. To start, consider the “type” attribute. The second disc game is clearly clustered by type (see Figure 4). A variety of RPS relationships are apparent among the type clusters. Any loop of clusters containing the origin corresponds to a cycle of type advantage. The intensity of the corresponding cycle (curl) is proportional to the area of the convex hull formed by the clusters. Focus on the large clusters most involved in the trade-off, i.e. furthest from the origin. Figure 5 summarizes the RPS relations between these clusters. First, notice the highlighted triangle formed by the Water-Fire-Grass RPS relationship. The disc game shows the expected relationship among the three types, since the triangle contains the origin. Thus, PTA successfully identifies known game structure without any domain-specific knowledge. Additional clusters on the outer ring satisfy more intricate relations. The other three types are “bug”, “rock”, and “ground”. To summarize these relations we construct a coarse-grained evaluation matrix, $\hat{F}$. Specifically, $\hat{F}_{ij}$ is the average performance of Pokemon of type $i$ versus Pokemon of type $j$ in the second disc game. The associated matrix heat map is shown in the middle panel of Figure 5. The types are ordered by angle moving clockwise about the origin. We compared these relations with available game design matrices known as “attack matrices”. These matrices have a row and column for each Pokemon type, with each entry listing the advantage of one type over the other. We use the attack matrix provided in [33]. An attack matrix is written in terms of multiples, so Pokemon that are evenly matched have a $1\times$ advantage. We bucket the range of attack multipliers $a_{ij}$ into 5 bins ranging from $0\times$ to $2\times$, then skew-symmetrize via $(A-A^{T})$. The result is the rightmost panel in Figure 5.
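The coarse-graining step above can be sketched as a group average of pairwise disc-game performance. The data below is a hypothetical stand-in (synthetic 2-D embedding coordinates and type labels, not the Pokemon data); the resulting $\hat{F}$ is automatically skew-symmetric, since the cross product is antisymmetric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 2-D coordinates of 30 agents in one disc game and a
# type label per agent (the real analysis uses Pokemon types).
Y = rng.normal(size=(30, 2))
types = np.arange(30) % 3  # three synthetic type clusters, 10 agents each

def disc(u, v):
    # Signed cross product: performance of u against v (orientation is a
    # convention).
    return u[0] * v[1] - u[1] * v[0]

# Coarse-grained matrix: average performance of type-i agents vs type-j agents.
k = types.max() + 1
F_hat = np.zeros((k, k))
for i in range(k):
    for j in range(k):
        F_hat[i, j] = np.mean([disc(u, v) for u in Y[types == i] for v in Y[types == j]])
```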
The coarse-grained summary $\hat{F}$ is strikingly similar to the provided attack matrix. The apparent structural parity of these two matrices highlights the virtues of PTA. Without any domain knowledge, access to attribute data, or any explicit instruction to identify clusters, PTA clustered Pokemon by their most relevant attribute (type) and encoded a game mechanism (type-specific attack multipliers) directly in the cluster locations. Conversely, the second disc game shows how cyclic relations introduced at the mechanism level translate into realized cyclic relations in actual performance. Coloring the disc games by “generation”, i.e. Pokemon release date, reveals design choices. The game is frequently updated by the addition of new Pokemon. Updates present a design challenge: game designers must introduce desirable new Pokemon without upsetting the game balance. The fourth disc game, shown in the far right plot of Figure 4, is balanced in that rating does not predict phase, and instead correlates with radius. Strong and weak Pokemon are closer to the origin, while Pokemon of intermediate rating are more involved in the trade-off. This reveals a spinning top structure characteristic of many games [10]. Instead of rating, generation predicts phase. Each generation possesses an advantage over its predecessor, as illustrated by the fade from purple to yellow. Balance is retained since generational advantage is periodic. The same clockwise generation shift reappears in the second disc game. Within type, new beats old. For example, the bottom-most cluster (grass) clearly trends from old to young. Cross-type relations are largely unchanged. 6 Conclusion Following Balduzzi et al. [3], we have demonstrated that all evaluation matrices admit an expansion as a sum of disc game embeddings. We suggest the name PTA based on the close analogy with PCA.
Through examples, we have demonstrated that embeddings produced by PTA can reveal a surprising variety of competitive structures from outcome data alone. Without prior knowledge of Pokemon, PTA was able to reveal trade-offs in the game arising from speed, type, and generation attributes. Game design choices related to both type and generation were discovered without a priori knowledge of their existence. Likewise, without any knowledge of the game rules or win conditions, PTA identifies symmetries and win-condition trade-offs in Blotto. These methods are quite general and can be applied to any two-player constant-sum game, or any decision problem involving pairwise choice. Future work could expand on the class of games and provide more general methods for finding embeddings, such as a functional theory connecting performance with attribute space. Appendix B Principal Trade-off Analysis B.1 Schur Decomposition is a Sum of Disc Games Here we prove that the Schur decomposition (real Schur form) is equivalent to a sum of disc games applied to the embedding maps $\vec{y}_{k}$. Recall the embedding construction. Given a skew-symmetric matrix $F$, write $F=QUQ^{\intercal}$ where $Q$ is real and orthonormal, $U$ is block diagonal with diagonal blocks $\omega_{k}R$, and $R$ is the $2\times 2$ ninety-degree rotation matrix. Let $\vec{y}_{k}(i)=\omega_{k}^{1/2}[q_{i,2k-1},q_{i,2k}]$.
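This construction can be checked numerically. The sketch below is our own verification, using the Hermitian eigendecomposition of $iF$ in place of the real Schur form (a numpy-only convenience): splitting each positive-eigenvalue eigenvector into real and imaginary parts yields the planar embeddings, and summing the resulting cross-product terms reproduces $F$ exactly for a generic even-dimensional skew-symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
F = A - A.T  # a generic real skew-symmetric "evaluation matrix"

# iF is Hermitian, so eigh applies; the eigenvalues of F are pairs +-i*omega_k.
lam, V = np.linalg.eigh(1j * F)

# Keep one eigenvector per conjugate pair (lam > 0) and split it into real and
# imaginary parts to recover the planar embedding for each disc game.
recon = np.zeros_like(F)
for k in np.where(lam > 1e-10)[0]:
    w, v = lam[k], V[:, k]
    a = np.sqrt(2 * w) * v.imag   # first embedding coordinate
    b = np.sqrt(2 * w) * v.real   # second embedding coordinate
    recon += np.outer(a, b) - np.outer(b, a)
```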
Then, the rank $2r$ approximation to $F$ is: $$\displaystyle F_{ij}^{(2r)}$$ $$\displaystyle=e_{i}^{T}\left(\sum_{k=1}^{r}\omega_{k}[q_{2k-1};q_{2k}]^{T}R[q_{2k-1};q_{2k}]\right)e_{j}$$ (5) $$\displaystyle=\sum_{k=1}^{r}\omega_{k}[q_{i,2k-1};q_{i,2k}]^{T}R[q_{j,2k-1};q_{j,2k}]$$ $$\displaystyle=\sum_{k=1}^{r}\omega_{k}(q_{i,2k-1}q_{j,2k}-q_{i,2k}q_{j,2k-1})$$ $$\displaystyle=\sum_{k=1}^{r}\sqrt{\omega_{k}}[q_{i,2k-1},q_{i,2k}]\times\sqrt{\omega_{k}}[q_{j,2k-1},q_{j,2k}]$$ Recalling the embedding construction, write: $$F_{ij}^{(2r)}=\sum_{k=1}^{r}\vec{y}_{k}(i)\times\vec{y}_{k}(j)=\sum_{k=1}^{r}\text{disc}(\vec{y}_{k}(i),\vec{y}_{k}(j)).$$ (6) Thus, restricted to each planar embedding, $F^{(2r)}$ is a disc game, and the optimal rank $2r$ approximation of $F$ is a linear combination of disc games applied to the sequence of planar embeddings $\{\vec{y}_{k}\}_{k=1}^{r}$. $\square$ B.2 PTA and Fourier Series Both the $[1,1,1]$ and the $[2,3,4]$ Blotto examples exhibit strikingly modal disc games that repeat at increasing frequency, and on smaller spatial scales, with increasing disc game number. These patterns suggest an analogy to Fourier series. To make the analogy more concrete we present one last example. Consider $[1,2,4]$ Blotto. Since the net value of the first two zones is less than the value of the third zone, a player wins the overall game if they win the third zone, or tie in the third zone and win the second. Otherwise they tie in all zones or lose. Thus, the performance function is $f([x_{1},x_{2},x_{3}],[y_{1},y_{2},y_{3}])=\text{sign}(x_{3}-y_{3})+\chi_{z_{3}=0}(x-y)\,\text{sign}(x_{2}-y_{2})$, where $\chi(z)$ is the indicator function for the event in the subscript. Performance can be reduced to a comparison of a single agglomerated trait, $w(x)=x_{3}+\frac{1}{N-x_{3}+1}x_{2}$. Then $f(x,y)=\text{sign}(w(x)-w(y))$. Thus, performance is a step function applied to the difference $w(x)-w(y)$. The difference is dominated by the difference in allocations to the third zone.
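The reduction to the agglomerated trait can be checked by brute force. The sketch below (assuming $N=10$ units, our choice of toy instance) compares the two-term performance function against $\text{sign}(w(x)-w(y))$ over all allocation pairs and finds no mismatch:

```python
import itertools
import numpy as np

N = 10  # total units; a small assumed instance
allocs = [a for a in itertools.product(range(N + 1), repeat=3) if sum(a) == N]

def f(x, y):
    # Win zone 3 outright, or tie zone 3 and win zone 2 (the [1,2,4] rules).
    return np.sign(x[2] - y[2]) + (x[2] == y[2]) * np.sign(x[1] - y[1])

def w(x):
    # Agglomerated trait: allocations compare exactly via w.
    return x[2] + x[1] / (N - x[2] + 1)

mismatches = sum(f(x, y) != np.sign(w(x) - w(y)) for x in allocs for y in allocs)
```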
Figure 6 shows the first nine disc games. Notice that all allocations are embedded onto circles, or, in the first case, a half circle. Moreover, phase (position along each circle) is entirely a function of the agglomerated trait $w(x)=x_{3}+\frac{1}{N-x_{3}+1}x_{2}$. This form is apparent in the phase panels, where phase is close to constant for fixed allocation to zone 3, but is tilted slightly to account for allocation to zone 2. The first disc game is transitive and completes one half circle moving clockwise from zero allocation to zone 3 to exclusive allocation to zone 3. Disc games 2 and 3 complete 1.5 and 2.5 rotations, respectively, when moving from zero allocation to zone 3 to exclusive allocation to zone 3. The pattern continues for the first 9 disc games. Moreover, the circle radii decay geometrically (as shown by the sequence of gold scatter points in the last panel of Figure 7). These features are hallmarks of a sine series embedding. We show below that any translationally invariant performance function of a single attribute can be represented by a sum of disc games, where the embedding into each disc game maps to a circle, the attribute maps to a phase coordinate around the circle, and the radii of the circles in each disc game are controlled by the coefficients of a sine series expansion. The performance function $f(x,y)$ only depends on the difference in allocations, $x-y$, so it is translationally invariant and a function of a single trait, $w(x)$. Thus, it admits a sine series expansion. Note, the subsequent analysis does not guarantee low rank optimality, so it only shows that disc game embedding via sine series is possible, not that PTA will necessarily produce such an embedding. Consider a performance function of the form: $$f(x,y)=A\sin(2\pi\omega(x-y))$$ (7) for $x,y\in\Omega\subset\mathbb{R}$, for arbitrary amplitude $A$ and frequency $\omega$. Performance functions of this form are easy to embed, since the disc game uses a cross product.
The cross product between two points in a plane, expressed in polar coordinates, is the product of their radii times the sine of the difference in their phases. Therefore, if: $$\vec{y}(x)=\sqrt{|A|}[\cos(2\pi\text{sign}(A)\omega x),\sin(2\pi\text{sign}(A)\omega x)]$$ (8) then: $$\displaystyle\text{disc}(\vec{y}(x),\vec{y}(y))=|A|\sin(2\pi\text{sign}(A)\omega(x-y))=\text{sign}(A)|A|\sin(2\pi\omega(x-y))=A\sin(2\pi\omega(x-y))=f(x,y).$$ (9) Lemma 1: [Trigonometric Performance Functions of One Trait] If $f(x,y)=A\sin(2\pi\omega(x-y))$ for $x,y$ both in a one-dimensional trait space, then $f$ is disc game embeddable using the embedding: $$\vec{y}_{k}(x)=\sqrt{|A|}[\cos(2\pi\text{sign}(A)\omega x),\sin(2\pi\text{sign}(A)\omega x)].$$ Notice that this construction maps intervals of length $1/\omega$ in $\Omega$ to the circle of radius $\sqrt{|A|}$ centered at the origin. It follows that, if a performance function is embeddable onto a circle centered at the origin, then there exists a mapping from trait space to the real line where performance is of the form (7). This result extends easily to linear combinations of sinusoidal functions with varying frequencies. Consider a performance function of the form: $$f(x,y)=\sum_{k=1}^{n}A_{k}\sin(2\pi\omega_{k}(x-y))$$ (10) Then $f$ can be recovered using a sum of $n$ disc game embeddings, where the $k^{th}$ embedding has the form: $$\vec{y}_{k}(x)=\sqrt{|A_{k}|}[\cos(2\pi\text{sign}(A_{k})\omega_{k}x),\sin(2\pi\text{sign}(A_{k})\omega_{k}x)]$$ (11) Note that all performance functions of this kind are translation invariant, since they are functions of the difference $x-y$, which does not change after shifting both $x$ and $y$ by the same amount $s$. Theorem 1: [Translation Invariant One Trait Performance Functions] Suppose that $\Omega$ is a one-dimensional trait space, and $f(x,y)$ is translation invariant. Then there exists a function $h$ such that $f(x,y)=h(x-y)$.
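The identity in (9) is easy to check numerically. In the sketch below the cross-product orientation is chosen so that disc returns the product of the radii times the sine of the first point's phase minus the second's (one of the two possible conventions); the amplitude and frequency are arbitrary test values of ours:

```python
import numpy as np

def embed(x, A, omega):
    # Lemma 1 embedding: radius sqrt(|A|), phase 2*pi*sign(A)*omega*x.
    t = 2 * np.pi * np.sign(A) * omega * x
    return np.sqrt(abs(A)) * np.array([np.cos(t), np.sin(t)])

def disc(u, v):
    # Product of the radii times sine of (phase of u minus phase of v).
    return u[1] * v[0] - u[0] * v[1]

A, omega = -1.5, 0.25  # arbitrary test amplitude and frequency
xs = np.linspace(0.0, 4.0, 9)
err = max(abs(disc(embed(x, A, omega), embed(y, A, omega))
              - A * np.sin(2 * np.pi * omega * (x - y)))
          for x in xs for y in xs)
```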
Suppose in addition that $h(x)$ is periodic with period $P$, or that $\Omega$ is contained inside an interval of length $P/2$. Then $f$ is disc game embeddable using a countably infinite sequence of disc games, which correspond to the sine series expansion of $h$ and converge under the same conditions as the sine series. Moreover, each disc game represents a term in the sine series, and maps $\Omega$ to a subset of a circle centered at the origin with radius fixed by the corresponding coefficient in the sine series. Proof: If $f(x,y)$ is translation invariant then $f(x,y)=h(x-y)$ for some function $h$. Since $f(x,y)=-f(y,x)$, $h$ must be an odd function. If $\Omega$ is contained inside an interval of length $P/2$, then $h$ can be extended to an odd, continuous, $2P$-periodic function, or an odd $P$-periodic function. If not, then, by assumption, $h$ is periodic with period $P$. All integrable $P$-periodic functions on the real line can be approximated with a Fourier series. If the function is real valued and odd, then the Fourier series is a sine series of the form: $$h(x)\simeq\sum_{k=1}^{\infty}A_{k}\sin(2\pi\omega_{k}x),\quad\omega_{k}=\frac{k}{P}.$$ (12) Each term in the sine series can be reproduced by a disc game embedding using the method for embedding sinusoidal functions introduced above. Specifically, let: $$\vec{y}_{k}(x)=\sqrt{|A_{k}|}[\cos(2\pi\text{sign}(A_{k})\omega_{k}x),\sin(2\pi\text{sign}(A_{k})\omega_{k}x)]$$ (13) where $A_{k}$ is the $k^{th}$ amplitude in the sine series of $h$: $$A_{k}=\frac{4}{P}\int_{0}^{P/2}h(x)\sin(2\pi\omega_{k}x)dx.$$ (14) Then, a partial expansion in terms of $r$ disc games equals the $r$-term sine series expansion of $h(x-y)$: $$\sum_{k=1}^{r}\text{disc}(\vec{y}_{k}(x),\vec{y}_{k}(y))=\sum_{k=1}^{r}A_{k}\sin(2\pi\omega_{k}(x-y)).$$ (15) Thus, convergence of the sequence of disc game embeddings follows convergence of the sine series expansion.
$\square$ It remains to show that, after sampling a finite set of agents, the result of PTA recovers the sine series representation. Sine series are low rank optimal in this case since, if the agents are ordered by increasing $w(x)$, the evaluation matrix is of the form: $$F=\left[\begin{array}{ccccc}0&1&1&\ldots&1\\ -1&0&1&\ldots&1\\ -1&-1&0&\ldots&1\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ -1&-1&-1&\ldots&0\end{array}\right]$$ (16) This matrix is real, skew-symmetric, and Toeplitz, and is diagonalized by the discrete Fourier transform, so PTA produces a sine series. The sequence of disc games acts as the sine series expansion of a step function, with each higher order disc game corresponding to a higher order correction of an approximation to the step function. The first disc game is transitive and captures 90% of the structure of the evaluation matrix. Subsequent disc games correct the first disc game in order to produce a step function. It takes only two disc games to recover 95% of the structure of $F$, so the 95% complexity of $[1,2,4]$ Blotto is 2; however, the subsequent eigenvalues decay slowly, so stricter accuracy requirements lead to large complexities. The slow decay of the eigenvalues is a natural consequence of the slow convergence of sine series to a step function. Here it is clear that the complexity predicted by PTA overstates the complexity of the underlying game, since subsequent disc games are best interpreted as corrections that gradually finesse the first disc game, not distinct trade-offs. A more general proof and further exploration are left for future work. Appendix C Disc Games C.1 Geometry Principal trade-off analysis is a useful visualization technique since disc games encode performance relations via embedding geometry. Reading a disc game requires familiarity with this geometry, namely, familiarity with the various interpretations of a cross product. Here we review some relevant relations. Cross products are closely related to area in the embedding.
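The area interpretation can be illustrated with three embedded competitors (arbitrary test coordinates of ours). Under the plain cross-product convention used here, the path sum of advantages around the loop equals the shoelace expression, i.e. twice the signed area of the triangle, so translating all three points by the same vector leaves it unchanged:

```python
import numpy as np

def disc(u, v):
    # Signed cross product: performance of u against v in a disc embedding.
    return u[0] * v[1] - u[1] * v[0]

# Three embedded competitors forming a loop i -> j -> k -> i.
yi, yj, yk = np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, -1.0])

# Path sum of pairwise advantages around the loop.
curl = disc(yi, yj) + disc(yj, yk) + disc(yk, yi)

# Shoelace formula: the same sum is twice the signed area of triangle (i,j,k),
# which makes the path sum translation invariant.
area2 = (yj[0] - yi[0]) * (yk[1] - yi[1]) - (yk[0] - yi[0]) * (yj[1] - yi[1])
```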
Given a pair of competitors with embedding coordinates $\vec{y}_{k}(i)$ and $\vec{y}_{k}(j)$, the performance of $i$ against $j$ in disc game $k$, $\text{disc}(\vec{y}_{k}(i),\vec{y}_{k}(j))$, equals the signed area of the triangle with vertices $\vec{y}_{k}(i)$, $\vec{y}_{k}(j)$, and the origin. Further, the degree of cyclicity on a loop $\mathcal{C}$ of competitors can be computed by evaluating a path sum of the advantages around the loop, i.e. $\text{curl}(\mathcal{C})$ [29]. The curl on loop $\mathcal{C}$ equals the signed area of the loop traced out in each embedding [30], summed over the embeddings. It follows that curl inherits the invariances of areas. In particular, curl is translation invariant. The cyclic component $F_{c}$ on a given edge $i,j$ equals the average curl over all possible triangles formed by drawing a random third competitor $k$. Since curl is translation invariant, the cyclic component of competition is translation invariant. In contrast, the transitive component of competition is not translation invariant, and translation in a disc game induces a transitive component of competition [3]. By subtracting $F_{t}$ from $F$ to recover $F_{c}$ we center all the rows and columns, so the scatter cloud of embedded competitors will be centered at the origin. In contrast, if we embed $F$ directly, then transitivity arises from translation of each scatter cloud away from the origin. If the origin is not included in the convex hull of the embedded agents, then competition is transitive. In contrast, scaling the embedding coordinates does change the predicted performance relations. Area is proportional to length squared, so scaling the embedding coordinates by $\sqrt{s}$ scales the associated cyclic component of competition by $s$. The scaling from $\hat{Y}$ to $Y$ was adopted to ensure that unit area in the embedding generates unit curl.
It follows that the area encompassed by a set of points in a disc game embedding directly represents the amount of cyclic competition among those agents, and hence the importance of that embedding. Appendix D Blotto D.1 Intransitivity and complexity analysis Here we look at how PTA can be used to compare different Blotto games. New Blotto games are easily created by varying the game parameters: the number of zones, troops, and payouts can be changed to generate games with varying structure. Other Blotto variants, such as Boolean Blotto or Colonel Lotto, are not considered here. We focus on a small number of zones ($K=3$ or $K=4$) to illustrate how the intransitivity ($\frac{\|F_{c}\|}{\|F_{t}\|}$) and complexity change for varying $N$ and $K$ across a number of different payout structures. We propose three hypotheses. First, while the number of distinct strategies grows combinatorially in $N$ and $K$, the allotment problem converges, in the limit of large $N$, to a continuous problem in which each commander can allot an arbitrary fraction of their total force to any zone. Therefore, our structural measures should converge to finite values representing the structure of a Blotto game allowing any fractional unit allotment on the interval $[0,1]$. Both measures should increase with increasing $N$ towards the limiting case, as the space of available allotment strategies grows with $N$. Second, complexity should increase with increasing $K$, since the number of distinct strategic trade-offs should increase as the number of distinct, exploitable win conditions increases. Third, small changes in game structure can lead to large differences in both intransitivity and complexity. Small changes can break underlying symmetries, and the majority rules that determine performance are discontinuous in the allocations, so small changes in battlefield weights can produce sudden changes in the set of win conditions.
To test these hypotheses, we vary $N$ between 5 and 45 while holding $K$ fixed at 3 and 4. We consider three distinct payout structures for $K=3$ and two for $K=4$. For each game we construct $F$ and compute the intransitivity $\frac{\|F_{c}\|}{\|F_{t}\|}$ using the HHD. The complexity of each game is computed by finding the number of disc games required to reach an error tolerance of $0.05$ (measured in relative Frobenius error). All of the games considered are highly cyclic, with $\|F_{c}\|>\|F_{t}\|$ in all but the $[1,2,4]$ case. The $[1,2,4]$ case is transitive, since the win condition is dominated by victory in the highest-valued zone. Note that $\|F_{c}\|$ is not zero for the $[1,2,4]$ case, since the chosen performance measure cannot be expressed as a linear function of a difference in ratings, so it is not perfectly transitive. Nevertheless, if the step function used to assign battlefield outcomes is replaced with any sigmoid $s(x)$, then the $[1,2,4]$ case is perfectly transitive with respect to $s^{-1}$, where rating equals allotment to zone 3. Figure 7 shows the results, and confirms our three hypotheses. First, the intransitivity and complexity for all games plateau for large enough $N$. This indicates that, beyond a certain $N$, the game reaches its “strategic capacity”: all meaningful types of trade-offs have been expressed. In both cases, the limiting complexity is strikingly small relative to the size of the strategy space, which grows rapidly in $N$. For a game with $K$ zones and $N$ units, there are $\binom{N+K-1}{K-1}=\mathcal{O}(N^{K-1})$ distinct strategies to consider. In the most extreme case tested, $N=45$ and $K=4$, so the strategy space contains $\mathcal{O}(10^{4})$ distinct allotments, which can be reduced to $\mathcal{O}(10)$ distinct trade-offs. Thus PTA can effectively separate the underlying complexity of a game from the size of its strategy space.
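The two measurements described above can be sketched as follows, on a small assumed instance of our choosing (unweighted three-zone Blotto with $N=9$). Ratings and the transitive part come from the HHD on the complete comparison graph (row means and rating differences), and complexity counts the rank-2 terms needed to reach the 5% relative Frobenius error tolerance:

```python
import itertools
import numpy as np

# Small unweighted Blotto instance: N units over three equally weighted zones.
N = 9
allocs = [a for a in itertools.product(range(N + 1), repeat=3) if sum(a) == N]
F = np.array([[np.sign(sum(np.sign(xi - yi) for xi, yi in zip(x, y)))
               for y in allocs] for x in allocs], dtype=float)

# HHD on the complete comparison graph: ratings are row means, the transitive
# part is the rating difference, and the cyclic part is the remainder.
r = F.mean(axis=1)
F_t = r[:, None] - r[None, :]
F_c = F - F_t
intransitivity = np.linalg.norm(F_c) / np.linalg.norm(F_t)

# Complexity: disc games (rank-2 terms) needed for 5% relative Frobenius error.
omega = np.sort(np.abs(np.linalg.eigvals(F).imag))[::-1][::2]
tail = 1 - np.cumsum(2 * omega**2) / np.linalg.norm(F)**2
complexity = int(np.argmax(np.sqrt(np.maximum(tail, 0)) < 0.05)) + 1
```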
Second, in the right side of Figure 7, the complexity of the $K=4$ games is much higher than that of the $K=3$ games. Third, the difference in complexity from one game to the next is stark. At $N=45$, for example, about 15 additional disc games are needed to reach 95% accuracy when going from the transitive $[1,2,4]$ game to the uniform $[1,1,1]$ game. Furthermore, similar $\frac{\|F_{c}\|}{\|F_{t}\|}$ does not imply similar complexity: the $[1,1,1]$ and $[2,3,4]$ games have similar intransitivity but differ in complexity by about 20. Appendix E Pokemon Here we describe in further detail the construction of the Pokemon data. In the full game a player (or trainer) captures Pokemon to compete against the Pokemon of other players, usually with teams of 6 Pokemon chosen at the player's discretion. Players also choose the order in which their Pokemon compete, since the actual combat is done pairwise. This pairwise interaction is what allowed us to ignore the team aspect and still learn important aspects of the game. Each Pokemon has a set of attributes, listed below. 1. Type 1: Main type (Fire, Water, Grass, etc.) 2. Type 2: Secondary type. Not all Pokemon have a second type; we did not find it to contribute significantly to any performance trade-offs. 3. HP: Hit points. Indicates how much damage a Pokemon can endure before losing the match. 4. Attack: Base modifier for normal attacks. 5. Defense: Base damage resistance against normal attacks. 6. Special attack. 7. Special defense. 8. Speed: Largely determines which Pokemon attacks first. As combat is turn-based, this constitutes a large advantage, as seen in disc game 1. There were 50,000 pairwise interactions among the 735 Pokemon that were used. The data for each interaction consisted of the names of the first and second Pokemon as well as the winner of the match. In an individual matchup, each Pokemon has a certain level of HP or health.
The two Pokemon take turns attacking one another until one of them loses all of their HP and is declared the loser. The first to attack is determined by some set of attributes that is not explicitly given by the data set, but speed is known to be a large contributing factor. Since we did not have the full interaction graph, we filled in any missing data using logistic regression, producing a win-probability matrix. We obtained the evaluation matrix $F$ via the logistic link function commonly used in Elo rating. The evaluation for competitor $i$ vs competitor $j$ is then given by $f_{i,j}=\log(\frac{p_{ij}}{1-p_{ij}})$, where $p_{ij}$ is the probability that Pokemon $i$ beats Pokemon $j$. We applied the Schur decomposition directly to $F$ to show that disc game embedding can successfully isolate a dominant transitive component (speed). In Figure 8 we show the third disc game, left out of the main analysis. It shows a double loop structure with a full inner circle and a half outer circle. Like disc game 1, disc game 3 is, approximately, a curve parameterized by speed. As in the Fourier examples discussed before, the double loop represents a higher order correction to disc game 1. Disc game 1 confers a transitive, monotonically increasing advantage to faster agents: the faster an agent relative to their opponent, the larger their advantage. Disc game 3 adds nuance to this relation by discounting the advantage conferred by small differences in speed, increasing the advantage conferred by intermediate differences in speed, discounting the advantage conferred by large speed differences, and strongly rewarding maximal speed differences (see the evaluation matrix and associated subpanel in the rightmost column of Figure 8). Note, these corrections to disc game 1 are very small. The eigenvalue for disc game 1 is roughly 15 times larger than the eigenvalue for disc game 3, hence the relationship between speed and performance is largely determined by disc game 1.
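The link-function step above can be sketched in a few lines. The win probabilities below are hypothetical stand-ins, not the fitted Pokemon values; any matrix with $p_{ji}=1-p_{ij}$ yields a skew-symmetric evaluation matrix with zero diagonal.

```python
import numpy as np

# Hypothetical win-probability matrix for four competitors (the real analysis
# estimates p_ij for all Pokemon pairs via logistic regression).
P = np.array([[0.5, 0.8, 0.6, 0.9],
              [0.2, 0.5, 0.3, 0.7],
              [0.4, 0.7, 0.5, 0.6],
              [0.1, 0.3, 0.4, 0.5]])

# Evaluation matrix via the logistic link (log-odds), as in Elo-style ratings.
F = np.log(P / (1 - P))
```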
We did not discuss disc game 3 in the main text for this reason.
CENTRE DE PHYSIQUE THEORIQUE CNRS - Luminy, Case 907, 13288 Marseille Cedex

Riemannian and Non-commutative Geometry in Physics

Bruno IOCHUM${}^{1}$, Daniel KASTLER${}^{2}$, Thomas SCHÜCKER${}^{1}$

${}^{1}$ and Université de Provence, [email protected], [email protected]
${}^{2}$ and Université de la Méditerranée

Abstract We feel that non-commutative geometry is to particle physics what Riemannian geometry is to gravity. We try to explain this feeling.

PACS-92: 11.15 Gauge field theories
MSC-91: 81E13 Yang-Mills and other gauge theories

oct. 1995
CPT-95/P.3260
hep-th/9511011

We feel that non-commutative geometry is as fundamental to physics as Minkowskian and Riemannian geometry. Let us try to explain this by comparing the standard model of particle physics and general relativity. From a chronological point of view, this comparison is difficult, because Riemannian geometry existed well before general relativity. However, the field theoretic approach allows one to introduce general relativity in close analogy to classical electrodynamics, without use of Riemannian geometry, so this approach is well suited for our comparison. Let us then imagine a world ignorant of Riemannian geometry, in which physicists try to describe gravity. They are inspired by Maxwell, who takes a field $A$ of spin 1 and a second order differential operator $D_{Max}$ and writes down his field equation $$D_{Max}\,A=\frac{1}{c^{2}\epsilon_{0}}\,j,$$ where $j$ is the source, charge density and currents, and $\epsilon_{0}$ is the proportionality constant from Coulomb's law. After many ingenious and expensive experiments and theoretical trials and errors, the physicists agree on the standard model of gravity. It starts from a particular spin 2 field $g$ and a second order differential operator $D_{Ein}$. The field equation is $$D_{Ein}\,g=-\frac{8\pi G}{c^{4}}\,T,$$ where the source $T$ is energy-momentum density and currents, and $G$ is the proportionality constant from Newton's universal law of gravity.
Although in perfect agreement with experiment, this standard model has drawbacks: who ordered spin 2? Maxwell's differential operator $D_{Max}$ contains 8 summands; the gravitational one, $D_{Ein}$, results from brute force and contains roughly 80 000 summands. Some of these summands are still inaccessible to experiment. At this stage, Riemannian geometry is discovered: the spin 2 field is recognized as the metric, and the differential operator $D_{Ein}$ is recognized as the curvature if the unknown summands are chosen properly. Most physicists say: so what, just fancy mathematics. Some dream of a geometric unification of all forces. Later, even more expensive experiments will test the predictions of Riemannian geometry coming from the unknown summands. If, in the real world, we qualify general relativity as a revolution, we have several criteria.
• Postdiction: the theory correctly reproduces experimental data that remain unexplained in the old theories, e.g. the precession of the perihelion of Mercury.
• Prediction: the theory can be in contradiction with future experimental data, e.g. the deflection of light.
• New concepts, e.g. curved spacetimes, absence of universal time.
• Reticence of the majority.
Our purpose is to explain that for non-commutative geometry the analogue of $g$ in the imaginary world is the Higgs field, and the analogue of $D_{Ein}$ is the Lagrangian of the standard model of electro-weak and strong interactions. Postdictions of the theory are that fermions sit in fundamental representations, that weak interactions violate parity, that strong interactions are vector-like, and that
$$\rho:=\frac{g_{1}^{2}+g_{2}^{2}}{g_{2}^{2}}\,\frac{m^{2}_{W}}{m^{2}_{Z}}=1,$$ (1)
$$m_{e}<m_{W}<m_{t}/\sqrt{3},$$ (2)
$$2g_{1}^{-2}>g_{2}^{-2}+g_{3}^{-2}/3.$$ (3)
There is also a prediction, the mass of the Higgs, accessible to experiment in about ten years.
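The relation $\rho=1$ can be checked numerically against the experimental values quoted later in Section 1.1. This is a quick sanity-check sketch; the $Z$ mass $m_Z\approx 91.19$ GeV is the PDG value and an assumption here, since it is not listed in the text.

```python
# Numerical check of rho := (g1^2 + g2^2)/g2^2 * m_W^2/m_Z^2 = 1.
# g1, g2 and m_W are the experimental values quoted in Section 1.1;
# m_Z ~ 91.19 GeV is the PDG value (an assumption, not listed in the text).
g1, g2 = 0.3575, 0.6507
m_W, m_Z = 80.22, 91.19      # GeV

rho = (g1**2 + g2**2) / g2**2 * m_W**2 / m_Z**2
assert abs(rho - 1.0) < 0.02  # holds to about one percent at tree level
```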
New concepts are fuzzy spacetimes, that is, spacetimes with an uncertainty relation, and discrete spacetimes.

1 The Establishment

Let us briefly summarize today's established theory of particles and interactions. It is a particular Yang-Mills-Higgs theory. To get started, we view this class of theories as a black box or slot machine. The input comes in two parts, bills and coins. The output is a particle phenomenology, that is, cross sections, branching ratios, life times … To decide whether a particular input is a winner, its corresponding output is confronted with millions of experimental numbers that cost billions of Swiss Francs.

1.1 Bills and coins

The Yang-Mills-Higgs machine has four slots for one bill each. In the first of these slots you are supposed to put a finite dimensional, real, compact Lie group $G$. For the remaining slots choose three unitary representations $\rho_{L}$, $\rho_{R}$, $\rho_{S}$ defined on Hilbert spaces ${\cal H}_{L}$, ${\cal H}_{R}$, ${\cal H}_{S}$. These Hilbert spaces will accommodate the left- and right-handed fermions and the Higgs scalars. After having eaten these four bills, the machine will ask you for coins, real or complex numbers. The number of coins depends on the chosen bills.
• An invariant scalar product on the Lie algebra ${{g}}$ of $G$. This choice is parameterized by one positive number $g$, the 'gauge coupling', for every simple factor in $G$, e.g.
$$(b,b^{\prime}):=\frac{1}{g_{1}^{2}}\bar{b}b^{\prime},\quad b,b^{\prime}\in u(1),$$ (4)
$$(a,a^{\prime}):=\frac{2}{g_{n}^{2}}\,{\rm tr}\,(a^{*}a^{\prime}),\quad a,a^{\prime}\in su(n).$$ (5)
• An invariant, positive polynomial $V(\varphi)$, $\varphi\in{\cal H}_{S}$, of order 4, the 'Higgs potential'. We want this potential to break $G$ spontaneously. This means that no invariant vector in ${\cal H}_{S}$ minimizes $V$.
For example if $G=SU(2)$ with the fundamental representation ${\cal H}_{S}={{C}}^{2}$, the most general Higgs potential is
$$V(\varphi)=\lambda(\varphi^{\ast}\varphi)^{2}-\frac{\mu^{2}}{2}\varphi^{\ast}\varphi,\qquad\varphi\in{\cal H}_{S},\quad\lambda,\mu>0.$$
• One complex number or 'Yukawa coupling' $g_{Y}$ for every trilinear invariant, i.e. for every one dimensional invariant subspace ('singlet') in the decomposition of the representation associated to $\left({\cal H}_{L}^{\ast}\otimes{\cal H}_{R}\otimes{\cal H}_{S}\right)\oplus\left({\cal H}_{L}^{\ast}\otimes{\cal H}_{R}\otimes{\cal H}_{S}^{*}\right)$. For example if $G=SU(2)$, ${\cal H}_{L}={{C}}^{2}$, ${\cal H}_{R}={{C}}$, ${\cal H}_{S}={{C}}^{2}$ there is one singlet:
$$\sum_{j=1}^{2}\bar{\psi}_{Lj}\psi_{R}\varphi_{j},\quad\pmatrix{\psi_{L1}\cr\psi_{L2}}\in{\cal H}_{L},\ \psi_{R}\in{\cal H}_{R},\ \pmatrix{\varphi_{1}\cr\varphi_{2}}\in{\cal H}_{S}.$$
Physicists have been playing on this slot machine for the last thirty years. One winner clearly emerged, the so-called standard model. Its bills are
$$G=SU(3)\times SU(2)\times U(1),$$ (7)
$${\cal H}_{L}=\bigoplus_{1}^{3}\left[(1,2,-{\textstyle\frac{1}{2}})\oplus(3,2,{\textstyle\frac{1}{6}})\right],$$ (8)
$${\cal H}_{R}=\bigoplus_{1}^{3}\left[(1,1,-1)\oplus(3,1,{\textstyle\frac{2}{3}})\oplus(3,1,-{\textstyle\frac{1}{3}})\right],$$ (10)
$${\cal H}_{S}=(1,2,-{\textstyle\frac{1}{2}}),$$ (11)
where $(n_{3},n_{2},y)$ denotes the tensor product of an $n_{3}$ dimensional representation of $SU(3)$, an $n_{2}$ dimensional representation of $SU(2)$ and the one dimensional representation of $U(1)$ with hypercharge $y$:
$$\rho(e^{i\theta})=e^{iy\theta},\qquad y\in{{Q}},\ \theta\in[0,2\pi).$$
Some vocabulary: particles are basis elements.
The spin 1 particles, the gauge bosons, span the Lie algebra ${{g}}$ of the group $G$. The eight basis elements of $su(3)$ are called gluons. They are massless and mediate the strong interactions, e.g. nuclear fusion, fission, $\alpha$-decay. The remaining $su(2)\oplus u(1)$ is spanned by the photon (Maxwell's old friend, later found responsible for $\gamma$-decay) and three massive bosons, the $W^{+}$, $W^{-}$ and $Z$. They mediate the weak interactions, e.g. $\beta$-decay. The spin $\frac{1}{2}$ particles or fermions come in three identical copies, 'generations'. The first generation of ${\cal H}_{L}$ is spanned by the left-handed parts (Weyl spinors) of the electronic neutrino, the electron, and the up and down quarks. The first two are called leptons, from the Greek word for mild, because, sitting in $SU(3)$ singlets, they are not subject to strong interactions. The other two left-handed generations are spanned by the muonic neutrino, the muon, the charm and strange quarks, and the tau neutrino, the tau, the top and bottom quarks. ${\cal H}_{R}$ is spanned by the right-handed parts of the same particles, except that there are no right-handed neutrinos. Consequently the neutrinos are massless. The particle count for the spin 0 particles, the scalars, is a little more complicated. Not all basis elements of ${\cal H}_{S}$ correspond to physical scalars. There is only one physical scalar in the standard model. It is called the Higgs scalar and is still being searched for. Because of the high degree of reducibility in the bills, there are many coins, among them 27 Yukawa couplings. Not all of them have a physical meaning.
They can be converted into 18 physically significant, positive numbers [1]: three gauge couplings,
$$g_{1}=0.3575\pm 0.0001,\quad g_{2}=0.6507\pm 0.0007,\quad g_{3}=1.207\pm 0.026,$$
eleven particle masses,
$$m_{W}=80.22\pm 0.26\ {\rm GeV},\qquad m_{H}>58.4\ {\rm GeV},$$ (12)
$$m_{e}=0.51099906\pm 0.00000015\ {\rm MeV},\quad m_{u}=5\pm 3\ {\rm MeV},\quad m_{d}=10\pm 5\ {\rm MeV},$$ (13)
$$m_{\mu}=0.105658389\pm 0.000000034\ {\rm GeV},\quad m_{c}=1.3\pm 0.3\ {\rm GeV},\quad m_{s}=0.2\pm 0.1\ {\rm GeV},$$ (14)
$$m_{\tau}=1.7771\pm 0.0005\ {\rm GeV},\quad m_{t}=176\pm 18\ {\rm GeV},\quad m_{b}=4.3\pm 0.2\ {\rm GeV},$$
and quark mixings. These mixings are given in the form of a unitary matrix, the Cabibbo-Kobayashi-Maskawa matrix
$$C_{KM}:=\pmatrix{V_{ud}&V_{us}&V_{ub}\cr V_{cd}&V_{cs}&V_{cb}\cr V_{td}&V_{ts}&V_{tb}}.$$
For physical purposes it can be parameterized by three angles $\theta_{12}$, $\theta_{23}$, $\theta_{13}$ and one $CP$ violating phase $\delta$:
$$C_{KM}=\pmatrix{c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\cr -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta}&s_{23}c_{13}\cr s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta}&c_{23}c_{13}},$$
with $c_{kl}:=\cos\theta_{kl}$, $s_{kl}:=\sin\theta_{kl}$. The absolute values of the matrix elements are:
$$\pmatrix{0.9753\pm 0.0006&0.221\pm 0.003&0.004\pm 0.002\cr 0.221\pm 0.003&0.9745\pm 0.0007&0.040\pm 0.008\cr 0.010\pm 0.006&0.039\pm 0.009&0.9991\pm 0.0004}.$$
Everybody agrees that the standard model is ugly, too ugly to be a fundamental theory.

1.2 The general rules

Let us now have a closer look at the inside of the Yang-Mills-Higgs machine. It produces a Lagrangian that consists of five separate pieces.
The first is the Yang-Mills Lagrangian, well motivated physically as the non-abelian generalization of the famous Maxwell Lagrangian, $G=U(1)$. Also on the mathematical side, this Lagrangian needs no further introduction. Its fundamental fields are the gauge bosons or connection, $A\in\Omega^{1}(M,{{g}})$, a 1-form on the spacetime manifold $M$ with values in the Lie algebra ${{g}}$:
$${\cal L}_{YM}[A]=\frac{1}{4}(F,*F),$$
where $F:=\,\hbox{\rm d}A+\frac{1}{2}[A,A]$ denotes the field strength or curvature of $A$, $*$ is the Hodge star, and $(\cdot,\cdot)$ is the chosen invariant scalar product on ${{g}}$. The second piece is the Dirac Lagrangian. It is geometrically as noble as the Yang-Mills Lagrangian:
$${\cal L}_{D}[A,\psi_{L},\psi_{R}]=\psi_{L}^{*}{\rm D}\!\!\!\!/\,\psi_{L}+\psi_{R}^{*}{\rm D}\!\!\!\!/\,\psi_{R},$$
where $\psi_{L}$ is a left-handed spinor with values in ${\cal H}_{L}$, $\psi_{L}^{*}$ is its dual with respect to the scalar product in ${\cal H}_{L}$, and ${\rm D}\!\!\!\!/\,$ is the covariant Dirac operator, $\,\hbox{\rm D}\psi_{L}:=\,\hbox{\rm d}\psi_{L}+\tilde{\rho}_{L}(A)\psi_{L}$. We denote by $\tilde{\rho}$ the Lie algebra representation belonging to the group representation $\rho$. For $G=U(1)$ these two Lagrangians yield the very successful quantum electrodynamics, and for $G=SU(3)$, ${\cal H}_{L}={\cal H}_{R}={{C}}^{3}$ we get the present day theory of strong interactions, quantum chromodynamics. In order to incorporate weak interactions and to give masses to gauge bosons and fermions, one is forced to break the symmetry $G$ spontaneously. This is where the patchwork starts. One has to postulate the existence of scalars, till now unobserved.
They are 0-forms with values in ${\cal H}_{S}$,
$$\varphi\in\Omega^{0}(M,{\cal H}_{S}).$$
One also has to add three more Lagrangian pieces involving the scalars: the Klein-Gordon Lagrangian
$${\cal L}_{KG}[A,\varphi]=\frac{1}{2}\,\hbox{\rm D}\varphi^{*}*\,\hbox{\rm D}\varphi,$$
the Higgs potential, and the Yukawa terms
$${\cal L}_{Yu}[\psi_{L},\psi_{R},\varphi]=\sum_{j=1}^{n}g_{Yj}\left(\psi_{L}^{*},\psi_{R},\varphi\right)_{j}+\sum_{j=n+1}^{m}g_{Yj}\left(\psi_{L}^{*},\psi_{R},\varphi^{*}\right)_{j}\ +\ {\rm complex\ conjugate}.$$
To summarize, the standard model has two major shortcomings: the general rules of Yang-Mills-Higgs look artificial, and so does the input, bills and coins, singled out by nature. Nevertheless the standard model has withstood an extremely detailed experimental analysis where all competing models have failed.

2 The Revolution

The non-commutative formulation improves the situation on all three levels: general rules, bills and coins.

2.1 General rules

The Yang-Mills and Dirac Lagrangians have a geometric origin, and Alain Connes found a natural generalization of them to non-commutative geometry [2]. Connes and Lott have considered these Lagrangians in the particular case of a product geometry of an ordinary four dimensional spacetime geometry by a zero dimensional non-commutative geometry. There a miracle happens [2, 3]. When decomposing the non-commutative versions of the Yang-Mills and Dirac Lagrangians in terms of ordinary fields, they retrieve of course the ordinary Yang-Mills and Dirac Lagrangians. Simultaneously, and free of charge, they also get the other three pieces: the Klein-Gordon Lagrangian, the symmetry breaking Higgs potential, and some Yukawa terms. Every such Connes-Lott model yields a particular Yang-Mills-Higgs model. The contrary is far from being true; how far will be discussed in terms of bills and coins in the following subsections.
2.2 Bills

Since the introduction of quantum mechanics, we are used to the description of non-commutative spaces in terms of involution algebras. A zero dimensional non-commutative space is given by a finite dimensional, real involution algebra ${\cal A}$. The group $G$ of the ensuing gauge model will be the group of unitaries of ${\cal A}$,
$$\left\{a\in{\cal A}\,|\ a^{*}a=aa^{*}=1\right\},$$
or possibly a subgroup thereof. In order to construct a differential calculus on the non-commutative space, Connes introduces two algebra representations $\rho_{L}$ and $\rho_{R}$ on Hilbert spaces ${\cal H}_{L}$ and ${\cal H}_{R}$ such that $\rho_{L}\oplus\rho_{R}$ is faithful. In the finite dimensional case, this implies that ${\cal A}=M_{n}({{R}})$, $M_{n}({{C}})$ or $M_{n}({{{H}}})$, ${{{H}}}$ denoting the quaternions, and that the algebra representations are copies of the defining representation. For $M_{n}({{C}})$ there is, in addition to the defining representation, its conjugate. In terms of bills of the resulting Yang-Mills-Higgs model we have the following irreducible possibilities:
$$G=O(n,{{R}}),\qquad{\cal H}_{L,R}={{C}}^{n},$$ (15)
$$G=U(n)\ {\rm or}\ SU(n),\qquad{\cal H}_{L,R}={{C}}^{n},$$ (16)
$$G=USp(n),\qquad{\cal H}_{L,R}={{C}}^{2n}.$$
The restriction on the group bill is mild: only the exceptional groups are excluded. The restrictions on the two fermionic bills are appreciable: e.g. $U(1)$ only admits hypercharge $-1$ or $1$, and $SU(2)$ only has one irreducible representation, the two dimensional one, while in the general setting there is an infinite number to choose from. The restriction on the scalar bill is spectacular.
It comes out to be a group representation, a unitary representation of the group of unitaries, and is restricted by the fermionic bills: its Hilbert space is an invariant subspace,
$${\cal H}_{S}\>\subset\>\left({\cal H}_{L}^{*}\otimes{\cal H}_{R}\right)\,\oplus\,\left({\cal H}_{R}^{*}\otimes{\cal H}_{L}\right).$$ (17)
This invariant subspace is entirely determined by the coins. One is of course tempted to build models with a simple algebra and/or irreducible fermion representations. Besides phenomenological shortcomings, all such models have a degenerate vacuum, an invariant vector in ${\cal H}_{S}$ that minimizes the Higgs potential. All popular Grand Unified Theories are excluded in Connes and Lott's approach. Similarly, all left-right symmetric models are excluded, because the constraint (17) forbids spontaneous parity violation. The minimal non-commutative model without degeneracy turns out to be the $SU(2)\times U(1)$ model of weak interactions with two generations of leptons:
$${\cal H}_{L}=\bigoplus_{1}^{2}(2,0),\qquad{\cal H}_{R}=\bigoplus_{1}^{2}(1,-1).$$ (18)
Comparing with (8-10), we see that the hypercharges are wrong. They are corrected by the inclusion of strong interactions. This inclusion requires a new ingredient [4], a real structure or, in physical terms, a generalization of charge conjugation to non-commutative geometry. The existence of a real structure implies additional constraints on the fermion representations. The representations (8-11) of the standard model have four features.
• The weak interaction $SU(2)$ violates parity maximally: it acts only on left-handed fermions.
• The strong interaction $SU(3)$ is vectorial: it acts in the same way on left- and right-handed fermions.
• The scalars transform under $SU(2)$, implying spontaneous breaking of $SU(2)$ that renders its gauge bosons, $W^{+}$, $W^{-}$ and $Z$, massive.
• The scalars do not transform under $SU(3)$. It remains unbroken and its gauge bosons, the gluons, remain massless.
In a Yang-Mills-Higgs theory these four features are independent; not so in the non-commutative approach. We already stated that the scalar representation is not chosen, and that the last two features follow from the first two. On top of that, the existence of a real structure implies that the first feature implies the second [5]. The existence of a real structure is intimately related to another mathematical property, a non-commutative version of Poincaré duality, which puts still another constraint on the fermion representations. It turns out that this constraint is fulfilled in the standard model (8-10). However, slightly modifying ${\cal H}_{R}$ by adding right-handed neutrinos (a modification compatible with all constraints so far [6]) violates this additional constraint [4, 7].

2.3 Coins

In a Yang-Mills-Higgs model that comes from a Connes-Lott model, the coins cannot be chosen independently. In an arbitrary Yang-Mills-Higgs model the choice of coins is a point in the space of direct products of intervals. In a Connes-Lott model this point must lie in a subspace. This subspace is a submanifold with interesting structure. Depending on the choice of bills, this submanifold may be of the same dimension as its surrounding hypercube or not. Due to the high degree of reducibility of its fermionic Hilbert space, the standard model is in the first case. Its Connes-Lott submanifold is an open subset of its Yang-Mills-Higgs hypercube, given by the inequalities
$$m_{e}<m_{W}<m_{t}/\sqrt{3},$$ (19)
$$2g_{1}^{-2}>g_{2}^{-2}+g_{3}^{-2}/3,$$ (20)
$$m_{H\,min}<m_{H}<m_{H\,max}.$$ (21)
The bounds on the Higgs mass are complicated functions of the other coins.
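The first two inequalities can be checked directly against the experimental values quoted in Section 1.1. A quick numerical sketch (not from the paper):

```python
import math

# Check the Connes-Lott inequalities (19) and (20) against the experimental
# values quoted in Section 1.1 (masses in GeV).
m_e, m_W, m_t = 0.000511, 80.22, 176.0
g1, g2, g3 = 0.3575, 0.6507, 1.207

# (19): m_e < m_W < m_t / sqrt(3)   (m_t/sqrt(3) is about 101.6 GeV)
assert m_e < m_W < m_t / math.sqrt(3)

# (20): 2 g1^{-2} > g2^{-2} + g3^{-2}/3   (about 15.65 > 2.59)
lhs = 2.0 / g1**2
rhs = 1.0 / g2**2 + 1.0 / (3.0 * g3**2)
assert lhs > rhs
```

Both inequalities hold with ample margin, consistent with the statement that the standard model falls inside the Connes-Lott subset.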
The fact that the non-commutative constraints on the parameters of the standard model are inequalities rather than equations may be important to insure their stability under the renormalization flow. On the other hand, for the experimental values of the parameters,
$$\frac{m_{H\,max}-m_{H\,min}}{(m_{H\,max}+m_{H\,min})/2}\simeq\frac{m_{\tau}^{2}-m_{e}^{2}}{m_{t}^{2}}\simeq 10^{-4},$$
and for all practical purposes the Higgs mass is fixed. To our knowledge, this is the first mass relation that comes with a (small) conceptual uncertainty, and we call it a fuzzy relation. We stress that the fuzziness of the Higgs mass requires the existence of at least two generations.

3 Conclusion

The first miracle of non-commutative geometry applied to particle physics concerns the general rules. Here, this geometry answers the question: who ordered the Higgs? The two subspaces of bills and coins accessible to a Connes-Lott model have interesting structure and are tiny compared to the two Yang-Mills-Higgs hypercubes, figs. 1 and 2. To us, it is a second miracle that the two points defining the standard model fall into these tiny subspaces, at least as long as the Higgs mass is unknown.

References
[1] L. Montanet et al., Review of Particle Properties, Phys. Rev. D50 (1994) 1173
[2] A. Connes, Noncommutative Geometry, Academic Press (1994)
[3] A. Connes & J. Lott, The metric aspect of noncommutative geometry, in the proceedings of the 1991 Cargèse Summer Conference, eds.: J. Fröhlich et al., Plenum Press (1992)
[4] A. Connes, Noncommutative geometry and reality, IHES/M/95/52 (1995)
[5] R. Asquith, Non-commutative geometry and the strong force, DTP-95-49, CPT-95/P.3239, hep-th/9509163
[6] J. M. Gracia-Bondía, Connes' interpretation of the Standard model and massive neutrinos, Phys. Lett. B, in press
[7] D. Testard, Non-commutative Poincaré duality and right-handed neutrinos

Figure captions
Fig. 1: An artist's partial view of the space of bills of all Yang-Mills-Higgs models and some of its subspaces. $GUT$ stands for 'Grand Unified Theories', i.e. $G$ simple. $L$-$R$ stands for left-right symmetric models, i.e. ${\cal H}_{L}={\cal H}_{R}$. $SM$ stands for the standard model and $CL$ for Connes-Lott models.
Fig. 2: Partial view of the space of coins of the standard model: lower and upper bounds of the Higgs mass as functions of the top and $\tau$ masses, all other coins set to their experimental values. For the experimental value $m_{\tau}=1.8\ {\rm GeV}$, the two bounds differ by $10^{-2}\ {\rm GeV}$ in the indicated range of $m_{t}$.
Linear, second-order problems with Sturm-Liouville-type multi-point boundary conditions

Bryan P. Rynne
Department of Mathematics and the Maxwell Institute for Mathematical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, Scotland. [email protected]

Abstract. We consider the linear eigenvalue problem consisting of the equation
$$-u^{\prime\prime}=\lambda u,\quad\text{on $(-1,1)$},$$ (1)
where $\lambda\in\mathbb{R}$, together with the general multi-point boundary conditions
$$\alpha_{0}^{\pm}u(\pm 1)+\beta_{0}^{\pm}u^{\prime}(\pm 1)=\sum^{m^{\pm}}_{i=1}\alpha^{\pm}_{i}u(\eta^{\pm}_{i})+\sum_{i=1}^{m^{\pm}}\beta^{\pm}_{i}u^{\prime}(\eta^{\pm}_{i}),$$ (2)
where $m^{\pm}\geqslant 1$ are integers, $\alpha_{0}^{\pm},\beta_{0}^{\pm}\in\mathbb{R}$, and, for each $i=1,\dots,m^{\pm}$, the numbers $\alpha_{i}^{\pm},\beta_{i}^{\pm}\in\mathbb{R}$, and $\eta_{i}^{\pm}\in[-1,1]$, with $\eta_{i}^{\pm}\neq\pm 1$. We also suppose that:
$$\alpha_{0}^{\pm}\geqslant 0,\quad\alpha_{0}^{\pm}+|\beta_{0}^{\pm}|>0,$$ (3)
$$\pm\beta_{0}^{\pm}\geqslant 0,$$ (4)
$$\left(\frac{\sum_{i=1}^{m^{\pm}}|\alpha_{i}^{\pm}|}{\alpha_{0}^{\pm}}\right)^{2}+\left(\frac{\sum_{i=1}^{m^{\pm}}|\beta_{i}^{\pm}|}{\beta_{0}^{\pm}}\right)^{2}<1,$$ (5)
with the convention that if any denominator in (5) is zero then the corresponding numerator must also be zero, and the corresponding fraction is omitted from (5) (by (3), at least one denominator is nonzero in each condition). An eigenvalue is a number $\lambda$ for which (1)-(2) has a non-trivial solution $u$ (an eigenfunction), and the spectrum, $\sigma$, is the set of eigenvalues. In this paper we show that the basic spectral properties of this problem are similar to those of the standard Sturm-Liouville problem with separated boundary conditions. Similar multi-point problems have been considered before under more restrictive hypotheses.
For instance, the cases where $\beta_{i}^{\pm}=0$, or $\alpha_{i}^{\pm}=0$, $i=0,\dots,m^{\pm}$ (such conditions have been termed Dirichlet-type or Neumann-type respectively), or the case of a single-point condition at one end point and a Dirichlet-type or Neumann-type multi-point condition at the other end. Different oscillation counting methods have been used in each of these cases, and the results here unify and extend all these previous results to the above general Sturm-Liouville-type boundary conditions.

1. Introduction

We consider the linear eigenvalue problem consisting of the equation
$$-u^{\prime\prime}=\lambda u,\quad\text{on $(-1,1)$},$$ (1.1)
where $\lambda\in\mathbb{R}$, together with the general multi-point boundary conditions
$$\alpha_{0}^{\pm}u(\pm 1)+\beta_{0}^{\pm}u^{\prime}(\pm 1)=\sum^{m^{\pm}}_{i=1}\alpha^{\pm}_{i}u(\eta^{\pm}_{i})+\sum_{i=1}^{m^{\pm}}\beta^{\pm}_{i}u^{\prime}(\eta^{\pm}_{i}),$$ (1.2)
where $m^{\pm}\geqslant 1$ are integers, $\alpha_{0}^{\pm},\beta_{0}^{\pm}\in\mathbb{R}$, and, for each $i=1,\dots,m^{\pm}$, the numbers $\alpha_{i}^{\pm},\beta_{i}^{\pm}\in\mathbb{R}$, and $\eta_{i}^{\pm}\in[-1,1]$, with $\eta_{i}^{\pm}\neq\pm 1$. We write $\alpha^{\pm}:=(\alpha_{1}^{\pm},\dots,\alpha_{m^{\pm}}^{\pm})\in\mathbb{R}^{m^{\pm}}$, and similarly for $\beta^{\pm}$, $\eta^{\pm}$. The notation $\alpha^{\pm}=0$ or $\beta^{\pm}=0$ will mean the zero vector in $\mathbb{R}^{m^{\pm}}$, as appropriate. Naturally, an eigenvalue is a number $\lambda$ for which (1.1)-(1.2) has a non-trivial solution $u$ (an eigenfunction). The spectrum, $\sigma$, is the set of eigenvalues. Although the boundary conditions (1.2) are non-local, for ease of discussion we will usually say that the condition with superscript $\pm$ holds 'at the end point $\pm 1$'.
Throughout we will suppose that the following conditions hold:
$$\alpha_{0}^{\pm}\geqslant 0,\quad\alpha_{0}^{\pm}+|\beta_{0}^{\pm}|>0,$$ (1.3)
$$\pm\beta_{0}^{\pm}\geqslant 0,$$ (1.4)
$$\left(\frac{\sum_{i=1}^{m^{\pm}}|\alpha_{i}^{\pm}|}{\alpha_{0}^{\pm}}\right)^{2}+\left(\frac{\sum_{i=1}^{m^{\pm}}|\beta_{i}^{\pm}|}{\beta_{0}^{\pm}}\right)^{2}<1,$$ (1.5)
with the convention that if any denominator in (1.5) is zero then the corresponding numerator must also be zero, and the corresponding fraction is omitted from (1.5) (by (1.3), at least one denominator is nonzero in each condition). The condition (1.3) simply ensures that the boundary conditions at $\pm 1$ actually involve the values $u(\pm 1)$ or $u^{\prime}(\pm 1)$. We will describe the motivation for, and the consequences of, (1.4) and (1.5) below and in the following sections. When $\alpha^{\pm}=\beta^{\pm}=0$ the multi-point boundary conditions (1.2) reduce to standard (single-point) separated conditions at $x=\pm 1$, and the overall multi-point problem (1.1)-(1.2) reduces to a separated, linear Sturm-Liouville problem. Thus, we will term the conditions (1.2) Sturm-Liouville-type boundary conditions. The spectral properties of the separated problem are of course well known, see for example [3], but the spectral properties of the above general multi-point problem have not previously been obtained. Indeed, it is only recently that the basic spectral properties of any multi-point problems have been obtained, and these were obtained under more restrictive assumptions on the boundary conditions. Boundary value problems with multi-point boundary conditions have been extensively studied recently, see for example [1, 4, 5, 6, 7, 9, 10, 12, 13, 14, 15, 16, 17] and the references therein.
Many of these papers consider the problem on the interval $(0,1)$, and impose a single-point Dirichlet or Neumann condition at the end-point $x=0$, and a multi-point condition at $x=1$. In our notation, these particular single-point conditions correspond to the special cases $\beta_{0}^{-}=0$ or $\alpha_{0}^{-}=0$, respectively (as well as $\alpha^{-}=\beta^{-}=0$), so of course are covered by our results here. We have used the interval $(-1,1)$ in order to simplify the notation for problems with multi-point boundary conditions at both end-points — our results are, of course, independent of the interval on which the problem is posed. Problems with a single-point boundary condition at one end-point can often be treated using shooting methods (starting at the end with the single-point condition) and so are considerably simpler to deal with than problems having multi-point boundary conditions at both end-points (for which shooting is not possible). Problems with multi-point conditions at both end-points have been considered in [5, 7, 9, 13, 14] (and in many references therein — the bibliography in [9] is particularly extensive). The papers [13] and [14] discussed the following particular special cases, or types, of multi-point boundary conditions: Dirichlet-type: $$\sum_{i=1}^{m^{\pm}}|\alpha_{i}^{\pm}|<1=\alpha_{0}^{\pm},\qquad\beta_{0}^{\pm}=0,\quad\beta^{\pm}=0;$$ (1.6) Neumann-type: $$\alpha_{0}^{\pm}=0,\quad\alpha^{\pm}=0,\qquad\sum_{i=1}^{m^{\pm}}|\beta_{i}^{\pm}|<1=\beta_{0}^{\pm}.$$ (1.7) This terminology is motivated by observing that a Dirichlet-type (respectively Neumann-type) condition reduces to a single-point Dirichlet (respectively Neumann) condition when $\alpha=0$ (respectively $\beta=0$). The case of a Dirichlet-type condition at one end point and a Neumann-type condition at the other end point was also discussed in [14], where such conditions were termed mixed.
Clearly, the hypotheses (1.6) and (1.7) are special cases of the general hypothesis (1.5), and in these cases (1.4) can be attained simply by multiplying the boundary condition at $x=-1$ by $-1$, so (1.4) is trivial. Hence, our results here will unify and generalise all the results in [13] and [14]. It was shown in [13] and [14] that the spectra of these particular boundary value problems have many of the ‘standard’ properties of the spectrum of the separated Sturm-Liouville problem, specifically: ($\sigma$-a) $\sigma$ is a strictly increasing sequence of real eigenvalues $\lambda_{k}$, $k=0,1,\dots;$ ($\sigma$-b) $\lim_{k\to\infty}\lambda_{k}=\infty$; for each $k\geqslant 0$: ($\sigma$-c) $\lambda_{k}$ has geometric multiplicity 1; ($\sigma$-d) the eigenfunctions of $\lambda_{k}$ have an ‘oscillation count’ equal to $k$. In the separated problem the oscillation count referred to in property ($\sigma$-d) is simply the number of interior (nodal) zeros of an eigenfunction. However, in the multi-point problem it was found in [13] and [14] that this method of counting eigenfunction oscillations no longer yields property ($\sigma$-d), and alternative, slightly ad hoc, methods were adopted, with different approaches being used for different types of problem. We will discuss this further below, and a more detailed discussion is given in Section 9.4 of [14]. Suffice it to say, for now, that the eigenfunction oscillation count we adopt here, based on a Prüfer angle approach (see Section 4.1), extends and unifies the disparate approaches adopted in [13] and [14].
It was also shown in [13] and [14] that, in order to obtain the spectral properties ($\sigma$-a)-($\sigma$-d), the conditions (1.6) and (1.7) are optimal for the Dirichlet-type and Neumann-type conditions respectively, in the sense that, in either of these cases, if the inequality $<1$ in (1.6) or (1.7) is relaxed to $<1+\epsilon$, for any $\epsilon>0$, then $\sigma$ need not have all the properties ($\sigma$-a)-($\sigma$-d). For the general Sturm-Liouville-type boundary conditions (1.2) it will be shown here that if (1.4) and (1.5) hold then $\sigma$ has the properties ($\sigma$-a)-($\sigma$-d), and if either (1.4) or (1.5) do not hold then $\sigma$ need not have all these properties. Remarks 1.1. (i) Changing the length of the interval on which we consider the problem rescales the coefficients $\beta_{0}^{\pm},\beta^{\pm}$, but not the coefficients $\alpha_{0}^{\pm},\alpha^{\pm}$. Such a change should not affect our hypotheses on the coefficients, and indeed the condition (1.5) is invariant with respect to such a rescaling. Thus, the form of condition (1.5) seems natural in this respect. (ii) In the separated case (that is, when $\alpha^{\pm}=\beta^{\pm}=0$) the sign condition (1.4) ensures that $\lambda_{0}>0$ (except in the Neumann case, when $\lambda_{0}=0$), and if this sign condition does not hold then negative eigenvalues may exist. It will be shown below that this is also true for the above Sturm-Liouville-type boundary conditions (assuming that (1.3) and (1.5) hold); it will also be shown that negative eigenvalues may have geometric multiplicity 2. Of course, this cannot happen in the separated problem due to uniqueness of the solutions for initial value problems associated with (1.1). Hence, the full set of ‘standard’ properties ($\sigma$-a)-($\sigma$-d) need not hold if the sign condition (1.4) is not satisfied.
(iii) In principle, we should consider the possibility of complex eigenvalues, especially as the problem is not ‘self-adjoint’ (without defining this precisely). Indeed, if we did not impose the condition (1.5) then complex eigenvalues could in fact occur. However, with this condition it can be shown that all eigenvalues must be real — the proof is very similar to the proof of Lemma 4.9 below, which shows that under our hypotheses the eigenvalues are positive. In the light of this we will simply take it for granted throughout the paper that all our coefficients, functions and function spaces are real. (iv) We primarily consider the spectral properties ($\sigma$-a)-($\sigma$-d) because of their potential applications to nonlinear problems (many of the cited references use eigenvalue properties to deal with nonlinear problems, using relatively standard arguments such as Rabinowitz’ global bifurcation theory). Of course, there are many other linear spectral properties that could be investigated, such as eigenfunction expansions (the problem is not self-adjoint, so this would not be trivial). However, for brevity, we will omit any discussion of nonlinear problems or other linear properties here. (v) Boundary conditions having a more general non-local dependence on the function $u$ than the finite sums of values at points in the interval $(-1,1)$ (as in (1.2)) have also been considered recently by several authors, see for example [15] and the references therein. These papers have considered Dirichlet-type and Neumann-type boundary conditions in which the finite summations have been replaced with Lebesgue-Stieltjes integrals, see [15] for further details (finite summations can be obtained simply by using step functions in Lebesgue-Stieltjes integrals, so such integral conditions generalise the finite summation conditions). 
The methods and results below can readily be extended to deal with such integral formulations of the boundary conditions — the only significant additional step required is dealing with the necessary measure and integration theory. These measure-theoretic details are described, for Dirichlet-type and Neumann-type conditions, in [5]. Since this step is relatively routine we will avoid all such measure-theoretic difficulties here by simply considering the finite summation conditions (1.2).

1.1. Plan of the paper

The paper is organised as follows. In Section 2 we introduce various function spaces, and then use these to define an operator realisation of the multi-point problem, and state the main properties of this operator. In Section 3 we prove an existence and uniqueness result for a problem consisting of equation (1.1) together with a single, multi-point, boundary condition. This problem could be regarded as a multi-point analogue of the usual initial value problem for equation (1.1). We also give some counter examples which show that this uniqueness result can fail in the multi-point setting when $\lambda<0$. As mentioned in Remark 1.1-(ii), the uniqueness result for this ‘multi-point, initial value problem’ then implies the simplicity of the eigenvalues of (1.1), (1.2), in the usual manner, and the loss of this uniqueness can result in the existence of eigenvalues having geometric multiplicity 2. In particular, this shows the necessity of the sign condition (1.4) if we wish to obtain all the properties ($\sigma$-a)-($\sigma$-d). Our main results are obtained in Section 4. In Section 4.1 we describe a Prüfer angle method of counting the oscillations of the eigenfunctions, and we then use this technique in Section 4.2 to obtain our main results regarding the properties of the spectrum.
We also show that this Prüfer angle construction generalises and unifies the various oscillation counting methods used in [13] and [14] in the Dirichlet-type, Neumann-type and mixed cases respectively. In Section 4.3 we show that, under suitable additional hypotheses, the principal eigenfunction is positive. In Section 4.4 we reinterpret the eigenvalues as the characteristic values of the inverse operator constructed in Section 2, and show that these characteristic values have algebraic multiplicity 1; this result then yields the value of the topological degree of an associated linear operator. In Section 4.5 we give some counter examples to show the necessity of the hypothesis (1.5).

1.2. Some further notation

Clearly, the eigenvalues $\lambda_{k}$ (and other objects to be introduced below) depend on the values of the coefficients $\alpha_{0}^{\pm},\,\beta_{0}^{\pm},\,\alpha^{\pm},\,\beta^{\pm},\,\eta^{\pm},$ but in general we regard these coefficients as fixed, and omit them from our notation. However, at certain points of the discussion it will be convenient to regard some, or all, of these coefficients as variable, and to indicate the dependence of various functions on these coefficients. To do this concisely we will write: $\boldsymbol{\alpha}_{0}:=(\alpha_{0}^{-},\alpha_{0}^{+})\in\mathbb{R}^{2}$ (for given numbers $\alpha_{0}^{\pm}\in\mathbb{R}$); $\boldsymbol{\alpha}:=(\alpha^{-},\alpha^{+})\in\mathbb{R}^{m^{-}+m^{+}}$ (for given coefficient vectors $\alpha^{\pm}\in\mathbb{R}^{m^{\pm}}$); and similarly for $\boldsymbol{\beta}_{0},\,\boldsymbol{\beta},\,\boldsymbol{\eta}$. We also define ${\boldsymbol{0}}:=(0,0)\in\mathbb{R}^{m^{-}+m^{+}}$. We may then write, for example, $\lambda_{k}(\boldsymbol{\alpha},\boldsymbol{\beta})$ to indicate the dependence of $\lambda_{k}$ on $(\boldsymbol{\alpha},\boldsymbol{\beta})$.
In most of the paper we will regard $(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0})$ as fixed, but at some points in the discussion it will be convenient to allow $(\boldsymbol{\alpha},\boldsymbol{\beta})$ to vary, so long as the conditions (1.3)-(1.5) continue to hold. To describe this we define the following sets, for any $(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0})\in\mathbb{R}^{4}$ satisfying (1.3) and (1.4): $${\mathcal{B}}(\alpha_{0}^{\pm},\beta_{0}^{\pm}):=\{(\alpha^{\pm},\beta^{\pm})\in\mathbb{R}^{2m^{\pm}}:\text{$(\alpha_{0}^{\pm},\beta_{0}^{\pm},\alpha^{\pm},\beta^{\pm})$ satisfies (1.5)}\},$$ $${\mathcal{B}}(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0}):=\{(\boldsymbol{\alpha},\boldsymbol{\beta})\in\mathbb{R}^{2(m^{-}+m^{+})}:\text{$(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0},\boldsymbol{\alpha},\boldsymbol{\beta})$ satisfies (1.5)}\}$$ (so ${\mathcal{B}}(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0})$ is isomorphic to ${\mathcal{B}}(\alpha_{0}^{-},\beta_{0}^{-})\times{\mathcal{B}}(\alpha_{0}^{+},\beta_{0}^{+})$); we also define the set $${\mathcal{B}}:=\{(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0},\boldsymbol{\alpha},\boldsymbol{\beta})\in\mathbb{R}^{2(2+m^{-}+m^{+})}:\text{(1.3)-(1.5) hold}\}.$$ At some points, when dealing with individual boundary conditions, it will be convenient to let $\nu$ denote one of the signs $\{\pm\}$, in which case, for a function $u$, the notation $u(\nu)$ will denote the value of $u$ at the corresponding end point $\pm 1$.

2. An operator realisation of the multi-point problem

For any integer $n\geqslant 0$, let $C^{n}[-1,1]$ denote the usual Banach space of $n$-times continuously differentiable functions on $[-1,1]$, with the usual sup-type norm, denoted by $|\cdot|_{n}$.
A suitable space in which to search for solutions of (1.1), incorporating the boundary conditions (1.2), is the space $$X:=\{u\in C^{2}[-1,1]:\text{$u$ satisfies (1.2)}\},\qquad\|u\|_{X}:=|u|_{2},\quad u\in X.$$ Letting $Y:=C^{0}[-1,1]$, with the norm $\|\cdot\|_{Y}:=|\cdot|_{0}$, we now define an operator $\Delta:X\to Y$ by $$\Delta u:=u^{\prime\prime},\quad u\in X.$$ By the definition of the spaces $X$, $Y$, the operator $\Delta$ is a well-defined, bounded, linear operator, and the eigenvalue problem (1.1)-(1.2) can be rewritten in the form $-\Delta(u)=\lambda u$, $u\in X$. We will consider the eigenvalue problem in Section 4.2 below; for now we consider the invertibility of $\Delta$. In the Neumann-type case (that is, when $\alpha_{0}^{\pm}=0$) it is clear that any constant function $c$ lies in $X$, and $\Delta c=0$, so $\Delta$ cannot be invertible. Thus, to obtain invertibility it is necessary to exclude the Neumann-type case. In view of the assumption (1.3), we can achieve this by imposing the further condition $$\alpha_{0}^{-}+\alpha_{0}^{+}>0.$$ (2.1) The following theorem shows that this condition is sufficient to ensure invertibility of $\Delta$. Theorem 2.1. Suppose that (1.3)-(1.5) and (2.1) hold. Then $\Delta:X\to Y$ has a bounded inverse. Proof. We will show that the equation $$\Delta u=h,\quad h\in Y,$$ (2.2) has a unique solution for all $h\in Y$. Following the proof of Theorem 3.1 in [13] (which considers Dirichlet-type conditions and constructs a solution of (2.2) via a compact integral operator) shows that it suffices to prove the uniqueness of the solutions of (2.2).
To prove this we observe that any solution $u_{0}$ of (2.2) with $h=0$ must have the form $u_{0}(x)=c_{0}+c_{1}x$, for some $(c_{0},c_{1})\in\mathbb{R}^{2}$, and substituting $u_{0}$ into the boundary conditions (1.2) yields the pair of equations $$c_{0}\Big{(}\alpha_{0}^{\pm}-\sum_{i=1}^{m^{\pm}}\alpha_{i}^{\pm}\Big{)}+c_{1}\Big{(}\beta_{0}^{\pm}-\sum_{i=1}^{m^{\pm}}\beta_{i}^{\pm}\pm\alpha_{0}^{\pm}-\sum_{i=1}^{m^{\pm}}\alpha_{i}^{\pm}\eta_{i}^{\pm}\Big{)}=0.$$ (2.3) It now follows from (1.3)-(1.5) that $$\alpha_{0}^{\pm}-\sum_{i=1}^{m^{\pm}}\alpha_{i}^{\pm}\geqslant 0,\quad\pm\Big{(}\beta_{0}^{\pm}-\sum_{i=1}^{m^{\pm}}\beta_{i}^{\pm}\pm\alpha_{0}^{\pm}-\sum_{i=1}^{m^{\pm}}\alpha_{i}^{\pm}\eta_{i}^{\pm}\Big{)}>0,$$ and it follows from (2.1) that at least one of the left hand inequalities here is strict. These sign properties now ensure that the determinant associated with the pair of equations (2.3) is non-zero, so that $(c_{0},c_{1})=(0,0)$ is the unique solution of (2.3). This proves the desired uniqueness result for (2.2), and hence proves the theorem. ∎ In applications, continuity properties of the inverse operator $\Delta^{-1}$ with respect to the various parameters in the problem are important. We will describe one such result — other such results could be obtained in a similar manner. Corollary 2.2. The operator $\Delta(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0},\boldsymbol{\alpha},\boldsymbol{\beta})^{-1}:Y\to C^{2}[-1,1]$ depends continuously on $(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0},\boldsymbol{\alpha},\boldsymbol{\beta})\in{\mathcal{B}}\setminus\{(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0},\boldsymbol{\alpha},\boldsymbol{\beta}):\alpha_{0}^{-}+\alpha_{0}^{+}=0\}$ $($with respect to the usual topology for bounded linear operators$)$. Proof.
The functions $\Phi^{\pm}$ in the construction of $\Delta(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0},\boldsymbol{\alpha},\boldsymbol{\beta})^{-1}$ in the proof of Theorem 2.1 in [13] are continuous with respect to $(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0},\boldsymbol{\alpha},\boldsymbol{\beta})$, so the result follows immediately from that proof. ∎ Remark 2.3. We have used the spaces $C^{n}[-1,1]$, $n=0,\,2,$ to define the operator $\Delta$, and Theorem 2.1 showed that the resulting operator is invertible. This is the function space setting that we will use here. However, one could also use a Sobolev space setting to define a similar operator as follows. For arbitrary fixed $q\geqslant 1$, let $$\widetilde{Y}:=L^{q}(-1,1),\qquad\widetilde{X}:=\{u\in W^{2,q}[-1,1]:\text{$u$ satisfies (1.2)}\}.$$ Then $\widetilde{\Delta}:\widetilde{X}\to\widetilde{Y}$ can be defined in the obvious manner, and a similar proof to that of Theorem 2.1 shows that $\widetilde{\Delta}$ is invertible.

3. Problems with a single boundary condition

In this section we consider the following problem with a single, multi-point boundary condition, $$-u^{\prime\prime}=\lambda u,\quad\text{on $\mathbb{R}$},$$ (3.1) $$\alpha_{0}u(\eta_{0})+\beta_{0}u^{\prime}(\eta_{0})=\sum_{i=1}^{m}\alpha_{i}u(\eta_{i})+\sum_{i=1}^{m}\beta_{i}u^{\prime}(\eta_{i}),$$ (3.2) where $m\geqslant 1$, $\alpha_{0},\beta_{0},\eta_{0}\in\mathbb{R}$, and $\alpha,\beta,\eta\in\mathbb{R}^{m}$. The conditions (1.3) and (1.5) have obvious analogues in the current setting, simply by omitting the superscripts $\pm$, which we will use without further comment, while the condition (1.4) has no analogue here and $\beta_{0}$ may have either sign. For any $(\alpha_{0},\beta_{0})$ satisfying (1.3) we let ${\mathcal{B}}(\alpha_{0},\beta_{0})$ denote the set of $(\alpha,\beta)\in\mathbb{R}^{2m}$ satisfying (1.5). Theorem 3.1.
Suppose that $(\alpha_{0},\beta_{0},\alpha,\beta)$ satisfies (1.3) and (1.5), and $\lambda\geqslant 0$. Then the set of solutions of (3.1), (3.2), is one-dimensional. Proof. If $\lambda=0$ then any solution of (3.1) has the form of $u_{0}$ used in the proof of Theorem 2.1, and substituting $u_{0}$ into (3.2) yields a linear equation relating the coefficients $c_{0},c_{1}$. A similar argument to the proof of Theorem 2.1 now shows that the set of solutions of this equation is one-dimensional. Now suppose that $\lambda>0$. For any $s>0$, $\theta\in\mathbb{R}$, we define $w(s,\theta)\in C^{1}(\mathbb{R})$ by $$w(s,\theta)(x):=\sin(sx+\theta),\quad x\in\mathbb{R}.$$ (3.3) Clearly, any solution of (3.1) must have the form $u=Cw(s,\theta)$, with $s=\lambda^{1/2}$ and suitable $C,\,\theta\in\mathbb{R}$. For the rest of this proof we regard $\theta$, $\alpha$, $\beta$ as variable, but all the other parameters and coefficients will be regarded as fixed and omitted from the notation when this is convenient. Defining $\Gamma:\mathbb{R}\times\mathbb{R}^{2m}\to\mathbb{R}$ by $$\Gamma(\theta,\alpha,\beta):=\alpha_{0}\sin(s\eta_{0}+\theta)+s\beta_{0}\cos(s\eta_{0}+\theta)-\sum_{i=1}^{m}\alpha_{i}\sin(s\eta_{i}+\theta)-s\sum_{i=1}^{m}\beta_{i}\cos(s\eta_{i}+\theta),$$ it is clear that $\Gamma$ is $C^{1}$, and substituting (3.3) into (3.2) shows that $w(s,\theta)$ satisfies (3.1), (3.2) if and only if $$\Gamma(\theta,\alpha,\beta)=0.$$ (3.4) Hence, it suffices to consider the set of solutions of (3.4).
Next, by definition, for any $(\alpha,\beta)\in\mathbb{R}^{2m}$ the function $\Gamma(\cdot,\alpha,\beta)$ is $\pi$-antiperiodic, so to prove the theorem it suffices to show that if $(\alpha,\beta)\in{\mathcal{B}}(\alpha_{0},\beta_{0})$ then $\Gamma(\cdot,\alpha,\beta)$ has exactly one zero in the interval $[0,\pi)$ (by $\pi$-antiperiodicity, other zeros of $\Gamma(\cdot,\alpha,\beta)$ do not contribute distinct solutions of (3.1), (3.2)). We will prove this by a continuation argument. We first observe that if $(\alpha,\beta)=(0,0)$ then $\Gamma(\cdot,0,0)$ has exactly 1 zero in $[0,\pi)$ and this zero is simple. To extend this property to $(\alpha,\beta)\neq(0,0)$ we will require the following lemma ($\Gamma_{\theta}$ will denote the partial derivative of $\Gamma$ with respect to $\theta$). Lemma 3.2. Suppose that $(\alpha_{0},\beta_{0},\alpha,\beta)$ satisfies (1.3) and (1.5), and $\lambda>0$. Then $$\Gamma(\theta,\alpha,\beta)=0\implies\Gamma_{\theta}(\theta,\alpha,\beta)\neq 0.$$ Proof. Suppose, on the contrary, that $$\Gamma(\theta,\alpha,\beta)=\Gamma_{\theta}(\theta,\alpha,\beta)=0,$$ (3.5) for some $\theta\in\mathbb{R}$ and $(\alpha,\beta)\in{\mathcal{B}}(\alpha_{0},\beta_{0})$. We now regard $(\theta,\alpha,\beta)$ as fixed, and write $$S(\eta):=\sin(s\eta+\theta),\quad C(\eta):=\cos(s\eta+\theta).$$ With this notation, equations (3.5) become $$\alpha_{0}S(\eta_{0})+s\beta_{0}C(\eta_{0})=\sum_{i=1}^{m}\big{(}\alpha_{i}S(\eta_{i})+s\beta_{i}C(\eta_{i})\big{)},$$ (3.6) $$\alpha_{0}C(\eta_{0})-s\beta_{0}S(\eta_{0})=\sum_{i=1}^{m}\big{(}\alpha_{i}C(\eta_{i})-s\beta_{i}S(\eta_{i})\big{)}.$$ (3.7) By (1.5) we can choose $b_{0}\in[0,\pi/2]$ such that, with $C_{b}:=\cos b_{0}$, $S_{b}:=\sin b_{0}$, $$\sum^{m}_{i=1}|\alpha_{i}|\leqslant C_{b}\alpha_{0},\quad\sum^{m}_{i=1}|\beta_{i}|\leqslant S_{b}|\beta_{0}|,$$ (3.8) with at least one strict inequality in (3.8).
Now suppose that $\beta_{0}\geqslant 0$. Elementary operations on (3.6), (3.7) now yield $$\displaystyle C_{b}\alpha_{0}+S_{b}s\beta_{0}$$ $$\displaystyle=\sum_{i=1}^{m}\alpha_{i}\Bigl{(}C_{b}(S(\eta_{0})S(\eta_{i})+C(\eta_{0})C(\eta_{i}))+S_{b}(C(\eta_{0})S(\eta_{i})-S(\eta_{0})C(\eta_{i}))\Bigr{)}$$ $$\displaystyle\quad+s\sum_{i=1}^{m}\beta_{i}\Bigl{(}C_{b}(S(\eta_{0})C(\eta_{i})-C(\eta_{0})S(\eta_{i}))+S_{b}(C(\eta_{0})C(\eta_{i})+S(\eta_{0})S(\eta_{i}))\Bigr{)}$$ $$\displaystyle=\sum_{i=1}^{m}\alpha_{i}\Bigl{(}C_{b}\cos s(\eta_{0}-\eta_{i})-S_{b}\sin s(\eta_{0}-\eta_{i})\Bigr{)}$$ $$\displaystyle\quad+s\sum_{i=1}^{m}\beta_{i}\Bigl{(}C_{b}\sin s(\eta_{0}-\eta_{i})+S_{b}\cos s(\eta_{0}-\eta_{i})\Bigr{)}$$ $$\displaystyle=\sum_{i=1}^{m}\alpha_{i}\cos\bigl{(}s(\eta_{0}-\eta_{i})+b_{0}\bigr{)}+s\sum_{i=1}^{m}\beta_{i}\sin\bigl{(}s(\eta_{0}-\eta_{i})+b_{0}\bigr{)}$$ $$\displaystyle\leqslant\sum_{i=1}^{m}|\alpha_{i}|+s\sum_{i=1}^{m}|\beta_{i}|<C_{b}\alpha_{0}+S_{b}s\beta_{0},$$ by (3.8). This contradiction shows that (3.5) cannot hold, and so proves the lemma, when $\beta_{0}\geqslant 0$. If $\beta_{0}<0$ then we simply replace $C_{b}\alpha_{0}+S_{b}s\beta_{0}$ with $C_{b}\alpha_{0}-S_{b}s\beta_{0}$ in the above calculation to obtain a similar contradiction, which completes the proof of Lemma 3.2. ∎ Now, since the set ${\mathcal{B}}(\alpha_{0},\beta_{0})$ is connected it follows from continuity, together with Lemma 3.2, the implicit function theorem and the $\pi$-antiperiodicity of $\Gamma(\cdot,\alpha,\beta)$, that $\Gamma(\cdot,\alpha,\beta)$ has exactly 1 (simple) zero in $[0,\pi)$ for all $(\alpha,\beta)\in{\mathcal{B}}(\alpha_{0},\beta_{0})$. This completes the proof of Theorem 3.1. ∎ For Dirichlet-type and Neumann-type problems, Theorem 3.1 was proved in [13] and [14], respectively. An adaptation of the proof of Lemma 3.2 also yields the following result, which will be crucial below. Lemma 3.3.
Suppose that $\lambda>0$ and $(\alpha_{0},\beta_{0},\alpha,\beta)$ satisfies (1.3) and (1.5). If $u$ is a non-trivial solution of (3.1), (3.2) then $$\lambda\beta_{0}u(\eta_{0})-\alpha_{0}u^{\prime}(\eta_{0})\neq 0.$$ (3.9) Proof. The argument is similar to the proof of Lemma 3.2, and we use the notation from there. In particular, we suppose that $u$ has the form of $w$ given in (3.3), so that (3.2) takes the form (3.6), and to obtain a contradiction we suppose that (3.9) fails, that is, with this form of $u$, $$s\beta_{0}S(\eta_{0})-\alpha_{0}C(\eta_{0})=0.$$ (3.10) Multiplying (3.6) by $S(\eta_{0})$ and $C(\eta_{0})$, and using (3.10), yields respectively $$\alpha_{0}=S(\eta_{0})\sum_{i=1}^{m}\big{(}\alpha_{i}S(\eta_{i})+\beta_{i}sC(\eta_{i})\big{)},\qquad s\beta_{0}=C(\eta_{0})\sum_{i=1}^{m}\big{(}\alpha_{i}S(\eta_{i})+\beta_{i}sC(\eta_{i})\big{)}.$$ If $\beta_{0}\geqslant 0$ then combining these equations and using (3.8) yields $$C_{b}\alpha_{0}+S_{b}s\beta_{0}=\big{(}C_{b}S(\eta_{0})+S_{b}C(\eta_{0})\big{)}\sum_{i=1}^{m}\big{(}\alpha_{i}S(\eta_{i})+\beta_{i}sC(\eta_{i})\big{)}<C_{b}\alpha_{0}+S_{b}s\beta_{0},$$ which is the desired contradiction in this case. If $\beta_{0}<0$ then we simply replace $C_{b}\alpha_{0}+S_{b}s\beta_{0}$ with $C_{b}\alpha_{0}-S_{b}s\beta_{0}$ in the preceding calculation to obtain a similar contradiction. This completes the proof of Lemma 3.3. ∎ We also have the following immediate application of Theorem 3.1 to the eigenvalue problem. Corollary 3.4. Suppose that $(\alpha_{0}^{\pm},\beta_{0}^{\pm},\alpha^{\pm},\beta^{\pm})$ satisfy (1.3) and (1.5). Then any eigenvalue $\lambda>0$ of (1.1), (1.2), has geometric multiplicity one.

3.1. Counter examples

The following example shows that if $\lambda<0$ then Theorem 3.1 need not hold. Example 3.5.
Consider (3.1) with $\lambda=-1$, together with the boundary condition $$u(-1)+u^{\prime}(-1)=\alpha_{1}u(0)+\beta_{2}u^{\prime}(1),$$ (3.11) that is, with $\alpha_{0}=\beta_{0}=1$, $\beta_{1}=\alpha_{2}=0$ and $\eta_{0}=-1$, $\eta_{1}=0$, $\eta_{2}=1$; we will choose $\alpha_{1}$ and $\beta_{2}$ below. The general solution of equation (3.1) is $u(x)=c_{+}e^{x}+c_{-}e^{-x}$, for arbitrary $(c_{+},c_{-})\in\mathbb{R}^{2}$, and substituting this solution into the boundary condition (3.11) yields the equation $$c_{+}(2-\alpha_{1}e-\beta_{2}e^{2})-c_{-}(\alpha_{1}e^{-1}-\beta_{2}e^{-2})=0.$$ (3.12) Now, setting $$\alpha_{1}=\frac{2}{e(e^{2}+1)},\quad\beta_{2}=\frac{2}{e^{2}+1},$$ we see that $(\alpha_{0},\beta_{0},\alpha,\beta)$ satisfies (1.5), and (3.12) holds for all $(c_{+},c_{-})\in\mathbb{R}^{2}$. Hence, the solution set of this boundary value problem is two-dimensional, and so Theorem 3.1 does not hold in this case. $\square$ Example 3.5 can be extended to the eigenvalue problem to show that Corollary 3.4 need not hold for negative eigenvalues. Example 3.6. Consider the multi-point eigenvalue problem consisting of equation (3.1) together with the pair of boundary conditions $$u(\pm 1)\mp u^{\prime}(\pm 1)=\alpha_{1}u(0)\mp\beta_{2}u^{\prime}(\mp 1),$$ (3.13) with $\alpha_{1}$ and $\beta_{2}$ as in Example 3.5. It can be verified (as in Example 3.5) that $\lambda=-1$ is an eigenvalue of this boundary value problem with geometric multiplicity two. Hence, Corollary 3.4 need not hold for negative eigenvalues. We observe that both sets of boundary condition coefficients in this problem satisfy (1.5), but of course the sign condition (1.4) does not hold (which allows the negative eigenvalue). $\square$ The final example in this section shows that if $\lambda<0$ then Theorem 3.1 need not hold, even with a Dirichlet-type boundary condition (that is, with $\beta_{0}=0$ and $\beta=0$).
However, this example is not relevant to the eigenvalue problem since negative eigenvalues do not occur with Dirichlet-type boundary conditions (also, in this example $\eta_{1}<\eta_{0}<\eta_{2}$, which is not consistent with the distribution of these points in the eigenvalue problem). Example 3.7. Consider (3.1) with $\lambda=-1$, together with the boundary condition $$u(0)=\frac{e(e^{2}-1)}{e^{4}-1}\big{(}u(-1)+u(1)\big{)}.$$ (3.14) It can be verified that (1.5) again holds, and for arbitrary $(c_{+},c_{-})\in\mathbb{R}^{2}$, the function $u(x)=c_{+}e^{x}+c_{-}e^{-x}$ satisfies both (3.1) and (3.14), that is, the solution set of this boundary value problem is again two-dimensional. $\square$

4. The structure of $\sigma$

In this section we discuss the structure of the spectrum of the multi-point eigenvalue problem (1.1)-(1.2), which we can rewrite as $$-\Delta(u)=\lambda u,\quad u\in X.$$ (4.1) We will show that $\sigma$ has the properties ($\sigma$-a)-($\sigma$-d) described in the introduction, that is, the multi-point spectrum has similar properties to the spectrum of the standard Sturm-Liouville problem with separated boundary conditions. In particular, we will obtain a characterisation of the eigenvalues in terms of an oscillation count of the corresponding eigenfunctions, as in the property ($\sigma$-d) in the introduction. The standard method of counting the oscillations of the eigenfunctions of separated problems is by counting the number of (nodal) zeros in the interval $(-1,1)$, and it is well known that this approach yields property ($\sigma$-d) in this case. Unfortunately, this need not be true for the multi-point boundary conditions. This was first observed in [12], in the case of a problem with a single-point Dirichlet condition at one end point and a multi-point Dirichlet-type condition at the other end point.
For such a problem it was shown that, for $k\geqslant 0$, if $u_{k}$ is an eigenfunction corresponding to $\lambda_{k}$ then $u_{k}$ could have either $k$ or $k+1$ zeros in $(-1,1)$, whereas $u_{k}^{\prime}$ has exactly $k+1$ zeros in $(-1,1)$ (these zeros of $u_{k}^{\prime}$ were termed ‘bumps’ in [12]). The results of [12] were then extended to a similar $p$-Laplacian problem in [4], and to a $p$-Laplacian problem with multi-point Dirichlet-type conditions at both end points in [13]. Thus, in the Dirichlet-type case, using nodal zeros to count the eigenfunction oscillations fails, and in fact the oscillations are best described by counting bumps (and by starting the enumeration of the eigenvalues/eigenfunctions at $k=1$, that is, the first eigenfunction has a single bump). However, it was then shown in [14] that counting bumps fails in the case of Neumann and mixed boundary conditions, and in fact in [12, 13, 14] a different oscillation counting procedure was adopted for each of these three types of boundary conditions, and each of these procedures could fail when applied to the other problems. To deal with the general Sturm-Liouville-type boundary conditions here we will use a Prüfer angle technique to characterise the oscillation count of the eigenfunctions. This technique will unify and extend the various types of oscillation count used previously in [12, 13, 14]. In view of this we begin with a preliminary section discussing a Prüfer angle method of defining an oscillation count for the multi-point problem. We then use this oscillation count to describe the multi-point spectrum.

4.1. Prüfer angles and oscillation count

The Prüfer angle is a standard technique in the theory of ordinary differential equations, although there are slight variations in the precise definitions and functions used. The basic formulation is described in [3, Chapter 8] (although the terminology ‘Prüfer angle’ is not used in [3]).
However, a more general formulation is described in [2, Section 2] (in a $p$-Laplacian context), together with some remarks about various ‘modified Prüfer angle’ formulations, and their history. In fact, we will adopt the form of the angle used in [2, Lemma 2.5], which was used earlier by Elbert (see Remark 4.7 below for the reason for our use of this formulation). We will then see that, in contrast to the separated case, the multi-point boundary conditions (1.2) do not determine the exact values of the Prüfer angle at the end points $\pm 1$, but instead they place bounds on these angles. We will give a full description of our constructions and results relating to the boundary conditions (1.2) but, for brevity, we will not describe the basic details of the Prüfer angle technique here but simply refer to [2] and [3]. Let ${C^{1}_{\rm s}}[-1,1]$ denote the set of functions $u\in C^{1}[-1,1]$ having only simple zeros (that is, $|u(x)|+|u^{\prime}(x)|>0$ for all $x\in[-1,1]$). For any $\lambda>0$ and $u\in{C^{1}_{\rm s}}[-1,1]$, we define a ‘modified’ Prüfer angle function $\omega_{(\lambda,u)}\in C^{0}[-1,1]$ by $$\omega_{(\lambda,u)}(-1)\in[0,\pi),\quad\omega_{(\lambda,u)}(x):=\tan^{-1}\frac{\lambda^{1/2}u(x)}{u^{\prime}(x)},\quad x\in[-1,1]$$ (4.2) (when $u^{\prime}(x)=0$ the value of $\omega_{(\lambda,u)}(x)$ is defined by continuity). We note that the standard Prüfer angle does not have the factor $\lambda^{1/2}$ in the definition. Geometrically, for each $x\in[-1,1]$ we can regard $\omega_{(\lambda,u)}(x)$ as the angle between the vectors $(u^{\prime}(x),\lambda^{1/2}u(x))$ and $(1,0)$ in $\mathbb{R}^{2}$, defined to vary continuously with respect to $x$ (so $\omega_{(\lambda,u)}(x)$ need not lie within $[0,\pi/2]$, or even within $[0,2\pi]$). Clearly, if $u$ is a non-trivial solution of the differential equation (1.1) then $u\in{C^{1}_{\rm s}}[-1,1]$, so $\omega_{(\lambda,u)}$ is well defined.
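Numerically, the angle in (4.2) can be tracked along $[-1,1]$ with `atan2` plus unwrapping, with the starting value normalised into $[0,\pi)$. The sketch below is our own illustration (the names and grid size are ours, not the paper's); for $u(x)=\sin(s(x+1))$ with $\lambda=s^{2}$ the vector $(u^{\prime}(x),\lambda^{1/2}u(x))=s(\cos s(x+1),\sin s(x+1))$, so the tracked angle should increase linearly from $0$ at $x=-1$ to $2s$ at $x=1$:

```python
import math

def prufer_angle(u, du, lam, n=4000):
    """Continuous 'modified' Prufer angle (4.2): the angle of the vector
    (u'(x), sqrt(lam)*u(x)) against (1, 0), with the value at x = -1
    normalised into [0, pi) and then continued continuously (it is NOT
    reduced mod 2*pi along the way).  Returns (omega(-1), omega(1))."""
    s = math.sqrt(lam)
    xs = [-1 + 2 * i / n for i in range(n + 1)]
    prev = math.atan2(s * u(xs[0]), du(xs[0]))
    start = prev % math.pi            # representative of omega(-1) in [0, pi)
    offset = start - prev
    omega = start
    for x in xs[1:]:
        cur = math.atan2(s * u(x), du(x))
        # atan2 jumps by about 2*pi when the vector crosses the negative
        # u'-axis; adjust the offset so the tracked angle stays continuous
        while cur + offset < omega - math.pi:
            offset += 2 * math.pi
        while cur + offset > omega + math.pi:
            offset -= 2 * math.pi
        omega = cur + offset
    return start, omega

# Angle of u(x) = sin(s(x+1)) with s = 3*pi/2: it grows from 0 to 2s = 3*pi,
# i.e. the angle passes three multiples of pi, matching the three zeros of
# u in the half-open interval (-1, 1].
s = 1.5 * math.pi
w_minus, w_plus = prufer_angle(lambda x: math.sin(s * (x + 1)),
                               lambda x: s * math.cos(s * (x + 1)),
                               s * s)
print(round(w_minus, 6), round(w_plus / math.pi, 6))   # -> 0.0 3.0
```

This continuous tracking is what makes the end-point values $\omega_{(\lambda,u)}(\pm 1)$ meaningful as an oscillation count in the arguments below.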
From now on we suppose that (1.3) and (1.4) hold, and we also define the angles $$\displaystyle\omega_{\lambda,0}^{-}:=-\tan^{-1}\frac{\lambda^{1/2}\beta_{0}^{-% }}{\alpha_{0}^{-}}\in[0,\pi/2],\qquad\omega_{\lambda,0}^{+}:=-\tan^{-1}\frac{% \lambda^{1/2}\beta_{0}^{+}}{\alpha_{0}^{+}}\in[\pi/2,\pi],$$ where the permissible ranges chosen here for the values of $\omega_{\lambda,0}^{\pm}$ are consistent with the sign conditions (1.4). Geometrically, $\omega_{\lambda,0}^{\pm}$ are the angles between the vectors $(\alpha_{0}^{\pm},-\lambda^{1/2}\beta_{0}^{\pm})$ and $(1,0)$. 4.1.1. Suppose that $(\boldsymbol{\alpha},\boldsymbol{\beta})=({\boldsymbol{0}},{\boldsymbol{0}})$. In this case the boundary conditions (1.2) reduce to the separated conditions $$\lambda^{-1/2}(u^{\prime}(\pm 1),\lambda^{1/2}u(\pm 1)).(\lambda^{1/2}\beta_{0% }^{\pm},\alpha_{0}^{\pm})=\alpha_{0}^{\pm}u(\pm 1)+\beta_{0}^{\pm}u^{\prime}(% \pm 1)=0$$ (4.3) (where the left hand side is the usual dot product of the vectors). That is, a function $u\in C^{1}[-1,1]$ satisfies (1.2) if and only if $$(u^{\prime}(\pm 1),\lambda^{1/2}u(\pm 1))$$ is perpendicular to $$(\lambda^{1/2}\beta_{0}^{\pm},\alpha_{0}^{\pm})$$. (4.4) Since the vectors $(\alpha_{0}^{\pm},-\lambda^{1/2}\beta_{0}^{\pm})$ and $(\lambda^{1/2}\beta_{0}^{\pm},\alpha_{0}^{\pm})$ are perpendicular, we see that $u$ satisfies (4.4) if and only if $$(u^{\prime}(\pm 1),\lambda^{1/2}u(\pm 1))$$ is parallel to $$(\alpha_{0}^{\pm},-\lambda^{1/2}\beta_{0}^{\pm})$$, (4.5) which is equivalent to $$\omega_{(\lambda,u)}(\pm 1)=\omega_{\lambda,0}^{\pm}\ (\rm{mod}\ \pi).$$ (4.6) Standard Sturm-Liouville theory for the separated boundary conditions (4.3) now yields the following properties of the spectrum, see Theorem 2.1 in [3, Chapter 8] (and the proof of this theorem). Theorem 4.1. Suppose that $(\boldsymbol{\alpha},\boldsymbol{\beta})=({\boldsymbol{0}},{\boldsymbol{0}})$. 
Then $\sigma$ consists of a strictly increasing sequence of real eigenvalues $\lambda_{k}^{\boldsymbol{0}}\geqslant 0$, $k=0,1,\dots.$ For each $k\geqslant 0$: (a) $\lambda_{k}^{\boldsymbol{0}}$ has geometric multiplicity one; (b) $\lambda_{k}^{\boldsymbol{0}}$ has an eigenfunction $u_{k}^{\boldsymbol{0}}$ whose Prüfer angle $\omega_{k}^{\boldsymbol{0}}:=\omega_{(\lambda_{k}^{\boldsymbol{0}},u_{k}^{\boldsymbol{0}})}$ satisfies $$\omega_{k}^{\boldsymbol{0}}(-1)=\omega_{\lambda,0}^{-},\quad\omega_{k}^{\boldsymbol{0}}(1)=\omega_{\lambda,0}^{+}+k\pi.$$ (4.7)

Remark 4.2. By definition, for any $u\in{C^{1}_{\rm s}}[-1,1]$, $$u(x)=0\iff\omega_{(\lambda,u)}(x)=0\ ({\rm mod}\ \pi),\qquad u^{\prime}(x)=0\iff\omega_{(\lambda,u)}(x)=\frac{\pi}{2}\ ({\rm mod}\ \pi).$$ In addition, it can be verified that if $u$ satisfies the differential equation (1.1), with $\lambda>0$, then $$u(x)u^{\prime}(x)=0\implies\omega^{\prime}_{(\lambda,u)}(x)>0,$$ so it follows from (4.7) that, for all $k\geqslant 0$, the eigenfunction $u_{k}^{\boldsymbol{0}}$ has exactly $k$ zeros in the interval $(-1,1)$; this is the usual ‘oscillation count’ for the standard, separated, Sturm-Liouville problem. Thus the oscillation count of the eigenfunctions of the separated problem can be described by the Prüfer angle, and this count is encapsulated in (4.7).

4.1.2. Suppose that $({\boldsymbol{0}},{\boldsymbol{0}})\neq(\boldsymbol{\alpha},\boldsymbol{\beta})\in{\mathcal{B}}(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0})$. In this case the eigenfunctions need not satisfy (4.5)-(4.7); to provide a replacement for these formulae we first prove the following lemma.

Lemma 4.3. Suppose that $u$ is an eigenfunction, with eigenvalue $\lambda>0$. Then $$\omega_{(\lambda,u)}(\pm 1)-\omega_{\lambda,0}^{\pm}\neq\frac{\pi}{2}\ ({\rm mod}\ \pi).$$ (4.8) Proof.
It follows from the definitions of $\omega_{(\lambda,u)}$ and the angles $\omega_{\lambda,0}^{\pm}$ that $$\omega_{(\lambda,u)}(\pm 1)-\omega_{\lambda,0}^{\pm}=\frac{\pi}{2}\ ({\rm mod}% \,\pi)\iff\lambda\beta_{0}^{\pm}u(\pm 1)-\alpha_{0}^{\pm}u^{\prime}(\pm 1)=0,$$ so the result follows from Lemma 3.3 (by putting $\eta_{0}=\pm 1$, etc.). ∎ The geometrical interpretation of (4.8) is: $$(u^{\prime}(\pm 1),\lambda^{1/2}u(\pm 1))$$ is not perpendicular to $$(\alpha_{0}^{\pm},-\lambda^{1/2}\beta_{0}^{\pm})$$. (4.9) Thus we see that going from separated to multi-point boundary conditions has relaxed the ‘strictly parallel’ condition (4.5), holding in the separated case, to the ‘not perpendicular’ condition (4.9), holding in the multi-point case. Motivated by Theorem 4.1 and Lemma 4.3, we introduce some further notation. Definition 4.4. For $k\geqslant 0$, $P_{k}^{+}$ will denote the set of $(\lambda,u)\in(0,\infty)\times{C^{1}_{\rm s}}[-1,1]$ for which the Prüfer angle $\omega_{(\lambda,u)}$ satisfies $$|\omega_{(\lambda,u)}(-1)-\omega_{\lambda,0}^{-}|<\pi/2,\quad|\omega_{(\lambda% ,u)}(1)-\omega_{\lambda,0}^{+}-k\pi|<\pi/2;$$ (4.10) also, $P_{k}^{-}:=-P_{k}^{+}$ and $P_{k}:=P_{k}^{-}\cup P_{k}^{+}$. The sets $P_{k}^{\pm}$, $k\geqslant 0$, are open, disjoint subsets of $(0,\infty)\times C^{1}[-1,1]$, and they will be used to count eigenfunction oscillations in Theorem 4.8 below. In fact, the results of Theorem 4.8 below will demonstrate that, for general $(\boldsymbol{\alpha},\boldsymbol{\beta})\neq({\boldsymbol{0}},{\boldsymbol{0}})$, the conditions (4.8) and (4.10) are suitable replacements for conditions (4.6) and (4.7) respectively. As a preliminary to this we observe that the above definitions, together with Corollary 3.4 and Lemma 4.3 yield the following result. Corollary 4.5. Suppose that $u$ is an eigenfunction, with eigenvalue $\lambda>0$. 
Then$:$ (a) $\lambda$ has geometric multiplicity $1;$ (b) $(\lambda,u)\not\in\partial P_{l}$, for any $l\geqslant 0;$ (c) there exists $k\geqslant 0$ such that $(\lambda,u)\in P_{k}$. Motivated by Corollary 4.5 we define the sets $$\sigma_{k}:=\{\lambda\in\sigma:\text{for any eigenfunction $u$ of $\lambda$, $% (\lambda,u)\in P_{k}$}\},\quad k\geqslant 0.$$ By Corollary 4.5, we have $\sigma=\cup_{k\geqslant 0}\,\sigma_{k}$. Remark 4.6. In [13] and [14] certain subsets of ${C^{1}_{\rm s}}[-1,1]$, denoted $T_{k}$ and $S_{k}$, were used to count oscillations in the Dirichlet-type and Neumann-type cases respectively. It follows from the results in Remark 4.2 and the definitions of $T_{k}$ and $S_{k}$ in [13, Section 2.2] and [14, Section 2.2] that, for each integer $k\geqslant 0$: • Neumann-type case:  $\omega_{\lambda,0}^{\pm}=\frac{\pi}{2}$ and $(\lambda,u)\in P_{k}\implies\text{$u$ has exactly $k$ zeros in $(-1,1)$ and $u\in S_{k}$;}$ • Dirichlet-type case:  $\omega_{\lambda,0}^{-}=0$,  $\omega_{\lambda,0}^{+}=\pi$ and $(\lambda,u)\in P_{k}\implies\text{$u^{\prime}$ has exactly $k+1$ zeros in $(-1,1)$ and $u\in T_{k+1}$.}$ Hence, in the Dirichlet-type and Neumann-type cases respectively, the sets $P_{k}$ used here are analogous to the sets $(0,\infty)\times T_{k+1}$ and $(0,\infty)\times S_{k}$, and we see that using the sets $P_{k}$ to count the eigenfunction oscillations extends the oscillation counting methods used in the above special cases to the general Sturm-Liouville-type boundary conditions considered here. Remark 4.7. The above constructions depended on (3.9), via Lemma 4.3, and the occurrence of the term $\lambda^{1/2}$ in (3.9) dictated that the term $\lambda^{1/2}$ should appear in the definition of the Prüfer angle. This is why we have used the ‘modified’ Prüfer angle here. 4.2. 
The structure of $\sigma$

We can now prove the following theorem for general $(\boldsymbol{\alpha},\boldsymbol{\beta})$, which extends Theorem 4.1 to the general multi-point Sturm-Liouville problem.

Theorem 4.8. Suppose that (1.3)-(1.5) hold. Then $\sigma$ consists of a strictly increasing sequence of real eigenvalues $\lambda_{k}\geqslant 0$, $k=0,1,\dots,$ such that $\lim_{k\to\infty}\lambda_{k}=\infty.$ For each $k\geqslant 0$: (a) $\lambda_{k}$ has geometric multiplicity $1;$ (b) $\lambda_{k}$ has an eigenfunction $u_{k}$ such that $(\lambda_{k},u_{k})\in P_{k}^{+}$. In the Neumann-type case $\lambda_{0}=0$, while if (2.1) holds then $\lambda_{0}>0$.

Proof. We will prove a series of results regarding the eigenvalues and eigenfunctions, which culminate in the proof of the theorem. The fact that the eigenvalues have geometric multiplicity 1 has already been proved in Corollary 3.4.

Lemma 4.9. If $\lambda$ is an eigenvalue then $\lambda\geqslant 0$. If (2.1) holds then $\lambda>0$. Proof. Suppose that $\lambda<0$ and define $s:=\sqrt{-\lambda}$. Then any eigenfunction $u$ has the form $u(x)=c_{+}e^{sx}+c_{-}e^{-sx}$, for some $(c_{+},c_{-})\in\mathbb{R}^{2}$, and we see from this that $\max|u|$ and $\max|u^{\prime}|$ must both be attained at the same end point, say at $x=1$. Hence, $u(1)$ and $u^{\prime}(1)$ have the same sign. By (1.4), $\beta_{0}^{+}\geqslant 0$, so by (1.2) and (1.5), $$\begin{split}\alpha_{0}^{+}|u|_{0}+\beta_{0}^{+}|u^{\prime}|_{0}&=|\alpha_{0}^{+}u(1)+\beta_{0}^{+}u^{\prime}(1)|\\ &\leqslant|u|_{0}\sum_{i=1}^{m^{+}}|\alpha^{+}_{i}|+|u^{\prime}|_{0}\sum_{i=1}^{m^{+}}|\beta^{+}_{i}|\\ &<\alpha^{+}_{0}|u|_{0}+\beta^{+}_{0}|u^{\prime}|_{0},\end{split}$$ and this contradiction proves the first part of the lemma. Next, if (2.1) holds then it follows from Theorem 2.1 that $\lambda\neq 0$, which completes the proof. ∎ Remark 4.10.
It is well known that if the sign conditions (1.4) do not hold then Lemma 4.9 need not be true, even in the separated case. For example, if $$\alpha_{0}^{\pm}=\pm\epsilon,\quad\boldsymbol{\alpha}={\boldsymbol{0}},\qquad\beta_{0}^{\pm}=\pm 1,\quad\boldsymbol{\beta}={\boldsymbol{0}},$$ with $\epsilon>0$, then $u(x)=e^{-\epsilon x}$ satisfies the boundary conditions, and so is an eigenfunction with negative eigenvalue $\lambda=-\epsilon^{2}$. The properties of the spectrum in the Neumann-type case have been proved in [14], so from now on in the proof we will suppose that (2.1) holds. Thus, by Theorem 2.1 and Lemma 4.9, if $\lambda$ is an eigenvalue with eigenfunction $u$, then $\lambda>0$ and we may suppose that $\lambda=s^{2},$ $u=w(s,\theta)$, for suitable $s>0$, $\theta\in\mathbb{R}$ (up to a scaling of the eigenfunction), where $w(s,\theta)$ was defined in (3.3). Defining functions $\Gamma^{\pm}:(0,\infty)\times\mathbb{R}\times\mathbb{R}^{2(m^{-}+m^{+})}\to\mathbb{R}$ by $$\begin{split}\Gamma^{\pm}(s,\theta,\alpha^{\pm},\beta^{\pm})&:=\alpha_{0}^{\pm}\sin(\pm s+\theta)+s\beta_{0}^{\pm}\cos(\pm s+\theta)\,-\\ &\quad-\sum^{m^{\pm}}_{i=1}\alpha_{i}^{\pm}\sin(s\eta_{i}^{\pm}+\theta)-s\sum^{m^{\pm}}_{i=1}\beta_{i}^{\pm}\cos(s\eta_{i}^{\pm}+\theta),\end{split}$$ and substituting $w(s,\theta)$ into (1.2) shows that $\lambda=s^{2}$ is an eigenvalue iff the pair of equations $$\Gamma^{\pm}(s,\theta,\alpha^{\pm},\beta^{\pm})=0$$ (4.11) holds, for some $\theta\in\mathbb{R}$. Hence, it suffices to consider the set of solutions of (4.11). We will now prove Theorem 4.8 by continuation with respect to $(\boldsymbol{\alpha},\boldsymbol{\beta})$, away from $(\boldsymbol{\alpha},\boldsymbol{\beta})=({\boldsymbol{0}},{\boldsymbol{0}})$, where the required information on the solutions of (4.11) follows from the standard theory of the separated problem in Theorem 4.1. For reference, we state this in the following lemma.

Lemma 4.11. Suppose that $(\boldsymbol{\alpha},\boldsymbol{\beta})=({\boldsymbol{0}},{\boldsymbol{0}})$.
For each $k=0,1,\dots,$ if we write $s_{k}^{\boldsymbol{0}}:=(\lambda_{k}^{\boldsymbol{0}})^{1/2}$ $($where $\lambda_{k}^{\boldsymbol{0}}$ is as in Theorem 4.1$)$, then there exists a unique $\theta_{k}^{\boldsymbol{0}}\in[0,\pi)$ such that $(s_{k}^{\boldsymbol{0}},\theta_{k}^{\boldsymbol{0}})$ satisfies (4.11). Of course, by the periodicity properties of $\Gamma^{\pm}$ with respect to $\theta$, there are other solutions of (4.11) (with $(\boldsymbol{\alpha},\boldsymbol{\beta})=({\boldsymbol{0}},{\boldsymbol{0}})$) than those in Lemma 4.11, but these do not yield distinct solutions of the eigenvalue problem (4.1). In fact, to remove these extra solutions and to reduce the domain of $\theta$ to a compact set, from now on we will regard $\theta$ as lying in the circle obtained from the interval $[0,2\pi]$ by identifying the points $0$ and $2\pi$, which we denote by $S^{1}$, and we regard the domain of the functions $\Gamma^{\pm}$ as $(0,\infty)\times S^{1}\times{\mathcal{B}}(\alpha_{0}^{\pm},\beta_{0}^{\pm})$. We now consider (4.11) when $(\boldsymbol{\alpha},\boldsymbol{\beta})\neq({\boldsymbol{0}},{\boldsymbol{0}})$. The following lemma provides some information on the signs of the partial derivatives $\Gamma^{\nu}_{s}$, $\Gamma^{\nu}_{\theta}$ at the zeros of $\Gamma^{\nu}$.

Lemma 4.12. Suppose that $\nu\in\{\pm\}$ and $(\alpha^{\nu},\beta^{\nu})\in{\mathcal{B}}(\alpha_{0}^{\nu},\beta_{0}^{\nu})$. Then $$\Gamma^{\nu}(s,\theta,\alpha^{\nu},\beta^{\nu})=0\implies\nu\,\Gamma^{\nu}_{s}(s,\theta,\alpha^{\nu},\beta^{\nu})\,\Gamma^{\nu}_{\theta}(s,\theta,\alpha^{\nu},\beta^{\nu})>0.$$ (4.12) Proof.
By a similar proof to that of Lemma 3.2 it can be shown that $$\Gamma^{\nu}(s,\theta,\alpha^{\nu},\beta^{\nu})=0\implies\Gamma^{\nu}_{s}(s,% \theta,\alpha^{\nu},\beta^{\nu})\,\Gamma^{\nu}_{\theta}(s,\theta,\alpha^{\nu},% \beta^{\nu})\neq 0.$$ (4.13) We now regard $(s,\theta,\alpha^{\nu},\beta^{\nu})$ as fixed, and consider the equation $$G(\widetilde{\theta},t):=\Gamma^{\nu}(s,\widetilde{\theta},t\alpha^{\nu},t% \beta^{\nu})=0,\quad(\widetilde{\theta},\ t)\in S^{1}\times[0,1].$$ (4.14) It is clear that if $t\in[0,1]$ then $(t\alpha^{\nu},t\beta^{\nu})\in{\mathcal{B}}(\alpha_{0}^{\nu},\beta_{0}^{\nu})$, so by (4.13), $$G(\theta,1)=0\quad\text{and}\quad G(\widetilde{\theta},t)=0\implies G_{% \widetilde{\theta}}(\widetilde{\theta},t)\neq 0.$$ (4.15) Hence, by (4.15), the implicit function theorem, and the compactness of $S^{1}$, there exists a $C^{1}$ solution function $t\to\widetilde{\theta}(t):[0,1]\to S^{1},$ for (4.14) such that $$\widetilde{\theta}(1)=\theta,\quad\Gamma^{\nu}(s,\widetilde{\theta}(t),t\alpha% ^{\nu},t\beta^{\nu})=0,\quad t\in[0,1]$$ (the local existence of this solution function, near $t=1$, is trivial; standard arguments show that its domain can be extended to include the interval $[0,1]$ — see the proof of part (b) of Lemma 4.13 below for a similar argument). Next, by the definition of $\Gamma^{\nu}$, (4.12) holds at $(s,\widetilde{\theta}(0),0,0)$ and hence, by (4.13) and continuity, (4.12) holds at $(s,\widetilde{\theta}(t),t\alpha^{\nu},t\beta^{\nu})$ for all $t\in[0,1]$. In particular, putting $t=1$ shows that (4.12) holds at $(s,\theta,\alpha^{\nu},\beta^{\nu})$, which completes the proof of Lemma 4.12. ∎ We now return to the pair of equations (4.11). 
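It may help to see the pair (4.11) in the simplest concrete setting. The sketch below (an illustration only) takes the separated Dirichlet conditions $\alpha_{0}^{\pm}=1$, $\beta_{0}^{\pm}=0$, $(\boldsymbol{\alpha},\boldsymbol{\beta})=({\boldsymbol{0}},{\boldsymbol{0}})$, and checks that the classical Dirichlet eigenvalues $\lambda_{k}=((k+1)\pi/2)^{2}$ on $(-1,1)$ arise precisely as solutions $(s_{k},\theta_{k})$ of $\Gamma^{\pm}=0$, with the eigenfunction $w(s,\theta)(x)=\sin(sx+\theta)$:

```python
import math

# Separated Dirichlet case: alpha_0^+- = 1, beta_0^+- = 0, (alpha, beta) = (0, 0),
# so Gamma^-(s, theta) = sin(-s + theta) and Gamma^+(s, theta) = sin(s + theta).
def gamma_minus(s, theta):
    return math.sin(-s + theta)

def gamma_plus(s, theta):
    return math.sin(s + theta)

for k in range(10):
    s_k = (k + 1) * math.pi / 2          # Dirichlet: lam_k = ((k+1)*pi/2)**2
    theta_k = s_k % math.pi
    # (s_k, theta_k) solves the pair (4.11) ...
    assert abs(gamma_minus(s_k, theta_k)) < 1e-12
    assert abs(gamma_plus(s_k, theta_k)) < 1e-12
    # ... and the eigenfunction w(s_k, theta_k)(x) = sin(s_k*x + theta_k)
    # has exactly k zeros in the open interval (-1, 1), as in Remark 4.2.
    zeros = [(m * math.pi - theta_k) / s_k for m in range(-20, 21)]
    assert sum(1 for z in zeros if -1 + 1e-9 < z < 1 - 1e-9) == k
```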
To solve these using the implicit function theorem we define the Jacobian determinant $$J(s,\theta,\boldsymbol{\alpha},\boldsymbol{\beta}):=\begin{vmatrix}\Gamma^{-}_{s}(s,\theta,\alpha^{-},\beta^{-})&\Gamma^{-}_{\theta}(s,\theta,\alpha^{-},\beta^{-})\\ \Gamma^{+}_{s}(s,\theta,\alpha^{+},\beta^{+})&\Gamma^{+}_{\theta}(s,\theta,\alpha^{+},\beta^{+})\end{vmatrix},$$ for $(s,\theta,\boldsymbol{\alpha},\boldsymbol{\beta})\in(0,\infty)\times S^{1}\times{\mathcal{B}}(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0}).$ It follows from the sign properties of $\Gamma^{\pm}_{s},\ \Gamma^{\pm}_{\theta}$ proved in Lemma 4.12 that $$\Gamma^{+}(s,\theta,\alpha^{+},\beta^{+})=\Gamma^{-}(s,\theta,\alpha^{-},\beta^{-})=0\implies J(s,\theta,\boldsymbol{\alpha},\boldsymbol{\beta})\neq 0,$$ (4.16) and hence we can solve (4.11) for $(s,\theta)$, as functions of $(\boldsymbol{\alpha},\boldsymbol{\beta})$, in a neighbourhood of an arbitrary solution of (4.11). Now suppose that $(s,\theta,\boldsymbol{\alpha},\boldsymbol{\beta})\in(0,\infty)\times S^{1}\times{\mathcal{B}}(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0})$ is an arbitrary (fixed) solution of (4.11). By (4.16) and the implicit function theorem there exists a maximal open interval $\widetilde{I}$ containing $1$ and a $C^{1}$ solution function $$t\to(\widetilde{s}(t),\widetilde{\theta}(t)):\widetilde{I}\to(0,\infty)\times S^{1},$$ such that $$(\widetilde{s}(1),\widetilde{\theta}(1))=(s,\theta),\quad\Gamma^{\pm}(\widetilde{s}(t),\widetilde{\theta}(t),t\alpha^{\pm},t\beta^{\pm})=0,\quad t\in\widetilde{I}.$$ Furthermore, by Corollary 4.5 and continuity, there exists an integer $\widetilde{k}\geqslant 0$ such that $$(\widetilde{s}(t)^{2},w(\widetilde{s}(t),\widetilde{\theta}(t)))\in P_{\widetilde{k}},\quad t\in\widetilde{I}.$$ (4.17)

Lemma 4.13. $(a)$ There exist constants $C$, $\delta>0$ such that $\delta\leqslant\widetilde{s}(t)\leqslant C$, $t\in\widetilde{I};$ $(b)$ $0\in\widetilde{I}$. Proof.
(a)  From the form of $w(s,\theta)$, there exists $C>0$ such that if $s\geqslant C$ then $(s^{2},w(s,\theta))\not\in P_{\widetilde{k}}$, for any $\theta\in S^{1}$. Hence, by (4.17), $\widetilde{s}(t)\leqslant C$ for any $t\in\widetilde{I}$. Now suppose that the lower bound $\delta>0$ does not exist, so that we may choose a sequence $t_{n}\in\widetilde{I}$, $n=1,2,\dots,$ with $\widetilde{s}(t_{n})\to 0.$ Writing $\widetilde{s}_{n}:=\widetilde{s}(t_{n})$, $\widetilde{\theta}_{n}:=\widetilde{\theta}(t_{n})$ and $\widetilde{w}_{n}:=w(\widetilde{s}_{n},\widetilde{\theta}_{n})$, $n=1,2,\dots,$ it is clear that, as $n\to\infty$, $|\widetilde{w}^{\prime}_{n}|_{0}={\rm O}(\widetilde{s}_{n})$ and $|\widetilde{w}_{n}-c_{\infty}|_{0}\to 0$, for some constant $c_{\infty}$ (after taking a subsequence if necessary, and regarding $c_{\infty}$ as an element of $C^{0}[-1,1]$). We now consider various cases. Suppose that $c_{\infty}\neq 0$. By (2.1), $\alpha_{0}^{\nu}\neq 0$ for some $\nu\in\{\pm\}$, and the corresponding boundary condition (1.2) yields $$0=\alpha_{0}^{\nu}\widetilde{w}_{n}(\nu)-\sum^{m^{\nu}}_{i=1}\alpha^{\nu}_{i}\widetilde{w}_{n}(\eta^{\nu}_{i})+{\rm O}(\widetilde{s}_{n})\to c_{\infty}\Big(\alpha_{0}^{\nu}-\sum^{m^{\nu}}_{i=1}\alpha^{\nu}_{i}\Big),$$ which contradicts (1.5), and so proves the existence of $\delta>0$ in this case. Now suppose that $c_{\infty}=0$. Without loss of generality we also suppose that $\widetilde{\theta}_{n}\searrow 0$ (after taking a subsequence if necessary) and so, for all $n$ sufficiently large, $|\widetilde{w}_{n}|_{0}$ is attained at the end point $x=1$. Suppose that $\alpha_{0}^{+}\neq 0$.
By the definition of $\widetilde{w}_{n}$, we obtain from (1.2) $$\displaystyle\widetilde{s}_{n}\Big{(}\alpha_{0}^{+}-\sum^{m^{+}}_{i=1}\alpha^{% +}_{i}\eta^{+}_{i}+\beta_{0}^{+}-\sum^{m^{+}}_{i=1}\beta^{+}_{i}\Big{)}+% \widetilde{\theta}_{n}\Big{(}\alpha_{0}^{+}-\sum^{m^{+}}_{i=1}\alpha^{+}_{i}% \Big{)}={\rm O}(\widetilde{s}_{n}^{3}+\widetilde{\theta}_{n}^{3}),$$ but, by (1.3)-(1.5), the terms in the brackets on the left hand side are strictly positive, so this is contradictory when $n$ is sufficiently large. Suppose that $\alpha_{0}^{+}=0$, and so $\beta_{0}^{+}>0$ (by (1.3), (1.4)). Dividing (1.2) by $\widetilde{s}_{n}$ and letting $n\to\infty$ yields $$0=s_{n}^{-1}\Big{(}\beta_{0}^{+}\widetilde{w}_{n}^{\prime}(1)-\sum^{m^{+}}_{i=% 1}\beta^{+}_{i}\widetilde{w}_{n}^{\prime}(\eta^{+}_{i})\Big{)}\to\beta_{0}^{+}% -\sum^{m^{+}}_{i=1}\beta^{+}_{i}>0,$$ by (1.5), which is again contradictory. This completes the proof of part (a) of Lemma 4.13. (b)  Suppose that $0\not\in\widetilde{I}$, and let $\hat{t}=\inf\{t\in\widetilde{I}\}\geqslant 0$. By part (a) of the lemma, there exists a sequence $t_{n}\in\widetilde{I}$, $n=1,2,\dots,$ and a point $(\hat{s},\hat{\theta})\in(0,\infty)\times S^{1}$, such that $$\lim_{n\to\infty}t_{n}=\hat{t},\quad\lim_{n\to\infty}(\widetilde{s}(t_{n}),% \widetilde{\theta}(t_{n}))=(\hat{s},\hat{\theta}).$$ Clearly, the point $(\hat{s},\hat{\theta},\hat{t}\boldsymbol{\alpha},\hat{t}\boldsymbol{\beta})$ satisfies (4.11) so, by the above results, the solution function $(\widetilde{s},\widetilde{\theta})$ extends to an open neighbourhood of $\hat{t}$, which contradicts the choice of $\hat{t}$ and the maximality of the interval $\widetilde{I}$. 
∎ For any given $(\boldsymbol{\alpha},\boldsymbol{\beta})\in{\mathcal{B}}(\boldsymbol{\alpha}_{% 0},\boldsymbol{\beta}_{0})$ the above arguments have shown that: (a) any solution $(s,\theta,\boldsymbol{\alpha},\boldsymbol{\beta})\in(0,\infty)\times S^{1}% \times{\mathcal{B}}(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0})$ of (4.11) can be continuously connected to exactly one of the solutions $\{(s_{k}^{\boldsymbol{0}},\theta_{k}^{\boldsymbol{0}},{\boldsymbol{0}},{% \boldsymbol{0}}):k\geqslant 0\}$. Similar arguments show that: (b) any solution $\{(s_{k}^{\boldsymbol{0}},\theta_{k}^{\boldsymbol{0}},{\boldsymbol{0}},{% \boldsymbol{0}}):k\geqslant 0\}$ can be continuously connected to exactly one solution, say $(s_{k}(\boldsymbol{\alpha},\boldsymbol{\beta}),\theta_{k}(\boldsymbol{\alpha},% \boldsymbol{\beta}),\boldsymbol{\alpha},\boldsymbol{\beta})\in(0,\infty)\times S% ^{1}\times{\mathcal{B}}(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0})$, of (4.11). Hence, for each $k\geqslant 0$, we obtain the eigenvalue and eigenfunction $$(\lambda_{k}(\boldsymbol{\alpha},\boldsymbol{\beta}),u_{k}(\boldsymbol{\alpha}% ,\boldsymbol{\beta})):=(s_{k}(\boldsymbol{\alpha},\boldsymbol{\beta})^{2},w(s_% {k}(\boldsymbol{\alpha},\boldsymbol{\beta}),\theta_{k}(\boldsymbol{\alpha},% \boldsymbol{\beta})))\in P_{k},$$ and we see that there is no eigenvalue $\widetilde{\lambda}\neq\lambda_{k}(\boldsymbol{\alpha},\boldsymbol{\beta})$, with eigenfunction $\widetilde{u}$, for which $(\widetilde{\lambda},\widetilde{u})\in P_{k}$. 
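The continuation scheme used above can also be carried out numerically. The following sketch is a toy example, not taken from the text: it perturbs the separated Dirichlet problem by a single interior term with hypothetical coefficient $\alpha_{1}^{+}=0.3$ at the node $\eta_{1}^{+}=0$ (small enough for (1.5) to hold), and tracks the solution $(\widetilde{s}(t),\widetilde{\theta}(t))$ of $\Gamma^{\pm}=0$ from $t=0$ to $t=1$ by Newton's method, assuming $w(s,\theta)(x)=\sin(sx+\theta)$:

```python
import numpy as np

A1, ETA1 = 0.3, 0.0    # hypothetical interior coefficient alpha_1^+ and node eta_1^+

def F(v, t):
    """The pair (Gamma^-, Gamma^+), with the multi-point term scaled by t."""
    s, th = v
    return np.array([np.sin(-s + th),
                     np.sin(s + th) - t * A1 * np.sin(s * ETA1 + th)])

def J(v, t):
    """Jacobian of F with respect to (s, theta)."""
    s, th = v
    return np.array([
        [-np.cos(-s + th), np.cos(-s + th)],
        [np.cos(s + th) - t * A1 * ETA1 * np.cos(s * ETA1 + th),
         np.cos(s + th) - t * A1 * np.cos(s * ETA1 + th)],
    ])

v = np.array([np.pi / 2, np.pi / 2])   # separated (t = 0) solution for k = 0
for t in np.linspace(0.0, 1.0, 51):    # continuation in t
    for _ in range(20):                # Newton iterations at this t
        v = v - np.linalg.solve(J(v, t), F(v, t))
s1, th1 = v

assert np.max(np.abs(F(v, 1.0))) < 1e-10   # on the solution curve at t = 1
assert np.abs(np.sin(th1 - s1)) < 1e-10    # still on the branch theta = s
assert abs(2 * np.cos(s1) - A1) < 1e-8     # here Gamma^+ = sin(s)*(2*cos(s) - A1)
assert s1 < np.pi / 2                      # the principal value s has moved down
```

At $t=1$ the solution has moved from $s=\pi/2$ to the root of $2\cos s=\alpha_{1}^{+}$, so in this toy example the multi-point perturbation lowers $\lambda_{0}=s^{2}$.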
Next, by Theorem 4.1, $s_{k}^{\boldsymbol{0}}=s_{k}^{\boldsymbol{0}}({\boldsymbol{0}},{\boldsymbol{0}% })<s_{k+1}^{\boldsymbol{0}}=s_{k+1}^{\boldsymbol{0}}({\boldsymbol{0}},{% \boldsymbol{0}})$ and by Theorem 3.1, $s_{k}(\boldsymbol{\alpha},\boldsymbol{\beta})\neq s_{k+1}(\boldsymbol{\alpha},% \boldsymbol{\beta})$ for any $(\boldsymbol{\alpha},\boldsymbol{\beta})\in{\mathcal{B}}(\boldsymbol{\alpha}_{% 0},\boldsymbol{\beta}_{0})$, so it follows from the continuation construction that $s_{k}(\boldsymbol{\alpha},\boldsymbol{\beta})<s_{k+1}(\boldsymbol{\alpha},% \boldsymbol{\beta})$ for all $(\boldsymbol{\alpha},\boldsymbol{\beta})\in{\mathcal{B}}(\boldsymbol{\alpha}_{% 0},\boldsymbol{\beta}_{0})$. Finally, for fixed $(\boldsymbol{\alpha},\boldsymbol{\beta})$, the fact that $(\lambda_{k}(\boldsymbol{\alpha},\boldsymbol{\beta}),u_{k}(\boldsymbol{\alpha}% ,\boldsymbol{\beta}))\in P_{k}$, for $k\geqslant 1$, shows that as $k\to\infty$ the oscillation count tends to $\infty$, so by standard properties of the differential equation (1.1) we must have $\lim_{k\to\infty}\lambda_{k}=\infty$. This concludes the proof of Theorem 4.8. ∎ The implicit function theorem construction of $\lambda_{k}$ and $u_{k}$ in the proof of Theorem 4.8 also imply continuity properties which will be useful below, so we state these in the following corollary (continuity of $u_{k}$ will be in the space $C^{0}[-1,1]$, although stronger results could easily be obtained). Corollary 4.14. For each $k\geqslant 0$, $\lambda_{k}\in\mathbb{R}$ and $u_{k}\in C^{0}[-1,1]$ depend continuously on $(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0},\boldsymbol{\alpha},% \boldsymbol{\beta},\boldsymbol{\eta})\in{\mathcal{B}}\times(-1,1]^{m^{-}}% \times[-1,1)^{m^{+}}$. 4.3. Positivity of the principal eigenfunction In many applications it is important to know that the principal eigenfunction $u_{0}$ is positive. Thus we will now consider conditions which ensure this is true. Theorem 4.15. 
Suppose that (1.3)-(1.5) hold, and $\alpha^{\pm}\geqslant 0$. Then$:$ $(a)$ $u_{0}>0$ on $(-1,1);$ $(b)$ if $\beta_{0}^{\nu}\neq 0$, for some $\nu\in\{\pm\}$, then $u_{0}(\nu)>0$. Proof. By standard Sturm-Liouville theory the result is true when $(\boldsymbol{\alpha},\boldsymbol{\beta})=({\boldsymbol{0}},{\boldsymbol{0}})$ (part (a) is standard and (b) follows immediately since, under the stated hypotheses, $u_{0}(\nu)=0\Rightarrow u_{0}^{\prime}(\nu)=0$, and an eigenfunction cannot have a double zero). Now suppose that both $\beta_{0}^{\pm}\neq 0$. If the result fails then, by using a limiting argument in the construction of the eigenvalues by continuation from $(\boldsymbol{\alpha},\boldsymbol{\beta})=({\boldsymbol{0}},{\boldsymbol{0}})$ in the proof of Theorem 4.8, we can show that there exists some $(\boldsymbol{\alpha},\boldsymbol{\beta})\in{\mathcal{B}}(\boldsymbol{\alpha}_{% 0},\boldsymbol{\beta}_{0})$, with $\alpha^{\pm}\geqslant 0$, such that the principal eigenfunction $u_{0}(\boldsymbol{\alpha},\boldsymbol{\beta})\geqslant 0$ satisfies: (1) $u_{0}(\boldsymbol{\alpha},\boldsymbol{\beta})>0$ on $(-1,1)$  (since $u_{0}(\boldsymbol{\alpha},\boldsymbol{\beta})$ cannot have a double zero); (2) $u_{0}(\boldsymbol{\alpha},\boldsymbol{\beta})(\nu)=0$, and hence $|u_{0}^{\prime}(\boldsymbol{\alpha},\boldsymbol{\beta})(\nu)|=|u_{0}^{\prime}(% \boldsymbol{\alpha},\boldsymbol{\beta})|_{0}$, for some $\nu\in\{\pm\}$. 
Now, by (1.2)-(1.5), $$\begin{split}0&=\beta_{0}^{\nu}u_{0}^{\prime}(\boldsymbol{\alpha},\boldsymbol{\beta})(\nu)-\sum_{i=1}^{m^{\nu}}\alpha_{i}^{\nu}u_{0}(\boldsymbol{\alpha},\boldsymbol{\beta})(\eta_{i}^{\nu})-\sum_{i=1}^{m^{\nu}}\beta_{i}^{\nu}u_{0}^{\prime}(\boldsymbol{\alpha},\boldsymbol{\beta})(\eta_{i}^{\nu})\\ &\leqslant-|u_{0}^{\prime}(\boldsymbol{\alpha},\boldsymbol{\beta})|_{0}\Big(|\beta_{0}^{\nu}|-\sum_{i=1}^{m^{\nu}}|\beta_{i}^{\nu}|\Big)<0,\end{split}$$ and this contradiction shows that this case cannot occur. Next, suppose that one, or both, of $\beta_{0}^{\pm}=0$. We replace the coefficients $\beta_{0}^{\pm}$ by $\beta_{0}^{\pm}\pm 1/n$, $n=1,2,\dots,$ and then let $n\to\infty$. By the result just proved, each of the corresponding principal eigenfunctions, say $u_{0,n}\geqslant 0$, have the properties (a) and (b), and so by Corollary 4.14 the limiting eigenfunction, say $u_{0,\infty}\geqslant 0$, satisfies (a), and we can now prove that $u_{0,\infty}$ satisfies (b) by the same calculation as before. ∎

4.4. Algebraic multiplicity

Throughout this section we will suppose that (2.1) holds so that, by Theorem 2.1, $\Delta$ has an inverse operator $\Delta^{-1}:Y\to X$ (see Remark 4.18 below for some comments on the Neumann-type case, when (2.1) does not hold). We can also regard this inverse as an operator $\Delta^{-1}:Y\to Y$, which we will denote as $\Delta^{-1}_{Y}$. Since $X$ is compactly embedded into $Y$, $\Delta^{-1}_{Y}$ is compact (indeed, this compactness together with the fact that $\Delta^{-1}_{Y}$ maps $Y$ into itself is the motivation for introducing $\Delta^{-1}_{Y}$). Now, the eigenvalue problem (4.1) is equivalent to the equation $$(I_{Y}+\lambda\Delta^{-1}_{Y})u=0,\quad u\in Y,$$ (4.18) where $I_{Y}$ denotes the identity on $Y$. Hence, each eigenvalue $\lambda_{k}$, $k=0,1,\dots,$ can be regarded as a characteristic value of $-\Delta^{-1}_{Y}$.
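The characteristic-value formulation (4.18) can be seen concretely by discretisation. The sketch below is an illustration only, and treats the separated Dirichlet case (the multi-point conditions would modify the boundary rows of the matrix): the eigenvalues of the finite-difference matrix approximate $\lambda_{k}=((k+1)\pi/2)^{2}$ and are simple.

```python
import numpy as np

# Finite-difference sketch of the separated Dirichlet case (illustration only).
# Discretise -u'' = lam*u on (-1,1), u(-1) = u(1) = 0, on an interior grid of
# n points; the tridiagonal matrix A approximates -Delta.
n = 500
h = 2.0 / (n + 1)
main = np.full(n, 2.0 / h**2)
off = np.full(n - 1, -1.0 / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Eigenvalues of A are exactly the characteristic values of -A^{-1}, as in (4.18).
lams = np.sort(np.linalg.eigvalsh(A))
exact = np.array([((k + 1) * np.pi / 2) ** 2 for k in range(5)])

assert np.max(np.abs(lams[:5] - exact) / exact) < 1e-3   # lam_k = ((k+1)*pi/2)**2
assert np.all(np.diff(lams[:5]) > 0)                     # simple (distinct) eigenvalues
```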
As usual, we define the algebraic multiplicity of the characteristic value $\lambda_{k}$ to be $$\dim\bigcup_{j=1}^{\infty}N((I_{Y}+\lambda_{k}\Delta^{-1})^{j})$$ (where $N$ denotes null-space). Lemma 4.16. For each $k\geqslant 0$ the algebraic multiplicity of the characteristic value $\lambda_{k}$ of $-\Delta^{-1}_{Y}$ is equal to 1. Proof. The proof is again by continuation with respect to $(\boldsymbol{\alpha},\boldsymbol{\beta})$, so we now write $\Delta^{-1}_{Y}(\boldsymbol{\alpha},\boldsymbol{\beta})$ and $\lambda_{k}(\boldsymbol{\alpha},\boldsymbol{\beta})$, for $(\boldsymbol{\alpha},\boldsymbol{\beta})\in{\mathcal{B}}(\boldsymbol{\alpha}_{% 0},\boldsymbol{\beta}_{0})$. When $(\boldsymbol{\alpha},\boldsymbol{\beta})=(0,0)$ it is easy to see that the algebraic multiplicity of $\lambda_{k}(0,0)$ is equal to 1 (this case corresponds to the standard Sturm-Liouville problem). Next, it was shown in Corollaries 2.2 and 4.14 that $\Delta^{-1}_{Y}(\boldsymbol{\alpha},\boldsymbol{\beta})$ and $\lambda_{k}(\boldsymbol{\alpha},\boldsymbol{\beta})$ depend continuously on $(\boldsymbol{\alpha},\boldsymbol{\beta})$, and Theorem 4.8 shows that as $(\boldsymbol{\alpha},\boldsymbol{\beta})$ varies over ${\mathcal{B}}(\boldsymbol{\alpha}_{0},\boldsymbol{\beta}_{0})$, eigenvalues with different $k$ never meet. Hence, by the results in [8, Ch. 2, Sec. 5], the algebraic multiplicity of $\lambda_{k}(\boldsymbol{\alpha},\boldsymbol{\beta})$ is constant for $(\boldsymbol{\alpha},\boldsymbol{\beta})\in{\mathcal{B}}(\boldsymbol{\alpha}_{% 0},\boldsymbol{\beta}_{0})$ (the discussion in [8, Ch. 2, Sec. 5] is in finite dimensions but, as noted there, the results extend to bounded operators in infinite dimensions). This proves the result. ∎ Remark 4.17. 
In the case $\alpha^{-}=0$, $\alpha^{+}>0$, $\beta^{\pm}=0$, Lemma 4.16 was proved directly in [17, Lemma 2.6] and [12, Lemma 3.8] (that is, without relying on perturbation theory for linear operators), but it seems to be difficult to extend this proof to the general case. This result was extended to general Dirichlet-type and Neumann-type problems in [13] and [14] respectively. Remark 4.18. For simplicity we have excluded the Neumann-type case from this section, since in this case the operator $\Delta$ does not have an inverse. Of course, one could consider the operator $\Delta-\mu I_{Y}$, with $\mu>0$; it can be shown that this operator has an inverse, which is compact (as a mapping into $Y$), that is, $\Delta$ has compact resolvent. We could then obtain similar results to those above. However, this would entail considerable additional notational complexity, and the Neumann-type case was treated in detail in [14], so we will simply omit this case here. 4.5. Counter examples In this section we will show that Theorem 4.8 need not be true if (1.5) does not hold, and that the condition (1.5) is, in some sense, optimal for the validity of Theorem 4.8. In fact, for the Dirichlet-type problem, it was shown in [12, Examples 3.5, 3.6] that if $\sum_{i=1}^{m^{\pm}}|\alpha_{i}^{\pm}|=\alpha_{0}^{\pm}$ then we may have an eigenvalue/eigenfunction pair $(\lambda,u)\in\partial P_{k}$, for some $k$ (in the present notation) while if $\sum_{i=1}^{m^{\pm}}|\alpha_{i}^{\pm}|>\alpha_{0}^{\pm}$ then we may have $\sigma_{k}=\mbox{\Large\o}$ for a finite, but arbitrarily large, set of integers $k$, that is, the corresponding eigenvalues $\lambda_{k}$ may be ‘missing’ from the sequence of eigenvalues constructed in Theorem 4.8. Similar examples were constructed for the Neumann-type case in [14, Examples 4.17, 4.18]. These examples show that condition (1.5) is optimal in the cases where one or other of the fractions on the left hand side of (1.5) is absent. 
Thus it seems of interest to also show that (1.5) is optimal when both fractions are present. The following example will do this when these fractions are nonzero and equal to each other. More precisely, in this case we will show that if the number $1$ on the right hand side of (1.5) is increased by an arbitrarily small amount then Theorem 4.8 need not hold, and arbitrarily many eigenvalues may be ‘missing’. For notational simplicity we will consider the problem on the interval $(0,1)$, with a standard Dirichlet condition at $x=0$, and the following multi-point condition at $x=1$ $$\alpha_{0}u(1)+\beta_{0}u^{\prime}(1)=\alpha_{1}u(\eta_{1})-\beta_{2}u^{\prime}(\eta_{2}).$$ (4.19) For any eigenvalue $\lambda=s^{2}>0$ the corresponding eigenfunction must have the form $C\sin sx$, $C\in\mathbb{R}$. Hence, defining $\Gamma:\mathbb{R}\to\mathbb{R}$ by $$\Gamma(s):=\alpha_{0}\sin s+s\beta_{0}\cos s-\alpha_{1}\sin(s\eta_{1})+s\beta_{2}\cos(s\eta_{2}),\quad s\in\mathbb{R},$$ it is clear that $\lambda=s^{2}$ is an eigenvalue iff $\Gamma(s)=0$, and also, for any integer $k\geqslant 0$, $\lambda\in\sigma_{k}\implies s\in[(k-2)\pi,(k+2)\pi]$. To construct our counter example we will show that with a suitable choice of the coefficients in the boundary condition (4.19) there exists a ‘long’ interval $I$ such that if $s\in I$ then $\Gamma(s)\neq 0$, that is, $s^{2}$ cannot be an eigenvalue. This will show that $\sigma_{k}=\emptyset$ for a range of values of $k$.
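The coefficient choices made below can also be checked numerically before following the estimates. A sketch with the concrete value $k_{0}=200$ (any sufficiently large $k_{0}$ behaves in the same way) confirms that $\Gamma<0$ on the whole of $I_{k_{0}}$:

```python
import numpy as np

k0 = 200                      # concrete 'large' k_0; epsilon = 10/k_0 = 0.05
eps = 10.0 / k0
a0, b0 = 1.0, 1.0 / (k0 * np.pi)
a1 = (1 + eps) / np.sqrt(2.0)
b2 = a1 / (k0 * np.pi)
eta1, eta2 = 1.0 / (2 * k0), 1.0 / k0

def Gamma(s):
    return (a0 * np.sin(s) + s * b0 * np.cos(s)
            - a1 * np.sin(s * eta1) + s * b2 * np.cos(s * eta2))

# Gamma stays negative on the whole interval I_{k0} = [(k0-10)pi, (k0+10)pi],
# so no eigenvalue lam = s**2 has s in I_{k0}.
s = np.linspace((k0 - 10) * np.pi, (k0 + 10) * np.pi, 400001)
assert np.max(Gamma(s)) < 0.0
```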
Choose a ‘large’ integer $k_{0}$ (we will be more specific below), and set $$\epsilon=\frac{10}{k_{0}},\quad s(\gamma)=(1+\gamma\epsilon)k_{0}\pi,\quad\gamma\in[-1,1].$$ Hence, as $\gamma$ varies over the interval $[-1,1]$, the number $s(\gamma)$ varies over the interval $$I_{k_{0}}:=[(k_{0}-10)\pi,(k_{0}+10)\pi].$$ We also set $$\alpha_{0}=1,\quad\beta_{0}=\frac{1}{k_{0}\pi},\quad\alpha_{1}=\frac{1+\epsilon}{\sqrt{2}},\quad\beta_{2}=\frac{1}{k_{0}\pi}\,\frac{1+\epsilon}{\sqrt{2}},\quad\eta_{1}=\frac{1}{2k_{0}},\quad\eta_{2}=\frac{1}{k_{0}}.$$ Simple estimates now show that if $\epsilon$ is sufficiently small (that is, if $k_{0}$ is sufficiently large) then, for $\gamma\in[-1,1]$, $$\begin{aligned}\Gamma(s(\gamma))&\leqslant\sqrt{2}+\epsilon-\frac{1+\epsilon}{\sqrt{2}}\Big(\sin\frac{\pi}{2}(1+\epsilon)-(1+\epsilon)\cos\pi(1+\epsilon)\Big)\\ &\leqslant\sqrt{2}+\epsilon-\sqrt{2}(1+\epsilon)(1-\epsilon/14)\\ &<\epsilon\Big(1-\frac{13\sqrt{2}}{14}+{\rm O}(\epsilon)\Big)\\ &<0.\end{aligned}$$ This shows that there is no eigenvalue $\lambda=s^{2}$ with $s\in I_{k_{0}}$, that is, $\sigma_{k}=\mbox{\Large\o}$ if $k\in[k_{0}-7,k_{0}+7]$. Clearly, there is nothing special about the number 10 in this example, so in fact we could construct an example for which $\sigma_{k}=\mbox{\Large\o}$ for an arbitrarily long succession of integers $k$. Also, since $$\frac{\alpha_{1}}{\alpha_{0}}=\frac{\beta_{2}}{\beta_{0}}=\frac{1+\epsilon}{\sqrt{2}},$$ and $\epsilon$ is arbitrarily small, we see that if the number $1$ in condition (1.5) is increased by an arbitrarily small amount then Theorem 4.8 need not hold. References [1] C. Bai, J. Fang, Existence of multiple positive solutions for nonlinear $m$-point boundary value problems, J. Math. Anal. 
Appl. 281 (2003), 76–85. [2] P. Binding, P. Drábek, Sturm-Liouville theory for the $p$-Laplacian, Studia Sci. Math. Hungar. 40 (2003), 375–396. [3] E. A. Coddington, N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York (1955). [4] N. Dodds, B. P. Rynne, Spectral properties and nodal solutions for second-order, $m$-point, $p$-Laplacian boundary value problems, Topol. Methods Nonlinear Anal. 32 (2008), 21–40. [5] F. Genoud, B. P. Rynne, Some recent results on the spectrum of multi-point eigenvalue problems for the $p$-Laplacian, to appear in Commun. Appl. Anal. [6] M. García-Huidobro, Ch. P. Gupta, R. Manásevich, Some multipoint boundary value problems of Neumann-Dirichlet type involving a multipoint $p$-Laplace like operator, J. Math. Anal. Appl. 333 (2007), 247–264. [7] C. P. Gupta, A non-resonant generalized multi-point boundary-value problem of Dirichlet type involving a $p$-Laplacian type operator, Proceedings of the Sixth Mississippi State–UBA Conference on Differential Equations and Computational Simulations, 127–139, Electron. J. Differ. Equ. Conf., 15, Southwest Texas State Univ., San Marcos, TX, 2007. [8] T. Kato, Perturbation Theory for Linear Operators, Springer, 1984. [9] Y. Liu, Non-homogeneous boundary-value problems of higher order differential equations with $p$-Laplacian, Electron. J. Differential Equations 2008, No. 22. [10] R. Ma, D. O’Regan, Nodal solutions for second-order $m$-point boundary value problems with nonlinearities across several eigenvalues, Nonlinear Anal. 64 (2006), 1562–1577. [11] P. H. Rabinowitz, Some global results for nonlinear eigenvalue problems, J. Funct. Analysis 7 (1971), 487–513. [12] B. P. Rynne, Spectral properties and nodal solutions for second-order, $m$-point, boundary value problems, Nonlinear Analysis 67 (2007), 3318–3327. [13] B. P. Rynne, Spectral properties of second-order, multi-point, $p$-Laplacian boundary value problems, Nonlinear Analysis 72 (2010), 4244–4253. [14] B. P. 
Rynne, Spectral properties of $p$-Laplacian problems with Neumann and mixed-type multi-point boundary conditions, Nonlinear Analysis 74 (2010), 1471–1484. [15] J. R. L. Webb, G. Infante, Positive solutions of nonlocal boundary value problems: a unified approach, J. London Math. Soc. 74 (2006), 673–693. [16] J. R. L. Webb, K. Q. Lan, Eigenvalue criteria for existence of multiple positive solutions of nonlinear boundary value problems of local and nonlocal type, Topol. Methods Nonlinear Anal. 27 (2006), 91–115. [17] X. Xu, Multiple sign-changing solutions for some m-point boundary-value problems, Electron. J. Differential Equations 89 (2004).
Low-mass doubly-charged Higgs bosons at LHC Saiyad Ashanujjaman [email protected] Institute of Physics, Bhubaneswar, Sachivalaya Marg, Sainik School, Bhubaneswar 751005, India Department of Physics, SGTB Khalsa College, Delhi 110007, India Department of Physics and Astrophysics, University of Delhi, Delhi 110007, India    Kirtiman Ghosh [email protected] Institute of Physics, Bhubaneswar, Sachivalaya Marg, Sainik School, Bhubaneswar 751005, India Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, India    Rameswar Sahu [email protected] Institute of Physics, Bhubaneswar, Sachivalaya Marg, Sainik School, Bhubaneswar 751005, India Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, India Abstract Searching for light (within the mass range 84–200 GeV) doubly-charged Higgs bosons decaying into a pair of $W$-bosons has been deemed challenging using the conventional LHC searches with leptons, jets and missing transverse momentum in the final state. Such Higgses, together with slightly heavier singly-charged and neutral Higgses, when arranged in an $SU(2)_{L}$ triplet as in the type-II see-saw model, have lately been shown to accommodate the recent measurement of the $W$-boson mass by the CDF collaboration. These, when produced in a highly Lorentz-boosted regime, tend to manifest themselves as a single fat-jet or a pair of adjacent same-sign leptons plus missing transverse momentum. First, we perform a multivariate analysis to discern such exotic jets from the SM jets. Then, we present a novel search in the final state with an exotic jet and two same-sign leptons plus missing transverse momentum. We find that such low-mass doubly-charged Higgses could be directly probed with the already collected Run 2 LHC data. 
I Introduction Despite being remarkably successful in understanding particle physics phenomenology, the Standard Model (SM) in its present form lacks a mass term for the neutrinos. However, a trivial Dirac mass term for the neutrinos can be effectuated by dint of the usual Higgs mechanism by introducing right-handed neutrinos to the SM. Although plausible, this warrants philosophical displeasure as it calls for diminutive Yukawa couplings. Conversely, a well-founded remedy to this menace is offered by the so-called see-saw mechanism, wherein a lepton number violating New Physics beyond the SM is invoked at an a priori unknown scale—presumably away from both the electroweak (EW) scale and the Planck scale, so that on integrating out the heavy fields, the SM neutrinos are left with the observed sub-eV masses after the EW symmetry breaking. Numerous models of varying complexity and testability at colliders have been proposed over the last few decades. The type-II see-saw model [1, 2, 3, 4, 5, 6], a UV completion of the Weinberg operator at the tree level [7, 8], extending the SM with an $SU(2)_{L}$ triplet scalar field with hypercharge $Y=1$, is arguably the most widely-studied variant [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62]. For one, the flavour structure of the Yukawa coupling driving the leptonic decays of the triplet-like scalars turns out to be governed by the neutrino oscillation data up to the scalar triplet VEV. Moreover, the presence of the doubly-charged scalars ($H^{\pm\pm}$) and their characteristic decays to a pair of same-sign leptons ($\ell^{\pm}\ell^{\pm}$) or $W$-bosons offer interesting ways to probe them directly at the current and near-future experiments. 
The experimental collaborations have carried out several searches for $H^{\pm\pm}$ [63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74], and non-observation of any significant excess over the SM expectations has led to stringent limits on them. For $H^{\pm\pm}$ decaying into $\ell^{\pm}\ell^{\pm}$, the ATLAS collaboration has set a lower limit of 1020 GeV assuming equal branching fractions across modes [74]. This search considers only light leptons in the final states, and is thus not sensitive to $H^{\pm\pm}$ decaying into $\tau^{\pm}\tau^{\pm}$. The CMS collaboration has set a lower limit of 535 GeV on such scalars [68]. For $H^{\pm\pm}$ decaying into $W^{\pm}W^{\pm}$, the ATLAS collaboration has excluded them within the mass range 200–350 GeV considering their Drell-Yan pair production [73]. An orderly re-interpretation of this search considering all possible Drell-Yan production modes for the triplet-like scalars results in an improved exclusion range of 200–400 GeV [60]. Moreover, a re-interpretation of the ATLAS same-sign dilepton search in Ref. [65] has derived an exclusion limit of 84 GeV [38]. In a nutshell, $H^{\pm\pm}$ decaying into $WW^{(*)}$ are still allowed in the 84–200 GeV mass window. In this mass window, the type-II see-saw model predicts a cross-section from 1.5 pb to 65 fb for $pp\to H^{++}H^{--}$ at the 13 TeV LHC. Despite a sizeable cross-section, searching for such an $H^{\pm\pm}$ using the conventional LHC searches with leptons, jets, and missing transverse momentum in the final state has been challenging, and the CMS and ATLAS collaborations have so far turned a blind eye to this mass window. Presumably, for one, the eventual decay products tend to be not so hard and are likely to be drowned in the LHC environment owing to the inherent towering EW and QCD backgrounds. Moreover, ineludible contamination from the SM resonances makes the state of affairs worse. To the extent of our knowledge, the only notable effort in probing this mass window was made in Ref. [75]. 
Lately, Refs. [76, 77, 78, 79] have demonstrated that the recently reported measurement of the $W$-boson mass by the CDF experiment [80], which substantially differs from the global EW fit [81], can be explained within the type-II see-saw model, which then predicts such low-mass $H^{\pm\pm}$ and slightly heavier singly-charged and neutral scalars. Therefore, it is paramount to look for such $H^{\pm\pm}$ at the LHC. In this work, we present a novel search strategy for such $H^{\pm\pm}$. We consider their pair production in a highly Lorentz-boosted regime such that they are produced back-to-back with large transverse momenta, each manifesting itself as a single fat-jet or a pair of adjacent same-sign leptons plus missing transverse momentum. Obviously, this would reduce the signal cross-section significantly. However, should we be able to discern such exotic jets from the SM jets, a final state with such a jet and two same-sign leptons plus missing transverse momentum would have the compensating advantage of reducing the SM background more aggressively, thereby ameliorating the signal-to-background ratio. Keeping that in mind, first, we perform a multivariate analysis incorporating the jet mass, jet charge, $N$-subjettiness, etc. variables as inputs to the boosted decision tree (BDT) classifier to discern such exotic jets (dubbed $H^{\pm\pm}$-jets hereafter) from the SM jets. Then, we perform a search in the final state with an $H^{\pm\pm}$-jet and two same-sign leptons plus missing transverse momentum. The rest of this work is structured as follows. In Section II, we briefly discuss the doubly-charged Higgses in the type-II see-saw model. We perform a detailed collider analysis in Section III. Finally, we summarise in Section IV. 
II The doubly-charged Higgses In the type-II see-saw model, the SM is augmented with an $SU(2)_{L}$ triplet scalar field with hypercharge $Y=1$ $$\Delta=\begin{pmatrix}\Delta^{+}/\sqrt{2}&\Delta^{++}\\ \Delta^{0}&-\Delta^{+}/\sqrt{2}\end{pmatrix}.$$ The scalar potential involving $\Delta$ and the SM Higgs doublet $\Phi=\begin{pmatrix}\Phi^{+}\!&\!\Phi^{0}\end{pmatrix}^{T}$ is given by $$\begin{aligned}V(\Phi,\Delta)&=-m_{\Phi}^{2}{\Phi^{\dagger}\Phi}+\frac{\lambda}{4}(\Phi^{\dagger}\Phi)^{2}+m_{\Delta}^{2}{\rm Tr}(\Delta^{\dagger}{\Delta})\\ &\quad+[\mu(\Phi^{T}{i}\sigma^{2}\Delta^{\dagger}\Phi)+{\rm h.c.}]+\lambda_{1}(\Phi^{\dagger}\Phi){\rm Tr}(\Delta^{\dagger}{\Delta})\\ &\quad+\lambda_{2}[{\rm Tr}(\Delta^{\dagger}{\Delta})]^{2}+\lambda_{3}{\rm Tr}[(\Delta^{\dagger}{\Delta})^{2}]+\lambda_{4}{\Phi^{\dagger}\Delta\Delta^{\dagger}\Phi},\end{aligned}$$ where $m_{\Phi}^{2},m_{\Delta}^{2}$ and $\mu$ are the mass parameters, $\lambda$ and $\lambda_{i}$ ($i\!=\!1,\dots,4$) are the dimensionless quartic couplings, and $\sigma^{2}$ is one of the Pauli matrices. The neutral components $\Phi^{0}$ and $\Delta^{0}$ acquire respective VEVs $v_{d}$ and $v_{t}$ such that $\sqrt{v_{d}^{2}+2v_{t}^{2}}=246$ GeV. For detailed discussions of the main dynamical features of the scalar potential, see Refs. [24, 28, 31, 44]. After the EW symmetry is broken, the degrees of freedom carrying identical electric charges mix, thereby resulting in several physical Higgs states: 1. the neutral states $\Phi^{0}$ and $\Delta^{0}$ mix into two CP-even states ($h$ and $H^{0}$) and two CP-odd states ($G^{0}$ and $A^{0}$), 2. the singly-charged states $\Phi^{\pm}$ and $\Delta^{\pm}$ mix into two mass states $G^{\pm}$ and $H^{\pm}$, 3. the doubly-charged state $\Delta^{\pm\pm}$ is aligned with its mass state $H^{\pm\pm}$. 
The mass states $G^{0}$ and $G^{\pm}$ are the would-be Nambu-Goldstone bosons, $h$ is identified as the 125 GeV Higgs observed at the LHC, and the rest follow the sum rule $$m_{H^{\pm\pm}}^{2}-m_{H^{\pm}}^{2}\approx m_{H^{\pm}}^{2}-m_{H^{0}/A^{0}}^{2}\approx-\frac{\lambda_{4}}{4}v_{d}^{2}.$$ The Yukawa interaction $Y^{\nu}_{ij}L^{T}_{i}Ci\sigma^{2}\Delta L_{j}$ ($L_{i}$ stands for the SM lepton doublet with $i\in\{e,\mu,\tau\}$, and $C$ is the charge-conjugation operator) induces masses for the neutrinos: $$m_{\nu}=\sqrt{2}Y^{\nu}v_{t}.$$ The doubly-charged Higgses are pair produced aplenty at the LHC by quark-antiquark annihilation via the neutral-current Drell-Yan mechanism:111They are also produced via $t/u$-channel photon fusion as well as vector-boson fusion processes. However, such processes are rather sub-dominant. $$q\bar{q}\to\gamma^{*}/Z^{*}\to H^{++}H^{--}.$$ We evaluate the leading order (LO) cross-sections using the SARAH 4.14.4 [82, 83] generated UFO [84] modules in MadGraph5_aMC_v2.7.3 [85, 86] with the NNPDF23_lo_as_0130_qed parton distribution function [87, 88]. Fig. 1 shows the LO doubly-charged Higgs pair production cross-section at the 13 TeV LHC as a function of their mass. Following the relevant QCD corrections estimated in Refs. [13, 89], we naively scale the LO cross-section by an overall next-to-leading-order (NLO) $K$-factor of 1.15. The resulting $pp\to H^{++}H^{--}$ cross-section thus varies from 1.72 pb to 74.5 fb as the mass varies from 84 GeV to 200 GeV. After being produced, $H^{\pm\pm}$ decays into $\ell^{\pm}\ell^{\pm}$, $W^{\pm}W^{\pm(*)}$ and $H^{\pm}W^{\pm*}$, if kinematically allowed. In broad terms, the dominance of one decay mode over the others depends on three parameters, namely $m_{H^{\pm\pm}}$, $v_{t}$ and $\Delta m=m_{H^{\pm\pm}}-m_{H^{\pm}}$; see Refs. [20, 26, 60] for detailed discussions. 
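The sum rule above fixes the whole triplet-like spectrum once $m_{H^{\pm\pm}}$ and $\lambda_{4}$ are given. A minimal numerical illustration (the input values are our own example choices, not taken from the text, and $v_{t}\ll v_{d}$ is assumed):

```python
import numpy as np

# Illustrative spectrum from the approximate sum rule
#   m_{H++}^2 - m_{H+}^2 = m_{H+}^2 - m_{H0/A0}^2 = -(lambda4/4) v_d^2.
v_d = 246.0    # GeV; doublet VEV (v_t << v_d assumed)
m_hpp = 150.0  # GeV; doubly-charged Higgs mass (example value)
lam4 = 0.4     # example quartic coupling; lam4 > 0 makes H+ and H0/A0 heavier

delta2 = lam4 * v_d**2 / 4.0       # common squared-mass splitting
m_hp = np.sqrt(m_hpp**2 + delta2)  # singly-charged mass
m_h0 = np.sqrt(m_hp**2 + delta2)   # neutral H0/A0 mass
print(m_hp, m_h0)
```

With $\lambda_{4}>0$ this reproduces the mass ordering invoked above, $m_{H^{\pm\pm}}<m_{H^{\pm}}<m_{H^{0}/A^{0}}$, i.e. a negative $\Delta m$.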
For the present work, without committing to a fixed value for $v_{t}$ and $\Delta m$, we assume exclusive prompt decays of $H^{\pm\pm}$ to $W^{\pm}W^{\pm(*)}$. III Collider analysis In this section, we present a novel search strategy for $H^{\pm\pm}$ with $m_{H^{\pm\pm}}\in[84,200]$ GeV. We only consider $H^{\pm\pm}$ which are produced in a highly Lorentz-boosted regime, manifesting themselves as a single fat-jet or a pair of adjacent same-sign leptons plus missing transverse momentum. Such a requirement significantly reduces the signal cross-section.222For example, a parton level cut of $p_{T}(H^{\pm\pm})>300$ GeV reduces the $pp\to H^{++}H^{--}$ cross-section by a factor of 48(4.4) to 37.4(17.0) fb for $m_{H^{\pm\pm}}=84(200)$ GeV. As argued earlier, despite such a notable reduction in the signal cross-section, the final state with an $H^{\pm\pm}$-jet and two same-sign leptons plus missing transverse momentum (see Fig. 2) is expected to have the compensating advantage of reducing the SM background more aggressively, with the proviso that we discern the $H^{\pm\pm}$-jets from the SM jets. In the following, we briefly describe the reconstruction and selection of various physics objects, then perform a multivariate analysis to discern the $H^{\pm\pm}$-jets from the SM jets, viz. QCD jets, $W/Z$-jets, $h$-jets, and $t$-jets, and finally delineate a search in the final state with an $H^{\pm\pm}$-jet and two same-sign leptons plus missing transverse momentum. III.1 Object reconstruction and selection We pass the parton-level events into PYTHIA 8.2 [90] to simulate subsequent decays of the unstable particles, initial and final state radiation (ISR and FSR), showering, fragmentation and hadronisation, and then into Delphes 3.4.2 with the default CMS card [91] for simulating detector effects as well as reconstructing various physics objects, viz. photons, electrons, muons and jets. 
Constituents of the would-be fat-jets are clustered using the anti-k${}_{T}$ algorithm [92] with a characteristic jet radius $R=1.0$ as implemented in FastJet 3.3.2 [93]. To remove the soft yet wide-angle QCD emissions from the fat-jets, we use the jet pruning algorithm [94, 95] with the default values for the pruning parameters: $z_{cut}=0.1$ and $R_{cut}=0.5$ [94]. Further, to unfold the multi-prong nature of the fat-jets, we use an inclusive jet shape termed $N$-subjettiness $\tau_{N}$ [96, 97]333It is defined as $\tau_{N}=\frac{1}{d_{0}}\sum_{k}p_{T,k}{\rm min}\left(\Delta R^{\beta}_{1,k},\Delta R^{\beta}_{2,k},...,\Delta R^{\beta}_{N,k}\right)$, where $N$ is the number of subjets a jet is presumably composed of, $k$ runs over the jet constituents with transverse momentum $p_{T,k}$, $\Delta R_{i,k}$ is the distance in the rapidity-azimuth plane between a candidate subjet $i$ and a jet constituent $k$, $d_{0}=\sum_{k}p_{T,k}R_{0}^{\beta}$ with $R_{0}(=1.0)$ being the characteristic jet radius used in the original jet clustering algorithm, and $\beta$ is an angular weighting exponent dubbed thrust parameter. choosing one-pass $k_{T}$-axes for the minimisation procedure and $\beta=1$. Reconstructed jets are required to be within the pseudorapidity range $|\eta|<2.5$ and have a transverse momentum $p_{T}>30$ GeV, whereas the leptons (electrons and muons) are required to have $|\eta|<2.5$ and $p_{T}>10$ GeV. Moreover, we demand the scalar sum of the $p_{T}$s of all other objects lying within a cone of radius 0.3(0.4) around an electron (a muon) to be smaller than 10%(15%) of its $p_{T}$. This ensures that the leptons are isolated. Finally, the missing transverse momentum $\vec{p}_{T}^{\rm\,\,miss}$ (with magnitude $p_{T}^{\rm miss}$) is estimated from the momentum imbalance in the transverse direction associated with all reconstructed objects in an event. 
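To make the $N$-subjettiness definition quoted in the footnote concrete, here is a minimal numpy sketch (ours). It assumes the $N$ candidate subjet axes are already given, whereas the actual analysis obtains them from the one-pass $k_{T}$ minimisation in FastJet:

```python
import numpy as np

def tau_N(constituents, axes, beta=1.0, R0=1.0):
    """N-subjettiness following the footnote's definition.

    constituents: array of shape (n, 3) with rows (pT, rapidity, phi).
    axes: array of shape (N, 2) with rows (rapidity, phi) for the
          candidate subjet axes (assumed given here).
    """
    pt = constituents[:, 0]
    # Delta R between every constituent and every candidate axis.
    dy = constituents[:, None, 1] - axes[None, :, 0]
    dphi = constituents[:, None, 2] - axes[None, :, 1]
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi  # wrap azimuth to [-pi, pi)
    dR = np.sqrt(dy**2 + dphi**2)
    d0 = np.sum(pt) * R0**beta                     # normalisation factor
    return np.sum(pt * dR.min(axis=1)**beta) / d0
```

For a genuinely two-prong jet $\tau_{2}$ is small while $\tau_{1}$ is not, which is why ratios such as $\tau_{21}=\tau_{2}/\tau_{1}$ are used as discriminants below.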
III.2 Multivariate analysis: discerning the $H^{\pm\pm}$-jets from the SM jets Here we perform a multivariate analysis with the BDT classifier implemented in the TMVA 4.3 toolkit integrated into the analysis framework ROOT 6.24. For training and testing the classifier, we use 600000 events for each category of the SM jets and 300000 for each $m_{H^{\pm\pm}}$ within the [85,195] GeV range in steps of 10 GeV. Of these, 80% are picked randomly for training, and the rest are used for testing. We use the following kinematic features of the jets as inputs to the BDT classifier: 1. invariant mass $m$ 2. $b$-tag444It is a boolean indicating whether or not at least one of the constituent subjets is a $b$-jet. 3. jet charge $Q_{k}$ [98]555Jet charge is defined as $Q_{k}=\frac{\sum_{i}q_{i}\left(p_{T,i}\right)^{k}}{\sum_{i}p_{T,i}}$, where $i$ runs over the associated tracks with transverse momentum $p_{T,i}$ and charge $q_{i}$, and $k$ is a free regularisation exponent which we take to be 0.2. 4. $N$-subjettiness variables $\tau_{1},\tau_{21},\tau_{32}$ and $\tau_{43}$.666$\tau_{N,N-1}=\tau_{N}/\tau_{N-1}$ is a useful discriminant between $N$- and $(N-1)$-prong jets. The normalised distributions for some of the input features are shown in Fig. 3; the rest are not shown for brevity. These variables constitute a minimal set with $(a)$ good discrimination power between the $H^{\pm\pm}$-jets and the SM jets, and $(b)$ low correlations among themselves. The method-unspecific separation is a good measure of the former. For a given feature $x$, this is defined as $$\langle S^{2}\rangle=\frac{1}{2}\int\frac{\left[\hat{x}_{H}(x)-\hat{x}_{SM}(x)\right]^{2}}{\hat{x}_{H}(x)+\hat{x}_{SM}(x)}dx$$ where $\hat{x}_{H}(x)$ and $\hat{x}_{SM}(x)$ are the probability density functions of $x$ for the $H^{\pm\pm}$-jets and the SM jets, respectively. Table 1 shows the method-unspecific separation for the input features, while Fig. 
4 shows their Pearson’s linear correlation coefficients defined as $$\rho(x,y)=\frac{\langle xy\rangle-\langle x\rangle\langle y\rangle}{\sigma_{x}\sigma_{y}},$$ where $\langle x\rangle$ and $\sigma_{x}$, respectively, are the expectation value and standard deviation of $x$. To enhance the BDT classification, we use the adaptive boost algorithm with a learning rate of 0.1, and combine 1000 decision trees with 5% minimum node size and a depth of 4 layers per tree into a forest. As the separation criterion for node splitting, we use the so-called Gini index. The relevant BDT hyperparameters are summarised in Table 2. Table 1 also shows the method-specific ranking of the input features. In other words, this shows the relative importance of the input features in separating the $H^{\pm\pm}$-jets from the SM jets. As we see from Table 1, the $N$-subjettiness variable $\tau_{21}$ is the best separating variable, while the jet-charge $Q_{k}$ is the one with the least separating power. Finally, we check the classifier for overtraining by performing the Kolmogorov-Smirnov (KS) test, which compares the BDT response curves for the training and testing subsamples, see Fig. 5. These response curves exhibit no considerable overtraining. In the left panel of Fig. 6, we show the receiver operating characteristic (ROC) curve, which quantifies the combined BDT performance, for $m_{H^{\pm\pm}}=150$ GeV. The right panel of Fig. 6 shows the signal (with $m_{H^{\pm\pm}}=150$ GeV) and background efficiencies ($\epsilon_{\rm Sig}$ and $\epsilon_{\rm Bckg}$) as a function of the BDT response. The area below the ROC curve is $\sim 0.13$, indicating good separation between the signal and the background. For a BDT response greater than 0, not only $\epsilon_{\rm Bckg}$ but also $\epsilon_{\rm Sig}$ falls to lower values, whereas for a BDT response less than 0, both rise to higher values. Therefore, we choose an optimum value of 0.1 for the BDT response. In Fig. 
7, we show the variation of $\epsilon_{\rm Sig}$ with $m_{H^{\pm\pm}}$ for the chosen value of the BDT response. The abrupt drop in $\epsilon_{\rm Sig}$ for $m_{H^{\pm\pm}}\lesssim 100$ GeV is ascribed to the small mass difference between $m_{H^{\pm\pm}}$ and the $W$-mass. For a small mass difference, the decay products of the off-shell $W$-boson emanating from $H^{\pm\pm}$ tend to be very soft, and thus are not likely to pass the object reconstruction and selection criteria discussed in Section III.1. As a consequence, the features of an $H^{\pm\pm}$-jet resemble those of an SM jet, thereby making the former indiscernible from the latter. III.3 SM backgrounds As the background for the present analysis, we consider numerous SM processes such as diboson, triboson and tetraboson processes, Higgsstrahlung processes, single and multi-top productions in association with/without gauge bosons, and Drell-Yan processes. All these processes are generated in association with up to two jets at the LO using MadGraph5_aMC_v2.7.3 [85, 86], corresponding to an integrated luminosity of at least 3000 fb${}^{-1}$ at the 13 TeV LHC, followed by the MLM matching using PYTHIA 8.2 [90], and then naively scaled by appropriate NLO (or higher, whichever is available in the literature) $K$-factors [99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 86, 109, 110, 111, 112, 113]. The relevant backgrounds can be broadly classified into two classes: prompt and non-prompt. While most of these processes contribute to the former, only the processes where a jet is misidentified as a lepton or additional leptons originate from ISR/FSR photon conversions and in-flight heavy-flavour decays constitute the latter. Though the lepton isolation requirement (mentioned in Section III.1) and the $b$-jet veto (mentioned later in Section III.4) significantly subdue the latter, a considerable fraction of this still passes the object selection. 
The estimation of this contribution requires a data-driven approach, namely the so-called fake factor method, which is beyond the realm of this work. We adopt a conservative approach, assuming a $p_{T}$-dependent probability of 0.1–0.3% for a jet to be misidentified as a lepton [114]. Further, to account for the electron charge misidentification due to their bremsstrahlung interactions with the inner detector material, all prompt electrons are naively corrected with a $p_{T}$- and $\eta$-dependent charge misidentification probability: $P(p_{T},\eta)=\sigma(p_{T})\times f(\eta)$, where $\sigma(p_{T})$ and $f(\eta)$ range from 0.02 to 0.1 and from 0.03 to 1, respectively [115]. III.4 Event selection and analysis Here we discuss the selection criteria that are adept in ameliorating the signal-to-background ratio. Only the events satisfying the following selection cuts (S0) are considered for further analysis: 1. one fat-jet with $p_{T}>300$ GeV, 2. two same-sign leptons, 3. the angular separation between the leptons $\Delta R_{\ell\ell}>0.05$, 4. the dilepton invariant mass $m_{\ell\ell}>1$ GeV as well as $m_{\ell\ell}\notin[3,3.2]$ GeV. The requirements $\Delta R_{\ell\ell}>0.05$ and $m_{\ell\ell}>1$ GeV vanquish the background contributions from muon bremsstrahlung interactions as well as ISR/FSR photon conversions, and $m_{\ell\ell}\notin[3,3.2]$ GeV suppresses contributions from $J/\psi$ decays. The events satisfying the S0 cut are then fed to the trained BDT classifier described in Section III.2. Following the discussion in Section III.2, we impose a modest cut on the BDT response $${\it S1}:\ {\rm BDT\ response}>0.1.$$ Figure 8 shows the normalised distribution of $m_{\ell\ell}$ for the signal with $m_{H^{\pm\pm}}=150$ GeV and background events satisfying the S1 cut. For the signal, it is a monotonically falling distribution with an end point near 120 GeV, as occasioned by the low mass of $H^{\pm\pm}$. 
On the contrary, the background boasts a peak at the $Z$-boson mass, with the lion’s share of the contributions accruing from $Z\to e^{-}e^{+}$ when the charge of one of the electrons gets misidentified. To suppress the $Z\to e^{-}e^{+}$ contribution, we require that $${\it S2}:\ m_{\ell\ell}<80~{\rm GeV}.$$ In the left panel of Fig. 8, displayed is the normalised distribution for $p_{T}^{\rm miss}$, suggesting that the signal is much harder than the background. Therefore, a reasonably strong cut on $p_{T}^{\rm miss}$ would be helpful in curtailing the latter without impinging much on the former. In Fig. 8, also displayed are the distributions for the angular separation between the two leptons ($\Delta R_{\ell\ell}$) and the azimuthal separation between the dilepton system and $p_{T}^{\rm miss}$ ($\Delta\phi(\ell\ell,p_{T}^{\rm miss})$). As we see, unlike the background, most of the signal events are contained within $\Delta R_{\ell\ell}\sim 1$ and $\Delta\phi(\ell\ell,p_{T}^{\rm miss})\sim 1$, showing that, as we expect, the leptons and neutrinos emanating from highly Lorentz-boosted $H^{\pm\pm}$ are adjacent to each other. Guided by these distributions, we impose the following set of cuts: $${\it S3}:\ \Delta R_{\ell\ell}<1.2,\quad p_{T}^{\rm miss}>80~{\rm GeV},\quad\Delta\phi(\ell\ell,p_{T}^{\rm miss})<0.8.$$ Table 3 shows the progression of the background and signal (with $m_{H^{\pm\pm}}=90,120$ and 150 GeV) cross-sections at the 13 TeV LHC as subsequent selection cuts are imposed. As we see, all these cuts turn out to be very efficacious in subjugating the background while keeping the signal relatively unharmed. III.5 Discovery and exclusion projection Next, we estimate the discovery and exclusion projection for different $m_{H^{\pm\pm}}$. Following Refs. 
[116, 117, 118], we use the following approximated expressions for the median expected discovery and exclusion significances: $$Z_{\rm dis}=\left[2\left((s+b)\ln\left[\frac{(s+b)(b+\delta_{b}^{2})}{b^{2}+(s+b)\delta_{b}^{2}}\right]-\frac{b^{2}}{\delta_{b}^{2}}\ln\left[1+\frac{\delta_{b}^{2}s}{b(b+\delta_{b}^{2})}\right]\right)\right]^{1/2},$$ $$Z_{\rm exc}=\left[2\left\{s-b\ln\left(\frac{b+s+x}{2b}\right)-\frac{b^{2}}{\delta_{b}^{2}}\ln\left(\frac{b-s+x}{2b}\right)\right\}-(b+s-x)(1+b/\delta_{b}^{2})\right]^{1/2},$$ where $x=\sqrt{(s+b)^{2}-4sb\delta_{b}^{2}/(b+\delta_{b}^{2})}$, $s$ and $b$ are the numbers of signal and background events, respectively, and $\delta_{b}$ is the uncertainty in the measurement of the background. The estimation of the background uncertainty arising from several sources such as the reconstruction, identification, isolation and trigger efficiency, the energy scale and resolution of different physics objects, the luminosity measurements, the pile-up modelling, the parton-shower modelling, the higher-order QCD corrections, etc. is beyond the scope of this work. We adopt a conservative approach: following typical LHC searches [119, 120], for which both the theoretical and experimental uncertainties are O(10)% each, we assume an overall 20% total uncertainty. In Table 9, we show the required luminosities (in fb${}^{-1}$) to achieve a median expected $Z_{\rm exc}\geq 1.645$ (95% CL exclusion) as well as $Z_{\rm dis}\geq 5$ ($5\sigma$ discovery) for different $m_{H^{\pm\pm}}$. The rise in the required luminosity for $m_{H^{\pm\pm}}\lesssim 100$ GeV could be attributed to, as discussed at the end of Section III.2, the poor separation between the $H^{\pm\pm}$-jets and the SM jets, whereas that for larger masses is due to the fall in the signal cross-section (see Fig. 1). 
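The two expressions above can be transcribed directly into code; the following minimal numpy version (ours) is convenient for scanning masses and luminosities:

```python
import numpy as np

def z_dis(s, b, db):
    # Median expected discovery significance, with db the absolute
    # uncertainty on the background b (formula quoted above).
    t1 = (s + b) * np.log((s + b) * (b + db**2) / (b**2 + (s + b) * db**2))
    t2 = (b**2 / db**2) * np.log(1.0 + db**2 * s / (b * (b + db**2)))
    return np.sqrt(2.0 * (t1 - t2))

def z_exc(s, b, db):
    # Median expected exclusion significance (formula quoted above).
    x = np.sqrt((s + b)**2 - 4.0 * s * b * db**2 / (b + db**2))
    t = (s - b * np.log((b + s + x) / (2.0 * b))
         - (b**2 / db**2) * np.log((b - s + x) / (2.0 * b)))
    return np.sqrt(2.0 * t - (b + s - x) * (1.0 + b / db**2))
```

With the assumed 20% total background uncertainty one sets $\delta_{b}=0.2\,b$; the required luminosity is then the smallest value for which $Z_{\rm exc}\geq 1.645$ (or $Z_{\rm dis}\geq 5$), with $s$ and $b$ scaling linearly with the luminosity. In the $\delta_{b}\to 0$ limit both expressions reduce to the familiar Asimov formulas.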
We find that $H^{\pm\pm}$ within the [84,200] GeV mass range could be probed with $5\sigma$ discovery significance with the already collected Run 2 LHC data. On the other hand, should the data be found consistent with the SM background, only a fraction of the collected data suffices to exclude them at the 95% CL. IV Summary Doubly-charged Higgs bosons within the mass range 84–200 GeV decaying into a pair of $W$-bosons have been overlooked by the LHC searches. Lately, Refs. [76, 77, 78, 79] have demonstrated that the recently reported measurement of the $W$-boson mass by the CDF experiment can be accommodated within the type-II see-saw model predicting such low-mass $H^{\pm\pm}$ and slightly heavier singly-charged and neutral scalars. In view of this, it is paramount to look for such $H^{\pm\pm}$ at the LHC. In this work, we have presented a novel search strategy for such $H^{\pm\pm}$ considering their pair production in a highly Lorentz-boosted regime such that they are produced back-to-back with large transverse momenta, manifesting themselves as a single fat-jet or a pair of adjacent same-sign leptons plus missing transverse momentum. First, we perform a multivariate analysis to discern such exotic $H^{\pm\pm}$-jets from the SM jets. Then, we perform a search in the final state with an $H^{\pm\pm}$-jet and two same-sign leptons plus missing transverse momentum. We find that such low-mass $H^{\pm\pm}$ could be directly probed with the already collected Run 2 LHC data. In closing this section, we mention that the search strategy presented here is applicable to any low-mass BSM Higgses (charged as well as neutral) decaying into a pair of SM gauge bosons. Acknowledgements. SA acknowledges the SERB Core Research Grant CRG/2018/004889, and KG acknowledges the DST INSPIRE Research Grant DST/INSPIRE/04/2014/002158 and SERB Core Research Grant CRG/2019/006831. 
The simulations were supported in part by the SAMKHYA High Performance Computing Facility provided by the Institute of Physics, Bhubaneswar. Note added: While preparing this manuscript, an article [121] with a similar motivation appeared on the arXiv, concluding that most of the favoured parameter space for the CDF discrepancy is already excluded by the existing LHC Run 2 data. While our proposed search strategy is completely different from that of Ref. [121], we arrive at the same conclusion, i.e., the LHC Run 2 data are sufficient to probe the low-mass doubly-charged Higgs bosons in the type-II see-saw model. Moreover, our strategy is applicable to any low-mass BSM Higgses (charged as well as neutral) decaying into a pair of SM gauge bosons. References Konetschny and Kummer [1977] W. Konetschny and W. Kummer, Nonconservation of Total Lepton Number with Scalar Bosons, Phys. Lett. B 70, 433 (1977). Cheng and Li [1980] T. P. Cheng and L.-F. Li, Neutrino Masses, Mixings and Oscillations in SU(2) x U(1) Models of Electroweak Interactions, Phys. Rev. D 22, 2860 (1980). Lazarides et al. [1981] G. Lazarides, Q. Shafi, and C. Wetterich, Proton Lifetime and Fermion Masses in an SO(10) Model, Nucl. Phys. B 181, 287 (1981). Schechter and Valle [1980] J. Schechter and J. W. F. Valle, Neutrino Masses in SU(2) x U(1) Theories, Phys. Rev. D 22, 2227 (1980). Mohapatra and Senjanovic [1981] R. N. Mohapatra and G. Senjanovic, Neutrino Masses and Mixings in Gauge Models with Spontaneous Parity Violation, Phys. Rev. D 23, 165 (1981). Magg and Wetterich [1980] M. Magg and C. Wetterich, Neutrino Mass Problem and Gauge Hierarchy, Phys. Lett. B 94, 61 (1980). Weinberg [1979] S. Weinberg, Baryon and Lepton Nonconserving Processes, Phys. Rev. Lett. 43, 1566 (1979). Ma [1998] E. Ma, Pathways to naturally small neutrino masses, Phys. Rev. Lett. 81, 1171 (1998), arXiv:hep-ph/9805219 . Huitu et al. [1997] K. Huitu, J. Maalampi, A. Pietila, and M. Raidal, Doubly charged Higgs at LHC, Nucl. Phys.
B 487, 27 (1997), arXiv:hep-ph/9606311 . Gunion et al. [1996] J. F. Gunion, C. Loomis, and K. T. Pitts, Searching for doubly charged Higgs bosons at future colliders, eConf C960625, LTH096 (1996), arXiv:hep-ph/9610237 . Chakrabarti et al. [1998] S. Chakrabarti, D. Choudhury, R. M. Godbole, and B. Mukhopadhyaya, Observing doubly charged Higgs bosons in photon-photon collisions, Phys. Lett. B 434, 347 (1998), arXiv:hep-ph/9804297 . Chun et al. [2003] E. J. Chun, K. Y. Lee, and S. C. Park, Testing Higgs triplet model and neutrino mass patterns, Phys. Lett. B 566, 142 (2003), arXiv:hep-ph/0304069 . Muhlleitner and Spira [2003] M. Muhlleitner and M. Spira, A Note on doubly charged Higgs pair production at hadron colliders, Phys. Rev. D 68, 117701 (2003), arXiv:hep-ph/0305288 . Akeroyd and Aoki [2005] A. G. Akeroyd and M. Aoki, Single and pair production of doubly charged Higgs bosons at hadron colliders, Phys. Rev. D 72, 035011 (2005), arXiv:hep-ph/0506176 . Akeroyd et al. [2008] A. G. Akeroyd, M. Aoki, and H. Sugiyama, Probing Majorana Phases and Neutrino Mass Spectrum in the Higgs Triplet Model at the CERN LHC, Phys. Rev. D 77, 075010 (2008), arXiv:0712.4019 [hep-ph] . Garayoa and Schwetz [2008] J. Garayoa and T. Schwetz, Neutrino mass hierarchy and Majorana CP phases within the Higgs triplet model at the LHC, JHEP 03, 009, arXiv:0712.1453 [hep-ph] . Han et al. [2007] T. Han, B. Mukhopadhyaya, Z. Si, and K. Wang, Pair production of doubly-charged scalars: Neutrino mass constraints and signals at the LHC, Phys. Rev. D 76, 075013 (2007), arXiv:0706.0441 [hep-ph] . Kadastik et al. [2008] M. Kadastik, M. Raidal, and L. Rebane, Direct determination of neutrino mass parameters at future colliders, Phys. Rev. D 77, 115023 (2008), arXiv:0712.3912 [hep-ph] . del Aguila and Aguilar-Saavedra [2009] F. del Aguila and J. A. Aguilar-Saavedra, Distinguishing seesaw models at LHC with multi-lepton signals, Nucl. Phys. B 813, 22 (2009), arXiv:0808.2468 [hep-ph] . Fileviez Perez et al. 
[2008a] P. Fileviez Perez, T. Han, G.-y. Huang, T. Li, and K. Wang, Neutrino Masses and the CERN LHC: Testing Type II Seesaw, Phys. Rev. D 78, 015018 (2008a), arXiv:0805.3536 [hep-ph] . Fileviez Perez et al. [2008b] P. Fileviez Perez, T. Han, G.-y. Huang, T. Li, and K. Wang, Neutrino Masses and the CERN LHC: Testing Type II Seesaw, Phys. Rev. D 78, 015018 (2008b), arXiv:0805.3536 [hep-ph] . Akeroyd and Chiang [2009] A. G. Akeroyd and C.-W. Chiang, Doubly charged Higgs bosons and three-lepton signatures in the Higgs Triplet Model, Phys. Rev. D 80, 113010 (2009), arXiv:0909.4419 [hep-ph] . Akeroyd et al. [2010] A. G. Akeroyd, C.-W. Chiang, and N. Gaur, Leptonic signatures of doubly charged Higgs boson production at the LHC, JHEP 11, 005, arXiv:1009.2780 [hep-ph] . Arhrib et al. [2011] A. Arhrib, R. Benbrik, M. Chabab, G. Moultaka, M. C. Peyranere, L. Rahili, and J. Ramadan, The Higgs Potential in the Type II Seesaw Model, Phys. Rev. D 84, 095005 (2011), arXiv:1105.1925 [hep-ph] . Melfo et al. [2012] A. Melfo, M. Nemevsek, F. Nesti, G. Senjanovic, and Y. Zhang, Type II Seesaw at LHC: The Roadmap, Phys. Rev. D 85, 055018 (2012), arXiv:1108.4416 [hep-ph] . Aoki et al. [2012] M. Aoki, S. Kanemura, and K. Yagyu, Testing the Higgs triplet model with the mass difference at the LHC, Phys. Rev. D 85, 055007 (2012), arXiv:1110.4625 [hep-ph] . Akeroyd and Sugiyama [2011] A. G. Akeroyd and H. Sugiyama, Production of doubly charged scalars from the decay of singly charged scalars in the Higgs Triplet Model, Phys. Rev. D 84, 035010 (2011), arXiv:1105.2209 [hep-ph] . Arbabifar et al. [2013] F. Arbabifar, S. Bahrami, and M. Frank, Neutral Higgs Bosons in the Higgs Triplet Model with nontrivial mixing, Phys. Rev. D 87, 015020 (2013), arXiv:1211.6797 [hep-ph] . Chiang et al. [2012] C.-W. Chiang, T. Nomura, and K. Tsumura, Search for doubly charged Higgs bosons using the same-sign diboson mode at the LHC, Phys. Rev. D 85, 095023 (2012), arXiv:1202.2014 [hep-ph] . Akeroyd et al. 
[2012] A. G. Akeroyd, S. Moretti, and H. Sugiyama, Five-lepton and six-lepton signatures from production of neutral triplet scalars in the Higgs Triplet Model, Phys. Rev. D 85, 055026 (2012), arXiv:1201.5047 [hep-ph] . Chun et al. [2012] E. J. Chun, H. M. Lee, and P. Sharma, Vacuum Stability, Perturbativity, EWPD and Higgs-to-diphoton rate in Type II Seesaw Models, JHEP 11, 106, arXiv:1209.1303 [hep-ph] . Chun and Sharma [2012] E. J. Chun and P. Sharma, Same-Sign Tetra-Leptons from Type II Seesaw, JHEP 08, 162, arXiv:1206.6278 [hep-ph] . del Águila and Chala [2014] F. del Águila and M. Chala, LHC bounds on Lepton Number Violation mediated by doubly and singly-charged scalars, JHEP 03, 027, arXiv:1311.1510 [hep-ph] . Chun and Sharma [2014] E. J. Chun and P. Sharma, Search for a doubly-charged boson in four lepton final states in type II seesaw, Phys. Lett. B 728, 256 (2014), arXiv:1309.6888 [hep-ph] . Kanemura et al. [2013] S. Kanemura, K. Yagyu, and H. Yokoya, First constraint on the mass of doubly-charged Higgs bosons in the same-sign diboson decay scenario at the LHC, Phys. Lett. B 726, 316 (2013), arXiv:1305.2383 [hep-ph] . Bhupal Dev et al. [2013] P. S. Bhupal Dev, D. K. Ghosh, N. Okada, and I. Saha, 125 GeV Higgs Boson and the Type-II Seesaw Model, JHEP 03, 150, [Erratum: JHEP 05, 049 (2013)], arXiv:1301.3453 [hep-ph] . Kanemura et al. [2014] S. Kanemura, M. Kikuchi, K. Yagyu, and H. Yokoya, Bounds on the mass of doubly-charged Higgs bosons in the same-sign diboson decay scenario, Phys. Rev. D 90, 115018 (2014), arXiv:1407.6547 [hep-ph] . Kanemura et al. [2015] S. Kanemura, M. Kikuchi, H. Yokoya, and K. Yagyu, LHC Run-I constraint on the mass of doubly charged Higgs bosons in the same-sign diboson decay scenario, PTEP 2015, 051B02 (2015), arXiv:1412.7603 [hep-ph] . Kang et al. [2015a] Z. Kang, J. Li, T. Li, Y. Liu, and G.-Z. Ning, Light Doubly Charged Higgs Boson via the $WW^{*}$ Channel at LHC, Eur. Phys. J. C 75, 574 (2015a), arXiv:1404.5207 [hep-ph] . 
Deppisch et al. [2015] F. F. Deppisch, P. S. Bhupal Dev, and A. Pilaftsis, Neutrinos and Collider Physics, New J. Phys. 17, 075019 (2015), arXiv:1502.06541 [hep-ph] . Han et al. [2015a] Z.-L. Han, R. Ding, and Y. Liao, LHC Phenomenology of Type II Seesaw: Nondegenerate Case, Phys. Rev. D 91, 093006 (2015a), arXiv:1502.05242 [hep-ph] . Han et al. [2015b] Z.-L. Han, R. Ding, and Y. Liao, LHC phenomenology of the type II seesaw mechanism: Observability of neutral scalars in the nondegenerate case, Phys. Rev. D 92, 033014 (2015b), arXiv:1506.08996 [hep-ph] . Blunier et al. [2017] S. Blunier, G. Cottin, M. A. Díaz, and B. Koch, Phenomenology of a Higgs triplet model at future $e^{+}e^{-}$ colliders, Phys. Rev. D 95, 075038 (2017), arXiv:1611.07896 [hep-ph] . Das and Santamaria [2016] D. Das and A. Santamaria, Updated scalar sector constraints in the Higgs triplet model, Phys. Rev. D 94, 015015 (2016), arXiv:1604.08099 [hep-ph] . Mitra et al. [2017] M. Mitra, S. Niyogi, and M. Spannowsky, Type-II Seesaw Model and Multilepton Signatures at Hadron Colliders, Phys. Rev. D 95, 035042 (2017), arXiv:1611.09594 [hep-ph] . Cai et al. [2018] Y. Cai, T. Han, T. Li, and R. Ruiz, Lepton Number Violation: Seesaw Models and Their Collider Tests, Front. in Phys. 6, 40 (2018), arXiv:1711.02180 [hep-ph] . Ghosh et al. [2018] D. K. Ghosh, N. Ghosh, I. Saha, and A. Shaw, Revisiting the high-scale validity of the type II seesaw model with novel LHC signature, Phys. Rev. D 97, 115022 (2018), arXiv:1711.06062 [hep-ph] . Nomura et al. [2018] T. Nomura, H. Okada, and H. Yokoya, Discriminating leptonic Yukawa interactions with doubly charged scalar at the ILC, Nucl. Phys. B 929, 193 (2018), arXiv:1702.03396 [hep-ph] . Antusch et al. [2019] S. Antusch, O. Fischer, A. Hammad, and C. Scherb, Low scale type II seesaw: Present constraints and prospects for displaced vertex searches, JHEP 02, 157, arXiv:1811.03476 [hep-ph] . Bhupal Dev and Zhang [2018] P. S. Bhupal Dev and Y. 
Zhang, Displaced vertex signatures of doubly charged scalars in the type-II seesaw and its left-right extensions, JHEP 10, 199, arXiv:1808.00943 [hep-ph] . Crivellin et al. [2019] A. Crivellin, M. Ghezzi, L. Panizzi, G. M. Pruna, and A. Signer, Low- and high-energy phenomenology of a doubly charged scalar, Phys. Rev. D 99, 035004 (2019), arXiv:1807.10224 [hep-ph] . Agrawal et al. [2018] P. Agrawal, M. Mitra, S. Niyogi, S. Shil, and M. Spannowsky, Probing the Type-II Seesaw Mechanism through the Production of Higgs Bosons at a Lepton Collider, Phys. Rev. D 98, 015024 (2018), arXiv:1803.00677 [hep-ph] . Rahili et al. [2019] L. Rahili, A. Arhrib, and R. Benbrik, Associated production of SM Higgs with a photon in type-II seesaw models at the ILC, Eur. Phys. J. C 79, 940 (2019), arXiv:1909.07793 [hep-ph] . de Melo et al. [2019] T. B. de Melo, F. S. Queiroz, and Y. Villamizar, Doubly Charged Scalar at the High-Luminosity and High-Energy LHC, Int. J. Mod. Phys. A 34, 1950157 (2019), arXiv:1909.07429 [hep-ph] . Dev et al. [2019] P. S. B. Dev, S. Khan, M. Mitra, and S. K. Rai, Doubly-charged Higgs boson at a future electron-proton collider, Phys. Rev. D 99, 115015 (2019), arXiv:1903.01431 [hep-ph] . Primulando et al. [2019] R. Primulando, J. Julio, and P. Uttayarat, Scalar phenomenology in type-II seesaw model, JHEP 08, 024, arXiv:1903.02493 [hep-ph] . Chun et al. [2020] E. J. Chun, S. Khan, S. Mandal, M. Mitra, and S. Shil, Same-sign tetralepton signature at the Large Hadron Collider and a future $pp$ collider, Phys. Rev. D 101, 075008 (2020), arXiv:1911.00971 [hep-ph] . Padhan et al. [2020] R. Padhan, D. Das, M. Mitra, and A. Kumar Nayak, Probing doubly and singly charged Higgs bosons at the $pp$ collider HE-LHC, Phys. Rev. D 101, 075050 (2020), arXiv:1909.10495 [hep-ph] . Bandyopadhyay et al. [2020] P. Bandyopadhyay, A. Karan, and C. Sen, Discerning Signatures of Seesaw Models and Complementarity of Leptonic Colliders,  (2020), arXiv:2011.04191 [hep-ph] . 
Ashanujjaman and Ghosh [2022] S. Ashanujjaman and K. Ghosh, Revisiting type-II see-saw: present limits and future prospects at LHC, JHEP 03, 195, arXiv:2108.10952 [hep-ph] . Yang and Yang [2022] X.-H. Yang and Z.-J. Yang, Doubly charged Higgs production at future $ep$ colliders, Chin. Phys. C 46, 063107 (2022), arXiv:2103.11412 [hep-ph] . Ashanujjaman et al. [2022] S. Ashanujjaman, K. Ghosh, and K. Huitu, Type-II see-saw: searching the LHC elusive low-mass triplet-like Higgses at $e^{-}e^{+}$ colliders,   (2022), arXiv:2205.14983 [hep-ph] . Aad et al. [2012] G. Aad et al. (ATLAS), Search for doubly-charged Higgs bosons in like-sign dilepton final states at $\sqrt{s}=7$ TeV with the ATLAS detector, Eur. Phys. J. C 72, 2244 (2012), arXiv:1210.5070 [hep-ex] . Chatrchyan et al. [2012] S. Chatrchyan et al. (CMS), A Search for a Doubly-Charged Higgs Boson in $pp$ Collisions at $\sqrt{s}=7$ TeV, Eur. Phys. J. C 72, 2189 (2012), arXiv:1207.2666 [hep-ex] . Aad et al. [2015] G. Aad et al. (ATLAS), Search for anomalous production of prompt same-sign lepton pairs and pair-produced doubly charged Higgs bosons with $\sqrt{s}=8$ TeV $pp$ collisions using the ATLAS detector, JHEP 03, 041, arXiv:1412.0237 [hep-ex] . Khachatryan et al. [2015] V. Khachatryan et al. (CMS), Study of vector boson scattering and search for new physics in events with two same-sign leptons and two jets, Phys. Rev. Lett. 114, 051801 (2015), arXiv:1410.6315 [hep-ex] . CMS [2016] Search for a doubly-charged Higgs boson with $\sqrt{s}=8~{}\mathrm{TeV}$ $pp$ collisions at the CMS experiment,   (2016). CMS [2017] A search for doubly-charged Higgs boson production in three and four lepton final states at $\sqrt{s}=13~{}\mathrm{TeV}$,   (2017). Aaboud et al. [2018a] M. Aaboud et al. (ATLAS), Search for doubly charged Higgs boson production in multi-lepton final states with the ATLAS detector using proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$, Eur. Phys. J. C 78, 199 (2018a), arXiv:1710.09748 [hep-ex] . 
Sirunyan et al. [2018] A. M. Sirunyan et al. (CMS), Observation of electroweak production of same-sign W boson pairs in the two jet and two same-sign lepton final state in proton-proton collisions at $\sqrt{s}=$ 13 TeV, Phys. Rev. Lett. 120, 081801 (2018), arXiv:1709.05822 [hep-ex] . Aaboud et al. [2019] M. Aaboud et al. (ATLAS), Search for doubly charged scalar bosons decaying into same-sign $W$ boson pairs with the ATLAS detector, Eur. Phys. J. C 79, 58 (2019), arXiv:1808.01899 [hep-ex] . Aad et al. [2021a] G. Aad et al. (ATLAS), Search for doubly and singly charged Higgs bosons decaying into vector bosons in multi-lepton final states with the ATLAS detector using proton-proton collisions at $\sqrt{\mathrm{s}}$ = 13 TeV, JHEP 06, 146, arXiv:2101.11961 [hep-ex] . Aad et al. [2021b] G. Aad et al. (ATLAS), Search for doubly and singly charged Higgs bosons decaying into vector bosons in multi-lepton final states with the ATLAS detector using proton-proton collisions at $\sqrt{\mathrm{s}}$ = 13 TeV, JHEP 06, 146, arXiv:2101.11961 [hep-ex] . ATL [2022] Search for doubly charged Higgs boson production in multi-lepton final states using $139\,\text{fb}^{-1}$ of proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$ with the ATLAS detector,   (2022). Kang et al. [2015b] Z. Kang, J. Li, T. Li, Y. Liu, and G.-Z. Ning, Light Doubly Charged Higgs Boson via the $WW^{*}$ Channel at LHC, Eur. Phys. J. C 75, 574 (2015b), arXiv:1404.5207 [hep-ph] . Kanemura and Yagyu [2022] S. Kanemura and K. Yagyu, Implication of the W boson mass anomaly at CDF II in the Higgs triplet model with a mass difference, Phys. Lett. B 831, 137217 (2022), arXiv:2204.07511 [hep-ph] . Heeck [2022] J. Heeck, W-boson mass in the triplet seesaw model, Phys. Rev. D 106, 015004 (2022), arXiv:2204.10274 [hep-ph] . Bahl et al. [2022] H. Bahl, W. H. Chiu, C. Gao, L.-T. Wang, and Y.-M. Zhong, Tripling down on the $W$ boson mass,  (2022), arXiv:2207.04059 [hep-ph] . Cheng et al. [2022] Y. Cheng, X.-G. He, F. 
Huang, J. Sun, and Z.-P. Xing, Electroweak precision tests for triplet scalars,   (2022), arXiv:2208.06760 [hep-ph] . Aaltonen et al. [2022] T. Aaltonen et al. (CDF), High-precision measurement of the W boson mass with the CDF II detector, Science 376, 170 (2022). Awramik et al. [2004] M. Awramik, M. Czakon, A. Freitas, and G. Weiglein, Precise prediction for the W boson mass in the standard model, Phys. Rev. D 69, 053006 (2004), arXiv:hep-ph/0311148 . Staub [2014] F. Staub, SARAH 4 : A tool for (not only SUSY) model builders, Comput. Phys. Commun. 185, 1773 (2014), arXiv:1309.7223 [hep-ph] . Staub [2015] F. Staub, Exploring new models in all detail with SARAH, Adv. High Energy Phys. 2015, 840780 (2015), arXiv:1503.04200 [hep-ph] . Degrande et al. [2012] C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer, and T. Reiter, UFO - The Universal FeynRules Output, Comput. Phys. Commun. 183, 1201 (2012), arXiv:1108.2040 [hep-ph] . Alwall et al. [2011] J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer, and T. Stelzer, MadGraph 5 : Going Beyond, JHEP 06, 128, arXiv:1106.0522 [hep-ph] . Alwall et al. [2014] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, JHEP 07, 079, arXiv:1405.0301 [hep-ph] . Ball et al. [2013] R. D. Ball, V. Bertone, S. Carrazza, L. Del Debbio, S. Forte, A. Guffanti, N. P. Hartland, and J. Rojo (NNPDF), Parton distributions with QED corrections, Nucl. Phys. B 877, 290 (2013), arXiv:1308.0598 [hep-ph] . Ball et al. [2015] R. D. Ball et al. (NNPDF), Parton distributions for the LHC Run II, JHEP 04, 040, arXiv:1410.8849 [hep-ph] . Fuks et al. [2020] B. Fuks, M. Nemevšek, and R. Ruiz, Doubly Charged Higgs Boson Production at Hadron Colliders, Phys. Rev. D 101, 075022 (2020), arXiv:1912.08975 [hep-ph] . Sjöstrand et al. [2015] T. 
Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, An introduction to PYTHIA 8.2, Comput. Phys. Commun. 191, 159 (2015), arXiv:1410.3012 [hep-ph] . de Favereau et al. [2014] J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi (DELPHES 3), DELPHES 3, A modular framework for fast simulation of a generic collider experiment, JHEP 02, 057, arXiv:1307.6346 [hep-ex] . Cacciari et al. [2008] M. Cacciari, G. P. Salam, and G. Soyez, The anti-$k_{t}$ jet clustering algorithm, JHEP 04, 063, arXiv:0802.1189 [hep-ph] . Cacciari et al. [2012] M. Cacciari, G. P. Salam, and G. Soyez, FastJet User Manual, Eur. Phys. J. C 72, 1896 (2012), arXiv:1111.6097 [hep-ph] . Ellis et al. [2009] S. D. Ellis, C. K. Vermilion, and J. R. Walsh, Techniques for improved heavy particle searches with jet substructure, Phys. Rev. D 80, 051501 (2009), arXiv:0903.5081 [hep-ph] . Ellis et al. [2010] S. D. Ellis, C. K. Vermilion, and J. R. Walsh, Recombination Algorithms and Jet Substructure: Pruning as a Tool for Heavy Particle Searches, Phys. Rev. D 81, 094023 (2010), arXiv:0912.0033 [hep-ph] . Thaler and Van Tilburg [2011] J. Thaler and K. Van Tilburg, Identifying Boosted Objects with N-subjettiness, JHEP 03, 015, arXiv:1011.2268 [hep-ph] . Thaler and Van Tilburg [2012] J. Thaler and K. Van Tilburg, Maximizing Boosted Top Identification by Minimizing N-subjettiness, JHEP 02, 093, arXiv:1108.2701 [hep-ph] . Krohn et al. [2013] D. Krohn, M. D. Schwartz, T. Lin, and W. J. Waalewijn, Jet Charge at the LHC, Phys. Rev. Lett. 110, 212001 (2013), arXiv:1209.2421 [hep-ph] . Catani et al. [2009] S. Catani, L. Cieri, G. Ferrera, D. de Florian, and M. Grazzini, Vector boson production at hadron colliders: a fully exclusive QCD calculation at NNLO, Phys. Rev. Lett. 103, 082001 (2009), arXiv:0903.2120 [hep-ph] . Balossini et al. [2010] G. Balossini, G. Montagna, C. M. Carloni Calame, M. Moretti, O. 
Nicrosini, F. Piccinini, M. Treccani, and A. Vicini, Combination of electroweak and QCD corrections to single W production at the Fermilab Tevatron and the CERN LHC, JHEP 01, 013, arXiv:0907.0276 [hep-ph] . Campbell et al. [2011] J. M. Campbell, R. K. Ellis, and C. Williams, Vector boson pair production at the LHC, JHEP 07, 018, arXiv:1105.0020 [hep-ph] . Cascioli et al. [2014] F. Cascioli, T. Gehrmann, M. Grazzini, S. Kallweit, P. Maierhöfer, A. von Manteuffel, S. Pozzorini, D. Rathlev, L. Tancredi, and E. Weihs, ZZ production at hadron colliders in NNLO QCD, Phys. Lett. B 735, 311 (2014), arXiv:1405.2219 [hep-ph] . Campbell et al. [2016] J. M. Campbell, R. K. Ellis, and C. Williams, Associated production of a Higgs boson at NNLO, JHEP 06, 179, arXiv:1601.00658 [hep-ph] . de Florian et al. [2016] D. de Florian et al. (LHC Higgs Cross Section Working Group), Handbook of LHC Higgs Cross Sections: 4. Deciphering the Nature of the Higgs Sector 2/2017, 10.23731/CYRM-2017-002 (2016), arXiv:1610.07922 [hep-ph] . Shen et al. [2017] Y.-B. Shen, R.-Y. Zhang, W.-G. Ma, X.-Z. Li, and L. Guo, NLO QCD and electroweak corrections to WWW production at the LHC, Phys. Rev. D 95, 073005 (2017), arXiv:1605.00554 [hep-ph] . Nhung et al. [2013] D. T. Nhung, L. D. Ninh, and M. M. Weber, NLO corrections to WWZ production at the LHC, JHEP 12, 096, arXiv:1307.7403 [hep-ph] . Shen et al. [2015] Y.-B. Shen, R.-Y. Zhang, W.-G. Ma, X.-Z. Li, Y. Zhang, and L. Guo, NLO QCD + NLO EW corrections to $WZZ$ productions with leptonic decays at the LHC, JHEP 10, 186, [Erratum: JHEP 10, 156 (2016)], arXiv:1507.03693 [hep-ph] . Wang et al. [2016] H. Wang, R.-Y. Zhang, W.-G. Ma, L. Guo, X.-Z. Li, and S.-M. Wang, NLO QCD + EW corrections to ZZZ production with subsequent leptonic decays at the LHC, J. Phys. G 43, 115001 (2016), arXiv:1610.05876 [hep-ph] . Frederix et al. [2014] R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, P. Torrielli, E. Vryonidou, and M. 
Zaro, Higgs pair production at the LHC with NLO and parton-shower effects, Phys. Lett. B 732, 142 (2014), arXiv:1401.7340 [hep-ph] . Kidonakis [2015] N. Kidonakis, Theoretical results for electroweak-boson and single-top production, PoS DIS2015, 170 (2015), arXiv:1506.04072 [hep-ph] . Muselli et al. [2015] C. Muselli, M. Bonvini, S. Forte, S. Marzani, and G. Ridolfi, Top Quark Pair Production beyond NNLO, JHEP 08, 076, arXiv:1505.02006 [hep-ph] . Broggio et al. [2019] A. Broggio, A. Ferroglia, R. Frederix, D. Pagani, B. D. Pecjak, and I. Tsinikos, Top-quark pair hadroproduction in association with a heavy boson at NLO+NNLL including EW corrections, JHEP 08, 039, arXiv:1907.04343 [hep-ph] . Frederix et al. [2018] R. Frederix, D. Pagani, and M. Zaro, Large NLO corrections in $t\bar{t}W^{\pm}$ and $t\bar{t}t\bar{t}$ hadroproduction from supposedly subleading EW contributions, JHEP 02, 031, arXiv:1711.02116 [hep-ph] . ATL [2016] Electron efficiency measurements with the ATLAS detector using the 2015 LHC proton-proton collision data,   (2016). Aaboud et al. [2018b] M. Aaboud et al. (ATLAS), Search for doubly charged Higgs boson production in multi-lepton final states with the ATLAS detector using proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$, Eur. Phys. J. C 78, 199 (2018b), arXiv:1710.09748 [hep-ex] . Cowan et al. [2011] G. Cowan, K. Cranmer, E. Gross, and O. Vitells, Asymptotic formulae for likelihood-based tests of new physics, Eur. Phys. J. C 71, 1554 (2011), [Erratum: Eur.Phys.J.C 73, 2501 (2013)], arXiv:1007.1727 [physics.data-an] . Li and Ma [1983] T. P. Li and Y. Q. Ma, Analysis methods for results in gamma-ray astronomy, Astrophys. J. 272, 317 (1983). Cousins et al. [2008] R. D. Cousins, J. T. Linnemann, and J. Tucker, Evaluation of three methods for calculating statistical significance when incorporating a systematic uncertainty into a test of the background-only hypothesis for a Poisson process, Nucl. Instrum. Meth. 
A 595, 480 (2008), arXiv:physics/0702156 . Sirunyan et al. [2020] A. M. Sirunyan et al. (CMS), Search for physics beyond the standard model in multilepton final states in proton-proton collisions at $\sqrt{s}=$ 13 TeV, JHEP 03, 051, arXiv:1911.04968 [hep-ex] . ATL [2021] Search for new phenomena in three- or four-lepton events in $pp$ collisions at $\sqrt{s}=$ 13 TeV with the ATLAS detector,   (2021). Butterworth et al. [2022] J. Butterworth, J. Heeck, S. H. Jeon, O. Mattelaer, and R. Ruiz, Testing the Scalar Triplet Solution to CDF’s Fat $W$ Problem at the LHC,   (2022), arXiv:2210.13496 [hep-ph] .
An Assessment of Safety-Based Driver Behavior Modeling in Microscopic Simulation Utilizing Real-Time Vehicle Trajectories
Awad Abdelhalim (Corresponding Author), Postdoctoral Research Associate, Department of Urban Studies and Planning, Massachusetts Institute of Technology, Cambridge, MA 02139. Email: [email protected]
Montasir Abbas, Professor, Virginia Polytechnic Institute and State University, 301 Patton Hall, Virginia Tech, Blacksburg, VA 24061. Tel: 540-231-9002. Email: [email protected]
1 Abstract Accurate representation of observed driving behavior is critical for effectively evaluating safety and performance interventions in simulation modeling. In this study, we implement and evaluate a safety-based Optimal Velocity Model (OVM) to provide a high-fidelity replication of safety-critical behavior in microscopic simulation and showcase its implications for safety-focused assessments of traffic control strategies. A comprehensive simulation model is created for the site of study in PTV VISSIM utilizing detailed vehicle trajectory information extracted from real-time video inference; these trajectories are also used to calibrate the parameters of the safety-based OVM to replicate the driving behavior observed at the site of study. The calibrated model is then incorporated as an external driver model that overrides VISSIM’s default Wiedemann 74 model during simulated car-following episodes. The results of the preliminary analysis show the significant improvements achieved by using our model in replicating the existing safety conflicts observed at the site of study. We then utilize this improved representation of the status quo to assess the potential impact of different scenarios of signal control and speed limit enforcement, which reduce those existing conflicts by up to 23%.
The results of this study showcase the considerable improvements that can be achieved by utilizing data-driven car-following behavior modeling. The workflow presented provides an end-to-end, scalable, automated, and generalizable approach for replicating, in microscopic simulation, the driving behavior observed at a site of interest by utilizing vehicle trajectories efficiently extracted via roadside video inference. Keywords: Driver behavior modeling, optimal velocity model, microscopic simulation. 2 INTRODUCTION Traffic modeling and simulation is one of the most powerful tools at the disposal of transportation engineers today. Modeling and simulating a specific site, corridor, or network of interest provides transportation practitioners with an efficient and effective method to assess current performance and evaluate any proposed performance or safety interventions. Simulation modeling can be sub-categorized into three types:
• Macroscopic simulation, which provides a high-level representation of traffic streams;
• Microscopic simulation, where vehicles are modeled at an individual level; and
• Mesoscopic simulation, which utilizes a combination of macroscopic and microscopic simulation methods.
While macroscopic and mesoscopic simulation are the more common choices for simulating larger networks, microscopic simulation is typically used to study traffic flow in smaller areas in greater detail. In microscopic simulation, however, the individual vehicles’ behavior is governed by mathematically derived driver behavior models that require a tedious process of data collection and parameter calibration to replicate the behavior observed in the specific area of interest. This has generally been the challenge for the wide-scale adoption of microscopic simulation.
PTV’s Verkehr In Städten SIMulationsmodell (VISSIM) is a discrete, stochastic, time-step-based simulation software package that is one of the most popular tools for traffic simulation, owing to its extensive capabilities for modeling and evaluating different components of the transportation ecosystem (vehicles, public transit, pedestrians, cyclists, roadside infrastructure, etc.). VISSIM’s base car-following behavior is based on the Wiedemann 74 and 99 psycho-physical models [wiedemann1991modelling]. The car-following behavior of vehicles in a VISSIM simulation is modified through ten driver behavior parameters (labeled CC0–CC9) that represent different thresholds of four assumed driving states: free-driving, approaching, following, and braking. For each of those modes, the desired instantaneous acceleration results from the vehicle’s current speed, the speed and distance differences between the vehicle and the lead vehicle, and specific characteristics of the driver and the vehicle. The challenge with the Wiedemann model (and other conventional driver behavior models, e.g., Gazis-Herman-Rothery, Fritzsche) is the assumption that drivers make longitudinal decisions (i.e., acceleration) based on momentary stimuli. These models do not account for the driver’s planned decision process, in which a driver perceives an impending danger, plans to avoid it, and then executes that plan over multiple time steps. This is especially important when modeling safety-critical or close-to-critical situations, in which an ego vehicle approaches a vehicle that slows down, forcing the driver of the ego vehicle to begin a process of avoidance or adjustment of their speed trajectory. Failing to capture this behavior leads to an inaccurate representation of traffic in simulation, hindering the ability to evaluate safety interventions.
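To make the four-state logic concrete, the toy classifier below maps a gap and a closing speed onto a Wiedemann-style regime. The thresholds here are illustrative placeholders of our own, not VISSIM’s calibrated CC0–CC9 parameters:

```python
def driving_state(dx, dv, d_follow=50.0, d_brake=10.0, dv_eps=0.5):
    """Toy classification into the four Wiedemann-style driving regimes.

    dx: gap to the lead vehicle [m]
    dv: closing speed, ego minus lead [m/s] (positive = closing in)
    The three thresholds are illustrative, not calibrated values.
    """
    if dx > d_follow and dv <= dv_eps:
        return "free-driving"   # far away and not closing in
    if dx <= d_brake and dv > 0:
        return "braking"        # dangerously close and still closing
    if dv > dv_eps:
        return "approaching"    # closing in at an appreciable rate
    return "following"          # near-steady gap behind the leader
```

In the actual psycho-physical model the regime boundaries are curved perception thresholds in the (dx, dv) plane rather than fixed constants, which is precisely what the CC0–CC9 parameters shape.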
2.1 Objective and Contributions This study aims to showcase the improvements that can be achieved in VISSIM simulation of safety-critical behavior by utilizing real-time, safety-based driver behavior modeling. The real-time safety-based Optimal Velocity Model, proposed in a recent study [abdelhalim2021safety], is calibrated for the site of study using vehicle trajectories extracted from a roadside camera and is implemented in VISSIM as an external driver model. We compare the performance of our model to that of the default VISSIM model in replicating the existing safety-critical behavior in the area of study. We then assess the impact of different scenarios of signal control and speed enforcement in mitigating the simulated safety concerns. The remainder of this paper is organized as follows: (a) a survey of related literature on driver behavior modeling in microscopic simulation and trajectory-based traffic safety, (b) a detailed breakdown of the VISSIM model development and external driver model implementation, (c) results and analyses of the case study, and (d) discussion and conclusions. 2.2 Related Work 2.2.1 Driver Behavior and Safety Modeling in Microscopic Simulation Over the past two decades, the growing adoption of microscopic simulation software has provided researchers with powerful means for evaluating different components of transportation networks and their interactions. The simulated vehicle interactions, which form the basis of microscopic driving behavior models, have allowed for the assessment of traffic safety and the evaluation of countermeasures, as opposed to relying solely on analyzing crash data from police reports. The Surrogate Safety Assessment Model (SSAM) [gettman2008surrogate] is the prime example of such efforts.
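For reference, a classical (Bando-type) Optimal Velocity Model update is sketched below. The safety-based OVM of [abdelhalim2021safety] extends this basic idea; the functional form and parameter values here are standard textbook assumptions, not the paper’s calibrated model:

```python
import math

def optimal_velocity(gap, v_max=16.0, h_c=25.0, w=10.0):
    """Bando-type optimal velocity function of the gap [m] -> speed [m/s].
    v_max, h_c (inflection gap), and w (transition width) are illustrative."""
    return 0.5 * v_max * (math.tanh((gap - h_c) / w) + math.tanh(h_c / w))

def ovm_step(v, gap, lead_v, kappa=0.6, dt=0.1):
    """One explicit-Euler step: the follower relaxes toward the optimal
    velocity for its current gap with sensitivity kappa [1/s]."""
    a = kappa * (optimal_velocity(gap) - v)      # OVM acceleration law
    v_new = max(0.0, v + a * dt)                 # no reversing
    gap_new = gap + (lead_v - v) * dt            # gap evolves with speed diff
    return v_new, gap_new
```

Calibrating such a model amounts to fitting `kappa` and the optimal-velocity parameters so that simulated speed profiles match the observed trajectories, which is what makes an external driver model of this family attractive for replicating site-specific behavior.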
Using vehicle trajectories exported from microsimulation as input, SSAM’s approach utilizes conflict analysis of near misses, in terms of time-to-collision (TTC) and post-encroachment time (PET), to infer surrogate measures of actual crash data in terms of the frequency, types, and severity of crashes. SSAM, however, is dependent on the outputs generated by microsimulation. Hence, adequate parameter calibration of the underlying driver behavior model used to generate the vehicle trajectories is a crucial step; its absence was found to result in a significant deviation of simulated safety conflicts from the observed behavior [vasconcelos2014validation], [huang2013identifying], [dijkstra2010calculated]. Given the increasing popularity of VISSIM over the years, numerous studies have assessed the sensitivity of calibrating its underlying Wiedemann car-following model to better mimic the existing traffic flow and behavior and to yield more realistic simulated safety conflicts. Fellendorf and Vortisch [fellendorf2001validation] conducted one of the earliest studies, assessing the impact of driver model calibration at both the micro and macroscopic levels. The results of their study indicated that a well-calibrated model can replicate traffic flow with high accuracy, with the caveat that driver behavior models should at least take national and regional regulations and driving styles into account to produce reliable representations in simulation. Lownes and Machemehl [lownes2006vissim, lownes2006sensitivity] conducted a sensitivity analysis to assess the impact of modifying VISSIM’s driver behavior parameters on simulated traffic flow capacity. Their work highlighted how changes to driver behavior models at a microscopic level affect traffic flow at a macroscopic level, further making the case for the necessity of parameter calibration to ensure adequate representation of the traffic modeled in simulation.
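As a concrete illustration of the TTC-based screening that tools like SSAM perform on exported trajectories, the sketch below computes the instantaneous TTC for a car-following pair and counts sub-threshold time steps; the 1.5 s threshold and the data layout are illustrative assumptions, not SSAM’s exact implementation:

```python
def time_to_collision(gap, v_follow, v_lead):
    """Instantaneous TTC [s] for a follower closing on a leader;
    infinite when the follower is not closing the gap."""
    closing = v_follow - v_lead
    return gap / closing if closing > 0 else float("inf")

def conflict_events(trajectory, ttc_threshold=1.5):
    """Count time steps flagged as conflicts (TTC below threshold).
    trajectory: iterable of (gap [m], v_follow [m/s], v_lead [m/s])."""
    return sum(
        1 for gap, vf, vl in trajectory
        if time_to_collision(gap, vf, vl) < ttc_threshold
    )
```

Because the conflict counts depend entirely on the simulated gaps and speed differences, a poorly calibrated car-following model feeds distorted TTC distributions into this screening, which is the deviation the cited validation studies document.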
Other studies have also highlighted the necessity of taking into account the varying traffic patterns to be modeled and the specific considerations needed for calibrating VISSIM’s driver behavior parameters accordingly; such cases include modeling traffic at urban intersections [manjunatha2013methodology], [arafat2020data] and freeway merge areas [fan2013using]. The work of Fan et al. [fan2013using] particularly stands out in this category: although VISSIM’s default driver behavior parameters were calibrated for freeway driving, separate parameter calibration for merging sections specifically was still necessary, with the results highlighting the significant improvements obtained. This was attributed to the default VISSIM model parameters being calibrated from driving data from the German autobahn, while the study was conducted in China; hence the driving behavior parameters were not entirely transferable. The transferability of calibrated VISSIM driver behavior model parameters was assessed by Essa and Sayed [essa2015transferability], who calibrated the VISSIM model parameters to maximize the correlation between observed and simulated traffic conflicts in one intersection and tested the calibrated model on a nearby intersection. Their study concluded that while not all of the calibrated model parameters are fully transferable, the calibrated model still significantly outperforms the default model in terms of providing a better correlation between the observed and simulated traffic conflicts. Hence, any safety assessment without proper model calibration should be avoided. Essa and Sayed further highlighted this need for rigorous calibration and the limitations of simulation-based safety assessment in other studies [essa2015simulated], [essa2020comparison].
The extensive literature on VISSIM, and on microscopic model parameter calibration in general, lacks a clear and transferable framework to overcome the shortcomings of site-by-site data collection and calibration of driver model parameters to reduce errors versus observed data. In recent years, researchers have opted to utilize neural network models to bypass this tedious process [otkovic2020validation], [naing2021data]. While such methods succeed in producing improved results, they lack interpretability, unlike the traditional driving behavior models which are based on traffic flow theory fundamentals. There exists, therefore, a need for a generalizable driving behavior modeling approach for simulation to better mimic the vehicle trajectories being simulated and their resulting safety conflicts, allowing for a more accurate assessment of proposed safety interventions. To overcome this gap identified in driver behavior modeling calibration and its application in VISSIM, we propose an implementation of a data-driven high-fidelity external driver behavior model in VISSIM and showcase its utilization in a case study to assess appropriate safety interventions. The proposed external driver model is calibrated from video inference for the site of interest, providing an end-to-end scalable, automated, and generalizable workflow for replicating the observed behavior in microscopic simulation. 3 METHODOLOGY 3.1 Area of Study, Data Collection, and the VISSIM Model In a recent study [abdelhalim2021safety], we utilized an extended VT-Lane framework to obtain the trajectories and calibrate a data-driven safety-based driver behavior model for the area of study. Previous works detail the implementation and evaluation of the computer-vision-based trajectory tracking framework [abdelhalim2020vt, abdelhalim2020towards, abdelhalim2021framework].
Figure 1 shows the area of this study from the perspective of the roadside camera used to obtain the video data, illustrating the NEMA movement enumeration for the site. The traffic volumes and speed distributions obtained from the inference of 1-hour PM traffic footage were modeled in VISSIM alongside the existing on-site signal timing plan illustrated in Figure 2. 3.2 Incorporating the Safety-Based Driver Model VISSIM’s external driver model module is utilized to incorporate our calibrated safety-based OVM. The C++-generated Dynamic Link Library (DLL) file is executed during VISSIM’s simulation runs once every simulation step for all vehicles present in the network. All vehicles in the network are controlled by the default Wiedemann model that VISSIM utilizes unless they are engaged in car-following episodes. In this study, a vehicle is considered in car-following if it is moving at a speed $\geq$ 5 m/sec (18 km/hr), and there exists a slower leading vehicle moving in the same lane. The threshold ensures that the model is not utilized for vehicles that are stopping at the intersection due to signal control, which could otherwise be classified as car-following based on the estimated TTC. Figure 3 shows a flowchart with the code execution logic for driver behavior model selection in VISSIM, which is detailed in Algorithm 1. The widely used optimal velocity model first proposed by Bando et al. [bando1995dynamical] calculates the speed of the following vehicle based on the distance gap given by Equation 1, from which the desired acceleration is calculated using Equation 2. $$v_{opt}(s)=v_{o}\frac{\text{tanh}\left(\frac{s}{\Delta s}-\beta\right)+\text{tanh}\ \beta}{1+\text{tanh}\ \beta}$$ (1) $$\dot{v}=\frac{v_{opt}(s)-v}{\tau}$$ (2) Where: $s$ = Distance gap between vehicles in a car-following episode ($m$). $v_{opt}(s)$ = The theoretical optimal velocity for a given distance gap ($km/hr$). $v_{o}$ = Desired speed ($km/hr$), $\Delta s$ = Transition width ($m$).
$\beta$ = Form Factor, $\dot{v}$ = OVM acceleration ($km/hr/sec$). $v$ = Actual speed of a following vehicle ($km/hr$). $\tau$ = Adaptation time ($sec$). For our modified safety-based OVM [abdelhalim2021safety] which is incorporated in VISSIM as an external driver model for this study, the instantaneous optimal velocity and resulting desired acceleration based on real-time TTC are calculated using the following equations: $$v_{opt}(ttc)_{i,n}=v_{o}\frac{\text{tanh}\left(\frac{ttc_{i,n}}{\Delta s}-\beta\right)+\text{tanh}\ \beta}{1+\text{tanh}\ \beta}$$ (3) $$\dot{v}=(1-\alpha)\frac{v_{opt_{i,n}}-v_{i,n}}{\tau}+\alpha f(ttc_{observed})$$ (4) Where: $ttc_{i,n}$ = Instantaneous time-to-collision between vehicle${}_{i}$ and preceding vehicle in a car-following episode at simulation time step $n$ ($sec$). $v_{opt}(ttc)_{i,n}$ = The theoretical optimal velocity for vehicle $i$ during simulation time step $n$ ($km/hr$). $v_{o}$ = Desired speed ($km/hr$), $\Delta s$ = Transition width ($sec$). $\beta$ = Form Factor, $\dot{v_{i,n}}$ = Desired acceleration for vehicle $i$ during time step $n$ ($km/hr/sec$). $v_{i,n}$ = Actual speed of vehicle $i$ during simulation time step $n$ ($km/hr$). $\tau$ = Adaptation time ($simulation\ time\ steps$). The weighted observed acceleration, $\alpha f(ttc_{observed})$, allows our OVM to implicitly learn location-specific driving behavior characteristics from a subset of the observed driving behavior on-site to supplement the base OVM model’s assumptions while maintaining the benefits of interpretability of the remaining model parameters (e.g., the calibrated model had a lower desired speed for vehicles executing turning movements compared to through moving vehicles, which is logical). We utilized VT-Lane’s ability to classify the vehicles’ movements across the intersection to calibrate separate model parameters for through-moving vehicles and vehicles executing turning movements at the intersection. 
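To make the update rules concrete, Equations 1–4 can be sketched directly in code. The parameter values below are illustrative placeholders only (they are not the calibrated values from Table 1), and `f_observed` stands in for the learned observed-acceleration term $f(ttc_{observed})$:

```python
import math

def v_opt(x, v0, delta, beta):
    """Optimal velocity (Eqs. 1 and 3): x is the distance gap s for the
    base OVM, or the instantaneous TTC for the safety-based variant."""
    return v0 * (math.tanh(x / delta - beta) + math.tanh(beta)) / (1 + math.tanh(beta))

def ovm_accel(s, v, v0, delta, beta, tau):
    """Base OVM desired acceleration (Eq. 2)."""
    return (v_opt(s, v0, delta, beta) - v) / tau

def safety_ovm_accel(ttc, v, v0, delta, beta, tau, alpha, f_observed):
    """Safety-based OVM acceleration (Eq. 4): a weighted blend of the
    TTC-driven OVM term and the weighted observed-acceleration term."""
    return (1 - alpha) * (v_opt(ttc, v0, delta, beta) - v) / tau + alpha * f_observed

# Illustrative placeholder parameters (NOT the calibrated Table 1 values)
v0, delta, beta, tau, alpha = 55.0, 3.0, 1.5, 2.0, 0.3
a = safety_ovm_accel(ttc=2.0, v=40.0, v0=v0, delta=delta, beta=beta,
                     tau=tau, alpha=alpha, f_observed=-1.0)
```

Note how setting $\alpha=0$ recovers the base OVM of Equation 2, while $\alpha=1$ reduces the update to the observed-acceleration term alone.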
The parameter calibration process for the model and substantial improvements achieved compared to the base OVM are detailed in [abdelhalim2021safety]. The calibrated model parameters for each movement type utilized by the external driver model in this study are detailed in Table 1. Algorithm 1 details the steps for the external driver model application during each simulation time step, where it is applied to all the vehicles in simulation. 4 RESULTS AND ANALYSIS 4.1 Observed Vehicle Behavior and Safety Conflicts A 1-hour simulation (preceded by a 15-minute warm-up) was conducted in VISSIM and the instantaneous time-to-collision for vehicles involved in car-following episodes was calculated and stored in an external database. Figure 4 illustrates the location of safety conflicts and the associated average TTC for car-following instances where the TTC was $\leq$ 3 seconds for each location. X and Y coordinates are respectively the local longitude/latitude in the VISSIM model. It can be clearly observed that VISSIM’s base Wiedemann model is extremely conservative in generating safety conflicts during car-following episodes. Even when instances of low TTCs do occur, the vast majority take place as vehicles approach the stop bars at the intersections’ four approaches. The base VISSIM model generates minimal safety conflicts as vehicles traverse the intersection and as they continue to accelerate downstream, where following vehicles seem to always be accelerating at a rate slower than the leading vehicles, leading to a severe underestimation of safety conflicts as reported by numerous studies in the literature. Our Safety-Based OVM that was incorporated in VISSIM generates a more realistic driving behavior that would be expected at an urban intersection, especially as vehicles accelerate to clear the intersection and at the merge areas of through lanes with turning lanes as clearly illustrated in Figure 4 (b). 
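The car-following eligibility rule from Section 3.2 (a follower moving at $\geq$ 5 m/s with a slower leader in the same lane) and the instantaneous TTC logged during the simulation can be sketched as follows. The TTC formula here is the usual gap-over-closing-speed definition, which is an assumption since the paper does not spell it out explicitly:

```python
def in_car_following(speed_mps, leader_speed_mps, same_lane, min_speed=5.0):
    """Eligibility rule from Section 3.2: the external model is applied only
    to vehicles moving at >= 5 m/s with a slower leader in the same lane."""
    return same_lane and speed_mps >= min_speed and leader_speed_mps < speed_mps

def instantaneous_ttc(gap_m, speed_mps, leader_speed_mps):
    """Constant-speed TTC estimate: gap divided by closing speed; returns
    None when the follower is not closing in on the leader."""
    closing = speed_mps - leader_speed_mps
    return gap_m / closing if closing > 0 else None
```

For example, a follower at 10 m/s trailing a 5 m/s leader 20 m ahead yields a TTC of 4 seconds, well above the safety-critical thresholds discussed below.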
This distinction between VISSIM’s default behavior and the external driver model is most evident when looking at safety-critical situations where the TTC between vehicles involved in a car-following episode momentarily drops to 1 second or less as can be seen in the figure. Out of 2667 vehicles in the 1-hour simulation, 31 vehicles (1.1%) had safety-critical car-following instances using VISSIM’s default model, compared to 210 (7.8%) when utilizing the external driver model, where an instance is one simulation timestep (0.10 seconds). 4.2 Impact Assessment of Mitigation Measures Based on this replication of observed driving behavior, we assess the impact of different mitigation measures on reducing the number of simulated conflicts. The area of study has a posted speed limit of 35 mph (55 km/hr). The observed speeds from video inference that were utilized in creating this simulation model had an 80th percentile speed of 35 mph (55 km/hr) and 95th percentile speed of 47 mph (75 km/hr), which is expected as there is no speed enforcement on-site and drivers typically drive above this posted speed limit. We assess the impact of enforcing speeds at 65 km/hr and 55 km/hr, respectively. Those enforcement measures are assessed with three scenarios of signal control, the current signal control previously illustrated in Figure 2, a scenario with a half-cycle length (70 seconds instead of 140), and a split-phasing signal control scenario. A sensitivity analysis for those enforcement and signal control scenarios is conducted for the observed, as well as half, and double the traffic volumes. For brevity, the conventions for the different scenarios used hereafter are shown in Table 2. Signal control scenarios are color-coded and labeled by name. Figure 5 shows the counts and percentages of car-following episodes with instances of safety-critical TTC $\leq$ 1.5 sec. 
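The episode-level counts and percentages reported above (and in Figure 5) can be obtained by flagging any car-following episode containing at least one instantaneous TTC at or below the threshold. The sketch below shows one plausible aggregation; the mapping of episode ids to TTC samples is an assumed data structure, not the paper's actual database schema:

```python
def safety_critical_episodes(episode_ttcs, threshold=1.5):
    """Count car-following episodes with at least one instantaneous TTC at
    or below the threshold; episode_ttcs maps episode id -> TTC samples."""
    flagged = [eid for eid, ttcs in episode_ttcs.items()
               if any(t <= threshold for t in ttcs)]
    share = len(flagged) / len(episode_ttcs) if episode_ttcs else 0.0
    return len(flagged), share
```

With a 1.0-second threshold the same routine reproduces the "safety-critical instance" criterion used in the vehicle counts above.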
For the existing volume and given turn movement classifications at the intersection, utilizing a split-phasing control was found to reduce the number of generated safety-critical conflicts by over 23%. Enforcing a 65 km/hr speed provides further but not substantial improvements. Split-phasing was also found to produce substantially better results in terms of safety in the scenario of lower traffic volumes. Conversely, it was the lowest performer in the case of double the observed volume. Figures 6 to 8 show all of those safety-critical conflicts for the different simulation scenarios, illustrating instantaneous (not averaged) safety-critical time-to-collision conflicts and their locations within the intersection’s area. The split-phasing scenario, which was found to significantly reduce the existing safety conflicts, can be seen in Figure 8 to almost eliminate the conflicts generated at the turning lanes. The extended queues for phases 2 and 6 due to split-phasing resulted in an extended shockwave as vehicles approached and stopped at the intersection during a red light; hence it is to be avoided in case of an increased traffic volume at this intersection. 5 CONCLUSIONS This study provided an assessment of utilizing a data-driven driver behavior model to provide a high-fidelity simulation of safety-critical behavior. A safety-based Optimal Velocity Model was calibrated for the location of study in Blacksburg, VA, and implemented in VISSIM as an external driver model. The resulting safety conflicts were compared to those of VISSIM’s default Wiedemann model, and the results showcase the substantial improvements that can be achieved in simulation utilizing our proposed model in terms of replicating the existing safety concerns. We utilized this improved behavior modeling to quantify the impact of different signal control and speed enforcement scenarios in mitigating the simulated safety concerns.
Unlike existing studies in the literature, our assessment did not involve a process of specifically calibrating VISSIM’s model parameters to replicate the observed or expected safety concerns in the area of study. Instead, our safety-based OVM utilized as an external driver model in this study was calibrated beforehand from video inference, and the direct implementation of that calibrated model resulted in a highly accurate representation of the status quo at the area of study without targeted fine-tuning within VISSIM. The results highlight the value of this approach of video inference-based driver behavior modeling, which provides a highly scalable, automated, and generalizable workflow and succeeds in better replicating the observed behavior in microsimulation. Vehicle trajectories can be efficiently extracted in real-time via our VT-Lane framework. While additional time is required after trajectory acquisition to calibrate the model parameters, the task can be effectively accomplished in near real-time, which allows for the assessment of changes in driving behavior due to traffic disruptions, changes in weather, and other factors. We concluded this study by assessing the potential impact of different signal control and speed enforcement measures to reduce the simulated conflicts at the intersection of the study across varying traffic volumes. The results obtained and discussed in this study do not take into account the trade-off between capacity and operational safety, which could be the subject of future works. Those results, however, do present practitioners with an additional decision-support layer that effectively quantifies the expected safety impact for the selected interventions as a function of the existing driving behavior in an area of interest.
Future studies will also assess the scalability and transferability of this approach, which could provide traffic safety practitioners and transport agencies with a robust decision-support tool for evaluating the safety impact of different traffic demand management, signal control, and enforcement measures. 6 Author Contribution Statement The authors confirm their contribution to the paper as follows: study conception and design: A.A., M.A.; data collection: A.A.; analysis and interpretation of results: A.A.; draft manuscript preparation: A.A., M.A. Both authors reviewed the results and approved the final version of the manuscript. The authors do not have any conflicts of interest to declare.
On Haar systems for groupoids Anton Deitmar Abstract: It is shown that a locally compact groupoid with open range map does not always admit a Haar system. It is then shown how to construct a Haar system if the stability groupoid and the quotient by the stability groupoid both admit one. MSC: 28C10, 22A22 Contents 1 Locally compact groupoids 2 Haar systems Introduction Topological groupoids occur naturally in encoding hidden symmetries, as in fundamental groupoids or holonomy groupoids of foliations, see [Paterson], for instance. In order to construct convolution algebras on groupoids [Renault, RenaultCK], one needs continuous families of invariant measures, so-called Haar systems [Seda1], see also Section 1. These do not always exist. One known criterion is that a Haar system can only exist if the range map is open (Corollary to Lemma 2 in [Seda], see also [Williams]). A second obstruction, which has been neglected in the literature, is the possibility of failing support, i.e., it is possible that, although the range map is open, the support condition of a Haar system cannot be satisfied, see Proposition 2.2. We conjecture, however, that there should always be a Haar system for a locally compact groupoid with open range map, if the groupoid is second countable. We show how to construct Haar systems if the stability groupoid and its quotient both admit one. I thank Dana Williams for some very helpful comments. 1 Locally compact groupoids Definition 1.1. By a bundle of groups we understand a continuous map $\pi:G\to X$ between locally compact Hausdorff spaces together with a group structure on each fibre $G_{x}=\pi^{-1}(x)$, $x\in X$, such that the following maps are continuous: $$\displaystyle\varepsilon:X\to G$$ identity, $$\displaystyle m:G^{(2)}\to G$$ multiplication, $$\displaystyle\iota:G\to G$$ $$\displaystyle\text{inverse},$$ where $G^{(2)}$ is the set of all $(x,y)\in G\times G$ with $\pi(x)=\pi(y)$.
Note that this implies that $\varepsilon$ is a homeomorphism onto its image, so $X$ carries the subspace topology; but $X$ also carries the quotient topology induced by the surjective map $\pi$. In all, the topology on $X$ is determined by the one on $G$. Definition 1.2. Each fibre $G_{x}$, being a locally compact group, carries a Haar measure which is unique up to scaling. A coherent system of Haar measures is a family $(\mu_{x})_{x\in X}$ such that $\mu_{x}$ is a Haar measure on $G_{x}$ and for each $\phi\in C_{c}(G)$ the map $$x\mapsto\int_{G_{x}}\phi\,d\mu_{x}$$ is continuous. Proposition 1.3. Let $\pi:G\to X$ be a bundle of groups over a paracompact space $X$. There exists a coherent system of Haar measures $\mu_{x}$ on $G$ if and only if the map $\pi$ is open. Proof. This is Lemma 1.3 in [RenaultIdeal]. ∎ Definition 1.4. Let $X$ be a set. By a groupoid over $X$ we mean a category with object class $X$ (so it is a small category) in which each arrow is an isomorphism. We write $G$ for the set of arrows and we use the following notation: $$\displaystyle r,s:G\to X$$ range and source maps, $$\displaystyle\varepsilon:X\to G$$ identity, $$\displaystyle G^{(2)}\subset G\times G$$ set of composable pairs, $$\displaystyle m:G^{(2)}\to G$$ composition, $$\displaystyle\iota:G\to G$$ $$\displaystyle\text{inverse}.$$ Definition 1.5. A topological groupoid is a groupoid $G$ over $X$ together with topologies on $G$ and $X$ such that the structure maps $r,s,\varepsilon,m,\iota$ are continuous. Here $G\times G$ carries the product topology and $G^{(2)}\subset G\times G$ the subspace topology. Note that if $X$ is Hausdorff, then $G^{(2)}=\{(\alpha,\beta)\in G\times G:r(\beta)=s(\alpha)\}$ is a closed subset of $G\times G$. A locally compact groupoid is a topological groupoid such that $G$ and $X$ are locally compact Hausdorff spaces. From now on $G$ is assumed to be a locally compact groupoid.
We use the notation $$\displaystyle G_{x}$$ $$\displaystyle=\{g\in G:s(g)=x\},$$ $$\displaystyle G^{y}$$ $$\displaystyle=\{g\in G:r(g)=y\},$$ $$\displaystyle G_{x}^{y}$$ $$\displaystyle=G_{x}\cap G^{y}.$$ As $X$ is Hausdorff, all three sets are closed in $G$. Note that a bundle of groups is a special case of a groupoid $G$ with $G_{x}^{y}=\emptyset$ if $x\neq y$. Definition 1.6. For a groupoid $G$ the stability groupoid is defined to be the subset $$G^{\prime}=\big{\{}g\in G:r(g)=s(g)\big{\}}.$$ If $G$ is a topological groupoid, then $G^{\prime}$ is a closed subgroupoid. Definition 1.7. On a groupoid $G$ we install an equivalence relation $$g\sim h\quad\Leftrightarrow\quad r(g)=r(h)\text{ and }s(g)=s(h).$$ We write $[g]$ for the equivalence class, i.e., $[g]=G_{s(g)}^{r(g)}$. Now assume that $(\mu_{x}^{x})_{x\in X}$ is a coherent family of measures on the bundle of groups $G^{\prime}=\{g\in G:r(g)=s(g)\}$. We then get invariant measures $\mu_{[g]}$ on the classes $[g]$ by setting $$\int_{[g]}\phi(x)\,d\mu_{[g]}(x)=\int_{G_{s(g)}^{s(g)}}\phi(gx)\,d\mu_{s(g)}^{s(g)}(x).$$ The invariance of the $\mu_{x}^{x}$ yields the well-definedness of the $\mu_{[g]}$. The uniqueness of the Haar measure implies that $\mu_{[g]}$ is, up to scaling, the unique Radon measure on $[g]$ that is right-invariant under $G_{s(g)}^{s(g)}$ or left-invariant under $G_{r(g)}^{r(g)}$. In the sequel, we shall identify a Radon measure with its positive linear functional, so we write $\mu_{[g]}(\phi)$ for the above integral. Definition 1.8. We shall need the notion of a topological right-action of a topological groupoid $H$ on a topological space $Z$. This is given by the following data: first, there is a continuous surjection $\rho:Z\to X$, where $X$ is the base set of $H$. We define $$Z*H=\big{\{}(z,h):\rho(z)=r(h)\big{\}}.$$ This is a closed subset of $Z\times H$ and we consider it equipped with the corresponding topology.
Next the action is given by a map $$\displaystyle Z*H$$ $$\displaystyle\to Z,$$ $$\displaystyle(z,h)$$ $$\displaystyle\mapsto zh,$$ such that $\rho(zh)=s(h)$ and $z\cdot 1=z$ as well as $z(hh^{\prime})=(zh)h^{\prime}$ holds for all $(z,h),(z,hh^{\prime})\in Z*H$. Note that the action defines an equivalence relation on $Z$ given by $z\sim zh$ for $h\in H$. We naturally equip $Z/H$ with the quotient topology. Lemma 1.9. Assume the locally compact groupoid $H$ acts on a locally compact space $Z$ and that $H$ has open range map. Then the projection $Z\to Z/H$ is open. Proof. This is Lemma 2.1 in [MW]. However, in that paper the assertion was given under a stronger definition of $H$-actions than the one we use, as it was assumed that the map $\rho:Z\to X$ also be open. Lemma 2.1 and its proof in [MW], however, are valid under our weaker assumptions. For the convenience of the reader we shall show this by reproducing the proof here: Let $V\subset Z$ be open. In order to show that its image in $Z/H$ is open, it suffices to show that the union of orbits $VH=\big{\{}vh:v\in V,(v,h)\in Z*H\big{\}}$ is open in $Z$. So it suffices to show that any net $z_{i}\to vh$ with $v\in V$ and $h\in H$ eventually is in $VH$. But $\rho(z_{i})$ converges to $\rho(vh)=s(h)$. As the range map of $H$ is open, so is the source map $s$, hence the set $s(H)$ is open and we can find a net $h_{i}$ in $H$ on the same index set, such that $\rho(z_{i})=s(h_{i})$ for all $i\geq i_{0}$ for some index $i_{0}$. Further, the same applies to open neighborhoods of $h$, so we can choose the net so that $h_{i}\to h$. Then $z_{i}h_{i}^{-1}$ converges to $v$ and thus is eventually in $V$ and $z_{i}=z_{i}h_{i}^{-1}h_{i}$ is eventually in $VH$. ∎ Definition 1.10. An action of a groupoid $H$ on a space $Z$ is called free if $zh=z$ implies that $h=1_{\rho(z)}$, and it is called proper if the map $Z*H\to Z\times Z$, $(z,h)\mapsto(zh,z)$ is proper.
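Returning to the measures $\mu_{[g]}$ of Definition 1.7, the well-definedness claimed there can be verified in one line. The following is a sketch under the assumption that each $\mu_{x}^{x}$ is a left Haar measure on $G_{x}^{x}$:

```latex
% Sketch: well-definedness of \mu_{[g]}.
% If [g] = [g'], then s(g') = s(g) and r(g') = r(g), so
% k := g^{-1}g' lies in the group G_{s(g)}^{s(g)} and g' = gk.
% Left-invariance of \mu_{s(g)}^{s(g)} then gives, for \phi \in C_c(G),
\int_{G_{s(g)}^{s(g)}} \phi(g'x)\, d\mu_{s(g)}^{s(g)}(x)
  = \int_{G_{s(g)}^{s(g)}} \phi(gkx)\, d\mu_{s(g)}^{s(g)}(x)
  = \int_{G_{s(g)}^{s(g)}} \phi(gx)\, d\mu_{s(g)}^{s(g)}(x),
% so \mu_{[g]} does not depend on the chosen representative.
```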
For any groupoid $G$ the action of $G^{\prime}$ on $G$ is easily seen to be free and proper. Lemma 1.11. Let $G$ be a locally compact groupoid over a paracompact space $X$ and let $(\mu_{x}^{x})_{x\in X}$ be a coherent system of Haar measures on the groups $G_{x}^{x}$, $x\in X$. Then for every $\phi\in C_{c}(G)$ the function $$\overline{\phi}:g\mapsto\mu_{[g]}(\phi)$$ is continuous. Proof. Since the $G^{\prime}$ action is free and proper, this is immediate from Lemma 2.9 of [MRW]. ∎ 2 Haar systems Definition 2.1. A Haar system on the locally compact groupoid $G$ is a family $(\mu^{x})_{x\in X}$ of Radon measures on $G$ with (a) $\operatorname{supp}(\mu^{x})=G^{x}$, (b) $\displaystyle\int_{G}\phi(\alpha g)\,d\mu^{y}(g)=\int_{G}\phi(g)\,d\mu^{x}(g)$ for every $\phi\in C_{c}(G)$ and every $\alpha\in G_{y}^{x}$, (c) $\displaystyle x\mapsto\int_{G}\phi(g)\,d\mu^{x}(g)$ is continuous on $X$ for every $\phi\in C_{c}(G)$. If a locally compact groupoid $G$ admits a Haar system, then the range map, and hence the source map too, is open, see Corollary to Lemma 2 in [Seda], see also [Williams]. The question of the converse assertion, asked in [Williams], is answered in the negative by the following proposition. Proposition 2.2. There exists a locally compact, even compact, groupoid $G$ whose range map is open, but on which no Haar system exists. Proof. There are locally compact, even compact, Hausdorff spaces which cannot be the support of any Radon measure. Here are two examples: • Let $X$ be the unit ball of a Hilbert space of uncountable dimension and equip $X$ with the weak topology. By the Banach-Alaoglu Theorem, $X$ is a compact Hausdorff space. By Corollary 7.14.59 of volume 2 of [Boga], the set $X$ cannot be the support of any Radon measure. • (Williams) Let $Y$ be an uncountable set with the discrete topology and let $X=Y\cup\{\infty\}$ be its one-point compactification. Then $X$ cannot be the support of any Radon measure.
To see this, let $m$ be a Radon measure on $X$; then $m(X)<\infty$, as $X$ is compact. Further, $m(Y)=\sum_{y\in Y}m(\{y\})$, as $m$ is regular and the only compact subsets of $Y$ are the finite sets. As $m(Y)<\infty$, the set $M$ of all $y\in Y$ with $m(\{y\})>0$ is countable, therefore $M\neq Y$ and $m$ is supported in $M\cup\{\infty\}$. Let now $X$ be any locally compact Hausdorff space which is not the support of a Radon measure. Let $G=X\times X$ with the product topology and make $G$ a groupoid by setting $(x,y)(y,z)=(x,z)$ and $r(x,y)=x$ as well as $s(x,y)=y$. Then the source map is a homeomorphism between $G^{x}$ and $X$, so $G^{x}$ cannot be the support of any Radon measure, hence no Haar system exists. ∎ Conjecture 2.3. Every second countable, locally compact groupoid with open range map admits a Haar system. Definition 2.4. Let $G$ be a groupoid over $X$. We write $E(G)\subset X\times X$ for the image of the map $g\mapsto(s(g),r(g))$. Then $E(G)$ is an equivalence relation on $X$. We say that a groupoid $G$ is a principal groupoid if $G_{x}^{x}=\{1_{x}\}$ for every $x\in X$. This means that the groupoid is completely described by its equivalence relation. Note, though, that for topological groupoids the topology on $G$ generally differs from the one on $E(G)$ as a subset of $X\times X$. Lemma 2.5. Let $G$ be a groupoid over a set $X$. Define an equivalence relation on $G$ by $$g\sim h\quad\Leftrightarrow\quad r(g)=r(h)\text{ and }s(g)=s(h).$$ Then the set $\overline{G}=G/\sim$ becomes a groupoid, indeed a principal groupoid, by setting $[g][h]=[gh]$ whenever $g$ and $h$ are composable. Proof. This is easily checked. ∎ Theorem 2.6. Let $G$ be a locally compact groupoid over a paracompact space $X$. Suppose that the stability groupoid $G^{\prime}$ has open range map. (a) The groupoid $\overline{G}$, when equipped with the quotient topology, is a locally compact groupoid. The quotient map $G\to\overline{G}$ is open.
(b) If the range map of $G$ is open, then so is the range map of $\overline{G}$. (c) If $\overline{G}$ admits a Haar system, then $G$ admits a Haar system. Proof. (a) By Proposition 1.3, the groupoid $G^{\prime}$ admits a coherent system of Haar measures $(\mu_{x}^{x})_{x\in X}$. Let $g_{0}\in G$ and let $\phi\in C_{c}^{+}(G)$ such that $\phi(g_{0})>0$. Let $$\overline{\phi}:g\mapsto\int_{G_{s(g)}^{s(g)}}\phi(gh)\,d\mu_{s(g)}^{s(g)}(h),$$ in accordance with Definition 1.7 and Lemma 1.11. By Lemma 1.11 the map $\overline{\phi}$ is continuous. It factors over $\overline{G}$, hence defines a continuous map of compact support on $\overline{G}$. The set $U=\{x\in\overline{G}:\overline{\phi}(x)>0\}$ is an open neighborhood of $[g_{0}]$, so $\operatorname{supp}(\overline{\phi})$ is a compact neighborhood of $[g_{0}]$. Therefore $\overline{G}$ is locally compact. If $[g]\neq[h]$, then we can find $\phi,\psi\in C_{c}^{+}(G)$ such that $\overline{\phi}$ and $\overline{\psi}$ have disjoint supports and $\phi(g),\psi(h)>0$. Considering the continuous function $\overline{\phi}-\overline{\psi}$ on $\overline{G}$, one sees that $[h]$ and $[g]$ have disjoint neighborhoods, so $\overline{G}$ is a Hausdorff space. Together we infer that $\overline{G}$ is a locally compact groupoid. The quotient map $p:G\to\overline{G}$ is open by Lemma 1.9. (b) As the range map of $G$ is open and factors over the range map of $\overline{G}$, the range map of $\overline{G}$ is open as well. (c) If $(m^{x})$ is a Haar system for $\overline{G}$, then $$\phi\mapsto\int_{\overline{G}}\overline{\phi}(g)\,dm^{x}(g)$$ defines a Haar system on $G$. ∎ References Mathematisches Institut, Auf der Morgenstelle 10, 72076 Tübingen, Germany [email protected]
SIRI: Spatial Relation Induced Network For Spatial Description Resolution Peiyao Wang $\dagger$, Weixin Luo $\dagger$, Yanyu Xu, ShanghaiTech University, {wangpy, luowx, xuyy2}@shanghaitech.edu.cn; Haojie Li, Dalian University of Technology, [email protected]; Shugong Xu, Shanghai University, [email protected]; Jianyu Yang, Soochow University, [email protected]; Shenghua Gao, [email protected] Abstract Spatial Description Resolution, as a language-guided localization task, is proposed for target location in a panoramic street view, given corresponding language descriptions. Explicitly characterizing object-level relationships while distilling spatial relationships is currently absent from existing methods but crucial to this task. Mimicking humans, who sequentially traverse spatial relationship words and objects with a first-person view to locate their target, we propose a novel spatial relationship induced (SIRI) network. Specifically, visual features are firstly correlated at an implicit object-level in a projected latent space; then they are distilled by each spatial relationship word, resulting in each differently activated feature representing each spatial relationship. Further, we introduce global position priors to fix the absence of positional information, which may result in global positional reasoning ambiguities. Both the linguistic and visual features are concatenated to finalize the target localization. Experimental results on the Touchdown dataset show that our method is around 24% better than the state-of-the-art method in terms of accuracy, measured by an 80-pixel radius. Our method also generalizes well on our proposed extended dataset collected using the same settings as Touchdown.
The code for this project is publicly available at https://github.com/wong-puiyiu/siri-sdr. ($\dagger$: Equal Contribution) 1 Introduction Visual localization tasks aim to locate target positions according to language descriptions, where many downstream applications have been developed such as visual question answering antol2015vqa ; santoro2017simple ; selvaraju2017grad , visual grounding rohrbach2016grounding ; plummer2018conditional ; yu2018rethinking ; dogan2019neural and spatial description resolution (SDR) chen2019touchdown , etc. These language-guided localization tasks can be categorized in terms of input formats, e.g. perspective images in visual grounding or panoramic images in the recently introduced SDR. The Challenge of SDR: Both visual grounding and spatial description resolution need to explore the correlation between vision and language to locate the target locations. Unlike traditional visual grounding, the recently proposed spatial description resolution on panoramic images, however, presents its own difficulties due to the following aspects. (1) As shown in Figure 1, the complicated entities, such as buildings in an image, present challenges even for advanced object detection he2017mask . For example, existing methods may fail to instantiate multiple adjacent buildings. (2) The short language descriptions in visual grounding are more about well-described instances with multiple attributes, while the long descriptions in spatial description resolution describe multiple spatial relationship words, such as ‘your right/left’, ‘on the left/right’ and ‘in the front’, from a distant starting point to the target. It is worth noting that such crucial issues have not been well addressed in previous work. (3) Panoramic images in SDR with a first-person view cover more complex visual details on a street compared to the perspective images with a third-person view in visual grounding.
Our Solution: To efficiently tackle SDR, humans start at their own position with a first-person perspective and sequentially traverse the objects with spatial relationship words, finally locating their target. To mimic this human behavior on SDR, we propose a spatial relationship induced (SIRI) network to explicitly tackle the SDR task in a real-world environment. As shown in Figure 2, we first leverage a graph-based global reasoning network chen2019graph (GloRe) to model the correlations of all the object-object pairs in the extracted visual feature, where the visual feature is projected to a latent space to implicitly represent object instances in an unsupervised manner. Implicitly learning object concepts and their visual correlations frees us from explicitly designing an object detector for a street view. Meanwhile, it enables each object in the image to accumulate its contextual information, which is extremely important for scene understanding as well as for spatial description resolution. Next, a local spatial relationship guided distillation module is appended to distill the visual features into different discriminative features, each of which corresponds to a spatial relationship word. We argue that distilling visual features with local spatial relationships concentrates them into specific features corresponding to these crucial language hints, consequently facilitating final target localization. After averaging all the distilled features, we introduce two global coordinate maps, whose origin is at the agent’s position, i.e., the bottom center of the image. Such a position prior alleviates the ambiguities of global positional reasoning in an efficient way. All encoded linguistic features, distilled visual features and position priors are fed into LingUnet chen2019touchdown to finalize target localization. It is worth noting that our solution tackles the task of SDR in a highly efficient way and performs significantly better than other existing methods.
Our contributions: (1) A novel framework is proposed to explicitly tackle the SDR task in terms of object-level visual correlation, local spatial relationship distillation and global spatial positional embedding. (2) Extensive experiments on the Touchdown dataset show that our method outperforms LingUnet chen2019touchdown by 24% in terms of accuracy, measured by an 80-pixel radius. (3) We propose an extended dataset collected using the same settings as Touchdown, and our proposed method also generalizes well. 2 Related Work Language Guided Localization Task. Visual grounding rohrbach2016grounding ; plummer2018conditional ; yu2018rethinking ; dogan2019neural and referring expression comprehension nagaraja2016modeling ; yu2018mattnet ; wang2019neighbourhood aim to locate target objects or regions according to given language. The images in these tasks are perspective images that contain a limited number of entities, and the expression languages are also short. Object detection, which is one of the tasks in these datasets, is commonly used to provide a prior that functions as a correspondence between objects in images and language-based entity nouns. Methods under the object detection framework can be categorized in two ways. The first category plummer2015flickr30k ; wang2018learning ; yu2018mattnet ; plummer2018conditional has two stages, in which object detection is carried out at the beginning and object proposals are ranked according to the language query. Two-stage approaches, however, are time-consuming. Thus, one-stage approaches yang2019fast ; sadhu2019zero ; zhao2018weakly ; chen2018real have been proposed to achieve greater efficiency. Nevertheless, object detectors can fail when it comes to the real-world environments in spatial description resolution chen2019touchdown , where more objects and complex backgrounds are included with large fields of view, as shown in Figure 1.
In addition, the given language descriptions in SDR are longer and describe more spatial relationships between object pairs. Undoubtedly, existing one-stage grounding methods, which use only weak contextual information about objects, are not specialized for processing spatial positioning words. Recently, LingUnet chen2019touchdown was proposed; it treats linguistic features as dynamic filters to convolve visual features, taking all regions into consideration. But it does not yet fully explore the visual and spatial relationships in such complex environments. In this paper, we intend to fully investigate these spatial relationships between objects. Spatial Positional Embedding. As has been studied, convolutional layers cannot easily extract position information liu2018intriguing . Thus, spatial positional embedding has been commonly used in localization tasks. For instance, Liu et al. liu2018intriguing proposed CoordConv to concatenate coordinate maps into channels of features, enabling convolutions to access their own input coordinates, which benefits multiple downstream tasks. In addition, coordinate maps have been embedded in object detection gu2018learning . An 8-D spatial coordinate feature is provided at each spatial position for image segmentation gould2008multi . Manhardt et al. manhardt2019roi included 2D coordinate maps with the corresponding regions to predict more precise depth maps. Bello et al. bello2019attention concatenated positional channels to an activation map to introduce explicit spatial information for recognition tasks. In SDR, it is particularly important to accurately describe the positional information of each object, since the corresponding language descriptions sequentially depict bearings between objects. All the operations in the recently proposed LingUnet are, however, convolutional layers, leading to an absence of positional information for each pixel. Undoubtedly, ambiguities emerge when duplicated target objects are present in the same image.
Unfortunately, spatial positional embedding has not been properly studied in SDR. Thus, we introduce global spatial positional information to SDR to handle this problem. 3 Method 3.1 Overview We illustrate our proposed SIRI network in Figure 2. It consists of an object-level visual correlation module, a local spatial relationship guided distillation module and a global spatial positional embedding module. For privacy preservation, only the features extracted from a pretrained ResNet18 he2016deep are provided in the Touchdown dataset. Thus, given the object-level visual feature of an image $I$ with a shape of $h(\text{height})\times w(\text{width})$ and a natural language description, the output of our proposed SIRI network is a heatmap with the same resolution as the input image, and its peak is the final target localization result. All spatial relationships represented by orientation words in the entire dataset form the set $W=\{$right, left, …$\}$. Further, we denote the orientation words in the descriptions corresponding to the image as $W_{I}$. Language representation. Different images have language descriptions of different lengths. We adopt a bidirectional LSTM (BiLSTM) to extract the linguistic features at all time steps for all words in a given language description. Then an averaging operation is conducted on these linguistic features, resulting in a fixed-length feature vector $L_{I}$. It then functions as a dynamic filter in LingUnet, where it is separated into two equally sized slices, each of which is projected and reshaped into a filter by a fully-connected layer. It is also projected and reshaped into the linguistic features used in feature concatenation via another fully-connected layer. 3.2 Object-level Visual Feature Correlation Since the goal of SDR is to localize a specific position on an object, an advanced object detector such as Mask R-CNN he2017mask can be used to instantiate each object in the given street-view image.
Adjoining buildings, however, cannot be differentiated by such detectors, leading to a single detection box covering them. Therefore, we instantiate each object in a latent space using GloRe chen2019graph . Specifically, we cast the input space into an interaction space by multiplying by a projection matrix, where a graph convolutional network kipf2016semi is then utilized to conduct visual relationship learning. Another projection matrix is then used to cast the visual relationship features back to the original space. Such an embedding space enables us to conduct object-level visual feature correlation in an unsupervised manner. Thus, we stack GloRe multiple times to fully explore the visual relationships of the input visual features. 3.3 Local Spatial Relation Guided Distillation The correlated visual features $X$ are dense, containing the visual features of many objects together with their local spatial relationships. This makes it difficult to find the target position. On the other hand, the spatial relationship words in the corresponding language descriptions are limited and are related to some specific objects. Thus, distilling the specific spatial relationships corresponding to these language descriptions is helpful for locating the language-guided regions in panoramas. In this paper, we employ spatial relationship guided distillation to distill the correlated visual features based on each spatial relationship (orientation) word in the language descriptions. Specifically, we introduce $K$ branches of convolutional blocks that correspond to $K$ orientation words. For each branch corresponding to a specific orientation word, a different $5\times 5$ trainable filter is used to activate the features corresponding to that word. Ideally, $K$ would be equal to the size of $W$. This is, however, impractical given the large set of orientation words in the entire dataset.
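The project–reason–reproject data flow of GloRe described in Section 3.2 can be illustrated with a minimal, self-contained sketch. The dimensions are arbitrary and the random matrices stand in for the learned projection and GCN weights, so this is only a sketch of the data flow, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def glore_block(X, n_nodes=8, d_state=16):
    """Sketch of graph-based global reasoning (GloRe).

    X: (C, H*W) feature map flattened over spatial positions.
    Random matrices stand in for the learned 1x1 convolutions and GCN weights.
    """
    C, L = X.shape
    B = rng.standard_normal((n_nodes, L))        # projection: pixels -> latent nodes
    phi = rng.standard_normal((d_state, C))      # channel/state reduction
    V = phi @ X @ B.T                            # node states in interaction space, (d_state, n_nodes)
    A = rng.standard_normal((n_nodes, n_nodes))  # graph adjacency (learned in practice)
    Wg = rng.standard_normal((d_state, d_state)) # GCN weight
    Z = Wg @ V @ (np.eye(n_nodes) - A)           # graph convolution over the latent nodes
    phi_back = rng.standard_normal((C, d_state))
    Y = phi_back @ Z @ B                         # reverse projection back to pixel space
    return X + Y                                 # residual connection

X = rng.standard_normal((32, 10 * 20))
out = glore_block(X)
print(out.shape)  # (32, 200)
```

Stacking this block several times, as the paper does, simply means feeding the output back in as the next input.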
Thus, we select the top $k$ high-frequency words among $W$, which form the set $W^{H}$. Then, the outputs of these $k$ branches will be averaged. We also use a skip connection to add the input features $X$ to this output, in case none of the high-frequency orientation words are present in the language descriptions. Mathematically, the output of the spatial relationship distillation $G$ can be formulated as follows: $$G(X)=\sum_{k=1}^{K}\mathbbm{1}_{\{W_{I}^{k}\in W^{H}\}}\times\text{Conv}_{k}(X)+X.$$ (1) As shown in Figure 2, the orientation words in the descriptions function as switches: if a certain orientation word is present in the descriptions, then the corresponding convolutional branch is activated, regardless of the number of appearances. Finally, the features corresponding to the high-frequency orientation words in the descriptions will be distilled in an end-to-end training manner. 3.4 Global Spatial Positional Embedding The previous procedure, however, misses global positional information, which makes the final target localization difficult due to global positional words such as ‘on your left’ in the language descriptions. To provide this absent but extremely important information, we introduce a spatial positional embedding with global coordinate maps. By concatenating the distilled features with these two auxiliary features, the ambiguities due to the absence of global positional information are alleviated. Global Coordinate Maps. Since the language descriptions are based on the egocentric viewpoint of an agent that is always located at the bottom center and moving forward, most reasoning routes start from the position of the agent and turn to either the upper left side or the upper right side of the panorama. Thus, such orientations provide a strong prior for reasoning routes.
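A minimal sketch of the gating in Eq. (1): each high-frequency orientation word owns a branch that is activated once if the word occurs in the description, plus a skip connection. The word set and the use of 1x1 channel maps in place of the 5x5 trainable filters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

C, H, W = 8, 4, 6
top_k_words = ["left", "right", "front", "behind", "ahead", "corner"]  # hypothetical W^H
branches = {w: rng.standard_normal((C, C)) * 0.1 for w in top_k_words}  # 1x1 convs as channel maps

def distill(X, description_words):
    """Eq. (1): sum the branches whose orientation word occurs in the
    description (each branch counted once, regardless of how many times
    the word appears), plus a skip connection."""
    G = X.copy()
    for w in set(description_words) & set(top_k_words):
        G += np.einsum('dc,chw->dhw', branches[w], X)
    return G

X = rng.standard_normal((C, H, W))
G = distill(X, ["turn", "left", "left", "front"])  # 'left' activates its branch once
```

If no orientation word from the top-k set appears, the skip connection returns the input features unchanged, as in the paper.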
More specifically, we build a coordinate system whose origin coincides with the location of the agent (the bottom center of the image), where the x-axis runs along the horizontal direction and the y-axis along the vertical direction. We then arrive at two coordinate maps whose values correspond to the coordinates in the x-axis direction and the y-axis direction, respectively, as shown in Figure 2. These coordinate maps are denoted as $M^{C}\in\mathbb{R}^{h^{F}\times w^{F}\times 2}$. In addition, we normalize them onto the range [0, 1]. Considering the subsequent fusion of different feature maps, we first transform the language representation $L_{I}$ to a vector with a fully-connected ($FC$) layer and then reshape it to a feature map with the same resolution as the image. Then, we concatenate these linguistic features with the coordinate maps, as well as with the distilled visual features. This is followed by a convolution operation with a kernel size of $3\times 3$ to fuse all of this information. Formally, we denote the output $R$ of these operations as follows: $$R(M^{C},L_{I},I^{F})=\text{Conv}([M^{C};\text{Reshape}(FC(L_{I}));G(I^{F})]).$$ (2) 3.5 Destination Estimation At this point, we could directly predict the target position map, which is regularized by the corresponding ground truth. Motivated by the success of LingUnet for SDR, we append a LingUnet for destination estimation. Formally, we denote the predicted heatmap $\hat{M}$ as follows: $$\hat{M}=\text{LingUnet}(R(M^{C},L_{I},I^{F}),L_{I}).$$ (3) It is worth noting that our proposed method achieves significantly better results than LingUnet, even without appending LingUnet. 3.6 Objective Function Given the input image feature $I$ and the corresponding language description, we apply Equation (3) to generate the predicted heatmap $\hat{M}$. For the ground-truth heatmap, we apply a Gaussian filter over the target position and denote it as $M$.
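The ground-truth heatmap $M$ can be built, for example, as a Gaussian centered at the target pixel and normalized to sum to one so that it forms a distribution; the value of sigma here is our assumption for illustration:

```python
import numpy as np

def gt_heatmap(h, w, ty, tx, sigma=3.0):
    """Ground-truth heatmap M: a Gaussian centered at the target pixel
    (ty, tx), normalized to sum to 1."""
    ys, xs = np.mgrid[0:h, 0:w]
    M = np.exp(-((ys - ty) ** 2 + (xs - tx) ** 2) / (2.0 * sigma ** 2))
    return M / M.sum()

M = gt_heatmap(100, 464, ty=40, tx=200)
peak = np.unravel_index(np.argmax(M), M.shape)
print(peak)  # (40, 200): the peak sits at the target position
```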
We then leverage a KL divergence loss between the ground-truth heatmap $M$, down-sampled to $h\times w$, and the predicted heatmap $\hat{M}$ over each pixel, which, up to a constant independent of $\hat{M}$, reduces to the cross-entropy: $$L_{KL}(\hat{M},M)=-\sum_{i=1}^{hw}M_{i}\log{\hat{M}_{i}}.$$ (4) 4 Experiments Touchdown and Extended Touchdown datasets. We conducted all experiments on the Touchdown dataset chen2019touchdown , which is designed for navigation and spatial description reasoning in a real-life environment. In this paper, we focus on the spatial description resolution task of locating targets on Touchdown, given panoramic images and corresponding language descriptions. Location descriptions are given as natural language strings, and the target locations are presented as heatmaps. In total, this dataset contains $27,575$ samples for SDR, including $17,878$ training samples, $3,836$ validation samples and $3,859$ testing samples. To see how well our proposed method generalizes in the wild, we built a new, extended dataset of Touchdown, using data collected under the same settings as the original Touchdown. The details and an analysis of our proposed dataset can be found in the supplementary materials. Implementation Details. It should be noted that we do not conduct any down-sampling operations in any of the modules, which means that the resolutions of all the feature maps are $100\times 464$. In the spatial relationship guided distillation procedure, we choose the top six high-frequency orientation words, because of the large number of orientation words in the entire dataset and their long-tail distribution. In all experiments, we use the Adam optimizer kingma2014adam to train the network. In addition, the mini-batch size and the learning rate are 10 and 0.0001, respectively. The code is implemented in PyTorch. Evaluation Metric. Following the previous work chen2019touchdown , we adopt the same evaluation metrics of accuracy and distance.
We denote the peaks of the predicted heatmap and the ground-truth heatmap as $\hat{m}$ and $m$, respectively. Formally, the distance is defined as follows: $$Dist(m,\hat{m})=\|m-\hat{m}\|_{2}$$ (5) Similarly, the accuracy is defined over the whole dataset with $N$ samples, based on radii $r$ of 40, 80 and 120 pixels, denoted as A@40px, A@80px and A@120px, respectively. $$\text{A@}r\text{px}=\frac{1}{N}\sum_{i=1}^{N}\mathbbm{1}_{\{Dist(m_{i},\hat{m}_{i})\leq r\}}\times 100\%$$ (6) 4.1 Comparison with the State-Of-The-Art Following the previous work chen2019touchdown , we compare our method with three non-learning-based methods, i.e. Random, Center and Average, as well as two learning-based baselines, i.e. Text2Conv and LingUnet, to prove the effectiveness of our proposed SIRI network. To investigate the effects of the introduced LingUnet on destination estimation, we replace LingUnet with several convolutional layers with nearly the same number of parameters as a SIRI-Conv baseline. We show the results of all the methods on the Touchdown dataset in Table 1, where we can see that our proposed SIRI network significantly improves the performance by almost 24% for A@80px, with the distance error decreasing by 66 pixels on the testing set compared to the state-of-the-art method LingUnet. In addition, when the baseline SIRI-Conv replaces LingUnet with several convolutional layers, a slight performance drop results, demonstrating the effectiveness of our proposed modules in SIRI. We also compare our approach to some closely related work, including FiLM perez2017film . FiLM has achieved state-of-the-art performance on the CLEVR benchmark johnson2017clevr . The GRU module in FiLM functions similarly to the BiLSTM module in our proposed SIRI, and the FiLM layers function similarly to the spatial relationship guided distillation module. In addition, positional embedding is leveraged in both studies.
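The metrics of Eqs. (5) and (6) are straightforward to compute; the function name and sample peaks below are hypothetical:

```python
import numpy as np

def accuracy_at_r(pred_peaks, gt_peaks, r):
    """A@r px of Eq. (6): percentage of samples whose predicted peak lies
    within r pixels (Euclidean distance, Eq. (5)) of the ground-truth peak."""
    pred = np.asarray(pred_peaks, dtype=float)
    gt = np.asarray(gt_peaks, dtype=float)
    dist = np.linalg.norm(pred - gt, axis=1)  # Dist(m, m_hat) per sample
    return 100.0 * np.mean(dist <= r)

pred = [(10, 10), (50, 120), (0, 0)]
gt = [(12, 14), (50, 30), (200, 200)]
print(accuracy_at_r(pred, gt, 40))  # only the first sample is within 40 px
```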
The spatial relationship guided distillation that we introduced, however, equips the network with a gate mechanism, while FiLM performs a feature-wise affine transformation. To fairly compare our SIRI with FiLM, we keep only the local spatial relationship guided distillation module and the global spatial positional embedding module in SIRI. Our SIRI has an A@80px of 58.33%, much better than FiLM, which has an A@80px of 52.37%. This demonstrates the effectiveness of our SIRI. 4.2 Generalization on Our Proposed Dataset As shown in Table 2, we also test our proposed model (trained on the entire Touchdown dataset) on our proposed extended dataset. This table shows that SIRI consistently outperforms LingUnet by around 18% for A@80px, which demonstrates the robustness of our proposed method. Further, we visualize the prediction results on our proposed extended Touchdown dataset, as shown in Figure 3, which illustrates that its predictions are closer to the ground truth compared to those of LingUnet. 4.3 Ablation Study Our proposed SIRI significantly improves the accuracy of reasoning about target positions via the addition of our proposed modules, as compared to the baseline LingUnet (a). Knowing this, we carefully investigate the impact of each proposed module, as well as their combinations. To begin with, the modules except LingUnet are evaluated independently, corresponding to methods (b), (c) and (d). As shown in Table 3, the largest improvement comes from the local spatial relationship guided distillation module, where A@80px is improved by around 10%. The second biggest improvement, of 4%, comes from the global spatial positional embedding. This means that this module provides accurate positional information when exploring the spatial position relationships of objects. In addition, we carefully study the number of top-$k$ selected orientation words in this module.
The A@80px accuracies are 51.02%, 58.33% and 59.74% when $k$ is 4, 6 and 8, respectively, where method (g) is evaluated. To reduce the time cost, we set $k$ to 6 throughout all the experiments. Further, performance consistently increases when more modules are connected in series. This means that these modules are complementary, and state-of-the-art accuracy is ultimately achieved when they are all appended. 4.4 Visual Ambiguities We further carefully investigate which part of SDR our proposed SIRI improves. To begin with, the SDR task can be split into two individual sub-tasks, i.e. target object localization and spatial relationship reasoning. It is worth noting that the performance gain in the SDR task may result merely from the improvement of target object localization, especially when the target object is unique throughout the entire given image and the spatial relationship reasoning is ignored. It is, however, difficult to visualize the spatial relationships. Thus, when the target is on the left, we copy the left half of each testing image and paste it over the right half, and vice versa. In this way, we introduce visual ambiguities into all the testing images to see whether our proposed SIRI as well as LingUnet is able to capture the spatial relationships rather than only conduct object localization. Quantitative Analysis. To this end, we first conduct inference on all the copy-pasted testing images and calculate the accuracies. As shown in Table 4, we observe a roughly 5% drop in A@40px for SIRI, compared to around a 14% drop for LingUnet. We claim that our proposed SIRI fully explores these spatial relationships, thus leading to a smaller performance drop when visual ambiguities are introduced. Qualitative Analysis. Next, we visualize the localization results of SIRI and LingUnet in the following two cases, to see what caused this performance drop.
1) For those samples whose language descriptions have some absolute orientation words such as ‘on your left’, this introduced visual ambiguity does not adversely affect target localization. As shown in Figure 4 (a), SIRI successfully predicts the correct target position, although the copy operation introduces ambiguity; LingUnet, however, predicts the target position in an incorrect, opposite direction. Therefore, our proposed SIRI properly characterizes this spatial relationship, whereas LingUnet overfits to the target object localization and causes a significant performance drop. 2) For those samples whose language descriptions have some ambiguous words such as ‘next to you’, the introduced visual ambiguity misdirects target localization. As shown in Figure 4 (b), both SIRI and LingUnet predict the target position in an incorrect, opposite direction, which explains why A@40px for SIRI decreases by around 5%. 4.5 Running Time To evaluate the efficiency of our proposed method, we calculate the number of parameters, running time per 50 images and A@80px for LingUnet and SIRI. It should be noted that these running times exclude the inference time for feature extraction. All the experiments are conducted with a GeForce GTX TITAN X. As shown in Table 5, our proposed SIRI cannot operate in real time at the moment, but solutions such as model compression and model distillation can be studied. We leave this for future work. 5 Conclusion We present a novel spatial relationship induced network for the SDR task. It characterizes the object-level visual feature correlations, which enables an object to perceive the surrounding scene. Besides, the local spatial relationship is distilled to simplify the spatial relationship representation of the entire image. Further, global positional information is embedded to alleviate the ambiguities caused by the absence of spatial positions throughout the entire image.
Since our proposed network can fully explore these spatial relationships and is robust to the visual ambiguities introduced by a copy-paste operation, our proposed SIRI outperforms the state-of-the-art method LingUnet by 24% for A@80px, and it generalizes consistently well on our proposed extended dataset. Acknowledgements This work was supported by the National Key R&D Program of China (2018AAA0100704), NSFC (No. 61932020, No. 61773272), the Science and Technology Commission of Shanghai Municipality (Grant No. 20ZR1436000) and ShanghaiTech-Megavii Joint Lab. References [1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. VQA: Visual question answering. In IEEE International Conference on Computer Vision, pages 2425–2433, 2015. [2] I. Bello, B. Zoph, A. Vaswani, J. Shlens, and Q. V. Le. Attention augmented convolutional networks. In IEEE International Conference on Computer Vision, pages 3286–3295, 2019. [3] H. Chen, A. Suhr, D. Misra, N. Snavely, and Y. Artzi. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In IEEE Conference on Computer Vision and Pattern Recognition, pages 12538–12547, 2019. [4] X. Chen, L. Ma, J. Chen, Z. Jie, W. Liu, and J. Luo. Real-time referring expression comprehension by single-stage grounding network. arXiv preprint arXiv:1812.03426, 2018. [5] Y. Chen, M. Rohrbach, Z. Yan, Y. Shuicheng, J. Feng, and Y. Kalantidis. Graph-based global reasoning networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 433–442, 2019. [6] P. Dogan, L. Sigal, and M. Gross. Neural sequential phrase grounding (seqground). In IEEE Conference on Computer Vision and Pattern Recognition, pages 4175–4184, 2019. [7] S. Gould, J. Rodgers, D. Cohen, G. Elidan, and D. Koller. Multi-class segmentation with relative location prior. International Journal of Computer Vision, 80(3):300–316, 2008. [8] J. Gu, H. Hu, L. Wang, Y. Wei, and J. Dai.
Learning region features for object detection. In European Conference on Computer Vision (ECCV), pages 381–395, 2018. [9] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision, pages 2961–2969, 2017. [10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. [11] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. Lawrence Zitnick, and R. Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2901–2910, 2017. [12] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. [13] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. [14] R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems, pages 9605–9616, 2018. [15] F. Manhardt, W. Kehl, and A. Gaidon. ROI-10D: Monocular lifting of 2D detection to 6D pose and metric shape. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2069–2078, 2019. [16] V. K. Nagaraja, V. I. Morariu, and L. S. Davis. Modeling context between objects for referring expression understanding. In European Conference on Computer Vision, pages 792–807. Springer, 2016. [17] E. Perez, F. Strub, H. de Vries, V. Dumoulin, and A. C. Courville. FiLM: Visual reasoning with a general conditioning layer. In AAAI, 2018. [18] B. A. Plummer, P. Kordas, M. Hadi Kiapour, S. Zheng, R. Piramuthu, and S. Lazebnik. Conditional image-text embedding networks. In European Conference on Computer Vision (ECCV), pages 249–264, 2018. [19] B.
A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In IEEE International Conference on Computer Vision, pages 2641–2649, 2015. [20] A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele. Grounding of textual phrases in images by reconstruction. In European Conference on Computer Vision, pages 817–834. Springer, 2016. [21] A. Sadhu, K. Chen, and R. Nevatia. Zero-shot grounding of objects from natural language queries. In IEEE International Conference on Computer Vision, pages 4694–4703, 2019. [22] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems, pages 4967–4976, 2017. [23] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, pages 618–626, 2017. [24] L. Wang, Y. Li, J. Huang, and S. Lazebnik. Learning two-branch neural networks for image-text matching tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):394–407, 2018. [25] P. Wang, Q. Wu, J. Cao, C. Shen, L. Gao, and A. v. d. Hengel. Neighbourhood watch: Referring expression comprehension via language-guided graph attention networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1960–1968, 2019. [26] Z. Yang, B. Gong, L. Wang, W. Huang, D. Yu, and J. Luo. A fast and accurate one-stage approach to visual grounding. In IEEE International Conference on Computer Vision, pages 4683–4693, 2019. [27] L. Yu, Z. Lin, X. Shen, J. Yang, X. Lu, M. Bansal, and T. L. Berg. MAttNet: Modular attention network for referring expression comprehension. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1307–1315, 2018.
[28] Z. Yu, J. Yu, C. Xiang, Z. Zhao, Q. Tian, and D. Tao. Rethinking diversified and discriminative proposal generation for visual grounding. In IJCAI, 2018. [29] F. Zhao, J. Li, J. Zhao, and J. Feng. Weakly supervised phrase localization with multi-scale anchored transformer network. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5696–5705, 2018. 1 Details about Extended Touchdown Dataset 1.1 Extended Touchdown We build a new extended dataset of Touchdown, which is collected in the same way as Touchdown. First, we choose some panorama IDs from the test data of the Touchdown dataset and download the panoramas in equirectangular projection. Then we slice each into eight images and project them to a perspective projection. Next, we place touchdowns at the target locations in the panoramas and write language descriptions to instruct people to find them. After that, we also ask volunteers to double-check the annotations by looking for the target with the language we annotate. These data are collected from New York Street View. Although the IDs are the same as those in the test set of the Touchdown dataset, the scene images have changed because of different timestamps, and we rewrite the language descriptions for the new locations of the touchdowns, so the dataset differs from the original Touchdown dataset. Thus, this new extended dataset is used to evaluate the generalization of our proposed method as well as to visualize its predicted results. 1.2 Analysis of the Touchdown and the Extended Touchdown We further analyze the distribution of orientation words on Touchdown and the extended Touchdown, as shown in Figure 5. It illustrates a long-tail frequency distribution over the orientation words on both datasets, where the high-frequency words are quite similar. Also, most language descriptions contain around 20 words in both datasets, which illustrates their consistency.
1.3 Examples of the Extended Touchdown Dataset 2 More Results 2.1 Results in Successful Cases This part shows successful examples of SIRI and LingUnet. When both methods localize the target correctly, SIRI's prediction is closer to the ground truth. 2.2 Results with Ambiguity Only in Images In this case, there is ambiguity only in the images. The language descriptions remove the ambiguity of localization through global orientations. SIRI predicts correctly because it can perceive things globally and judge directions like ‘your left/right’, while LingUnet predicts positions at the opposite location. 2.3 Results with Ambiguity in Both Images and Language Descriptions In this case, the ambiguities in both the images and the descriptions make it difficult to localize targets correctly. Here are some examples of the failure cases: although SIRI and LingUnet predict correctly at a local level, the final results are wrong because of the ambiguity of the language descriptions.
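The copy-paste construction used in Section 4.4 to introduce visual ambiguities can be sketched as follows (the helper name is ours):

```python
import numpy as np

def make_ambiguous(image, target_x):
    """Duplicate the image half containing the target onto the other half,
    as in the visual-ambiguity experiment of Section 4.4."""
    h, w, _ = image.shape
    out = image.copy()
    half = w // 2
    if target_x < half:                  # target on the left: copy left half to the right
        out[:, w - half:] = image[:, :half]
    else:                                # target on the right: copy right half to the left
        out[:, :half] = image[:, w - half:]
    return out

img = np.arange(2 * 8 * 3).reshape(2, 8, 3)
amb = make_ambiguous(img, target_x=1)
print(np.array_equal(amb[:, 4:], amb[:, :4]))  # True: both halves are now identical
```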
Direct inversion of the nonequispaced fast Fourier transform Melanie Kircheis and Daniel Potts, Technische Universität Chemnitz, Faculty of Mathematics, 09107 Chemnitz, Germany; [email protected], [email protected] Abstract Various applications such as MRI and the solution of PDEs need to perform an inverse nonequispaced fast Fourier transform (NFFT), i. e., compute $M$ Fourier coefficients from $N$ given nonequispaced data. In the present paper we consider direct methods for the inversion of the NFFT. We introduce algorithms for the setting $M=N$ as well as for the underdetermined and overdetermined cases. For the setting $M=N$ a direct method of complexity $\mathcal{O}(N\log N)$ is presented which utilizes Lagrange interpolation and the fast summation. For the remaining cases, we use the matrix representation of the NFFT to deduce our algorithms. Thereby, we are able to compute an inverse NFFT up to a certain accuracy by dint of a modified adjoint NFFT in $\mathcal{O}(M\log M+N)$ arithmetic operations. Finally, we show that these approaches can also be explained by means of frame approximation. keywords: inverse nonequispaced fast Fourier transform, nonuniform fast Fourier transform, direct inversion, frame approximation, iNFFT, NFFT, NUFFT AMS subject classifications: 65Txx, 42C15 1 Introduction The NFFT, shorthand for nonequispaced fast Fourier transform or nonuniform fast Fourier transform (NUFFT), is a fast algorithm to evaluate a trigonometric polynomial $$f(x)=\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\hat{f}_{k}\,\mathrm{e}^{2\pi\mathrm{i}kx}$$ (1.1) at nonequispaced points $x_{j}\in\left[-\frac{1}{2},\frac{1}{2}\right),\,j=1,\dots,N,$ for given Fourier coefficients $\hat{f}_{k}\in\mathbb{C}$.
In case we are given equispaced points and $M=N$, this evaluation can be realized by means of the fast Fourier transform (FFT). For this setting, an algorithm for the inverse problem is also known. Hence, we are interested in an inversion for nonequispaced data as well, i. e., the Fourier coefficients $\hat{f}_{k}$ shall be computed for given function values $f(x_{j})$ of the trigonometric polynomial (1.1). Additionally, we study the inversion of the adjoint problem, namely the reconstruction of function values $f_{j}\in\mathbb{C}$ from given data $$h_{k}=\sum_{j=1}^{N}f_{j}\,\mathrm{e}^{-2\pi\mathrm{i}kx_{j}},\quad k=-\tfrac{M}{2},\dots,\tfrac{M}{2}-1.$$ (1.2) In general, the number $N$ of nodes $x_{j}$ is independent of the number $M$ of Fourier coefficients $\hat{f}_{k}$, and hence the nonequispaced Fourier matrix $$\boldsymbol{A}\coloneqq\left(\mathrm{e}^{2\pi\mathrm{i}kx_{j}}\right)_{j=1,\,k=-\frac{M}{2}}^{N,\;\frac{M}{2}-1}\ \in\mathbb{C}^{N\times M},$$ (1.3) which we would have to invert, is rectangular in most cases. Nevertheless, several approaches have been developed to compute an inverse NFFT (iNFFT). First of all, there are some iterative methods. Recently, an algorithm for the setting $M=N$ was published in [21] based on the CG method and especially designed for jittered equispaced points. An approach for the overdetermined case can be found in [8], where the solution is computed iteratively by dint of the CG algorithm using $\boldsymbol{A}^{*}\boldsymbol{W}\boldsymbol{A}$ with a diagonal matrix $\boldsymbol{W}$ of Voronoi weights. In [17] the CG method in connection with the NFFT was used to formulate an iterative algorithm for the underdetermined setting which deploys $\boldsymbol{A}\boldsymbol{\hat{W}}\boldsymbol{A}^{*}$ with weights $\boldsymbol{\hat{W}}$ based on kernel approximation. Furthermore, already in [7] a direct method was explained for the setting $M=N$ which uses Lagrange interpolation as well as fast multipole methods.
Based on this, in [22] another direct method for the same setting was deduced which also uses Lagrange interpolation. In addition, a frame-theoretical approach is known from [11], which provides a link between the adjoint NFFT and frame approximation and could therefore be seen as a method to invert the NFFT. In this paper we present new direct methods for inverting the NFFT in general. For the quadratic setting, i. e., $N=M$, we review our method introduced in [16], which is also based on Lagrange interpolation but utilizes the fast summation to evaluate the occurring sums. For the general case, we take as a motivation that for equispaced points an inversion can be realized by $\boldsymbol{A}\boldsymbol{A}^{*}\approx M\boldsymbol{I}_{N}$ and aim to generalize this result to find a good approximation of the inversion for nonequispaced nodes. To this end, we make use of the decomposition $\boldsymbol{A}\approx\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}$ known from the NFFT approach and compute the sparse matrix $\boldsymbol{B}$ such that we receive approximations of the form $\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\approx M\boldsymbol{I}_{N}$. In other words, we are able to compute an inverse NFFT by dint of a modified adjoint NFFT. Analogously, an inverse adjoint NFFT can be obtained by modifying the NFFT. Hence, the inversions can be computed in $\mathcal{O}(M\log M+N)$ arithmetic operations. The necessary precomputations developed in this paper are of complexity $\mathcal{O}(N^{2})$ and $\mathcal{O}(M^{2})$, respectively. Therefore, our method is especially beneficial if we are given fixed nodes for several problems. Finally, we show that these approaches can also be explained by means of frame approximation. The present work is organized as follows. In Section 2 we introduce the already mentioned algorithm, the NFFT. Afterwards, in Section 3 we deal with the inversion of this algorithm.
In Section 3.1 we firstly review our method from [16] for the quadratic setting $M=N$. Secondly, in Section 3.2 the underdetermined and overdetermined settings are studied, which are treated separately in Sections 3.2.1 and 3.2.2. Finally, in Section 4 we deduce an approach for the inversion which is based on frame theory. Therefore, first of all, the main ideas of frames and approximation via frames will be introduced in Section 4.1 and subsequently, in Section 4.2, we will use these ideas to develop an approach for the iNFFT adapted from [11]. In the end, we will see that this frame-theoretical approach can be traced back to the methods for the inversion introduced in Section 3.2. 2 Nonequispaced fast Fourier transform For given nodes $x_{j}\in\left[-\frac{1}{2},\frac{1}{2}\right)$,  $j=1,\dots,N$, $M\in 2\mathbb{N}$, as well as arbitrary coefficients $\hat{f}_{k}\in\mathbb{C},$ $k=-\frac{M}{2},\dots,\frac{M}{2}-1,$ we consider the computation of the sums $$f_{j}=f(x_{j})=\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\hat{f}_{k}\,\mathrm{e}^{2\pi\mathrm{i}kx_{j}},\quad j=1,\dots,N,$$ (2.1) as well as the adjoint problem of the computation of the sums (1.2) for given values $f_{j}\in\mathbb{C}$. A fast algorithm to solve this problem is called nonequispaced fast Fourier transform (NFFT) and is briefly explained below, cf. [6, 2, 23, 20, 19, 12, 15, 21]. By defining the matrix (1.3) as well as the vectors $\boldsymbol{f}\coloneqq\left(f_{j}\right)_{j=1}^{N}$, $\boldsymbol{\hat{f}}\coloneqq(\hat{f}_{k})_{k=-\frac{M}{2}}^{\frac{M}{2}-1}$ and $\boldsymbol{h}\coloneqq(h_{k})_{k=-\frac{M}{2}}^{\frac{M}{2}-1}$, the computation of sums of the form (2.1) and (1.2) can be written as $\boldsymbol{f}=\boldsymbol{A}\boldsymbol{\hat{f}}$ and $\boldsymbol{h}=\boldsymbol{A}^{*}\boldsymbol{f}$, where $\boldsymbol{A}^{*}=\overline{\boldsymbol{A}}^{\mathrm{T}}$ denotes the adjoint matrix of $\boldsymbol{A}$.
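As a minimal illustration (not part of the paper's algorithms), the objects $\boldsymbol{A}$, $\boldsymbol{f}=\boldsymbol{A}\boldsymbol{\hat{f}}$ and $\boldsymbol{h}=\boldsymbol{A}^{*}\boldsymbol{f}$ can be set up directly in NumPy; this direct evaluation costs $\mathcal{O}(MN)$ operations and serves as the reference against which the fast algorithms below are measured. All sizes and the random data are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 16, 20
k = np.arange(-M // 2, M // 2)             # frequencies k = -M/2, ..., M/2-1
x = rng.uniform(-0.5, 0.5, N)              # nonequispaced nodes x_j in [-1/2, 1/2)

# nonequispaced Fourier matrix A of (1.3): A[j, k] = e^{2 pi i k x_j}
A = np.exp(2j * np.pi * x[:, None] * k[None, :])

fhat = rng.standard_normal(M) + 1j * rng.standard_normal(M)
f = A @ fhat                               # values (2.1) of the trigonometric polynomial
h = A.conj().T @ f                         # adjoint sums (1.2) applied to f
```

Note that `A.conj().T` realizes $\boldsymbol{A}^{*}=\overline{\boldsymbol{A}}^{\mathrm{T}}$ exactly as defined above.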
2.1 The NFFT We firstly restrict our attention to problem (2.1), which is equivalent to the evaluation of a trigonometric polynomial $f$ at nodes $x_{j}$, see (1.1). At first, we approximate $f$ by a linear combination of translates of a 1-periodic function $\tilde{w}$, i. e., $$f(x)\approx s_{1}(x)\coloneqq\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}g_{l}\,\tilde{w}\hskip-2.0pt\left(x-\tfrac{l}{M_{\sigma}}\right),$$ where $M_{\sigma}=\sigma M$ with the so-called oversampling factor $\sigma\geq 1$. In the easiest case $\tilde{w}$ originates from the periodization of a function $w\colon[-\frac{1}{2},\frac{1}{2})\to\mathbb{R}$. Let this so-called window function be chosen such that its 1-periodic version $\tilde{w}(x)=\sum_{r\in\mathbb{Z}}w(x+r)$ has an absolutely convergent Fourier series. By means of the definition $$\hat{g}_{k}\coloneqq\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}g_{l}\,\mathrm{e}^{-2\pi\mathrm{i}kl/{M_{\sigma}}},\quad k\in\mathbb{Z},$$ and the convolution theorem, $s_{1}$ can be represented as $$s_{1}(x)=\sum_{k=-\infty}^{\infty}c_{k}(s_{1})\,\mathrm{e}^{2\pi\mathrm{i}kx}=\sum_{k=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}\hat{g}_{k}\;c_{k}(\tilde{w})\,\mathrm{e}^{2\pi\mathrm{i}kx}+\sum_{r=-\infty\atop{r\neq 0}}^{\infty}\sum_{k=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}\hat{g}_{k}\;c_{k+M_{\sigma}r}(\tilde{w})\,\mathrm{e}^{2\pi\mathrm{i}(k+M_{\sigma}r)x}.$$ (2.2) Comparing (2.1) and (2.2) gives rise to the following definition.
We set $$\hat{g}_{k}\coloneqq\left\{\begin{array}[]{cl}\dfrac{\hat{f}_{k}}{\hat{w}(k)}&:k\in\{-\frac{M}{2},\dots,\frac{M}{2}-1\},\\ 0&:k\in\{-\frac{M_{\sigma}}{2},\dots,\frac{M_{\sigma}}{2}-1\}\setminus\{-\frac{M}{2},\dots,\frac{M}{2}-1\},\end{array}\right.$$ where the Fourier transform of $w$ is given by $$\hat{w}(k)=\int_{-\infty}^{\infty}w(x)\,\mathrm{e}^{-2\pi\mathrm{i}kx}\,\mathrm{d}x=\int_{-\frac{1}{2}}^{\frac{1}{2}}\tilde{w}(x)\,\mathrm{e}^{-2\pi\mathrm{i}kx}\,\mathrm{d}x=c_{k}(\tilde{w}).$$ (2.3) Furthermore, we suppose $w$ is small outside the interval $\left[-\sfrac{m}{M_{\sigma}},\sfrac{m}{M_{\sigma}}\right]$,  $m\ll M_{\sigma}.$ Then $w$ can be approximated by $w_{m}(x)=\chi_{\left[-\sfrac{m}{M_{\sigma}},\sfrac{m}{M_{\sigma}}\right]}\cdot w(x)$, which is compactly supported, where $\chi_{\left[-\sfrac{m}{M_{\sigma}},\sfrac{m}{M_{\sigma}}\right]}$ denotes the characteristic function of $\left[-\sfrac{m}{M_{\sigma}},\sfrac{m}{M_{\sigma}}\right].$ Thus, $\tilde{w}$ can be approximated by the 1-periodic function $\tilde{w}_{m}$ with $$\sum_{k\in\mathbb{Z}}\hat{w}(k)\,\mathrm{e}^{2\pi\mathrm{i}kx}=\tilde{w}(x)\approx\tilde{w}_{m}(x)=\sum_{r\in\mathbb{Z}}w_{m}(x+r).$$ Hence, we obtain the following approximation $$f(x_{j})\approx s_{1}(x_{j})\approx s(x_{j})\coloneqq\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}g_{l}\,\tilde{w}_{m}\hskip-2.5pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)=\sum_{l=\lceil M_{\sigma}x_{j}\rceil-m}^{\lfloor M_{\sigma}x_{j}\rfloor+m}g_{l}\,\tilde{w}_{m}\hskip-2.5pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right),$$ where the simplification arises because many summands vanish.
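The three approximation steps above (deconvolution by $\hat{w}(k)$, an oversampled FFT of length $M_{\sigma}$, and spreading with the truncated window) can be sketched in NumPy. As an assumption for this illustration we pick a Gaussian window with the common shape parameter $b=2\sigma m/((2\sigma-1)\pi)$; the paper itself leaves the window choice open (cf. Remark 1 below), and the concrete sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
M, sigma, m = 32, 2, 8                 # degree, oversampling factor, cut-off
Ms = sigma * M                         # oversampled length M_sigma
N = 50
x = rng.uniform(-0.5, 0.5, N)          # nonequispaced nodes in [-1/2, 1/2)
k = np.arange(-M // 2, M // 2)
fhat = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Gaussian window and its Fourier transform (an assumed choice for this sketch)
b = 2 * sigma * m / ((2 * sigma - 1) * np.pi)
w = lambda t: np.exp(-((Ms * t) ** 2) / b) / np.sqrt(np.pi * b)
what = lambda q: np.exp(-b * (np.pi * q / Ms) ** 2) / Ms

# step 1 (deconvolution): ghat_k = fhat_k / (M_sigma * what(k)), zero-padded
ghat = np.zeros(Ms, dtype=complex)
ghat[(Ms - M) // 2:(Ms + M) // 2] = fhat / (Ms * what(k))

# step 2 (oversampled FFT): g_l = sum_k ghat_k e^{2 pi i k l / M_sigma}
g = Ms * np.fft.ifft(np.fft.ifftshift(ghat))    # g is periodic in l mod M_sigma

# step 3 (spreading): at most 2m+1 window terms per node
f = np.empty(N, dtype=complex)
for j, xj in enumerate(x):
    l = np.arange(int(np.ceil(Ms * xj)) - m, int(np.floor(Ms * xj)) + m + 1)
    f[j] = np.sum(g[l % Ms] * w(xj - l / Ms))

# reference: direct evaluation f = A fhat with the matrix (1.3)
A = np.exp(2j * np.pi * x[:, None] * k[None, :])
```

With these parameters the fast result agrees with the direct evaluation to high relative accuracy, reflecting the window truncation error.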
By defining • the diagonal matrix $$\boldsymbol{D}\coloneqq\text{diag}\left(\frac{1}{M_{\sigma}\hat{w}(k)}\right)_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\ \in\mathbb{C}^{M\times M},$$ (2.4) • the truncated Fourier matrix $$\boldsymbol{F}\coloneqq\left(\mathrm{e}^{2\pi\mathrm{i}k\frac{l}{M_{\sigma}}}\right)_{l=-\frac{M_{\sigma}}{2},\,k=-\frac{M}{2}}^{\frac{M_{\sigma}}{2}-1,\;\frac{M}{2}-1}\ \in\mathbb{C}^{M_{\sigma}\times M},$$ (2.5) • and the sparse matrix $$\boldsymbol{B}\coloneqq\bigg{(}\tilde{w}_{m}\hskip-2.5pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)\bigg{)}_{j=1,\,l=-\frac{M_{\sigma}}{2}}^{N,\;\frac{M_{\sigma}}{2}-1}\ \in\mathbb{R}^{N\times M_{\sigma}},$$ (2.6) this can be formulated in matrix-vector notation and we receive the approximation $\boldsymbol{A}\approx\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}$. Therefore, the corresponding fast algorithm consisting of three steps is of complexity $\mathcal{O}(M\log M+N)$. Remark 1 Suitable window functions can be found in [6, 2, 23, 5, 9, 12, 15].       $\hfill\rule{6.45pt}{6.45pt}\\ $ Remark 2 It must be pointed out that because of consistency the factor $\frac{1}{M_{\sigma}}$ is here not located in the matrix $\boldsymbol{F}$ as usual but in the matrix $\boldsymbol{D}$.       $\hfill\rule{6.45pt}{6.45pt}\\ $ 2.2 The adjoint NFFT Now we consider the problem (1.2), which is treated similarly to [20]; therefore, we firstly define the function $$\tilde{g}(x)\coloneqq\sum_{j=1}^{N}f_{j}\,\tilde{w}(x_{j}-x)$$ (2.7) and calculate its Fourier coefficients $$c_{k}(\tilde{g})=\int_{-\frac{1}{2}}^{\frac{1}{2}}\tilde{g}(x)\,\mathrm{e}^{-2\pi\mathrm{i}kx}\,\mathrm{d}x=\sum_{j=1}^{N}f_{j}\,\mathrm{e}^{-2\pi\mathrm{i}kx_{j}}\int_{-\frac{1}{2}}^{\frac{1}{2}}\tilde{w}(y)\,\mathrm{e}^{2\pi\mathrm{i}ky}\,\mathrm{d}y=h_{k}\,c_{-k}(\tilde{w}).$$ In other words, the values $h_{k}$ can be computed if $c_{-k}(\tilde{w})$ and $c_{k}(\tilde{g})$ are known.
The Fourier coefficients of $\tilde{g}$ are determined approximately by dint of the trapezoidal rule $$c_{k}(\tilde{g})\approx\frac{1}{M_{\sigma}}\sum_{l=-\frac{M_{\sigma}}{2}}^{% \frac{M_{\sigma}}{2}-1}\sum_{j=1}^{N}f_{j}\,\tilde{w}\hskip-1.5pt\left(x_{j}-% \tfrac{l}{M_{\sigma}}\right)\,\mathrm{e}^{-2\pi\mathrm{i}kl/M_{\sigma}}.$$ Let the function $w$ moreover be well localized in time so that $\tilde{w}$ can be replaced by $\tilde{w}_{m}$ again. Then we obtain the approximation $$\frac{c_{k}(\tilde{g})}{c_{-k}(\tilde{w})}\approx\frac{1}{M_{\sigma}\hat{w}(-k% )}\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}\sum_{j=1}^{N}f_{j}\,% \tilde{w}_{m}\hskip-2.5pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)\,\mathrm{e}^% {-2\pi\mathrm{i}kl/M_{\sigma}}\eqqcolon\tilde{h}_{k}.$$ (2.8) Rewriting this by dint of (2.4), (2.5) and (2.6) we receive $\boldsymbol{A}^{*}\approx\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}$. Hence, the algorithm for the adjoint problem is also of complexity $\mathcal{O}(M\log M+N)$. 3 Inversion of the NFFT Having introduced the fast methods for nonequispaced data, we aim to find an inversion for these algorithms encouraged by the fact that for equispaced data the inversion is well-known. Therefore, we face the following two problems. (1) Solve $$\begin{split}\displaystyle\boldsymbol{A}\boldsymbol{\hat{f}}&\displaystyle=% \boldsymbol{f},\\ \displaystyle\text{ given: }\boldsymbol{f}\in\mathbb{C}^{N},&\displaystyle% \text{ find: }\boldsymbol{\hat{f}}\in\mathbb{C}^{M},\end{split}$$ (3.1) i. e., reconstruct the Fourier coefficients $\boldsymbol{\hat{f}}=(\hat{f}_{k})_{k=-\frac{M}{2}}^{\frac{M}{2}-1}$ from function values $\boldsymbol{f}=(f_{j})_{j=1}^{N}$. This will be solved by an inverse NFFT. (2) Solve $$\begin{split}\displaystyle\boldsymbol{A}^{*}\boldsymbol{f}&\displaystyle=% \boldsymbol{h},\\ \displaystyle\text{ given: }\boldsymbol{h}\in\mathbb{C}^{M},&\displaystyle% \text{ find: }\boldsymbol{f}\in\mathbb{C}^{N},\end{split}$$ (3.2) i. 
e., reconstruct the coefficients $\boldsymbol{f}=(f_{j})_{j=1}^{N}$ from given data $\boldsymbol{h}=(h_{k})_{k=-\frac{M}{2}}^{\frac{M}{2}-1}$. This will be solved by an inverse adjoint NFFT. In both problems the numbers $M$ and $N$ are independent. It is obvious that except for the quadratic setting $M=N$ there are two different ways to choose $M$ and $N$. The first possibility is $M<N$, i. e., for the inverse NFFT in (3.1) we are given more function values than Fourier coefficients, which we are supposed to find. That means, we are in an overdetermined setting. The second variation is the converse setting $M>N$, where we have to find more Fourier coefficients than we are given initial data. Hence, this is the underdetermined case. Analogously, the same relations can be considered for the inverse adjoint NFFT in (3.2). There $M$ belongs to the given data, whereas $N$ corresponds to the sought solution. Thus, the overdetermined case is now $M>N$, while the problem is underdetermined for $M<N$. This section is organized as follows. Firstly, in Section 3.1 the inversions are derived for the quadratic case $M=N$. Secondly, in Section 3.2.1 we survey the underdetermined case of the inverse NFFT, which corresponds to the overdetermined case of the adjoint. Finally, in Section 3.2.2 the overdetermined case of the inverse NFFT is explained, which is related to the underdetermined case of the adjoint. 3.1 The quadratic case For the quadratic case $M=N$ we use an approach analogous to [7, 22], where an inversion is realized by means of Lagrange interpolation. While the fast algorithms are obtained in [7] by dint of fast multipole methods, our method from [16] employs the fast summation for acceleration, see [19].
The main idea is to use a relation between two evaluations of a trigonometric polynomial $$f_{j}\coloneqq f(y_{j})=\sum_{k=-\frac{N}{2}}^{\frac{N}{2}-1}\hat{f}_{k}\,\mathrm{e}^{2\pi\mathrm{i}ky_{j}},\quad j=1,\dots,N,$$ and $$g_{l}\coloneqq f(x_{l})=\sum_{k=-\frac{N}{2}}^{\frac{N}{2}-1}\hat{f}_{k}\,\mathrm{e}^{2\pi\mathrm{i}kx_{l}},\quad l=1,\dots,N,$$ (3.3) for different sets of nodes $x_{l},y_{j}\in\left[-\frac{1}{2},\frac{1}{2}\right)$, $l,j=1,\dots,N,$ and Fourier coefficients $\hat{f}_{k}\in\mathbb{C}$,  $k=-\frac{N}{2},\dots,\frac{N}{2}-1$. By defining the coefficients $$a_{l}=\prod_{n=1}^{N}\sin(\pi(x_{l}-y_{n}))\quad\text{ and }\quad b_{j}=\prod_{n=1\atop{n\neq j}}^{N}\frac{1}{\sin(\pi(y_{j}-y_{n}))},\quad l,j=1,\dots,N,$$ (3.4) we observe the relation $$g_{l}=a_{l}\sum_{j=1}^{N}f_{j}\,b_{j}\left(\frac{1}{\tan(\pi(x_{l}-y_{j}))}-\mathrm{i}\right),\quad l=1,\dots,N,$$ (3.5) cf. [7, Theorem 2.3]. Hence, for given nonequispaced nodes $y_{j}$ the computation of an inverse NFFT can be realized by choosing additional points $x_{l}$ and applying formula (3.5). If these nodes $x_{l}$ are chosen equidistantly, we can compute the Fourier coefficients $\hat{f}_{k}$ by simply applying an FFT to the coefficients $g_{l}$ in (3.3). Remark 3 It must be pointed out that the considered approach is only applicable for pairwise distinct nodes. If this condition is violated, some of the coefficients $a_{l}$ become zero and the result vanishes; coinciding nodes $y_{j}$ are especially prohibited for the coefficients $b_{j}$, since they would entail division by zero, cf. (3.4).       $\hfill\rule{6.45pt}{6.45pt}\\ $ We approximate the coefficients $g_{l}$ in (3.5) by using the fast summation, see [19]. Considering the computation scheme, we see that it is possible to compute $$\tilde{g}_{l}\coloneqq\sum_{j=1}^{N}f_{j}\,b_{j}\cot(\pi(x_{l}-y_{j})),\quad l=1,\dots,N,$$ (3.6) by means of the fast summation.
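Before turning to the fast summation, relation (3.5) together with the coefficients (3.4) can be verified with direct $\mathcal{O}(N^{2})$ sums. The following is a minimal sketch of the inversion idea (equispaced auxiliary nodes $x_{l}$ followed by an FFT), without the fast summation and without the stabilization discussed later; sizes and random data are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
k = np.arange(-N // 2, N // 2)

y = np.sort(rng.uniform(-0.5, 0.5, N))      # given nonequispaced nodes y_j
xl = -0.5 + np.arange(N) / N                # equispaced auxiliary nodes x_l

fhat = rng.standard_normal(N) + 1j * rng.standard_normal(N)
f = np.exp(2j * np.pi * y[:, None] * k[None, :]) @ fhat   # given samples f(y_j)

# coefficients (3.4), here via direct O(N^2) products
a = np.prod(np.sin(np.pi * (xl[:, None] - y[None, :])), axis=1)
S = np.sin(np.pi * (y[:, None] - y[None, :]))
np.fill_diagonal(S, 1.0)                    # exclude n = j from the product
b = 1.0 / np.prod(S, axis=1)

# relation (3.5): values of the polynomial at the equispaced nodes x_l
C = 1.0 / np.tan(np.pi * (xl[:, None] - y[None, :])) - 1j
g = a * (C @ (f * b))

# the x_l are equispaced, so an FFT recovers the Fourier coefficients;
# the factor (-1)^k accounts for the shift x_l = -1/2 + l/N
fhat_rec = (-1.0) ** k * np.fft.fftshift(np.fft.fft(g)) / N
```

The fast summation of [19] replaces exactly the $\mathcal{O}(N^{2})$ products and sums above, yielding the $\mathcal{O}(N\log N)$ method.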
Then the wanted coefficients $g_{l}$ can be obtained by $$g_{l}=a_{l}\left(\tilde{g}_{l}-\mathrm{i}\cdot\sum_{j=1}^{N}f_{j}\,b_{j}\right),\quad l=1,\dots,N,$$ (3.7) where we only have to compute an additional scalar product of two vectors, which requires only $\mathcal{O}(N)$ arithmetic operations. Considering the kernel $K(x)=\cot(\pi x)$ in detail, it becomes apparent that this function has singularities not only at $x=0$ but also at the boundary $x=\pm 1$. For further information see [16]. The coefficients $a_{l}$ and $b_{j}$ can also be computed efficiently by the fast summation because of the observation $$\tilde{a}_{l}\coloneqq\ln|a_{l}|=\ln\left|\,\prod_{n=1}^{N}\sin(\pi(x_{l}-y_{n}))\right|=\sum_{n=1}^{N}\,\ln\left|\sin(\pi(x_{l}-y_{n}))\right|$$ and $$\tilde{b}_{j}\coloneqq\ln|b_{j}|=\ln\left|\,\prod_{n=1\atop{n\neq j}}^{N}\frac{1}{\sin(\pi(y_{j}-y_{n}))}\right|=-\sum_{n=1\atop{n\neq j}}^{N}\,\ln\left|\sin(\pi(y_{j}-y_{n}))\right|.$$ Therefore, it is possible to use the kernel $K(x)=\ln(|\sin(\pi x)|)$ to compute the absolute values and to perform a sign correction afterwards to recover the signed coefficients $a_{l}$ and $b_{j}$. Having a closer look at this kernel, it becomes apparent that it is also one-periodic and has singularities at the same positions as the cotangent does. Hence, the computation works analogously. Additionally, a stabilization can be incorporated. For detailed information see [16]. Thus, we obtain the following fast algorithm. Remark 4 This algorithm is part of the software package NFFT 3.4.1, see [14, ./matlab/infft1d].       $\hfill\rule{6.45pt}{6.45pt}\\ $ Now we have a look at some numerical examples. Example 1 We choose arbitrary Fourier coefficients $\hat{f}_{k}\in[1,100]$ and compute the evaluations of the related trigonometric polynomial (1.1). Out of these we want to retrieve the given $\hat{f}_{k}$.
As mentioned in [11, 4, 1] we examine so-called jittered equispaced nodes $$x_{j}=-\frac{1}{2}+\frac{j-1}{N}+\frac{1}{4N}\,\theta,\quad j=1,\dots,N,\ \text{with}\ \theta\sim U(0,1),$$ (3.8) where $U(0,1)$ denotes the uniform distribution on the interval $(0,1)$. We consider the absolute and relative errors per node $$\frac{e_{p}^{\rm abs}}{N}=\frac{1}{N}\|\boldsymbol{\hat{f}}-\boldsymbol{\check{f}}\|_{p}\quad\text{ and }\quad\frac{e_{p}^{\rm rel}}{N}=\frac{\|\boldsymbol{\hat{f}}-\boldsymbol{\check{f}}\|_{p}}{N\,\|\boldsymbol{\hat{f}}\|_{p}}$$ (3.9) for $p\in\{2,\infty\}$, where $\boldsymbol{\check{f}}$ is the outcome of Algorithm 3.1. As a first experiment we use $N=2^{c}$ with $c=1,\dots,14$, and for the parameters needed in the fast summation we take the standard values, see [14]. In a second experiment we fix $N=1024$ and increase some of the standard values; namely, the cut-off parameter $m$ and the degree of smoothness $p$ are chosen uniformly as $m=p=c$ with $c=4,\dots,12$. The corresponding results are depicted in Figure 3.1. Looking at the errors per node for growing $N$, see (a), we observe that the errors are larger for very small $N$, whereas they remain stable for large $N$. In (b) we can see that for fixed $N$ a higher accuracy can be achieved by tuning the parameters of the fast summation.       $\hfill\rule{6.45pt}{6.45pt}\\ $ Remark 5 We obtain an inverse adjoint NFFT by simply considering the adjoint of Algorithm 3.1, i. e., for $\boldsymbol{v}\coloneqq(v_{l})_{l=1}^{N}$ being the inverse Fourier transform of $\boldsymbol{h}$ we apply the formula $$f_{j}=b_{j}\sum_{l=1}^{N}a_{l}\,v_{l}\left(\frac{1}{\tan(\pi(x_{l}-y_{j}))}+\mathrm{i}\right),\quad j=1,\dots,N.$$ (3.10) This relation can easily be seen by using the matrix representation of Algorithm 3.1.       $\hfill\rule{6.45pt}{6.45pt}\\ $ 3.2 The rectangular case For the general case $M\neq N$ we follow a different approach.
To clarify the idea we firstly have a look at equispaced nodes $$x_{j}=\tfrac{j}{N}\in\left[-\tfrac{1}{2},\tfrac{1}{2}\right),\,j=-\tfrac{N}{2},\dots,\tfrac{N}{2}-1.$$ Thereby, we obtain $$\boldsymbol{A}=\left(\mathrm{e}^{2\pi\mathrm{i}k\frac{j}{N}}\right)_{j=-\frac{N}{2},\,k=-\frac{M}{2}}^{\frac{N}{2}-1,\;\frac{M}{2}-1}\quad\text{ and }\quad\boldsymbol{A}^{*}=\left(\mathrm{e}^{-2\pi\mathrm{i}k\frac{j}{N}}\right)_{k=-\frac{M}{2},\,j=-\frac{N}{2}}^{\frac{M}{2}-1,\;\frac{N}{2}-1}.$$ Considering products of these two matrices, it becomes apparent that $\boldsymbol{A}^{*}\boldsymbol{A}=N\boldsymbol{I}_{M}$ for $M\leq N$ as well as $\boldsymbol{A}\boldsymbol{A}^{*}=M\boldsymbol{I}_{N}$ for $M\geq N$ with $N\mid M$. That is to say, in these special cases we are given an inversion of the NFFT by composition of the Fourier matrices. Hence, we seek to use this result to find a good approximation of the inversion in the general case. This will be done by modification of the matrix $\boldsymbol{B}$ so that we receive an approximation of the form $\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\approx M\boldsymbol{I}_{N}$ similar to the equispaced case. For that purpose, the entries of the matrix $\boldsymbol{B}$ should be calculated such that its sparse structure with at most $(2m+1)$ entries per row, and consequently the arithmetic complexity of the algorithms, is preserved. A matrix $\boldsymbol{B}$ satisfying this property is called $(2m+1)$-sparse. It should be noted that the distinction between underdetermination and overdetermination is not of great importance when deducing the methods for the inversion. Even though it is a necessary condition for the exact inversion for equispaced nodes, the algorithms in the nonequispaced setting can always be used in both cases. However, we will see later on that each algorithm works best in one of these cases, and therefore each is introduced for a specific case without this being a restriction.
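As a quick sanity check (an illustration, not taken from the paper), the two exact identities for equispaced nodes can be verified numerically; the sizes below are arbitrary choices satisfying $M\leq N$ and $N\mid M$, respectively.

```python
import numpy as np

def nonequispaced_matrix(x, M):
    # A = (e^{2 pi i k x_j}) with k = -M/2, ..., M/2-1, cf. (1.3)
    k = np.arange(-M // 2, M // 2)
    return np.exp(2j * np.pi * np.asarray(x)[:, None] * k[None, :])

N, M = 12, 8                           # M <= N
x = np.arange(-N // 2, N // 2) / N     # equispaced nodes x_j = j/N
A = nonequispaced_matrix(x, M)
assert np.allclose(A.conj().T @ A, N * np.eye(M))   # A* A = N I_M

N, M = 6, 12                           # M >= N with N | M
x = np.arange(-N // 2, N // 2) / N
A = nonequispaced_matrix(x, M)
assert np.allclose(A @ A.conj().T, M * np.eye(N))   # A A* = M I_N
```

For general nonequispaced nodes these products are no longer multiples of the identity, which is precisely what the optimization of $\boldsymbol{B}$ below compensates for.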
Having this in mind we give an outline of how to handle problems (3.1) and (3.2). (1) To solve (3.1) our aim is to compute a sparse matrix $\boldsymbol{B}^{*}$ from given nodes $x_{j}$ such that by application of an adjoint NFFT we obtain a fast inverse NFFT. Suppose we are given the approximation $\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\approx M\boldsymbol{I}_{N}$. Then it also holds that $$\frac{1}{M}\ \boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\boldsymbol{f}\approx\boldsymbol{f}\quad\forall\boldsymbol{f}\in\mathbb{C}^{N}.$$ (3.11) If we now set $$\boldsymbol{\check{f}}:=\frac{1}{M}\,\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\boldsymbol{f},$$ we can rewrite approximation (3.11) as $\boldsymbol{A}\boldsymbol{\check{f}}\approx\boldsymbol{f}$. Since we already know that $\boldsymbol{A}\boldsymbol{\hat{f}}=\boldsymbol{f}$, this means $\boldsymbol{\check{f}}\approx\boldsymbol{\hat{f}}$, which can be interpreted as a reconstruction of the Fourier coefficients $\boldsymbol{\hat{f}}$. To achieve a good approximation we want $\boldsymbol{\check{f}}$ to be as close as possible to $\boldsymbol{\hat{f}}$. This can be accomplished by optimizing $\boldsymbol{A}\boldsymbol{\check{f}}\approx\boldsymbol{f}$, i. e., we aim to solve the optimization problem $$\underset{\boldsymbol{\check{f}}\in\mathbb{C}^{M}}{\text{Minimize }}\ \|\boldsymbol{A}\boldsymbol{\check{f}}-\boldsymbol{f}\|_{2}.$$ Using the definition of $\boldsymbol{\check{f}}$, this norm can be estimated from above by $$\|M\boldsymbol{A}\boldsymbol{\check{f}}-M\boldsymbol{f}\|_{2}=\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\boldsymbol{f}-M\boldsymbol{f}\|_{2}\leq\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}-M\boldsymbol{I}_{N}\|_{\mathrm{F}}\,\|\boldsymbol{f}\|_{2},$$ where the Frobenius norm is denoted by $\|\cdot\|_{\mathrm{F}}$.
Because $\boldsymbol{f}$ is given, this expression can be minimized by solving $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}% \,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{A}\boldsymbol{D}^{*}% \boldsymbol{F}^{*}\boldsymbol{B}^{*}-M\boldsymbol{I}_{N}\|_{\mathrm{F}}^{2}.$$ (3.12) (2) To solve (3.2) we aim to compute a sparse matrix $\boldsymbol{B}$ from given nodes $x_{j}$ such that by application of an NFFT we obtain a fast inverse adjoint NFFT. Again we suppose $\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\approx M% \boldsymbol{I}_{N}$, which is equivalent to its adjoint $$\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\approx M\,% \boldsymbol{I}_{N}\quad\text{and\ }\quad\frac{1}{M}\,\boldsymbol{B}\boldsymbol% {F}\boldsymbol{D}\,(\boldsymbol{A}^{*}\boldsymbol{f})\approx\boldsymbol{f}% \quad\forall\boldsymbol{f}\in\mathbb{C}^{N},$$ respectively. Because we know $\boldsymbol{h}=\boldsymbol{A}^{*}\boldsymbol{f}$, this could be interpreted as reconstruction of the coefficients $\boldsymbol{f}$. To achieve a good approximation we solve the optimization problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}% \,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{B}\boldsymbol{F}% \boldsymbol{D}\boldsymbol{h}-M\boldsymbol{f}\|_{2},$$ where the norm could be estimated as follows. 
$$\|\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{h}-M\boldsymbol{f}\|_{2}=\|\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{f}-M\boldsymbol{f}\|_{2}\leq\|\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}-M\boldsymbol{I}_{N}\|_{\mathrm{F}}\,\|\boldsymbol{f}\|_{2}.$$ Hence, we end up with the optimization problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}-M\boldsymbol{I}_{N}\|_{\mathrm{F}}^{2}.$$ So, all in all, with the chosen approach we are able to generate an inverse NFFT as well as an inverse adjoint NFFT by modifying the matrices $\boldsymbol{B}^{*}$ and $\boldsymbol{B}$, respectively, and applying an adjoint NFFT or an NFFT with these modified matrices. Remark 6 We investigate below whether the reconstruction error can be reduced by an appropriate choice of the entries of the matrix $\boldsymbol{B}$. Already in [18] the minimization of the Frobenius norm $\|\boldsymbol{A}-\boldsymbol{BFD}\|_{\textrm{F}}$ was analyzed with respect to a sparse matrix $\boldsymbol{B}$ to achieve a minimum error for the NFFT. In contrast, we study the minimization of $\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}-M\boldsymbol{I}_{N}\|_{\textrm{F}}^{2}$ to achieve a minimum error for the inverse NFFT as well as the minimization of $\|\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}-M\boldsymbol{I}_{N}\|_{\textrm{F}}^{2}$ to achieve a minimum error for the inverse adjoint NFFT.       $\hfill\rule{6.45pt}{6.45pt}\\ $ 3.2.1 Inverse NFFT – underdetermined case We now deduce our inversion following the general outline above. However, in the numerical experiments in Examples 2 and 3 we will see that this method is especially beneficial for the underdetermined setting, and hence it is already attributed to this case.
As mentioned before we aim to find a solution for (3.1) by solving (3.12). Therefore, we consider the matrix $\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}$ for given nodes $x_{j}\in\mathbb{T}$,  $j=1,\dots,N$. Apparently, we have $$\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}=\left[\frac{1}{M_{\sigma}}% \sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\frac{1}{\hat{w}(-k)}\,\mathrm{e}^{2\pi% \mathrm{i}k\left(x_{j}-\frac{l}{M_{\sigma}}\right)}\right]_{j=1,\,l=-\frac{M_{% \sigma}}{2}}^{N,\frac{M_{\sigma}}{2}-1}.$$ (3.13) By defining the “inverse window function” $$K(x)=\frac{1}{M_{\sigma}}\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\frac{1}{\hat{w}% (-k)}\,\mathrm{e}^{2\pi\mathrm{i}kx}$$ (3.14) we receive $$\boldsymbol{K}\coloneqq\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}=% \left(K\hskip-2.0pt\left(x_{h}-\tfrac{l}{M_{\sigma}}\right)\right){}_{h=1,\,l=% -\frac{M_{\sigma}}{2}}^{N,\ \frac{M_{\sigma}}{2}-1}.$$ Having a look at the matrix $\boldsymbol{B}^{*}$ it becomes apparent that there are only a few nonzero entries. Thus, we study the window $\tilde{w}_{m}$ for further simplification. For $w_{m}$ we have $\text{supp}(w_{m})=\left[-\tfrac{m}{M_{\sigma}},\tfrac{m}{M_{\sigma}}\right]$, i. 
e., for the 1-periodic version $\tilde{w}_{m}(x):=\sum_{z\in\mathbb{Z}}w_{m}(x+z)$ it holds $$\displaystyle\tilde{w}_{m}\hskip-2.5pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)\neq 0$$ $$\displaystyle\iff\exists\,z\in\mathbb{Z}:\ -m\leq M_{\sigma}x_{j}-l+M_{\sigma}% z\leq m.$$ By defining the set $$I_{M_{\sigma},m}(x_{j})\coloneqq\left\{l\in\left\{-\tfrac{M_{\sigma}}{2},\dots% ,\tfrac{M_{\sigma}}{2}-1\right\}:\exists\,z\in\mathbb{Z}\ \text{with}-m\leq M_% {\sigma}x_{j}-l+M_{\sigma}z\leq m\right\}$$ (3.15) we can therefore write $$\displaystyle\left(\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}% \boldsymbol{B}^{*}\right)_{h,j}$$ $$\displaystyle=\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\mathrm{e}^{2\pi\mathrm{i}% kx_{h}}\hskip-3.0pt\left(\sum_{l\in I_{M_{\sigma},m}(x_{j})}\frac{1}{M_{\sigma% }\hat{w}(-k)}\,\mathrm{e}^{-2\pi\mathrm{i}k\frac{l}{M_{\sigma}}}\ \tilde{w}_{m% }\hskip-2.5pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)\hskip-3.0pt\right).$$ Hence, our considered norm can be written as $$\displaystyle\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B% }^{*}-M\boldsymbol{I}_{N}\|_{\textrm{F}}^{2}=$$ (3.16) $$\displaystyle=\sum_{h=1}^{N}\sum_{j=1}^{N}$$ $$\displaystyle\left|\hskip-3.0pt\ \sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\mathrm{% e}^{2\pi\mathrm{i}kx_{h}}\hskip-3.0pt\left(\sum_{l\in I_{M_{\sigma},m}(x_{j})}% \frac{1}{M_{\sigma}\hat{w}(-k)}\,\mathrm{e}^{-2\pi\mathrm{i}k\frac{l}{M_{% \sigma}}}\,\tilde{w}_{m}\hskip-2.5pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)% \hskip-3.0pt\right)-N\delta_{hj}\right|^{2}.$$ Based on the definition of the Frobenius norm of a matrix $\boldsymbol{A}\in\mathbb{R}^{k\times n}$ we obtain for $\boldsymbol{a}_{j}$ being columns of $\boldsymbol{A}\in\mathbb{R}^{k\times n}$ that $$\|\boldsymbol{A}\|_{F}^{2}=\sum_{i=1}^{k}\sum_{j=1}^{n}|a_{ij}|^{2}=\sum_{j=1}% ^{n}\|\boldsymbol{a}_{j}\|_{2}^{2}.$$ This yields that (3.16) can be rewritten by dint of $$\boldsymbol{T}_{j}=\left(\mathrm{e}^{-2\pi\mathrm{i}k\frac{l}{M_{\sigma}}}% 
\right)_{k=-\frac{M}{2},\,l\in I_{M_{\sigma},m}(x_{j})}^{\frac{M}{2}-1},\quad\boldsymbol{b}_{j}=\bigg{(}\tilde{w}_{m}\hskip-2.5pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)\bigg{)}_{l\in I_{M_{\sigma},m}(x_{j})}$$ and $\boldsymbol{e}_{j}=(\delta_{hj})_{h=1}^{N}$ as $$\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}-M\boldsymbol{I}_{N}\|_{\textrm{F}}^{2}=\sum_{j=1}^{N}\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{T}_{j}\boldsymbol{b}_{j}-M\boldsymbol{e}_{j}\|_{2}^{2}.$$ (3.17) Therefore, the norm considered in (3.12) is minimal if and only if $\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{T}_{j}\boldsymbol{b}_{j}-M\boldsymbol{e}_{j}\|_{2}^{2}$ is minimal for all $j=1,\dots,N$. Hence, we obtain the optimization problems $$\underset{\tilde{\boldsymbol{b}}_{j}\in\mathbb{R}^{2m+1}}{\text{Minimize }}\ \|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{T}_{j}\tilde{\boldsymbol{b}}_{j}-M\boldsymbol{e}_{j}\|_{2}^{2},\quad j=1,\dots,N,$$ (3.18) since the columns of the matrix $\boldsymbol{B}^{*}$ contain at most $(2m+1)$ nonzeros. Thus, if $$\hskip-5.690551pt\boldsymbol{K}_{j}\coloneqq\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{T}_{j}=\left[\frac{1}{M_{\sigma}}\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\frac{1}{\hat{w}(-k)}\,\mathrm{e}^{2\pi\mathrm{i}k\left(x_{h}-\frac{l}{M_{\sigma}}\right)}\right]_{h=1,\,l\in I_{M_{\sigma},m}(x_{j})}^{N}\hskip-35.0pt\in\mathbb{C}^{N\times(2m+1)}$$ (3.19) has full rank, the solution of problem (3.18) is given by $$\tilde{\boldsymbol{b}}_{j}=M\left(\boldsymbol{K}_{j}^{*}\boldsymbol{K}_{j}\right)^{-1}\boldsymbol{K}_{j}^{*}\boldsymbol{e}_{j},\quad j=1,\dots,N.$$ (3.20) When generating the modified matrix $\boldsymbol{B}_{\rm opt}^{*}$, note that the vectors $\tilde{\boldsymbol{b}}_{j}$ only contain the $(2m+1)$ nonzeros of the columns of $\boldsymbol{B}_{\rm opt}^{*}$. Hence, attention must be paid to the periodicity, which can also be seen in the structure of the matrix $\boldsymbol{B}^{*}$.
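The steps above can be illustrated numerically. The following minimal sketch (assuming $\sigma=1$, so $M_{\sigma}=M$, and a smooth hypothetical stand-in for the window coefficients $\hat{w}(k)$; all names are illustrative and no NFFT library is used) forms the index sets (3.15), assembles $\boldsymbol{K}_{j}$ from the inverse window function (3.14), and evaluates the closed form (3.20):

```python
import numpy as np

# Sketch of the per-node least squares problems (3.18)/(3.20).
# sigma = 1 (so M_sigma = M); w_hat is a hypothetical stand-in.
M, N, m = 16, 12, 2
rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, N)           # nonequispaced nodes x_j
k = np.arange(-M // 2, M // 2)          # k = -M/2, ..., M/2-1
w_hat = lambda kk: np.exp(-0.01 * np.asarray(kk, float) ** 2)

def K(xv):
    # "inverse window function" (3.14)
    xv = np.atleast_1d(xv)
    return (np.exp(2j * np.pi * np.outer(xv, k)) / w_hat(-k)).sum(axis=1) / M

def index_set(xj):
    # index set (3.15): l such that l/M_sigma lies in the
    # periodized window support around xj
    ls = np.arange(-M // 2, M // 2)
    d = (M * xj - ls) % M
    return ls[np.minimum(d, M - d) <= m]

b_opt, ok = {}, True
for j in range(N):
    I = index_set(x[j])
    Kj = K((x[:, None] - I[None, :] / M).ravel()).reshape(N, I.size)  # (3.19)
    ej = np.eye(N)[:, j]
    # closed form (3.20): b_j = M (K_j^* K_j)^{-1} K_j^* e_j
    b_opt[j] = M * np.linalg.solve(Kj.conj().T @ Kj, Kj.conj().T @ ej)
    # sanity check: coincides with a generic least squares solve
    ok &= np.allclose(b_opt[j], np.linalg.lstsq(Kj, M * ej, rcond=None)[0])
print(ok, len(b_opt) == N)
```

The vectors `b_opt[j]` then play the role of the nonzero entries of the columns of $\boldsymbol{B}_{\rm opt}^{*}$, to be placed according to the periodic sparsity pattern.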
Remark 7 Whether the matrix $\boldsymbol{K}_{j}$ has full rank depends only on the matrix $\boldsymbol{A}$. Conditions under which this matrix has full rank can be found, e.g., in [13] and [17].       $\hfill\rule{6.45pt}{6.45pt}\\ $ Next we develop a method for the fast evaluation of $\tilde{\boldsymbol{b}}_{j}$. We already know from Section 2 that sums of the form (2.1) can be computed in $\mathcal{O}(M\log M+N)$ arithmetic operations for given nodes $x_{j}\in\left[-\frac{1}{2},\frac{1}{2}\right)$, $j=1,\dots,N,$ and coefficients $\hat{f}_{k}\in\mathbb{C},\,k=-\frac{M}{2},\dots,\frac{M}{2}-1$. A look at the matrix $\boldsymbol{K}_{j}$, cf. (3.19), shows that we can compute its entries by means of the NFFT with coefficients $$\hat{f}_{k}=\frac{1}{M_{\sigma}\hat{w}(-k)},\quad k=-\tfrac{M}{2},\dots,\tfrac{M}{2}-1,$$ (3.21) and nodes $y_{h,l}\coloneqq x_{h}-\tfrac{l}{M_{\sigma}},\,h=1,\dots,N,l\in I_{M_{\sigma},m}(x_{j})$, of which there are at most $N(2m+1)$. Stacking the columns of $\boldsymbol{K}_{j}$ one below the other into a vector, we can compute these entries using only one NFFT of length $N(2m+1)$; afterwards, the obtained vector has to be reshaped into a matrix. Another point to mention is that the coefficients $\hat{f}_{k}$ are the same for the computation of all matrices $\boldsymbol{K}_{j},\,j=1,\dots,N$. That is, we can precompute step 1 and step 2 of the NFFT, since there only information about the Fourier coefficients $\hat{f}_{k}$ is needed, cf. [20]. Only the last step of the NFFT requires the current nodes, and therefore this step has to be performed separately for every $j=1,\dots,N$. Thus, we obtain the following algorithm.
Remark 8 It is possible to simplify Algorithm 3.2.1 by replacing the inverse window function in (3.14) by the Dirichlet kernel $$D_{\frac{M}{2}-1}(x)=\sum_{k=-\frac{M}{2}+1}^{\frac{M}{2}-1}\mathrm{e}^{2\pi\mathrm{i}kx}=\frac{\sin((M-1)\pi x)}{\sin(\pi x)}.$$ (3.22) Hence, the entries of $\boldsymbol{K}_{j}$ in (3.19) can be stated explicitly by means of (3.22) as $$\boldsymbol{K}_{j}=\left[\frac{1}{M_{\sigma}}\,D_{\frac{M}{2}-1}\left(x_{h}-\tfrac{l}{M_{\sigma}}\right)\right]_{h=1,\,l\in I_{M_{\sigma},m}(x_{j})}^{N}$$ and thereby the term $M\log M$ in the computational costs of Algorithm 3.2.1 is eliminated. Thus, we end up with computational costs of $\mathcal{O}(N^{2})$.       $\hfill\rule{6.45pt}{6.45pt}\\ $ Now we have a look at some numerical examples. Example 2 Firstly, we verify that the optimization was successful. To this end, we compare the norms $$\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}-M\boldsymbol{I}_{N}\|_{\textrm{F}}\quad\text{ and }\quad\|\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}_{\rm opt}^{*}-M\boldsymbol{I}_{N}\|_{\textrm{F}},$$ (3.23) where $\boldsymbol{B}^{*}$ denotes the original matrix from the adjoint NFFT and $\boldsymbol{B}_{\rm opt}^{*}$ the optimized matrix generated by Algorithm 3.2.1. Even though our method is tailored to the underdetermined setting, this is not a restriction. Hence, we also test the overdetermined setting. (i) Firstly, we choose $N=128$ jittered equispaced nodes, cf. (3.8), $M=2^{c}$ with $c=4,\dots,12,$ and in Algorithm 3.2.1 we choose the Kaiser-Bessel window, $\sigma_{2}=2.0$ and $m_{2}=2m$ to achieve high accuracy. Figure 3.2 depicts the comparison of the norms (3.23) for different values of $m$ and $\sigma$ for $\boldsymbol{B}_{\rm opt}^{*}$ generated using B-Splines as well as the Dirichlet kernel mentioned in Remark 8.
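The closed-form identity (3.22) is easy to verify numerically; a small sketch:

```python
import numpy as np

# Check of the Dirichlet kernel identity (3.22) at a few sample points:
# sum_{k=-M/2+1}^{M/2-1} e^{2 pi i k x} = sin((M-1) pi x) / sin(pi x).
M = 16
x = np.array([0.03, 0.17, -0.41])
k = np.arange(-M // 2 + 1, M // 2)      # k = -M/2+1, ..., M/2-1
lhs = np.exp(2j * np.pi * np.outer(x, k)).sum(axis=1)
rhs = np.sin((M - 1) * np.pi * x) / np.sin(np.pi * x)
print(np.allclose(lhs, rhs))            # True
```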
It can be seen that the minimization was very successful for large values of $M$ compared to $N$, whereas it does not work well in the overdetermined setting $M<N$. But in fact, this is not surprising, because then we try to approximate the identity by a low rank matrix, since $\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\in\mathbb{C}^{N\times N}$ has at most rank $M$. Therefore, Algorithm 3.2.1 is especially suited to the underdetermined case. Contrary to expectation, the optimization deteriorates when using a higher oversampling factor $\sigma$, whereas increasing the cut-off $m$ leads, as expected, to better results. Figure 3.3 displays the run-times of Algorithm 3.2.1 comparing B-Spline and Dirichlet kernel for $m=2$ and $\sigma=1.0$. It is obvious that using the Dirichlet kernel considerably reduces the run-time. Since the results are the same for other parameters and window functions, additional tests are omitted. (ii) Next we repeat the example for Chebyshev nodes $$x_{j}=\frac{1}{2}\cos\left(\frac{2(N-j)+1}{2N}\ \pi\right),\quad j=1,\dots,N.$$ (3.24) The corresponding outcomes for B-Splines can be found in Figure 3.4. There we see that the gap between $M$ and $N$ has to be very large to achieve results similar to those for jittered equispaced nodes.        $\hfill\rule{6.45pt}{6.45pt}\\ $ Example 3 Secondly, we check if this approach allows us to perform an inverse NFFT for a given function $f$. As proposed in [11], we choose the function $$f(x)=\cos^{2}(\pi x^{2})\,\sin(10x^{2}),\quad x\in\left[-\tfrac{1}{2},\tfrac{1}{2}\right).$$ (3.25) We aim to approximate the Fourier coefficients $$c_{k}(f)\coloneqq\int_{-\frac{1}{2}}^{\frac{1}{2}}f(x)\,\mathrm{e}^{-2\pi\mathrm{i}kx}\,\mathrm{d}x,\quad k=-\tfrac{M}{2},\dots,\tfrac{M}{2}-1,$$ (3.26) of the function $f$.
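As a small aside, the Chebyshev nodes (3.24) used in part (ii) above can be generated and checked as follows (the helper name is illustrative):

```python
import numpy as np

# Chebyshev nodes (3.24), rescaled to [-1/2, 1/2]: increasing,
# antisymmetric, and clustered near the ends of the interval.
def chebyshev_nodes(N):
    j = np.arange(1, N + 1)
    return 0.5 * np.cos((2 * (N - j) + 1) / (2 * N) * np.pi)

x = chebyshev_nodes(8)
print(np.all(np.diff(x) > 0), np.all(np.abs(x) < 0.5))   # True True
```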
For this purpose we consider the function $$g(x)\coloneqq\int_{-\frac{1}{2}}^{\frac{1}{2}}f(y)\,\tilde{w}(y-x)\,\mathrm{d}y.$$ (3.27) By means of the convolution operator we are able to write $g=f\ast\tilde{w}(-\boldsymbol{\cdot})$. Then the convolution theorem implies $c_{k}(g)=c_{k}(f)\,c_{-k}(\tilde{w})$ such that we have $c_{k}(f)=\frac{c_{k}(g)}{c_{-k}(\tilde{w})}$. If we suppose we are not given $g$ but only evaluations at points $x_{j}\in\left[-\frac{1}{2},\frac{1}{2}\right)$, we can use a quadrature rule with weights $\frac{1}{M}$ so that we obtain the approximation $$g(x)\approx\frac{1}{M}\sum_{j=1}^{N}f(x_{j})\,\tilde{w}(x_{j}-x)=\frac{1}{M}\,\tilde{g}(x)$$ (3.28) with the function $\tilde{g}(x)$ as in (2.7). Then we can approximate $c_{k}(g)$ by $\frac{1}{M}c_{k}(\tilde{g})$ and therefore we have $$c_{k}(f)=\frac{c_{k}(g)}{c_{-k}(\tilde{w})}\approx\frac{c_{k}(\tilde{g})}{M\cdot c_{-k}(\tilde{w})}\overset{(2.8)}{=}\tfrac{1}{M}\tilde{h}_{k}=\left(\tfrac{1}{M}\,\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\boldsymbol{f}\right)_{k}.$$ Thus, for evaluations $\boldsymbol{f}=\left(f(x_{j})\right)_{j=1}^{N}$ at given nodes $x_{j}$ we consider the estimates $\boldsymbol{\check{f}}=\left(\check{f}_{k}\right)_{k=-\frac{M}{2}}^{\frac{M}{2}-1}=\frac{1}{M}\,\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\boldsymbol{f}$ and $\boldsymbol{\check{f}}_{\rm opt}=\left((\check{f}_{\rm opt})_{k}\right)_{k=-\frac{M}{2}}^{\frac{M}{2}-1}=\frac{1}{M}\,\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}_{\rm opt}^{*}\boldsymbol{f}$ of the Fourier coefficients and compare them to the exact Fourier coefficients of $f$, which we compute analytically via the formula (3.26). Here we reconstruct $M=32$ Fourier coefficients from function values $f(x_{j})$ given at jittered equispaced nodes $x_{j}$ for both an overdetermined and an underdetermined setting, namely $N=128$ and $N=8$.
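The convolution-theorem step above can be checked numerically. In the sketch below the integrals are replaced by equispaced Riemann sums on $n$ points, for which the relation $c_{k}(g)=c_{k}(f)\,c_{-k}(w)$ holds exactly in the discrete setting; the window $w$ is a hypothetical periodic bump, not one of the windows used in the paper:

```python
import numpy as np

# Discrete check of c_k(g) = c_k(f) c_{-k}(w) for the periodic
# convolution g(x) = int f(y) w(y-x) dy, with f from (3.25).
n = 512
x = -0.5 + np.arange(n) / n
f = np.cos(np.pi * x ** 2) ** 2 * np.sin(10 * x ** 2)   # function (3.25)
u = np.arange(n) / n
wv = np.exp(-100 * np.minimum(u, 1 - u) ** 2)           # w sampled at u/n
wx = np.exp(-100 * x ** 2)                              # w sampled at x

def coeff(s, k):
    # Riemann-sum version of the Fourier coefficients (3.26)
    return np.mean(s * np.exp(-2j * np.pi * k * x))

# g(x_i) = (1/n) sum_j f(x_j) w(x_j - x_i): discrete circular convolution,
# using np.roll(wv, i)[j] = wv[(j - i) mod n] = w((j - i)/n)
g = np.array([np.mean(f * np.roll(wv, i)) for i in range(n)])
ok = all(np.isclose(coeff(g, k), coeff(f, k) * coeff(wx, -k), atol=1e-12)
         for k in range(-4, 5))
print(ok)   # True
```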
We choose the B-Spline and the Dirichlet kernel with $m=2$ and $\sigma=1.0$. Figure 3.5 depicts the reconstruction, i. e., the Fourier coefficients $c_{k}(f)$ compared to $\check{f}_{k}$ and $(\check{f}_{\rm opt})_{k}$ as well as the pointwise errors $|c_{k}(f)-\check{f}_{k}|$ and $|c_{k}(f)-(\check{f}_{\rm opt})_{k}|$, $k=-\tfrac{M}{2},\dots,\tfrac{M}{2}-1$. We see that the approximation is improved in the underdetermined case $M>N$ as well as in the overdetermined case $M<N$. When using other window functions or considering Chebyshev nodes, the results look very similar. Hence, all these tests are omitted here.        $\hfill\rule{6.45pt}{6.45pt}\\ $ Remark 9 Problem (3.2) can be solved similarly by seeking an approximation of the form $\boldsymbol{B}\boldsymbol{K}^{*}=\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\approx M\boldsymbol{I}_{N}$. Therefore, we consider the optimization problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{B}\boldsymbol{K}^{*}-M\boldsymbol{I}_{N}\|_{\textrm{F}}^{2}.$$ This is equivalent to the transposed problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{K}\boldsymbol{B}^{*}-M\boldsymbol{I}_{N}\|_{\textrm{F}}^{2},$$ which is what we discussed in Section 3.2.1 and hence can be solved likewise.       $\hfill\rule{6.45pt}{6.45pt}\\ $ Example 4 Finally, we discuss the analogs to the examples mentioned above for problem (3.2). Since it is clear that the optimization problems are equivalent, we refer to Example 2 for results with respect to the minimization of the norm. Similarly to Example 3, we check if we are able to perform an inverse adjoint NFFT for the trigonometric function (3.25).
This time we consider the approximations $\boldsymbol{\tilde{f}}=\left(\tilde{f}_{j}\right)_{j=1}^{N}=\frac{1}{M}\,\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}(\boldsymbol{A}^{*}\boldsymbol{f})$ and $\boldsymbol{\tilde{f}}_{\rm opt}=\left((\tilde{f}_{\rm opt})_{j}\right)_{j=1}^{N}=\frac{1}{M}\,\boldsymbol{B}_{\rm opt}\boldsymbol{F}\boldsymbol{D}(\boldsymbol{A}^{*}\boldsymbol{f})$ of the function values $\boldsymbol{f}=\left(f(x_{j})\right)_{j=1}^{N}=\left(f_{j}\right)_{j=1}^{N}$ for $N=32$ jittered equispaced nodes $x_{j}$. We consider the reconstruction, namely the comparison of $\boldsymbol{\tilde{f}}$ and $\boldsymbol{\tilde{f}}_{\rm opt}$ to the function values $\boldsymbol{f}$, as well as the pointwise errors $|f_{j}-\tilde{f}_{j}|$ and $|f_{j}-(\tilde{f}_{\rm opt})_{j}|$, $j=1,\dots,N$. The corresponding results can be found in Figure 3.6. We see that our new algorithm leads to better approximations in both the underdetermined and the overdetermined case, but they are best in the overdetermined setting $M>N$. Further results are left out for the same reasons as in Example 3.        $\hfill\rule{6.45pt}{6.45pt}\\ $ 3.2.2 Inverse NFFT – overdetermined case Previously, in Section 3.2.1, we studied $\boldsymbol{K}\boldsymbol{B}^{*}$ and $\boldsymbol{B}\boldsymbol{K}^{*}$, where $\boldsymbol{K}=\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\in\mathbb{C}^{N\times M_{\sigma}}$ and $\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}$. There we saw that the inversion based on the minimization related to these matrices works best for $M>N$, which is the underdetermined case for the inverse NFFT as well as the overdetermined case for the inverse adjoint NFFT. However, often we are given nonequispaced samples with $M<N$ and seek a corresponding trigonometric polynomial of degree $M$. Hence, we look for another approach, which yields the best results in this overdetermined setting $M<N$.
Similarly to the explanations at the beginning of Section 3.2.1, this new approach is intended to yield substantial improvements in the overdetermined case. To this end, we investigate $\boldsymbol{B}^{*}\boldsymbol{K}$. Initially, we consider the function $\tilde{g}(x)=\sum_{j=1}^{N}f_{j}\,\tilde{w}_{m}(x_{j}-x)$. Then the vector $\tilde{\boldsymbol{g}}\coloneqq\left(\tilde{g}\hskip-2.5pt\left(\tfrac{l}{M_{\sigma}}\right)\right)_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}$ can be represented by $\tilde{\boldsymbol{g}}=\boldsymbol{B}^{*}\boldsymbol{f}$. Furthermore, we know by (2.8) that the adjoint NFFT can be written as $(\tilde{h}_{k})_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\eqqcolon\tilde{\boldsymbol{h}}=\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\boldsymbol{f}$ and thereby we have $\tilde{\boldsymbol{h}}=\boldsymbol{D}^{*}\boldsymbol{F}^{*}\tilde{\boldsymbol{g}}$. Now we claim $\tilde{\boldsymbol{h}}\overset{!}{\approx}\boldsymbol{\hat{f}}$. Thus, it follows that $$\tilde{\boldsymbol{g}}=\boldsymbol{B}^{*}\boldsymbol{f}=\boldsymbol{B}^{*}\boldsymbol{A}\boldsymbol{\hat{f}}\overset{!}{\approx}\boldsymbol{B}^{*}\boldsymbol{A}\tilde{\boldsymbol{h}}=\boldsymbol{B}^{*}\boldsymbol{A}\boldsymbol{D}^{*}\boldsymbol{F}^{*}\tilde{\boldsymbol{g}}=\boldsymbol{B}^{*}\boldsymbol{K}\tilde{\boldsymbol{g}},$$ i.
e., we seek $\boldsymbol{B}^{*}$ as solution of the optimization problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}% \,(2m+1)\text{-sparse }}{\text{Minimize }}\ \left\|\boldsymbol{B}^{*}% \boldsymbol{K}-\boldsymbol{I}_{M_{\sigma}}\right\|_{\mathrm{F}}^{2}.$$ This is equivalent to the transposed problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}% \,(2m+1)\text{-sparse }}{\text{Minimize }}\ \left\|\boldsymbol{K}^{*}% \boldsymbol{B}-\boldsymbol{I}_{M_{\sigma}}\right\|_{\mathrm{F}}^{2}.$$ (3.29) By means of definitions (3.13) and (2.6) we obtain $$\boldsymbol{K}^{*}\boldsymbol{B}=\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*% }\boldsymbol{B}=\left[\sum_{j=1}^{N}\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\frac% {1}{M_{\sigma}\hat{w}(k)}\,\mathrm{e}^{-2\pi\mathrm{i}k\left(x_{j}-\frac{s}{M_% {\sigma}}\right)}\,\tilde{w}_{m}\hskip-2.75pt\left(x_{j}-\tfrac{l}{M_{\sigma}}% \right)\right]_{s,\,l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}.$$ (3.30) Analogously to (3.15), we define the set $$I_{M_{\sigma},m}(l):=\left\{j\in\left\{1,\dots,N\right\}:\exists\,z\in\mathbb{% Z}\ \text{with}-m\leq M_{\sigma}x_{j}-l+M_{\sigma}z\leq m\right\}.$$ (3.31) Hence, we can rewrite (3.29) by analogy with Section 3.2.1 as $$\left\|\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{B}-% \boldsymbol{I}_{M_{\sigma}}\right\|_{\mathrm{F}}^{2}=\sum_{l=-\frac{M_{\sigma}% }{2}}^{\frac{M_{\sigma}}{2}-1}\|\boldsymbol{F}\boldsymbol{D}\boldsymbol{H}_{l}% \boldsymbol{b}_{l}-\boldsymbol{e}_{l}\|_{2}^{2},$$ where $$\boldsymbol{b}_{l}\coloneqq\bigg{(}\tilde{w}_{m}\hskip-2.5pt\left(x_{j}-\tfrac% {l}{M_{\sigma}}\right)\bigg{)}_{j\in I_{M_{\sigma},m}(l)},\quad\boldsymbol{H}_% {l}\coloneqq\left(\mathrm{e}^{-2\pi\mathrm{i}kx_{j}}\right)_{k=-\frac{M}{2},\,% j\in I_{M_{\sigma},m}(l)}^{\frac{M}{2}-1}$$ and $\boldsymbol{e}_{l}$ denote the columns of the identity matrix $\boldsymbol{I}_{M_{\sigma}}$. We obtain, cf. 
(3.19), $$\boldsymbol{L}_{l}\coloneqq\boldsymbol{F}\boldsymbol{D}\boldsymbol{H}_{l}=\left[\frac{1}{M_{\sigma}}\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\frac{1}{\hat{w}(k)}\,\mathrm{e}^{2\pi\mathrm{i}k\left(\frac{l}{M_{\sigma}}-x_{j}\right)}\right]_{l=-\frac{M_{\sigma}}{2},\,j\in I_{M_{\sigma},m}(l)}^{\frac{M_{\sigma}}{2}-1}\hskip-40.0pt\in\mathbb{C}^{M_{\sigma}\times|I_{M_{\sigma},m}(l)|}.$$ (3.32) Thereby we obtain the optimization problems $$\underset{\tilde{\boldsymbol{b}}_{l}\in\mathbb{R}^{2m+1}}{\text{Minimize }}\ \|\boldsymbol{L}_{l}\tilde{\boldsymbol{b}}_{l}-\boldsymbol{e}_{l}\|_{2}^{2},\quad l=-\tfrac{M_{\sigma}}{2},\dots,\tfrac{M_{\sigma}}{2}-1.$$ If the matrix $\boldsymbol{L}_{l}\in\mathbb{C}^{M_{\sigma}\times|I_{M_{\sigma},m}(l)|}$ has full rank, the solution of (3.29) is given by $$\tilde{\boldsymbol{b}}_{l}=\left(\boldsymbol{L}_{l}^{*}\boldsymbol{L}_{l}\right)^{-1}\boldsymbol{L}_{l}^{*}\boldsymbol{e}_{l},\quad l=-\tfrac{M_{\sigma}}{2},\dots,\tfrac{M_{\sigma}}{2}-1.$$ (3.33) This time, nothing can be said about the dimensions of $\boldsymbol{L}_{l}$ in general, since the size of the set $I_{M_{\sigma},m}(l)$ depends on several parameters. Having these vectors $\tilde{\boldsymbol{b}}_{l}$, we can compose the modified matrix $\boldsymbol{B}_{\rm opt}$, observing that $\tilde{\boldsymbol{b}}_{l}$ only consist of the nonzero entries of $\boldsymbol{B}_{\rm opt}$. Then the approximation of the Fourier coefficients is given by $$\boldsymbol{\hat{f}}\approx\tilde{\boldsymbol{h}}=\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}_{\rm opt}^{*}\boldsymbol{f}.$$ (3.34) In other words, this approach yields another way to invert the NFFT by also modifying the adjoint NFFT.
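The structure of this transposed problem can be sketched numerically as well. In the following minimal example (with $\sigma=1$, so $M_{\sigma}=M$) the index sets (3.31) are computed for random nodes, while the matrices $\boldsymbol{L}_{l}$ are replaced by random full-rank stand-ins, so that only the least squares structure of (3.33) is illustrated:

```python
import numpy as np

# Transposed index sets (3.31) and closed-form solutions (3.33);
# L_l is a toy stand-in for (3.32), names are illustrative.
M, m, N = 16, 2, 40
rng = np.random.default_rng(5)
x = rng.uniform(-0.5, 0.5, N)

def index_set_l(l):
    # nodes x_j lying in the periodized window support around l/M_sigma
    d = (M * x - l) % M
    return np.nonzero(np.minimum(d, M - d) <= m)[0]

ok = True
for l in range(-M // 2, M // 2):
    J = index_set_l(l)
    if not 0 < J.size < M:
        continue
    Ll = rng.standard_normal((M, J.size))        # stand-in for L_l
    el = np.zeros(M); el[l + M // 2] = 1.0       # unit vector e_l
    bl = np.linalg.solve(Ll.T @ Ll, Ll.T @ el)   # closed form (3.33)
    ok &= np.allclose(bl, np.linalg.lstsq(Ll, el, rcond=None)[0])
print(ok)   # True
```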
Analogously to Section 3.2.1, we are able to compute the entries of the matrix $\boldsymbol{L}_{l}$, see (3.32), by means of an NFFT with the $M$ coefficients $$\hat{f}_{k}=\frac{1}{M_{\sigma}\hat{w}(k)},\quad k=-\tfrac{M}{2},\dots,\tfrac{M}{2}-1,$$ (3.35) and nodes $y_{l,j}\coloneqq\tfrac{l}{M_{\sigma}}-x_{j},\,l=-\tfrac{M_{\sigma}}{2},\dots,\tfrac{M_{\sigma}}{2}-1,\,j\in I_{M_{\sigma},m}(l),$ of which there are at most $M_{\sigma}N$. Here we also require only one NFFT, by writing the columns of $\boldsymbol{L}_{l}$ one below the other. The obtained vector including all entries of $\boldsymbol{L}_{l}$ has to be reshaped afterwards. This leads to the following algorithm. Remark 10 If we assume the nodes are roughly uniformly distributed, as for instance jittered equispaced nodes, we can get rid of the complexity related to $N$ and end up with arithmetic costs of $\mathcal{O}(M^{2})$.       $\hfill\rule{6.45pt}{6.45pt}\\ $ Remark 11 It is also possible to simplify the computation of $\boldsymbol{L}_{l}$ by incorporating the Dirichlet kernel (3.22), i. e., we set $\hat{w}(k)=1$ for all $k=-\frac{M}{2}+1,\dots,\frac{M}{2}-1,$ and the last nonzero entry $\frac{1}{\hat{w}(\frac{M}{2})}$ of the matrix $\boldsymbol{D}^{*}$ is set to zero. Hence, the entries of the matrix $$\boldsymbol{L}_{l}=\left[\frac{1}{M_{\sigma}}\,D_{\frac{M}{2}-1}\left(\tfrac{l}{M_{\sigma}}-x_{j}\right)\right]_{l=-\frac{M_{\sigma}}{2},\,j\in I_{M_{\sigma},m}(l)}^{\frac{M_{\sigma}}{2}-1}$$ can be stated explicitly and therefore the term $M\log M$ in the computational costs of Algorithm 3.2.2 can be eliminated. Nevertheless, even if we assume uniformly distributed nodes as in Remark 10, we are left with arithmetic costs of $\mathcal{O}(M^{2})$.       $\hfill\rule{6.45pt}{6.45pt}\\ $ Example 5 As in Example 2, we first verify that the optimization was successful.
On that account, we compare the norms $$\left\|\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{B}-\boldsymbol{I}_{M_{\sigma}}\right\|_{\mathrm{F}}\quad\text{ and }\quad\|\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{B}_{\rm opt}-\boldsymbol{I}_{M_{\sigma}}\|_{\mathrm{F}},$$ (3.36) where $\boldsymbol{B}$ denotes the original matrix from the NFFT and $\boldsymbol{B}_{\rm opt}$ the optimized matrix generated by Algorithm 3.2.2. Although our method is tailored to the overdetermined setting, this again imposes no restriction. Therefore, the underdetermined setting is tested as well. (i) Again we first examine jittered equispaced nodes, see (3.8). We choose $M=128$ and consider the norms (3.36) for $N=2^{c}$ nodes with $c=2,\dots,14$. In order to compute the NFFT in Algorithm 3.2.2 we choose the Kaiser-Bessel window, an oversampling of $\sigma_{2}=2.0$ and the cut-off parameter $m_{2}=2m$ to achieve results comparable to Example 2. However, one could also choose a larger cut-off to achieve higher accuracy for growing $N$. In Figure 3.7 one can find the comparison of the norms (3.36) for different values of $m$ and $\sigma$ for $\boldsymbol{B}_{\rm opt}$ generated using B-Splines as well as the Dirichlet kernel mentioned in Remark 11. It can be seen that for $\sigma=1.0$ the minimization was very successful, especially for large $N$ compared to $M$. For $N<M$ the minimization was not successful. Similarly to Example 2, this results from the fact that the corresponding matrix $\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{B}$ is of low rank. Therefore, Algorithm 3.2.2 is especially suited to the overdetermined case. Having a look at the graphs with high oversampling, we recognize that the norms of the optimized matrices remain stable for all sizes of $N$. Thus, for this method as well, the optimization seems not to work for high oversampling.
While the computational costs could not be scaled down, we see in Figure 3.8 that using the Dirichlet kernel reduced the run-time for all sizes of $N$. Results for other window functions are omitted since they show the same behavior. (ii) Next we repeat the example using Chebyshev nodes, cf. (3.24). The corresponding results for B-Splines can be found in Figure 3.9. We recognize that these graphs look similar to Figure 3.7, in contrast to Example 2, where the optimization for Chebyshev nodes was quite difficult.        $\hfill\rule{6.45pt}{6.45pt}\\ $ Example 6 Similarly to Example 3, we have a look at the trigonometric function (3.25). We compare our approximations $\boldsymbol{\check{f}}=\left(\check{f}_{k}\right)_{k=-\frac{M}{2}}^{\frac{M}{2}-1}=\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}^{*}\boldsymbol{f}$ and $\boldsymbol{\check{f}}_{\rm opt}=\big{(}(\check{f}_{\rm opt})_{k}\big{)}_{k=-\frac{M}{2}}^{\frac{M}{2}-1}=\boldsymbol{D}^{*}\boldsymbol{F}^{*}\boldsymbol{B}_{\rm opt}^{*}\boldsymbol{f}$ to the exact Fourier coefficients $c_{k}(f)$, but now we approximate the function (3.27) with a quadrature rule using weights $1$, so that we obtain the approximation $g(x)\approx\tilde{g}(x)$ instead of (3.28) and hence $c_{k}(f)\approx\check{f}_{k}$. Considering the reconstruction and the errors as described in Example 3, we obtain the results displayed in Figure 3.10. There we see that we are not able to reconstruct the Fourier coefficients in the underdetermined case $M>N$, whereas the approximation in the overdetermined setting $N>M$ is even better than in Example 3.        
$\hfill\rule{6.45pt}{6.45pt}\\ $ Remark 12 Having a look at the remaining matrix product $\boldsymbol{K}^{*}\boldsymbol{B}$, we recognize that the corresponding optimization problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \left\|\boldsymbol{K}^{*}\boldsymbol{B}-\boldsymbol{I}_{M_{\sigma}}\right\|_{\mathrm{F}}^{2}$$ was already solved to find a solution for the transposed problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \left\|\boldsymbol{B}^{*}\boldsymbol{K}-\boldsymbol{I}_{M_{\sigma}}\right\|_{\mathrm{F}}^{2}.$$ Hence, we only need to examine the resulting approximation. Due to the minimization we have $\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{B}=\boldsymbol{K}^{*}\boldsymbol{B}\approx\boldsymbol{I}_{M_{\sigma}}$. Because $\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}$ is rectangular and therefore not invertible, we multiply by a right-inverse of $\boldsymbol{B}$, i. e., a matrix $\boldsymbol{B}^{\prime}\in\mathbb{R}^{M_{\sigma}\times N}$ satisfying $\boldsymbol{B}\boldsymbol{B}^{\prime}=\boldsymbol{I}_{N}$, and obtain $\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\approx\boldsymbol{B}^{\prime}$. Multiplying by a vector $\boldsymbol{f}$ yields $\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{f}\approx\boldsymbol{B}^{\prime}\boldsymbol{f}$, which can be written by means of $\boldsymbol{A}^{*}\boldsymbol{f}=\boldsymbol{h}$ as $\boldsymbol{F}\boldsymbol{D}\boldsymbol{h}\approx\boldsymbol{B}^{\prime}\boldsymbol{f}$. Finally, we multiply from the left by $\boldsymbol{B}$, which results in the approximation $\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{h}\approx\boldsymbol{f}$ and thus provides another method to invert the adjoint NFFT by modifying the NFFT.       
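The right-inverse step used above can be illustrated with a random toy matrix: for full row rank one may take $\boldsymbol{B}^{\prime}=\boldsymbol{B}^{\top}(\boldsymbol{B}\boldsymbol{B}^{\top})^{-1}$ (a generic choice, not the specific sparse matrix of the paper):

```python
import numpy as np

# Right inverse from Remark 12: for full-row-rank B in R^{N x M_sigma},
# B' = B^T (B B^T)^{-1} satisfies B B' = I_N.
rng = np.random.default_rng(3)
N, Ms = 6, 10
B = rng.standard_normal((N, Ms))
B_prime = B.T @ np.linalg.inv(B @ B.T)
print(np.allclose(B @ B_prime, np.eye(N)))   # True
```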
$\hfill\rule{6.45pt}{6.45pt}\\ $ Example 7 Finally, we have a look at the reconstruction of the trigonometric function (3.25), cf. Example 4. To this end, we consider the approximations $\boldsymbol{\tilde{f}}=\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}(\boldsymbol{A}^{*}\boldsymbol{f})=\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{h}$ and $\boldsymbol{\tilde{f}}_{\rm opt}=\boldsymbol{B}_{\rm opt}\boldsymbol{F}\boldsymbol{D}(\boldsymbol{A}^{*}\boldsymbol{f})=\boldsymbol{B}_{\rm opt}\boldsymbol{F}\boldsymbol{D}\boldsymbol{h}$ and compare them to the function values for jittered equispaced nodes. In Figure 3.11 we see that our optimization was not successful for $N>M$, since there is no reasonable chance to approximate the function values in this setting. But also in the overdetermined case $N<M$ the approximations are not as good as in Example 4.        $\hfill\rule{6.45pt}{6.45pt}\\ $ 4 Frames During the last few decades, the popularity of frames has risen rapidly, and more and more articles have been concerned with this topic. Recently, an approach was published in [11] connecting frame approximation to the adjoint NFFT. Thus, in this section we consider the concept of frames and discuss an approach for inverting the NFFT based on [11]. Besides the basic information about the approximation of the inverse frame operator, a link to the methods explained in Section 3.2 is provided. 4.1 Approximation of the inverse frame operator First of all, we sum up the main idea of frames and frame approximation, basically adapted from [11] and [3]. Definition 1 Let $\mathcal{H}$ be a separable Hilbert space with inner product $\langle\cdot,\cdot\rangle$. 
Then a sequence $\{\varphi_{j}\}_{j=1}^{\infty}\subset\mathcal{H}$ is called a frame if there exist constants $A,B>0$ such that $$A\|f\|^{2}\leq\sum_{j=1}^{\infty}|\langle f,\varphi_{j}\rangle|^{2}\leq B\|f\|^{2}\quad\forall f\in\mathcal{H}.$$ The operator $S\colon\mathcal{H}\to\mathcal{H},Sf=\sum_{j=1}^{\infty}\langle f,\varphi_{j}\rangle\varphi_{j}$, is called the frame operator.       $\hfill\rule{6.45pt}{6.45pt}\\ $ Given this definition, we can already state one of the most important results in frame theory, the so-called frame decomposition. If $\{\varphi_{j}\}_{j=1}^{\infty}$ is a frame with frame operator $S$, then $$f=\sum_{j=1}^{\infty}\langle f,S^{-1}\varphi_{j}\rangle\varphi_{j}=\sum_{j=1}^{\infty}\langle f,\varphi_{j}\rangle S^{-1}\varphi_{j}\quad\forall f\in\mathcal{H}.$$ (4.1) In other words, every element of $\mathcal{H}$ can be represented as a linear combination of the elements of the frame, a property similar to that of an orthonormal basis. However, to apply (4.1) it is necessary to know the inverse operator $S^{-1}$ explicitly, which is usually difficult (or even impossible). Hence, it is necessary to be able to approximate $S^{-1}$. For this purpose, we use the method from [10] by analogy with [11], which is based on so-called admissible frames, see [11, Definition 1]. We suppose $\{\psi_{l}\}_{l=-\infty}^{\infty}$ is an admissible frame with respect to $\{\varphi_{j}\}_{j=1}^{\infty}$. 
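A small finite-dimensional illustration of Definition 1 and the frame decomposition (4.1), with random toy vectors rather than the frames introduced later in Section 4.2: for $N>M$ generic vectors spanning $\mathbb{C}^{M}$, the frame bounds are the extreme eigenvalues of the frame operator $S$, and applying $S^{-1}$ reproduces $f$ exactly.

```python
import numpy as np

# Toy frame {φ_j} in C^M: rows of a random full-rank matrix Phi (N > M).
rng = np.random.default_rng(4)
M, N = 4, 7
Phi = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
S = Phi.T @ Phi.conj()                 # S f = sum_j <f, φ_j> φ_j (Hermitian)
A_fr, B_fr = np.linalg.eigvalsh(S)[[0, -1]]   # frame bounds A, B

f = rng.standard_normal(M) + 1j * rng.standard_normal(M)
# <f, S^{-1} φ_j> = <S^{-1} f, φ_j> since S is Hermitian
coeffs = Phi.conj() @ np.linalg.solve(S, f)
f_rec = Phi.T @ coeffs                 # sum_j <f, S^{-1} φ_j> φ_j
print(A_fr > 0, np.allclose(f_rec, f))   # True True
```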
As shown in [10], the dual frame $\{S^{-1}\varphi_{j}\}_{j=1}^{\infty}$ can then be approximated by $$S^{-1}\varphi_{j}\approx\tilde{\varphi}_{j}\coloneqq\sum_{l=-\frac{M_{\sigma}}% {2}}^{\frac{M_{\sigma}}{2}-1}p_{l,j}\,\psi_{l},\quad j=1,\dots,N,$$ (4.2) where $\boldsymbol{\Phi}^{\dagger}\eqqcolon\left[p_{l,j}\right]_{l=-\frac{M_{\sigma}}% {2},\,j=1}^{\frac{M_{\sigma}}{2}-1,\;N}$ is the Moore-Penrose pseudoinverse of the matrix $$\boldsymbol{\Phi}\coloneqq\left[\langle\varphi_{j},\psi_{l}\rangle\right]_{j=1% ,\,l=-\frac{M_{\sigma}}{2}}^{N,\ \frac{M_{\sigma}}{2}-1}.$$ (4.3) In so doing, the matrix dimensions have to fulfill the condition $N\geq M_{\sigma}+cM_{\sigma}^{\frac{1}{2s-1}}$. Given this approximation of the dual frame, inserting (4.2) in (4.1) and cutting off the infinite sum yields the approximation $$f\approx\tilde{f}\coloneqq\sum_{j=1}^{N}\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{% M_{\sigma}}{2}-1}\langle f,\varphi_{j}\rangle\,p_{l,j}\,\psi_{l}.$$ (4.4) 4.2 Linking the frame-theoretical approach to the iNFFT Now we aim to find a link between the frame approximation (4.4) and the iNFFT from Section 3.2. To this end, we consider a discrete version of the frames recommended in [11], i. e., $$\{\varphi_{j}(k)\coloneqq\mathrm{e}^{-2\pi\mathrm{i}kx_{j}},\,j\in\mathbb{N}\}% \quad\text{ and }\quad\left\{\psi_{l}(k)\coloneqq\frac{\mathrm{e}^{-2\pi% \mathrm{i}kl/M_{\sigma}}}{M_{\sigma}\hat{w}(-k)},\,l\in\mathbb{Z}\right\}$$ (4.5) for $k\in\mathbb{Z}$, where $x_{j}\in[-\frac{1}{2},\frac{1}{2})$ denote the nonequispaced nodes. Note that we changed time and frequency domain to match our notations in Section 2. 
Thereby, we obtain the inner product $$\langle\varphi_{j},\psi_{l}\rangle_{\ell_{2}}=\sum_{k=-\infty}^{\infty}\varphi_{j}(k)\,\overline{\psi_{l}(k)}=\sum_{k=-\infty}^{\infty}\frac{1}{M_{\sigma}\hat{w}(k)}\,\mathrm{e}^{-2\pi\mathrm{i}k\left(x_{j}-\frac{l}{M_{\sigma}}\right)}.$$ Truncating the infinite sum yields an approximation of the matrix $\boldsymbol{\Phi}$ in (4.3) by $$\boldsymbol{\Phi}_{\ell_{2}}=\bigg{(}\overline{K\hskip-2.0pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)}\bigg{)}_{j=1,\,l=-\frac{M_{\sigma}}{2}}^{N,\ \frac{M_{\sigma}}{2}-1}$$ (4.6) with the kernel $K(x)$ from (3.14). In the following explanations we choose $\boldsymbol{\Phi}=\boldsymbol{\Phi}_{\ell_{2}}$. Remark 13 In general, we do not have admissible frames for our known window functions $w$ because of the factor $\frac{1}{\hat{w}(k)}$, $k\in\mathbb{Z}$. Only for finite frames can the appropriate conditions be satisfied. In addition, it must be pointed out that it was already mentioned in [4] that for sampling patterns other than jittered equispaced nodes the admissibility condition may not hold, or even the conditions for constituting a frame may fail, cf. [11].       $\hfill\rule{6.45pt}{6.45pt}\\ $ 4.2.1 Theoretical results For these given frames we consider again the frame approximation (4.4). Our aim is to show that the inversion of the NFFT illustrated in Section 3.2 can also be expressed by means of a frame-theoretical approach, i. e., by approximating a function $\hat{f}$ in the frequency domain, cf. (2.3), and subsequently sampling at equispaced points $k=-\frac{M}{2},\dots,\frac{M}{2}-1$. The frame approximation of the function $\hat{f}$ is given by $$\tilde{\hat{f}}=\sum_{j=1}^{N}\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}\langle\hat{f},\varphi_{j}\rangle_{\ell_{2}}\,p_{l,j}\,\psi_{l}$$ (4.7) with $p_{l,j}$ as defined in (4.2). 
Hence, we now have two different methods to compute the Fourier coefficients $\hat{f}_{k}$ from given data $\langle\hat{f},\varphi_{j}\rangle\eqqcolon f_{j}$: the frame approximation (4.7) as well as the adjoint NFFT (2.8). In what follows, we suppose that a reconstruction via frames is achievable and use this to modify the adjoint NFFT so that this simple method can be used to invert the NFFT. Thus, we are looking for an approximation of the form $\tilde{h}_{k}\approx\tilde{\hat{f}}(k)\approx\hat{f}_{k}$,  $k=-\tfrac{M}{2},\dots,\tfrac{M}{2}-1$. To compare the adjoint NFFT and the frame approximation we first rewrite the approximation (2.8) of the adjoint NFFT by analogy with [11]. This yields $$\tilde{h}_{k}=\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}c_{l}\,\psi_{l}(k),\quad k=-\tfrac{M}{2},\dots,\tfrac{M}{2}-1,$$ (4.8) with the coefficient vector $$\boldsymbol{c}\coloneqq\left(c_{l}\right)_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}=\left(\sum_{j=1}^{N}f_{j}\,\tilde{w}_{m}\hskip-2.75pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)\right)_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}=\boldsymbol{B}^{*}\boldsymbol{f},$$ (4.9) where $\boldsymbol{f}\coloneqq\left(f_{j}\right)_{j=1}^{N}=(\langle\hat{f},\varphi_{j}\rangle_{\ell_{2}})_{j=1}^{N}.$ Likewise we rewrite (4.7), cf.
[11], as $$\tilde{\tilde{h}}_{k}\coloneqq\tilde{\hat{f}}(k)=\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}d_{l}\,\psi_{l}(k),\quad k=-\tfrac{M}{2},\dots,\tfrac{M}{2}-1,$$ (4.10) with $\boldsymbol{d}\coloneqq\left(d_{l}\right)_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}=\boldsymbol{\Phi}^{\dagger}\boldsymbol{f}.$ Furthermore, we define the vectors $\tilde{\boldsymbol{h}}\coloneqq(\tilde{h}_{k})_{k=-\frac{M}{2}}^{\frac{M}{2}-1}$ and $\tilde{\tilde{\boldsymbol{h}}}\coloneqq(\tilde{\tilde{h}}_{k})_{k=-\frac{M}{2}}^{\frac{M}{2}-1}$ as well as the matrix $\boldsymbol{\Psi}\coloneqq(\psi_{l}(k))_{k=-\frac{M}{2},\,l=-\frac{M_{\sigma}}{2}}^{\frac{M}{2}-1,\ \frac{M_{\sigma}}{2}-1}$. Thereby, (4.8) and (4.10) can be represented as $\tilde{\boldsymbol{h}}=\boldsymbol{\Psi}\boldsymbol{c}$ and $\tilde{\tilde{\boldsymbol{h}}}=\boldsymbol{\Psi}\boldsymbol{d}$. Hence, we can now estimate the difference between the two approximations. Theorem 1 Let $\hat{\boldsymbol{w}}\coloneqq\left((\hat{w}(-k))^{-1}\right)_{k=-\frac{M}{2}}^{\frac{M}{2}-1}$ be a vector satisfying $\|\hat{\boldsymbol{w}}\|_{2}<\infty$. Then the following estimates hold. (i) For $M_{\sigma}<N$ we have $$\left\|\tilde{\boldsymbol{h}}-\tilde{\tilde{\boldsymbol{h}}}\right\|_{2}\leq\frac{1}{\sqrt{M_{\sigma}}}\left\|\hat{\boldsymbol{w}}\right\|_{2}\,\|\boldsymbol{\Phi}\boldsymbol{B}^{*}-\boldsymbol{I}_{N}\|_{\textrm{F}}\,\|\boldsymbol{\Phi}^{\dagger}\boldsymbol{f}\|_{2}.$$ (4.11) (ii) For $M_{\sigma}>N$ we have $$\left\|\tilde{\boldsymbol{h}}-\tilde{\tilde{\boldsymbol{h}}}\right\|_{2}\leq\frac{1}{\sqrt{M_{\sigma}}}\left\|\hat{\boldsymbol{w}}\right\|_{2}\,\|\boldsymbol{B}^{*}\boldsymbol{\Phi}-\boldsymbol{I}_{M_{\sigma}}\|_{\textrm{F}}\,\|\boldsymbol{\Phi}^{\dagger}\boldsymbol{f}\|_{2},$$ (4.12) where $\boldsymbol{B}^{*}$ denotes the adjoint matrix of (2.6) and $\boldsymbol{\Phi}=\boldsymbol{\Phi}_{\ell_{2}}$ is given as in (4.6).
$\hfill\rule{6.45pt}{6.45pt}\\ $ Proof By analogy with [11], Definitions (4.8) and (4.10) imply $$\begin{split}\displaystyle\left\|\tilde{\boldsymbol{h}}-\tilde{\tilde{\boldsymbol{h}}}\right\|_{2}=\left\|\boldsymbol{\Psi}\boldsymbol{c}-\boldsymbol{\Psi}\boldsymbol{d}\right\|_{2}\leq\left\|\boldsymbol{\Psi}\right\|_{\mathrm{F}}\left\|\boldsymbol{c}-\boldsymbol{d}\right\|_{2}=\sqrt{\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}|\psi_{l}(k)|^{2}}\,\cdot\left\|\boldsymbol{c}-\boldsymbol{d}\right\|_{2}\\ \displaystyle=\frac{1}{M_{\sigma}}\sqrt{\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\left|\frac{1}{\hat{w}(-k)}\right|^{2}\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}\underbrace{\left|\mathrm{e}^{-2\pi\mathrm{i}kl/M_{\sigma}}\right|^{2}}_{\leq 1}}\,\cdot\left\|\boldsymbol{c}-\boldsymbol{d}\right\|_{2}\leq\frac{1}{\sqrt{M_{\sigma}}}\left\|\hat{\boldsymbol{w}}\right\|_{2}\left\|\boldsymbol{c}-\boldsymbol{d}\right\|_{2}.\end{split}$$ Next we consider the norm $\|\boldsymbol{c}-\boldsymbol{d}\|_{2}$ separately.
(i) For $M_{\sigma}<N$ we have by (4.9) and (4.10) that $$\boldsymbol{c}-\boldsymbol{d}=\big{(}\boldsymbol{B}^{*}-\boldsymbol{\Phi}^{\dagger}\big{)}\boldsymbol{f}=\big{(}\boldsymbol{B}^{*}-(\boldsymbol{\Phi}^{*}\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^{*}\big{)}\boldsymbol{f}=\big{(}(\boldsymbol{\Phi}^{*}\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^{*}\big{)}(\boldsymbol{\Phi}\boldsymbol{B}^{*}-\boldsymbol{I}_{N})\boldsymbol{f}.$$ This leads to $$\|\boldsymbol{c}-\boldsymbol{d}\|_{2}\leq\|\boldsymbol{\Phi}\boldsymbol{B}^{*}-\boldsymbol{I}_{N}\|_{\textrm{F}}\,\|\big{(}(\boldsymbol{\Phi}^{*}\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^{*}\big{)}\boldsymbol{f}\|_{2}\leq\|\boldsymbol{\Phi}\boldsymbol{B}^{*}-\boldsymbol{I}_{N}\|_{\textrm{F}}\,\|\boldsymbol{\Phi}^{\dagger}\boldsymbol{f}\|_{2}.$$ (ii) In analogy, for $M_{\sigma}>N$ we have that $$\boldsymbol{c}-\boldsymbol{d}=\big{(}\boldsymbol{B}^{*}-\boldsymbol{\Phi}^{\dagger}\big{)}\boldsymbol{f}=\big{(}\boldsymbol{B}^{*}-\boldsymbol{\Phi}^{*}(\boldsymbol{\Phi}\boldsymbol{\Phi}^{*})^{-1}\big{)}\boldsymbol{f}=(\boldsymbol{B}^{*}\boldsymbol{\Phi}-\boldsymbol{I}_{M_{\sigma}})\big{(}\boldsymbol{\Phi}^{*}(\boldsymbol{\Phi}\boldsymbol{\Phi}^{*})^{-1}\big{)}\boldsymbol{f}$$ and thereby $$\|\boldsymbol{c}-\boldsymbol{d}\|_{2}\leq\|\boldsymbol{B}^{*}\boldsymbol{\Phi}-\boldsymbol{I}_{M_{\sigma}}\|_{\textrm{F}}\,\|\big{(}\boldsymbol{\Phi}^{*}(\boldsymbol{\Phi}\boldsymbol{\Phi}^{*})^{-1}\big{)}\boldsymbol{f}\|_{2}\leq\|\boldsymbol{B}^{*}\boldsymbol{\Phi}-\boldsymbol{I}_{M_{\sigma}}\|_{\textrm{F}}\,\|\boldsymbol{\Phi}^{\dagger}\boldsymbol{f}\|_{2}.$$ ${}_{\blacksquare}$ 4.2.2 Optimization Our aim is to minimize the distances in (4.11) and (4.12) in order to modify the adjoint NFFT such that it achieves an inversion of the NFFT. To this end, we suppose we are given the nodes $x_{j}$ as well as the frames $\{\varphi_{j}\}$ and $\{\psi_{l}\}$, and thereby the matrix $\boldsymbol{\Phi}=\boldsymbol{\Phi}_{\ell_{2}}$.
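The first step of the preceding proof, $\|\boldsymbol{\Psi}(\boldsymbol{c}-\boldsymbol{d})\|_{2}\leq\|\boldsymbol{\Psi}\|_{\mathrm{F}}\|\boldsymbol{c}-\boldsymbol{d}\|_{2}$, only uses the fact that the Frobenius norm dominates the spectral norm. A minimal numerical sanity check with random data (an assumption-free toy, not tied to the actual NFFT matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
M, M_sigma = 8, 12
Psi = rng.standard_normal((M, M_sigma)) + 1j * rng.standard_normal((M, M_sigma))
c = rng.standard_normal(M_sigma)
d = rng.standard_normal(M_sigma)

# ||Psi c - Psi d||_2 <= ||Psi||_F ||c - d||_2, since the Frobenius norm
# dominates the spectral (operator) norm of Psi.
lhs = np.linalg.norm(Psi @ (c - d))
rhs = np.linalg.norm(Psi, 'fro') * np.linalg.norm(c - d)
assert lhs <= rhs + 1e-12
```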
Thus, our purpose is to improve the approximation of the adjoint NFFT by modifying the matrix $\boldsymbol{B}^{*}$. Connection to the first approach Firstly, we consider the case $M_{\sigma}<N$. Minimizing the distance in (4.11) yields the optimization problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{\Phi}\boldsymbol{B}^{*}-\boldsymbol{I}_{N}\|_{\textrm{F}}^{2},$$ (4.13) which is of a form quite similar to those seen in Section 3.2. To solve this problem we take a closer look at the matrix $\boldsymbol{\Phi}\boldsymbol{B}^{*}$. By Definitions (2.6) and (4.6) we obtain $$\boldsymbol{\Phi}\boldsymbol{B}^{*}=\left[\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\frac{1}{M_{\sigma}\hat{w}(k)}\,\mathrm{e}^{-2\pi\mathrm{i}k\left(x_{j}-\frac{l}{M_{\sigma}}\right)}\,\tilde{w}_{m}\hskip-2.75pt\left(x_{h}-\tfrac{l}{M_{\sigma}}\right)\right]_{j,\,h=1}^{N}.$$ (4.14) In addition, we consider analogously to (3.30) $$\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}=\left[\sum_{l=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\frac{1}{M_{\sigma}\hat{w}(k)}\,\mathrm{e}^{-2\pi\mathrm{i}k\left(x_{j}-\frac{l}{M_{\sigma}}\right)}\,\tilde{w}_{m}\hskip-2.75pt\left(x_{h}-\tfrac{l}{M_{\sigma}}\right)\right]_{h,\,j=1}^{N}.$$ (4.15) Comparing these matrices we recognize that (4.14) is exactly the transpose of (4.15), i.
e., $\boldsymbol{\Phi}\boldsymbol{B}^{*}=\left(\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\right)^{T}=\left(\boldsymbol{B}\boldsymbol{K}^{*}\right)^{T}.$ Thereby, (4.13) is equivalent to the problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}-\boldsymbol{I}_{N}\|_{\textrm{F}}^{2}$$ and can be solved as already seen in Section 3.2.1. Note that the objective is slightly different, since now we seek an approximation of the form $\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\approx\boldsymbol{I}_{N}$ instead of $\boldsymbol{B}\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\approx M\boldsymbol{I}_{N}$. However, the constant does not change the method and thus the same fast algorithm can be used. Connection to the second approach For $M_{\sigma}>N$ we consider the estimate (4.12), whose minimization leads to the optimization problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{B}^{*}\boldsymbol{\Phi}-\boldsymbol{I}_{M_{\sigma}}\|_{\textrm{F}}^{2}.$$ (4.16) Again we take a closer look at the corresponding matrix $$\boldsymbol{B}^{*}\boldsymbol{\Phi}=\left[\sum_{j=1}^{N}\sum_{k=-\frac{M}{2}}^{\frac{M}{2}-1}\frac{1}{M_{\sigma}\hat{w}(k)}\,\mathrm{e}^{-2\pi\mathrm{i}k\left(x_{j}-\frac{s}{M_{\sigma}}\right)}\,\tilde{w}_{m}\hskip-2.75pt\left(x_{j}-\tfrac{l}{M_{\sigma}}\right)\right]_{l,\,s=-\frac{M_{\sigma}}{2}}^{\frac{M_{\sigma}}{2}-1}$$ (4.17) and additionally consider the matrix from (3.30). Once more, a comparison of (4.17) and (3.30) yields $\boldsymbol{B}^{*}\boldsymbol{\Phi}=\left(\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{B}\right)^{T}=\left(\boldsymbol{K}^{*}\boldsymbol{B}\right)^{T},$ i. e., they are equal up to transposition.
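Both optimization problems share the same structure: since the Frobenius norm decouples over the columns of $\boldsymbol{B}$, each $(2m+1)$-sparse column can be obtained from an independent small least squares problem. The following is a hedged sketch only (a random stand-in replaces the kernel matrix $\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}$, and the nearest-neighbor support choice is hypothetical; the paper's actual matrices and sparsity pattern differ):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M_sigma, m = 10, 14, 2

# Random stand-in for the kernel matrix F D A^* (M_sigma x N); the true NFFT
# factors from (3.30) are not rebuilt here.
K = rng.standard_normal((M_sigma, N))

# ||K B - I||_F^2 decouples into one least-squares problem per column of B,
# each restricted to the (2m+1) rows allowed by the sparsity pattern.
B = np.zeros((N, M_sigma))
for l in range(M_sigma):
    # hypothetical support: the 2m+1 node indices "closest" to column l
    support = np.argsort(np.abs(np.arange(N) * M_sigma / N - l))[:2 * m + 1]
    e_l = np.eye(M_sigma)[:, l]
    B[support, l], *_ = np.linalg.lstsq(K[:, support], e_l, rcond=None)

residual = np.linalg.norm(K @ B - np.eye(M_sigma), ord='fro')
assert B.shape == (N, M_sigma)
assert np.isfinite(residual)
```

Because each subproblem involves only a $(2m+1)$-column submatrix, the total precomputation cost stays proportional to the number of columns times a small constant, which is the mechanism behind the complexities quoted in the conclusion.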
Hence, (4.16) is equivalent to the transposed problem $$\underset{\boldsymbol{B}\in\mathbb{R}^{N\times M_{\sigma}}\colon\boldsymbol{B}\,(2m+1)\text{-sparse }}{\text{Minimize }}\ \|\boldsymbol{F}\boldsymbol{D}\boldsymbol{A}^{*}\boldsymbol{B}-\boldsymbol{I}_{M_{\sigma}}\|_{\textrm{F}}^{2}$$ and can be solved as already discussed in Section 3.2.2. Therefore, we have shown that the frame-theoretical approach can be traced back to the methods for inverting the NFFT introduced in Section 3.2. In other words, the explanations in Section 4 can be seen as a different point of view on the problem of Section 3.2. Remark 14 Note that the method of [11] is based only on optimizing the diagonal matrix $\boldsymbol{D}$, whereas we used similar ideas to modify the sparse matrix $\boldsymbol{B}$.       $\hfill\rule{6.45pt}{6.45pt}\\ $ 5 Conclusion In the present paper we developed new direct methods for computing an inverse NFFT, i. e., for the reconstruction of $M$ Fourier coefficients $\hat{f}_{k}$ from $N$ given nonequispaced data $f_{j}$. Furthermore, solutions for the adjoint problem, the reconstruction of function values $f_{j}$ from given data $h_{k}$, were proposed. For both problems we derived efficient algorithms for the quadratic setting as well as for the overdetermined and underdetermined cases. In the quadratic setting we used a relation between two evaluations of a trigonometric polynomial, which can be deduced by means of Lagrange interpolation. Approximating the corresponding coefficients via the fast summation yields algorithms of complexity $\mathcal{O}(N\log N)$. The main idea for the overdetermined and underdetermined cases was the minimization of a certain Frobenius norm, so that the solution can be deduced by means of the least squares method.
All in all, we ended up with precomputation algorithms of complexity $\mathcal{O}(N^{2})$ and $\mathcal{O}(M^{2})$, respectively, whereas the algorithms for the inversion require only $\mathcal{O}(M\log M+N)$ arithmetic operations. Finally, we investigated an approach based on [11] considering frame approximation, which can be used to approximate a function $\hat{f}$ in the frequency domain and subsequently sample at equispaced points. By comparing this procedure to the adjoint NFFT we modified the latter to achieve an iNFFT. In so doing, we found that the resulting approaches can be traced back to the methods for the inversion introduced for the overdetermined and underdetermined cases. For the future it might be of interest to study for which distributions of nodes and which window functions the frame-theoretical approach is applicable. Moreover, a generalization of the presented methods to higher dimensions is the subject of ongoing research. Acknowledgements The first named author gratefully acknowledges the funding support from the European Union and the Free State of Saxony (ESF). References [1] A. P. Austin and L. N. Trefethen, Trigonometric interpolation and quadrature in perturbed points, SIAM J. Numer. Anal., 55 (2017), pp. 2113–2122. [2] G. Beylkin, On the fast Fourier transform of functions with singularities, Appl. Comput. Harmon. Anal., 2 (1995), pp. 363–381. [3] O. Christensen, An Introduction to Frames and Riesz Bases (Second Edition), Applied and Numerical Harmonic Analysis, Birkhäuser, Basel, 2016. [4] J. Davis, A. Gelb, and G. Song, A high-dimensional inverse frame operator approximation technique, SIAM J. Numer. Anal., 54 (2016), pp. 2282–2301, https://doi.org/10.1137/15M1047593. [5] A. J. W. Duijndam and M. A. Schonewille, Nonuniform fast Fourier transform, Geophysics, 64 (1999), pp. 539–551. [6] A. Dutt and V. Rokhlin, Fast Fourier transforms for nonequispaced data, SIAM J. Sci. Stat. Comput., 14 (1993), pp. 1368–1393. [7] A.
Dutt and V. Rokhlin, Fast Fourier transforms for nonequispaced data II, Appl. Comput. Harmon. Anal., 2 (1995), pp. 85–100. [8] H. G. Feichtinger, K. Gröchenig, and T. Strohmer, Efficient numerical methods in non-uniform sampling theory, Numer. Math., 69 (1995), pp. 423–440. [9] K. Fourmont, Non equispaced fast Fourier transforms with applications to tomography, J. Fourier Anal. Appl., 9 (2003), pp. 431–450. [10] A. Gelb and G. Song, Approximating the inverse frame operator from localized frames, Appl. Comput. Harm. Anal., 35 (2013), pp. 94–110, https://doi.org/10.1016/j.acha.2012.08.002. [11] A. Gelb and G. Song, A frame theoretic approach to the nonuniform fast Fourier transform, SIAM J. Numer. Anal., 52 (2014), pp. 1222–1242, https://doi.org/10.1137/13092160X. [12] L. Greengard and J.-Y. Lee, Accelerating the nonuniform fast Fourier transform, SIAM Rev., 46 (2004), pp. 443–454. [13] K. Gröchenig, Reconstruction algorithms in irregular sampling, Math. Comput., 59 (1992), pp. 181–194. [14] J. Keiner, S. Kunis, and D. Potts, NFFT 3.4.1, C and MATLAB subroutine library. http://www.tu-chemnitz.de/~potts/nfft. [15] J. Keiner, S. Kunis, and D. Potts, Using NFFT3 - a software library for various nonequispaced fast Fourier transforms, ACM Trans. Math. Software, 36 (2009), pp. Article 19, 1–30. [16] M. Kircheis, Die direkte inverse NFFT. Bachelorarbeit, Fakultät für Mathematik, Technische Universität Chemnitz, 2017. [17] S. Kunis and D. Potts, Stability results for scattered data interpolation by trigonometric polynomials, SIAM J. Sci. Comput., 29 (2007), pp. 1403–1419. [18] A. Nieslony and G. Steidl, Approximate factorizations of Fourier matrices with nonequispaced knots, Linear Algebra Appl., 266 (2003), pp. 337–351. [19] D. Potts and G. Steidl, Fast summation at nonequispaced knots by NFFTs, SIAM J. Sci. Comput., 24 (2003), pp. 2013–2037. [20] D. Potts, G. Steidl, and M. 
Tasche, Fast Fourier transforms for nonequispaced data: A tutorial, in Modern Sampling Theory: Mathematics and Applications, J. J. Benedetto and P. J. S. G. Ferreira, eds., Boston, MA, USA, 2001, Birkhäuser, pp. 247–270. [21] D. Ruiz-Antolin and A. Townsend, A nonuniform Fast Fourier Transform based on low rank approximation, SIAM J. Sci. Comput., 40 (2018), pp. A529–A547, https://doi.org/10.1137/17M1134822. [22] J. Selva, Efficient type-4 and type-5 non-uniform fft methods in the one-dimensional case, IET Signal Processing, 12 (2018), pp. 74–81, https://doi.org/10.1049/iet-spr.2016.0509. [23] G. Steidl, A note on fast Fourier transforms for nonequispaced grids, Adv. Comput. Math., 9 (1998), pp. 337–353.
Gravitational Waves from Double Hybrid Inflation G. Lazarides [email protected] School of Electrical and Computer Engineering, Faculty of Engineering, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece    C. Panagiotakopoulos [email protected] School of Rural and Surveying Engineering, Faculty of Engineering, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece (November 20, 2020) Abstract We present a two stage hybrid inflationary scenario in non-minimal supergravity which can predict values of the tensor-to-scalar ratio of the order of ${\rm few}\times 10^{-2}$. For the parameters considered, the underlying supersymmetric particle physics model possesses two inflationary paths, the trivial and the semi-shifted one. The trivial path is stabilized by supergravity corrections and supports a first stage of inflation with a limited number of e-foldings. The tensor-to-scalar ratio can become appreciable while the value of the scalar spectral index remains acceptable as a result of the competition between the relatively mild supergravity corrections and the strong radiative corrections to the inflationary potential. The additional number of e-foldings required for solving the puzzles of hot big bang cosmology are generated by a second stage of inflation taking place along the semi-shifted path. This is possible only because the semi-shifted path is almost perpendicular to the trivial one and, thus, not affected by the strong radiative corrections along the trivial path and also because the supergravity effects remain mild. At the end of inflation, cosmic strings are produced, which may contribute to the primordial curvature perturbation. The requirement that this contribution be restricted to an acceptable level limits the possible values of the tensor-to-scalar ratio not to exceed about $3\times 10^{-2}$. pacs: 98.80.Cq ††preprint: UT-STPD-15/02 I Introduction Inflation (for a review see e.g. Ref. 
lectures ) is by now considered to be an integral part of standard cosmology thanks to a plethora of precise observations on the cosmic microwave background radiation (CMBR) and the large-scale structure in the universe. Therefore, it is very important to construct realistic inflationary models based on particle theory and consistent with all the available cosmological and phenomenological requirements. Undoubtedly, hybrid inflation linde is one of the most promising inflationary scenarios. It is cop ; dss naturally realized in the context of supersymmetric (SUSY) grand unified theory (GUT) models based on gauge groups with rank greater than or equal to five. In standard SUSY hybrid inflation, however, the GUT gauge symmetry is spontaneously broken only at the end of inflation and, thus, if magnetic monopoles are predicted by this symmetry breaking, they are copiously produced smooth , leading to a cosmological catastrophe. This disaster is avoided in the smooth smooth or shifted shift variants of SUSY hybrid inflation, where the GUT gauge symmetry is broken already during inflation. These variants were based on non-renormalizable superpotential terms. It was, though, subsequently shown that a new smooth nsmooth and a new shifted nshift hybrid inflation scenario can be constructed with only renormalizable superpotential terms within an extended Pati-Salam (PS) SUSY GUT model, which was initially introduced quasi for solving a very different problem. Namely, the simplest SUSY PS model predicts (see Ref. hw ) exact Yukawa unification als and, if it is supplemented with universal boundary conditions, yields unacceptable $b$-quark mass values. This problem is solved in the extended model, where Yukawa unification is naturally and moderately violated. 
After the first accurate measurement wmap07 of the scalar spectral index $n_{\rm s}$, however, it has been realized that there is a tension between all these well-motivated and natural inflationary scenarios and the measured value of this index. Indeed, within the standard power-law cosmological model with cold dark matter and a cosmological constant, the data imply that $n_{\rm s}$ is clearly lower than unity – for the latest results on $n_{\rm s}$ see Ref. planck15 . Inflationary scenarios, on the other hand, such as the ones mentioned above, within supergravity (SUGRA) with minimal Kähler potential, yield senoguz $n_{\rm s}$’s which are very close to unity or even exceed it. One idea mhin for reducing the predicted spectral index is based on the observation that $n_{\rm s}$ generally decreases with the number of e-foldings suffered by our present horizon scale during inflation. So, reducing this number of e-foldings, we can achieve values of $n_{\rm s}$ compatible with the recent data without having to abandon the use of a minimal Kähler potential. The additional number of e-foldings required for solving the horizon and flatness problems of standard hot big bang cosmology can be provided by a subsequent second stage of inflation. It is interesting to note that the extended SUSY PS model of Ref. quasi , which can lead to new smooth nsmooth or new shifted nshift hybrid inflation, can also provide us with a double inflation scenario called standard-smooth hybrid inflation stsmhi which solves the above mentioned spectral index problem along the lines just discussed. The cosmological scales exit the horizon during the main stage of inflation, which is of the standard hybrid type and occurs as the system slowly rolls down a trivial classically flat direction on which the PS gauge group is unbroken. 
This direction is subsequently destabilized giving its place to a classically non-flat valley of minima along which new smooth hybrid inflation takes place with the PS GUT gauge group being broken. Consequently, magnetic monopoles are produced only at the end of the first stage of inflation, but they are adequately diluted by the second stage, which also provides the extra e-foldings needed for solving the puzzles of hot big bang cosmology. After the recent results of BICEP2 bicep2 on the B-mode in the polarization of the CMBR at degree angular scales, it seems possible that the inflationary scenarios will have to face a new challenge. Namely, they should be able to accommodate appreciable values of the tensor-to-scalar ratio $r$, since a B-mode of primordial origin could be due to the production of gravitational waves during inflation. We should, however, consider this possibility with reservation since some serious criticism criticism to the original BICEP2 analysis has already appeared claiming that the foreground from Galactic polarized-dust emission has been underestimated. On the other hand, after the recently released Planck HFI 353 GHz dust polarization data planck , the first attempts to make a joint analysis of the Planck and BICEP2 data have been presented joint1 ; joint2 . They showed that, although $r$ is smaller than initially claimed, significant values of $r$ – of order 0.01 – cannot be excluded. The most recent joint analysis joint2 yields an upper limit on $r$ of about 0.12 at $95\%$ confidence level. Unfortunately, all the above mentioned variants of SUSY hybrid inflation predict negligible values of $r$. So, it is certainly worth investigating whether realistic SUSY hybrid inflation models accommodating appreciable values of $r$ can be constructed. In Ref. seto , a double inflation scenario has been proposed which is compatible with the BICEP2 data bicep2 . 
The first stage of inflation is of the SUSY hybrid type, while the second stage is left unspecified, which makes the scenario incomplete. The inflationary potential is supplemented with a mass-squared term for the inflaton attributed to SUGRA corrections and with a logarithmic term representing very strong radiative corrections due to the SUSY breaking during inflation. It is the competition between these two contributions which allows appreciable values of $r$ while $n_{\rm s}$ remains acceptable. The assumption, however, that an inflaton mass-squared term is the only relevant SUGRA correction during inflation in a scenario with Planck-scale inflaton field values seems totally unjustified. In addition, this paper follows the usual practice of only taking into account the radiative corrections in the derivatives of the inflationary potential when calculating the slow-roll parameters and neglecting them when calculating the potential itself. In the case of extremely strong radiative corrections, however, this may lead to erroneous results. Also, Ref. ck attempts to accommodate the BICEP2 results in a double hybrid inflation model where the inflaton potential changes dynamically with the evolution of the inflaton fields. The particular implementation of this interesting idea, though, appears to have some problems of naturalness in the design of the superpotential. In addition, the treatment of SUGRA seems to be incomplete. Finally, Ref. rsw shows that, in SUSY hybrid inflation models, it is possible to obtain values of $r$ close to $0.03$ by employing an expansion of a non-minimal Kähler potential with appropriate coefficients. The validity of this approach may, however, be questionable since the inflaton takes values close to the Planck scale. In this paper, we will show that a reduced version of the extended SUSY PS model of Ref. quasi can yield a two stage inflationary scenario which can predict values of $r$ up to about $0.03$. 
In the range of the model parameters considered here, the model in the global SUSY limit possesses practically two classically flat directions, namely the trivial and the semi-shifted semi one. After including SUGRA corrections, the trivial path, on which the full GUT gauge group is unbroken, is stabilized and a first stage of inflation can occur as the system slowly rolls down this path. All the cosmological scales exit the horizon during this stage and our present horizon undergoes a limited number of e-foldings. The obtained tensor-to-scalar ratio can be appreciable while the scalar spectral index assumes acceptable values thanks to the competing effect of the sufficiently mild SUGRA corrections resulting from the construction of Ref. pana and the strong radiative corrections to the inflationary potential. Subsequently, a second stage of inflation takes place along the semi-shifted path, where $U(1)_{B-L}$ remains unbroken, and provides us with the additional number of e-foldings required for solving the standard problems of hot big bang cosmology. This is possible since, for our choice of parameters, the semi-shifted direction is almost perpendicular to the trivial path and, thus, is not affected by the strong radiative corrections along the trivial path. It is also important that the SUGRA corrections on the semi-shifted path are kept sufficiently mild again by the mechanism of Ref. pana . After the end of inflation, the system falls into the vacuum and $U(1)_{B-L}$ breaks spontaneously leading to the production of $B-L$ cosmic strings. In order to keep the contribution of these strings to the primordial curvature perturbation at an acceptable level, one must impose strings an upper bound on the vacuum expectation values (VEVs) which break $U(1)_{B-L}$. This limits somewhat the possible values of the tensor-to-scalar ratio, but values of order ${\rm few}\times 10^{-2}$ can be easily obtained. We first present, in Sec. 
II, the salient features of the model in global SUSY. In Sec. III, we then calculate the SUGRA and one-loop radiative corrections to the potential and discuss our double inflationary scenario. Finally, in Sec. IV, we summarize our conclusions. Throughout, we will use units where the reduced Planck mass $m_{\rm P}=2.4355\times 10^{18}~{}{\rm GeV}$ is equal to unity. II The model in global SUSY We consider a reduced version of the extended SUSY PS model of Ref. quasi . This version is based on the left-right symmetric gauge group $G_{\rm LR}=SU(3)_{c}\times SU(2)_{\rm L}\times SU(2)_{\rm R}\times U(1)_{B-L}$, which is a subgroup of the PS group. The superfields of the model which are relevant for inflation are the following: a gauge singlet $S$; a conjugate pair of superfields $\Phi$, $\bar{\Phi}$ belonging to the $(1,1,3)_{0}$ representation of $G_{\rm LR}$; and a conjugate pair of Higgs superfields $H$ and $\bar{H}$ belonging to the $(1,1,2)_{1}$ and $(1,1,2)_{-1}$ representations of $G_{\rm LR}$, respectively. The field $\Phi$ acquires a VEV which breaks $G_{\rm LR}$ to $G_{\rm SM}\times U(1)_{B-L}$, while the VEVs of $H$ and $\bar{H}$ cause the breaking of $G_{\rm LR}$ to the standard model (SM) gauge group $G_{\rm SM}$. The full superfield content and superpotential, the global symmetries, and the charge assignments can be easily derived from the extended SUSY PS model of Ref. quasi by simply reducing its GUT gauge group to $G_{\rm LR}$. The only global symmetry of the model which is relevant here is its $U(1)$ R symmetry under which $S$ and $\bar{\Phi}$ have charge 1, with all the other superfields mentioned above being neutral. The superpotential terms relevant for inflation are $$W=\kappa S\left(M^{2}-\Phi^{2}\right)-\gamma SH\bar{H}+m\Phi\bar{\Phi}-\lambda\bar{\Phi}H\bar{H},$$ (1) where $M$, $m$ are superheavy masses and $\kappa$, $\gamma$, $\lambda$ are dimensionless coupling constants.
These parameters are normalized so that they correspond to the couplings between the SM singlet components of the superfields. The mass parameters $M$, $m$ and any two of the three dimensionless parameters $\kappa$, $\gamma$, $\lambda$ can always be made real and positive by appropriately redefining the phases of the superfields. The third dimensionless parameter, however, remains generally complex. For definiteness, we will choose this parameter to be real and positive too. The F–term scalar potential obtained from the superpotential in Eq. (1) is given by $$V^{0}_{F}=|\kappa(M^{2}-\Phi^{2})-\gamma H\bar{H}|^{2}+|m\bar{\Phi}-2\kappa S\Phi|^{2}+|m\Phi-\lambda H\bar{H}|^{2}+|\gamma S+\lambda\bar{\Phi}\,|^{2}\left(|H|^{2}+|\bar{H}|^{2}\right),$$ (2) where the complex scalar fields which belong to the SM singlet components of the superfields are denoted by the same symbol. From this potential and the vanishing of the D–terms (which implies that $\bar{H}^{*}=e^{i\theta}H$), one finds semi two distinct continua of SUSY vacua: $$\Phi=\Phi_{+},\,\,\bar{H}^{*}=H,\,\,|H|=\sqrt{\frac{m\Phi_{+}}{\lambda}},\,\,S=\bar{\Phi}=0,$$ (3) $$\Phi=\Phi_{-},\,\,\bar{H}^{*}=-H,\,\,|H|=\sqrt{\frac{-m\Phi_{-}}{\lambda}},\,\,S=\bar{\Phi}=0,$$ (4) where $$\Phi_{\pm}\equiv\pm M\sqrt{1+\left(\frac{\gamma m}{2\kappa\lambda M}\right)^{2}}-\frac{\gamma m}{2\kappa\lambda}.$$ (5) The potential in Eq. (2), generally, possesses semi three flat directions. The first one is the usual trivial flat direction at $$\Phi=\bar{\Phi}=H=\bar{H}=0$$ (6) with $$V^{0}_{F}=V_{\text{tr}}\equiv\kappa^{2}M^{4}.$$ (7) On this direction, $G_{\rm LR}$ is unbroken.
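The vacua (3) and (4) can be verified symbolically: inserting them into the potential (2) with $S=\bar{\Phi}=0$ makes every F-term vanish, with $\Phi_{\pm}$ as in (5). A short SymPy check (our own verification sketch, not part of the original paper; note that in the vacuum (4) one has $H\bar{H}=-|H|^{2}=m\Phi_{-}/\lambda$):

```python
import sympy as sp

kappa, gamma, lam, m, M = sp.symbols('kappa gamma lambda m M', positive=True)

a = gamma * m / (2 * kappa * lam)
Phi_p = sp.sqrt(M**2 + a**2) - a          # Phi_+ of Eq. (5)
Phi_m = -sp.sqrt(M**2 + a**2) - a         # Phi_-

# In both vacua S = Phi_bar = 0 and H Hbar = m Phi_pm / lambda, so only the
# first and third F-terms of Eq. (2) need checking; the others vanish trivially.
for Phi in (Phi_p, Phi_m):
    HHbar = m * Phi / lam
    F1 = kappa * (M**2 - Phi**2) - gamma * HHbar
    F3 = m * Phi - lam * HHbar
    assert sp.expand(F1) == 0
    assert sp.expand(F3) == 0
```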
The second one, which appears at $$\Phi=-\frac{\gamma m}{2\kappa\lambda},\quad\bar{\Phi}=-\frac{\gamma}{\lambda}\,S,\quad H\bar{H}=\frac{\kappa\gamma(M^{2}-\Phi^{2})+\lambda m\Phi}{\gamma^{2}+\lambda^{2}},\quad V^{0}_{F}=V_{\text{nsh}}\equiv\kappa^{2}M^{4}\left(\frac{\lambda^{2}}{\gamma^{2}+\lambda^{2}}\right)\left(1+\frac{\gamma^{2}m^{2}}{4\kappa^{2}\lambda^{2}M^{2}}\right)^{2},$$ (8) is the trajectory for the new shifted hybrid inflation nshift . On this direction, $G_{\rm LR}$ is broken to $G_{\rm SM}$. The third flat direction, which exists only if $M^{2}>m^{2}/2\kappa^{2}$, lies at $$\Phi=\pm\,M\sqrt{1-\frac{m^{2}}{2\kappa^{2}M^{2}}},\quad\bar{\Phi}=\frac{2\kappa\Phi}{m}\,S,\quad H=\bar{H}=0.$$ (9) It is the path along which semi-shifted hybrid inflation semi takes place with $$V^{0}_{F}=V_{\text{ssh}}\equiv m^{2}M^{2}\left(1-\frac{m^{2}}{4\kappa^{2}M^{2}}\right).$$ (10) Along this direction $G_{\rm LR}$ is broken to $G_{\rm SM}\times U(1)_{B-L}$. We choose to consider the case where $M^{2}>m^{2}/2\kappa^{2}$ and, thus, the semi-shifted flat direction exists. One can show – see Ref. semi – that, in this case, we always have $V_{\text{ssh}}<V_{\text{nsh}}$ and $V_{\text{ssh}}<V_{\text{tr}}$. Therefore, the semi-shifted flat direction, if it exists, always lies lower than both the trivial and the new shifted one. On the other hand, the new shifted flat direction may either lie lower or higher than the trivial one depending on the values of the parameters. Here we will take $\kappa\sim 1$, $\gamma\ll\lambda\ll\kappa$, $m\ll M$, and $|S|<1$. In this case, the new shifted flat direction practically coincides with the trivial one and, thus, plays no independent role in our scheme. III The double inflationary scenario In this section, we will show that, after including SUGRA corrections, the trivial path becomes stable for large absolute values of the real canonically normalized inflaton.
Thus, it can support a first stage of inflation during which the universe undergoes a number of e-foldings which, although limited, is adequately large for all the cosmological scales to exit the horizon. Strong radiative corrections to the inflationary potential, which are controlled by the parameter $\kappa$, in conjunction with mild SUGRA corrections then guarantee that an appreciable value of the tensor-to-scalar ratio can be achieved together with an acceptable value of the scalar spectral index. A subsequent second stage of inflation along the semi-shifted path can provide us with the additional number of e-foldings required for solving the horizon and flatness problems of the standard hot big bang cosmology. This is possible since, for the parameters chosen, this direction is almost orthogonal to the trivial path and, thus, it is not affected by the strong radiative corrections present during the first stage of inflation. In this connection, it is also important that the SUGRA corrections on the semi-shifted path remain mild. After the end of the second inflationary stage, the system falls into the vacuum and $U(1)_{B-L}$ breaks, leading to the production of cosmic strings. This puts an upper limit strings on the VEVs which break $U(1)_{B-L}$ since, for larger VEVs, the contribution of these strings to the primordial curvature perturbation would be unacceptably large. As a consequence, the possible values of the tensor-to-scalar ratio, in our model, cannot be very large. However, values of the order of ${\rm few}\times 10^{-2}$ can be readily obtained.
III.1 The first inflationary stage

We adopt here the following Kähler potential $$\displaystyle K$$ $$\displaystyle=$$ $$\displaystyle-\ln\left(1-|S|^{2}\right)-\ln\left(1-|\bar{\Phi}|^{2}\right)+|\Phi|^{2}+|H|^{2}$$ (11) $$\displaystyle+|\bar{H}|^{2}-2\ln\left(-\ln|Z_{1}|^{2}\right)+|Z_{2}|^{2}$$ ($|S|,\,|\bar{\Phi}|<1,\,0<|Z_{1}|<1$), where we included two extra $G_{\rm LR}$ singlet superfields $Z_{1}$ and $Z_{2}$, which do not enter the superpotential at all because they transform non-trivially under additional anomalous $U(1)$ gauge symmetries. The resulting F–term potential in SUGRA is given by $$V_{F}=\left[\sum_{i}|W_{X_{i}}+K_{X_{i}}W|^{2}K_{X_{i}{X_{i}}^{*}}^{-1}-3|W|^{2}\right]e^{K},$$ (12) where a subscript $X_{i}$ denotes derivation with respect to the field $X_{i}$ and the sum extends over all the seven fields $S,\,\bar{\Phi},\,\Phi,\,H,\,\bar{H},\,Z_{1},\,Z_{2}$. The values of $Z_{1}$ and $Z_{2}$ are assumed to be fixed pana by anomalous D–terms. Note that the superfields $S,\,\bar{\Phi},\,Z_{1}$ possess Kähler potentials of the no-scale type which for $Z_{2}=0$, in view of the relation $$|K_{Z_{1}}|^{2}K_{Z_{1}{Z_{1}}^{*}}^{-1}=2,$$ (13) guarantee the exact flatness of the potential along the trivial path pana and its approximate flatness on the semi-shifted one – see below. These paths are, respectively, parametrized by the complex inflatons $S$ and $\bar{\Phi}$ (approximately). However, as we shall see – cf. Ref. pana – , the relation $$|K_{Z_{2}}|^{2}K_{Z_{2}{Z_{2}}^{*}}^{-1}=|Z_{2}|^{2}\equiv\beta$$ (14) implies that these inflatons acquire masses proportional to $\beta$ as soon as the value of $Z_{2}$ becomes non-zero. Using the R and $U(1)_{B-L}$ symmetries of the model, we can rotate $S$ and $H$ on the real axis – cf. e.g. Ref. semi . The fields $\bar{\Phi},\,H,\,\bar{H}$ remain in general complex.
However, for simplicity, we will also restrict them on the real axis and define the canonically normalized real scalar fields $\sigma,\,\bar{\phi},\,\phi,\,h,\,\bar{h}$ corresponding to the Kähler potential in Eq. (11) as follows – cf. Ref. pana – : $$S=\tanh\frac{\sigma}{\sqrt{2}},\quad\bar{\Phi}=\tanh\frac{\bar{\phi}}{\sqrt{2}},$$ (15) $$\Phi=\frac{\phi}{\sqrt{2}},\quad H=\frac{h}{\sqrt{2}},\quad\bar{H}=\frac{\bar{h}}{\sqrt{2}}.$$ (16) We can now evaluate the potential $V_{F}$ in Eq. (12) with the overall factor $\exp{\left[-2\ln\left(-\ln|Z_{1}|^{2}\right)+|Z_{2}|^{2}\right]}$ absorbed into redefined parameters $\kappa$, $\gamma$, $m$, and $\lambda$ and find $$\displaystyle V_{F}$$ $$\displaystyle=$$ $$\displaystyle\left[A_{1}^{2}\cosh^{2}\frac{\bar{\phi}}{\sqrt{2}}-A_{2}^{2}\sinh^{2}\frac{\bar{\phi}}{\sqrt{2}}+\beta A_{3}^{2}+A_{4}^{2}+A_{5}^{2}\right.$$ (17) $$\displaystyle+\left.\frac{1}{2}\left(h^{2}+\bar{h}^{2}\right)A_{6}^{2}+\frac{1}{2}\left(\phi^{2}+h^{2}+\bar{h}^{2}\right)A_{3}^{2}\right.$$ $$\displaystyle+\left.\left(\sqrt{2}\phi A_{5}-2h\bar{h}A_{6}\right)A_{3}\right]e^{\frac{1}{2}\left(\phi^{2}+h^{2}+\bar{h}^{2}\right)}.$$ Here $$A_{1}=\kappa\left(M^{2}-\frac{\phi^{2}}{2}\right)-\frac{\gamma}{2}h\bar{h},$$ (18) $$A_{2}=m\frac{\phi}{\sqrt{2}}-\frac{\lambda}{2}h\bar{h},$$ (19) $$A_{3}=A_{1}\sinh\frac{\sigma}{\sqrt{2}}\cosh\frac{\bar{\phi}}{\sqrt{2}}+A_{2}\cosh\frac{\sigma}{\sqrt{2}}\sinh\frac{\bar{\phi}}{\sqrt{2}},$$ (20) $$A_{4}=A_{1}\sinh\frac{\sigma}{\sqrt{2}}\sinh\frac{\bar{\phi}}{\sqrt{2}}+A_{2}\cosh\frac{\sigma}{\sqrt{2}}\cosh\frac{\bar{\phi}}{\sqrt{2}},$$ (21) $$A_{5}=m\cosh\frac{\sigma}{\sqrt{2}}\sinh\frac{\bar{\phi}}{\sqrt{2}}-\sqrt{2}\kappa\phi\sinh\frac{\sigma}{\sqrt{2}}\cosh\frac{\bar{\phi}}{\sqrt{2}},$$ (22) and $$A_{6}=\gamma\sinh\frac{\sigma}{\sqrt{2}}\cosh\frac{\bar{\phi}}{\sqrt{2}}+\lambda\cosh\frac{\sigma}{\sqrt{2}}\sinh\frac{\bar{\phi}}{\sqrt{2}}.$$ (23) On the trivial trajectory where
$\bar{\phi},\,\phi,\,h,\,\bar{h}=0$, the F–term potential takes the form $$V_{F}=\kappa^{2}M^{4}\left[1+\beta\sinh^{2}\frac{\sigma}{\sqrt{2}}\right].$$ (24) The mass-squared eigenvalues in the directions perpendicular to this trajectory for $\sinh^{2}\left(\sigma/\sqrt{2}\right)\gg M^{2}/2$ can also be found from Eq. (17) to be $$m_{\phi}^{2}\simeq 4\kappa^{2}\sinh^{2}\frac{\sigma}{\sqrt{2}},$$ (25) $$m_{\bar{\phi}}^{2}\simeq\kappa^{2}M^{4}\left(1+(1+\beta)\sinh^{2}\frac{\sigma}{\sqrt{2}}\right),$$ (26) $$m^{2}_{{\chi}_{1}}\simeq(\kappa M^{2}-\gamma)\left[\kappa M^{2}+\left((1+\beta)\kappa M^{2}-\gamma\right)\sinh^{2}\frac{\sigma}{\sqrt{2}}\right],$$ (27) and $$m^{2}_{{\chi}_{2}}\simeq(\kappa M^{2}+\gamma)\left[\kappa M^{2}+\left((1+\beta)\kappa M^{2}+\gamma\right)\sinh^{2}\frac{\sigma}{\sqrt{2}}\right],$$ (28) where $\chi_{1}=(h+\bar{h})/\sqrt{2}$ and $\chi_{2}=(h-\bar{h})/\sqrt{2}$. Thus, assuming that $\gamma<\kappa M^{2}$, we see that the trivial path, which is flat in the limit $\beta\to 0$, is stable for large absolute values of the inflaton $\sigma$. Note that Eqs. (27) and (28) hold for any value of $\sinh^{2}\left(\sigma/\sqrt{2}\right)$. On the contrary, one can show that, as $\sinh^{2}\left(\sigma/\sqrt{2}\right)$ decreases, the eigenvalues and eigenstates of the mass-squared matrix of the $\phi-\bar{\phi}$ system change. In particular, when $\sinh^{2}\left(\sigma/\sqrt{2}\right)\simeq M^{2}/2+m^{2}/(2\kappa^{2}M^{2})$, the mass-squared matrix of the $\phi-\bar{\phi}$ system acquires a zero eigenvalue with $\bar{\phi}$ dominating the corresponding eigenstate. Subsequently, as $\sinh^{2}\left(\sigma/\sqrt{2}\right)$ approaches the value $M^{2}/2$, the eigenvalues of the $\phi-\bar{\phi}$ mass-squared matrix become almost opposite to each other with $\phi$ and $\bar{\phi}$ contributing almost equally to both the eigenstates. A further decrease of $\sinh^{2}\left(\sigma/\sqrt{2}\right)$ leads to the domination of the unstable eigenstate by $\phi$.
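The reduction of Eq. (17) to Eq. (24) on the trivial path can be verified numerically. The sketch below codes Eqs. (17)-(23) directly; the parameter values are the ones quoted later in the numerical example and serve here only as illustrative inputs:

```python
import math

# Check: the full SUGRA potential of Eq. (17) collapses to Eq. (24),
# V_F = kappa^2 M^4 (1 + beta sinh^2(sigma/sqrt(2))), on the trivial path.
# Parameter values are borrowed from the later numerical example.
kappa, gamma, lam, m, M, beta = 1.7, 1e-6, 0.1, 1.827e-5, 3.493e-3, 0.022

def VF(sigma, phibar, phi, h, hbar):
    """Eqs. (17)-(23) transcribed literally."""
    sh_s, ch_s = math.sinh(sigma / math.sqrt(2)), math.cosh(sigma / math.sqrt(2))
    sh_p, ch_p = math.sinh(phibar / math.sqrt(2)), math.cosh(phibar / math.sqrt(2))
    A1 = kappa * (M**2 - phi**2 / 2) - gamma * h * hbar / 2
    A2 = m * phi / math.sqrt(2) - lam * h * hbar / 2
    A3 = A1 * sh_s * ch_p + A2 * ch_s * sh_p
    A4 = A1 * sh_s * sh_p + A2 * ch_s * ch_p
    A5 = m * ch_s * sh_p - math.sqrt(2) * kappa * phi * sh_s * ch_p
    A6 = gamma * sh_s * ch_p + lam * ch_s * sh_p
    bracket = (A1**2 * ch_p**2 - A2**2 * sh_p**2 + beta * A3**2 + A4**2 + A5**2
               + 0.5 * (h**2 + hbar**2) * A6**2
               + 0.5 * (phi**2 + h**2 + hbar**2) * A3**2
               + (math.sqrt(2) * phi * A5 - 2 * h * hbar * A6) * A3)
    return bracket * math.exp(0.5 * (phi**2 + h**2 + hbar**2))

for sig in (0.5, 1.0, 1.45):
    expected = kappa**2 * M**4 * (1 + beta * math.sinh(sig / math.sqrt(2))**2)
    assert abs(VF(sig, 0.0, 0.0, 0.0, 0.0) - expected) <= 1e-9 * expected
```

On the trivial path only $A_{1}$ and $A_{3}=A_{1}\sinh(\sigma/\sqrt{2})$ survive, which is exactly the content of Eq. (24).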
Since the field $\phi$ is required to develop a nonzero VEV in order to cancel the false vacuum energy density $\kappa^{2}M^{4}$ on the trivial trajectory – see Eq. (2) or Eqs. (17) and (18) – , we will take as the critical value of $\sigma$ at which the trivial path is destabilized the one determined by the relation $$\sinh^{2}\frac{\sigma_{\rm c}}{\sqrt{2}}=\frac{M^{2}}{2}.$$ (29) During the first stage of inflation (i.e. for $|\sigma|\geq|\sigma_{\rm c}|$), the term $$V_{r}^{\phi}=\kappa^{2}M^{4}\left(\frac{N_{\phi}\kappa^{2}}{8\pi^{2}}\right)\ln\frac{2\tanh^{2}\frac{\sigma}{\sqrt{2}}}{M^{2}}$$ (30) has to be added to the F–term scalar potential $V_{F}$ in Eq. (17). It corresponds to the dominant one-loop radiative corrections to the inflationary potential due to the $N_{\phi}$-dimensional supermultiplet $\Phi$ ($N_{\phi}=3$). Notice that the renormalization scale in these radiative corrections is chosen such that $V_{r}^{\phi}$ vanishes at $|\sigma|=|\sigma_{\rm c}|$ ($\tanh^{2}\left(\sigma_{\rm c}/\sqrt{2}\right)\simeq\sinh^{2}\left(\sigma_{\rm c}/\sqrt{2}\right)=M^{2}/2$).
Setting $$\delta_{\phi}=\frac{N_{\phi}\kappa^{2}}{2\pi^{2}},$$ (31) we can rewrite the full inflationary potential and its derivatives (denoted by primes) with respect to the canonically normalized real inflaton field $\sigma$ as follows: $$\frac{V}{\kappa^{2}M^{4}}=1+\beta\sinh^{2}\frac{\sigma}{\sqrt{2}}+\frac{\delta_{\phi}}{4}\ln\frac{2\tanh^{2}\frac{\sigma}{\sqrt{2}}}{M^{2}}\equiv C(\sigma),$$ (32) $$\frac{V^{\prime}}{\kappa^{2}M^{4}}=\frac{1}{\sqrt{2}}\sinh(\sqrt{2}\sigma)\left(\beta+\frac{\delta_{\phi}}{\sinh^{2}(\sqrt{2}\sigma)}\right),$$ (33) $$\frac{V^{\prime\prime}}{\kappa^{2}M^{4}}=\cosh(\sqrt{2}\sigma)\left(\beta-\frac{\delta_{\phi}}{\sinh^{2}(\sqrt{2}\sigma)}\right),$$ (34) and $$\displaystyle\frac{V^{\prime\prime\prime}}{\kappa^{2}M^{4}}$$ $$\displaystyle=$$ $$\displaystyle\sqrt{2}\sinh(\sqrt{2}\sigma)\left(\beta-\frac{\delta_{\phi}}{\sinh^{2}(\sqrt{2}\sigma)}\right)$$ (35) $$\displaystyle+\frac{2\sqrt{2}\delta_{\phi}}{\tanh^{2}(\sqrt{2}\sigma)\sinh(\sqrt{2}\sigma)}.$$ The usual slow-roll parameters for inflation are then written as $$\epsilon=\frac{1}{2}\left(\frac{V^{\prime}}{\kappa^{2}M^{4}}\right)^{2}\frac{1}{C^{2}(\sigma)},$$ (36) $$\eta=\left(\frac{V^{\prime\prime}}{\kappa^{2}M^{4}}\right)\frac{1}{C(\sigma)},$$ (37) and $$\displaystyle\xi$$ $$\displaystyle=$$ $$\displaystyle\left(\frac{V^{\prime}}{\kappa^{2}M^{4}}\right)\left(\frac{V^{\prime\prime\prime}}{\kappa^{2}M^{4}}\right)\frac{1}{C^{2}(\sigma)}=2\left|\tanh(\sqrt{2}\sigma)\right|\eta\sqrt{\epsilon}$$ (38) $$\displaystyle+\frac{4\delta_{\phi}\sqrt{\epsilon}}{C(\sigma)\tanh^{2}(\sqrt{2}\sigma)\left|\sinh(\sqrt{2}\sigma)\right|}.$$ Using these expressions, we can evaluate the scalar spectral index $n_{\rm s}$, its running $\alpha_{\rm s}$, and the tensor-to-scalar ratio $r$ from the formulas $$n_{\rm s}=1+2\eta-6\epsilon,$$ (39) $$\alpha_{\rm s}=16\eta\epsilon-24\epsilon^{2}-2\xi,$$ (40) $$r=16\epsilon.$$ (41) Finally, the scalar potential on the trivial inflationary path
can be written in terms of the scalar power spectrum amplitude $A_{\rm s}$ and $r$ as follows: $$V=\frac{3\pi^{2}}{2}A_{\rm s}r.$$ (42) As a numerical example, we take the value of the real inflaton field $\sigma$ at horizon exit of the pivot scale $k_{*}=0.05\ \rm{Mpc}^{-1}$ to be $\sigma_{*}=1.45$. Also, we take $\kappa=1.7$, $\beta=0.022$, and the scalar power spectrum amplitude $A_{\rm s}=2.215\times 10^{-9}$ at the same pivot scale planck15 . With these input numbers, we then find $M=3.493\times 10^{-3}$, $C(\sigma_{*})=2.2941$, $\epsilon=0.00188$, $\eta=-0.01389$, $n_{\rm s}=0.9609$, $r=0.0301$, and $\alpha_{\rm s}=-0.01674$. As one can see, our predictions can not only be perfectly consistent with the latest data released by the Planck satellite experiment planck15 , but can also accommodate large values of the tensor-to-scalar ratio $r$ of order ${\rm few}\times 10^{-2}$. As is obvious from Eq. (41), such values of $r$ require relatively large values of $\epsilon$, which in turn reduce the scalar spectral index $n_{\rm s}$ in Eq. (39) below unity, but not quite enough to make it compatible with the data. So we need an appreciable negative value of $\eta$, which requires that the parenthesis on the right-hand side of Eq. (34) be dominated by the second term. A similar parenthesis appears on the right-hand side of Eq. (33) too, but with the two terms added rather than subtracted. As it turns out, both these terms have to be appreciable, with the second one being larger, in order to bring $n_{\rm s}$ near its best-fit value from the Planck data. This is possible only for large values of the parameter $\kappa$ controlling the radiative corrections on the trivial path. Note that the first stage of inflation ends before the system reaches the critical point in Eq. (29) by violating the slow-roll conditions, and the obtained number of e-foldings is limited due to the large values of $\epsilon$ involved and the fact that $\sigma_{*}\sim 1$.
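The quoted numbers follow mechanically from Eqs. (31)-(41). The sketch below transcribes those formulas and also confirms by finite differences that Eqs. (33)-(34) are the $\sigma$-derivatives of Eq. (32); all inputs are the ones stated in the numerical example above:

```python
import math

# Reproducing the numerical example: sigma_* = 1.45, kappa = 1.7,
# beta = 0.022, M = 3.493e-3 should yield n_s ~ 0.9609, r ~ 0.0301,
# alpha_s ~ -0.01674 and C(sigma_*) ~ 2.2941.
sigma, kappa, beta, M, N_phi = 1.45, 1.7, 0.022, 3.493e-3, 3
delta = N_phi * kappa**2 / (2 * math.pi**2)                   # Eq. (31)

def V(sig):    # Eq. (32): V / (kappa^2 M^4) = C(sigma)
    return (1 + beta * math.sinh(sig / math.sqrt(2))**2
            + delta / 4 * math.log(2 * math.tanh(sig / math.sqrt(2))**2 / M**2))

def V1(sig):   # Eq. (33): V' / (kappa^2 M^4)
    s2 = math.sinh(math.sqrt(2) * sig)
    return s2 / math.sqrt(2) * (beta + delta / s2**2)

def V2(sig):   # Eq. (34): V'' / (kappa^2 M^4)
    s2 = math.sinh(math.sqrt(2) * sig)
    return math.cosh(math.sqrt(2) * sig) * (beta - delta / s2**2)

# finite-difference check that Eqs. (33)-(34) are the derivatives of Eq. (32)
step = 1e-5
assert abs((V(sigma + step) - V(sigma - step)) / (2 * step) - V1(sigma)) < 1e-6
assert abs((V1(sigma + step) - V1(sigma - step)) / (2 * step) - V2(sigma)) < 1e-6

C = V(sigma)
s2, t2 = math.sinh(math.sqrt(2) * sigma), math.tanh(math.sqrt(2) * sigma)
eps = 0.5 * (V1(sigma) / C)**2                                # Eq. (36)
eta = V2(sigma) / C                                           # Eq. (37)
xi = (2 * abs(t2) * eta * math.sqrt(eps)
      + 4 * delta * math.sqrt(eps) / (C * t2**2 * abs(s2)))   # Eq. (38)
n_s = 1 + 2 * eta - 6 * eps                                   # Eq. (39)
alpha_s = 16 * eta * eps - 24 * eps**2 - 2 * xi               # Eq. (40)
r = 16 * eps                                                  # Eq. (41)
# C ~ 2.2941, n_s ~ 0.9609, r ~ 0.0301, alpha_s ~ -0.01674
```

Running this reproduces the quoted observables to four significant figures.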
III.2 The second inflationary stage For the rest of the parameters of the model, we chose the values $m=1.827\times 10^{-5}$, $\lambda=0.1$, and $\gamma=10^{-6}$. We solved numerically the differential equations of the system with potential energy density given by the exact $V_{F}$ in Eq. (17) supplemented with the relevant radiative corrections and the D–terms involving the fields $H$, $\bar{H}$. The numerical investigation then revealed that there exist appropriate small initial absolute values of the scalar fields $\bar{\phi},\,\phi,\,h,\,\bar{h}$ for which, after the first stage of inflation and the elapse of a sufficient amount of time for the energy density to approach $m^{2}M^{2}$, we have ${\phi^{2}}\simeq 2M^{2}$, $h,\,\bar{h}\simeq 0$, and the scalar fields $\sigma$ and $\bar{\phi}$ take values such that $A_{5}\simeq 0$ with $|\sigma|\ll 1$. So it is obvious that the system reaches the semi-shifted inflationary path in Eq. (9) – note that the second relation in this equation is equivalent to $A_{5}=0$. It is remarkable that $|\bar{\phi}|$, which at the end of the first inflationary stage is extremely small, manages to attain values of the order of ${\rm few}\times 10^{-1}$ at the onset of the second stage. 
For a negligible value of $\gamma$, ${\phi^{2}}\simeq 2M^{2}$, $A_{5}\simeq 0$, and $|\sigma|\ll 1$, we find that $A_{1}\simeq 0$, $A_{3}\simeq A_{2}\sinh\left(\bar{\phi}/\sqrt{2}\right)$, $A_{4}\simeq A_{2}\cosh\left(\bar{\phi}/\sqrt{2}\right)$, $A_{6}\simeq\lambda\sinh\left(\bar{\phi}/\sqrt{2}\right)$, and the F–term scalar potential becomes $$\displaystyle V_{F}$$ $$\displaystyle\simeq$$ $$\displaystyle\left[A_{2}^{2}+(\beta+M^{2})A_{2}^{2}\sinh^{2}\frac{\bar{\phi}}{\sqrt{2}}\right.$$ (43) $$\displaystyle+\left.\frac{1}{2}\left(h^{2}+\bar{h}^{2}\right)\left(\lambda^{2}+A_{2}^{2}\right)\sinh^{2}\frac{\bar{\phi}}{\sqrt{2}}\right.$$ $$\displaystyle\left.-2h\bar{h}\lambda A_{2}\sinh^{2}\frac{\bar{\phi}}{\sqrt{2}}\right]e^{M^{2}+\frac{1}{2}\left(h^{2}+\bar{h}^{2}\right)}$$ $$\displaystyle\overset{h,\bar{h}\simeq 0}{\underset{M^{2}\ll\beta}{\simeq}}$$ $$\displaystyle m^{2}M^{2}\left[1+\beta\sinh^{2}\frac{\bar{\phi}}{\sqrt{2}}\right].$$ (44) The expression in Eq. (44) gives approximately the F–term potential on the semi-shifted path. Notice the striking similarity of this expression with the one in Eq. (24) involving the same parameter $\beta$. From $A_{5}\simeq 0$ and the fact that $A_{5}\propto m\tanh\left(\bar{\phi}/\sqrt{2}\right)-\sqrt{2}\kappa\phi\tanh\left(\sigma/\sqrt{2}\right)$, it follows that the combination of $S$ and $\bar{\Phi}$ which could remain large when the energy density approaches $m^{2}M^{2}$ and plays the role of the complex inflaton in the second stage of inflation is $$\frac{mS+2\kappa\left<\Phi\right>\bar{\Phi}}{\sqrt{m^{2}+4\kappa^{2}M^{2}}}\simeq\bar{\Phi},$$ (45) since the contribution of $\bar{\Phi}$ in this combination is about $2\kappa M/m\simeq 650$ times bigger than that of $S$. From Eq. (43), we can construct the mass-squared matrix for the $h-\bar{h}$ system during the second stage of inflation.
We find that the mass eigenstates are given by the combinations $\chi_{1}=(h+\bar{h})/\sqrt{2}$ and $\chi_{2}=(h-\bar{h})/\sqrt{2}$ with masses squared $$m^{2}_{{\chi}_{1}}\simeq\left(\lambda-mM\right)\left[\left(\lambda-(1+\beta)mM\right)\sinh^{2}\frac{\bar{\phi}}{\sqrt{2}}-mM\right],$$ (46) $$m^{2}_{{\chi}_{2}}\simeq\left(\lambda+mM\right)\left[\left(\lambda+(1+\beta)mM\right)\sinh^{2}\frac{\bar{\phi}}{\sqrt{2}}+mM\right].$$ (47) We see that ${\chi}_{1}$ develops an instability which terminates the valley along which the second stage of inflation takes place, with the critical value of the real canonically normalized inflaton $\bar{\phi}$ being approximately determined from the relation $$\sinh^{2}\frac{\bar{\phi}_{\rm c}}{\sqrt{2}}=\frac{mM}{\lambda}.$$ (48) During the second stage of inflation (i.e. for $|\bar{\phi}|\geq|\bar{\phi}_{\rm c}|$ and $|\sigma|<|\sigma_{\rm c}|$), the following term has to be added to the F–term scalar potential $V_{F}$: $$\displaystyle V_{r}^{h}$$ $$\displaystyle=$$ $$\displaystyle m^{2}M^{2}\left(\frac{N_{h}\lambda^{2}}{16\pi^{2}}\right)$$ (49) $$\displaystyle\times\ln\frac{\left(\tanh\frac{\sigma}{\sqrt{2}}+\sqrt{2}\kappa\frac{\left<\phi\right>}{m}\tanh\frac{\bar{\phi}}{\sqrt{2}}\right)^{2}}{\left(1+4\kappa^{2}\frac{M^{2}}{m^{2}}\right)\left(\frac{mM}{\lambda}\right)},$$ which may be approximated as $$V_{r}^{h}\simeq m^{2}M^{2}\left(\frac{N_{h}\lambda^{2}}{16\pi^{2}}\right)\ln\frac{\lambda\tanh^{2}\frac{\bar{\phi}}{\sqrt{2}}}{mM}$$ (50) and corresponds to the dominant one-loop radiative corrections due to the $N_{h}$-dimensional supermultiplets $H$, $\bar{H}$ ($N_{h}=2$). Notice that the renormalization scale is chosen such that $V_{r}^{h}$ vanishes at $|\bar{\phi}|=|\bar{\phi}_{\rm c}|$ ($\tanh^{2}\left(\bar{\phi}_{\rm c}/\sqrt{2}\right)\simeq\sinh^{2}\left(\bar{\phi}_{\rm c}/\sqrt{2}\right)=mM/\lambda$). The one-loop radiative corrections involving the $\Phi$ supermultiplet are neglected since they are relatively very small.
This is because $\Phi$ couples to the combination which plays the role of the complex inflaton during the second stage of inflation only through $S$, and the contribution of $S$ to this combination is severely suppressed. Indeed, the slope of the potential along the semi-shifted path generated by the radiative corrections involving the $\Phi$ supermultiplet is suppressed relative to the one involving the $H$, $\bar{H}$ supermultiplets by, approximately, a factor $\left(N_{\phi}/8N_{h}\right)\left(m/\lambda M\right)^{2}\sim 5\times 10^{-4}$. This is a very important property of our model, resulting from the fact that, for the parameters chosen, the semi-shifted path is almost perpendicular to the trivial one. So the very strong radiative corrections on the trivial trajectory, which are controlled by the strong coupling constant $\kappa$ and are needed, as we have seen, for accommodating appreciable values of $r$, do not affect the second stage of inflation. This is very crucial since otherwise the semi-shifted path would become too steep and there would be no way of generating the extra e-foldings required for solving the puzzles of hot big bang cosmology. The number of e-foldings during the second stage of inflation between an initial value $\bar{\phi}_{\rm{in}}$ and a final value $\bar{\phi}_{\rm{f}}$ of the inflaton $\bar{\phi}$ is given, in the slow-roll approximation, by $N(\bar{\phi}_{\rm{in}})-N(\bar{\phi}_{\rm{f}})$, where $$N(\bar{\phi})\simeq\frac{1}{2\beta\sqrt{1-(\delta_{h}/\beta)}}\ln\frac{\cosh(\sqrt{2}\bar{\phi})-\sqrt{1-(\delta_{h}/\beta)}}{\cosh(\sqrt{2}\bar{\phi})+\sqrt{1-(\delta_{h}/\beta)}}$$ (51) with $$\delta_{h}=\frac{N_{h}{\lambda^{2}}}{{4\pi^{2}}}.$$ (52) The termination of slow-roll inflation is due to the radiative corrections in Eq.
(50) and takes place at a value $\bar{\phi}_{\rm f}$ ($|\bar{\phi}_{\rm f}|\gg|\bar{\phi}_{\rm c}|$) of $\bar{\phi}$ given by $$\cosh(\sqrt{2}\bar{\phi}_{\rm f})\simeq\frac{\delta_{h}}{2}+\sqrt{1+\frac{\delta_{h}^{2}}{4}}.$$ (53) It turns out numerically that, with the chosen values of the parameters, the pivot scale $k_{*}=0.05~{}{\rm Mpc}^{-1}$ undergoes about 13 e-foldings during the first stage of inflation. As a consequence, approximately another 38-39 e-foldings must be provided by the second inflationary stage, which requires a value of $|\bar{\phi}_{\rm in}|\simeq 0.23$ at the onset of this stage. This requirement can indeed be fulfilled in our numerical example, as we have shown by extensive numerical studies. It is worth noticing that, due to the presence of mild but appreciable SUGRA corrections and not too weak radiative corrections, the second stage of inflation is able to generate only a relatively limited number of e-foldings. Consequently, this number is not too sensitive to the value of $\bar{\phi}_{\rm in}$.

III.3 The formation of cosmic strings

After the end of inflation, the system settles in one of the two distinct continua of SUSY vacua in Eqs. (3) and (4) with $\Phi_{\pm}\simeq\pm M$ in our case, and the $U(1)_{B-L}$ gauge symmetry breaks spontaneously leading to the formation of local cosmic strings. These strings can have a small contribution to the CMBR power spectrum, parametrized bevis by the dimensionless string tension $G\mu_{\rm s}$, where $G$ is Newton's gravitational constant and $\mu_{\rm s}$ is the string tension, i.e. the energy per unit length of the string. Applying in our case the results of Ref. bevis , which considered local strings within the Abelian Higgs model in the Bogomol'nyi limit, we write, for the string tension, $$\mu_{\rm s}=4\pi|\left<H\right>|^{2},$$ (54) where $\left<H\right>$ is the VEV of $H$.
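Before moving on, the e-folding budget of the second stage can be cross-checked against the closed form of Eqs. (51)-(53). With the parameter values chosen in the text, $N(\bar{\phi}_{\rm in})-N(\bar{\phi}_{\rm f})$ for $\bar{\phi}_{\rm in}\simeq 0.23$ should indeed land in the quoted 38-39 range:

```python
import math

# Checking Eqs. (51)-(53) with the stated parameters:
# m = 1.827e-5, lambda = 0.1, beta = 0.022, N_h = 2.
lam, beta, N_h = 0.1, 0.022, 2
delta_h = N_h * lam**2 / (4 * math.pi**2)      # Eq. (52)
s = math.sqrt(1 - delta_h / beta)

def N(phibar):                                 # Eq. (51)
    c = math.cosh(math.sqrt(2) * phibar)
    return math.log((c - s) / (c + s)) / (2 * beta * s)

# Eq. (53): the value of phibar at which slow roll terminates
phibar_f = math.acosh(delta_h / 2 + math.sqrt(1 + delta_h**2 / 4)) / math.sqrt(2)

efolds = N(0.23) - N(phibar_f)
# efolds comes out close to 38.5, matching the quoted 38-39 e-foldings
```

Note that $N(\bar{\phi})$ itself is negative and tends to zero from below as $|\bar{\phi}|$ grows, so the difference above is positive, as it should be.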
Although the strings in our model are more complicated than in the Bogomol'nyi limit of the Abelian Higgs model, we think that the above estimate for the string tension is good enough for our purposes here. In order to keep the contribution of these strings to the primordial curvature perturbation at an acceptable level, one must impose an upper bound on the string tension, which, for the Abelian-Higgs field theory model, is found to be strings $$G\mu_{\rm s}\lesssim 3.2\times 10^{-7}.$$ (55) This puts an upper bound on $\left<H\right>$, which, in turn, restricts the possible values of the tensor-to-scalar ratio $r$ not to exceed about $3\times 10^{-2}$. In our numerical example, the dimensionless string tension of the $B-L$ strings turns out to be given by $$G\mu_{\rm s}=\frac{|\left<H\right>|^{2}}{2}\simeq\frac{mM}{2\lambda}\simeq 3.19\times 10^{-7}$$ (56) and, thus, almost saturates the bound in Eq. (55).

IV Conclusions

In view of the recent results joint1 ; joint2 indicating that appreciable values of the tensor-to-scalar ratio in the CMBR cannot be excluded, we addressed the question whether such values can be obtained in SUSY hybrid inflation models resulting from particle physics. To this end, we have considered a reduced version of the extended SUSY PS model of Ref. quasi , which was initially constructed for solving the $b$-quark mass problem of the simplest SUSY PS model with universal boundary conditions. The reason for focusing on this model is that it is known to support successful versions of hybrid inflation like the standard-smooth one stsmhi . This scenario is compatible with all the recent data even with a minimal Kähler potential, but predicts negligible values of the tensor-to-scalar ratio. In the context of this particle physics model, we demonstrated that a two-stage hybrid inflationary scenario which can predict values of the tensor-to-scalar ratio of the order of ${\rm few}\times 10^{-2}$ can be constructed.
For the values of the parameters considered in this paper, the model in the global SUSY limit possesses practically two classically flat directions, the trivial and the semi-shifted semi one. The SUGRA corrections to the potential stabilize the trivial flat direction so that it becomes able to support a first stage of inflation. All the cosmological scales exit the horizon during this stage and our present horizon undergoes a limited number of e-foldings. The tensor-to-scalar ratio can acquire appreciable values as a result of sufficiently mild SUGRA corrections combined with strong radiative corrections to the inflationary potential, while the value of the scalar spectral index remains acceptable. The additional number of e-foldings required for solving the standard problems of hot big bang cosmology is generated by a second stage of inflation taking place along the semi-shifted path, where $U(1)_{B-L}$ is unbroken. This is possible since the semi-shifted direction, being almost perpendicular to the trivial path, is not affected by the strong radiative corrections on the trivial path, and also because the SUGRA corrections on the semi-shifted path remain mild. After the end of inflation, the system falls into the vacuum and $B-L$ cosmic strings are produced. To restrict the contribution of these strings to the primordial curvature perturbation to an acceptable level, one must impose strings an upper bound on the $U(1)_{B-L}$ breaking VEVs, which limits somewhat the possible values of the tensor-to-scalar ratio, but values up to about $3\times 10^{-2}$ can be easily obtained.

References

(1) G. Lazarides, Lect. Notes Phys. 592, 351 (2002), hep-ph/0111328; J. Phys. Conf. Ser. 53, 528 (2006), hep-ph/0607032. (2) A.D. Linde, Phys. Rev. D 49, 748 (1994). (3) E.J. Copeland, A.R. Liddle, D.H. Lyth, E.D. Stewart, and D. Wands, Phys. Rev. D 49, 6410 (1994). (4) G.R. Dvali, Q. Shafi, and R.K. Schaefer, Phys. Rev. Lett. 73, 1886 (1994); G. Lazarides, R.K. Schaefer, and Q.
Shafi, Phys. Rev. D 56, 1324 (1997). (5) G. Lazarides and C. Panagiotakopoulos, Phys. Rev. D 52, R559 (1995). (6) R. Jeannerot, S. Khalil, G. Lazarides, and Q. Shafi, J. High Energy Phys. 10, 012 (2000). (7) G. Lazarides and A. Vamvasakis, Phys. Rev. D 76, 083507 (2007). (8) R. Jeannerot, S. Khalil, and G. Lazarides, J. High Energy Phys. 07, 069 (2002). (9) M.E. Gomez, G. Lazarides, and C. Pallis, Nucl. Phys. B638, 165 (2002). (10) G. Lazarides and C. Panagiotakopoulos, Phys. Lett. B 337, 90 (1994); S. Khalil, G. Lazarides, and C. Pallis, ibid. 508, 327 (2001). (11) B. Ananthanarayan, G. Lazarides, and Q. Shafi, Phys. Rev. D 44, 1613 (1991); Phys. Lett. B 300, 245 (1993). (12) D.N. Spergel et al., Astrophys. J. Suppl. 170, 377 (2007). (13) P.A.R. Ade et al. [Planck Collaboration], arXiv:1502.01589. (14) C. Panagiotakopoulos, Phys. Rev. D 55, R7335 (1997); A.D. Linde and A. Riotto, ibid. 56, R1841 (1997); V.N. Şenoğuz and Q. Shafi, Phys. Lett. B 567, 79 (2003); ibid. 582, 6 (2004). (15) G. Lazarides and C. Pallis, Phys. Lett. B 651, 216 (2007); G. Lazarides, arXiv:0706.1436. (16) G. Lazarides and A. Vamvasakis, Phys. Rev. D 76, 123514 (2007). (17) P.A.R. Ade et al. [BICEP2 Collaboration], Phys. Rev. Lett. 112, 241101 (2014). (18) R. Flauger, J.C. Hill, and D.N. Spergel, J. Cosmol. Astropart. Phys. 08, 039 (2014); M. Cortês, A.R. Liddle, and D. Parkinson, arXiv:1409.6530. (19) R. Adam et al. [Planck Collaboration], arXiv:1409.5738. (20) M.J. Mortonson and U. Seljak, J. Cosmol. Astropart. Phys. 10, 035 (2014); C. Cheng, Q.G. Huang, and S. Wang, J. Cosmol. Astropart. Phys. 12, 044 (2014); L. Xu, arXiv:1409.7870. (21) P.A.R. Ade et al. [BICEP2/Keck and Planck Collaborations], Phys. Rev. Lett. 114, 101301 (2015). (22) T. Kobayashi and O. Seto, arXiv:1404.3102. (23) K.-Y. Choi and B. Kyae, Phys. Lett. B 735, 391 (2014). (24) M. Ur Rehman, Q. Shafi, and J. R. Wickman, Phys. Rev. D 83, 067304 (2011). (25) G. Lazarides, I.N.R. Peddie, and A. Vamvasakis, Phys. Rev.
D 78, 043518 (2008). (26) C. Panagiotakopoulos, Phys. Lett. B 459, 473 (1999); Phys. Rev. D 71, 063516 (2005). (27) P.A.R. Ade et al. [Planck Collaboration], Astron. Astrophys. 571, A25 (2014). (28) N. Bevis, M. Hindmarsh, M. Kunz, and J. Urrestilla, Phys. Rev. D 75, 065015 (2007); ibid. 76, 043005 (2007); Phys. Rev. Lett. 100, 021301 (2008).
Efficient Asymmetric Co-Tracking using Uncertainty Sampling

Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba, Shin Ishii Graduate School of Informatics Kyoto University Sakyo-ward, Yoshida-honmachi, Kyoto 606–8501 Email: [email protected]

Abstract

Adaptive tracking-by-detection approaches are popular for tracking arbitrary objects. They treat the tracking problem as a classification task and use online learning techniques to update the object model. However, these approaches are heavily invested in the efficiency and effectiveness of their detectors. Evaluating a massive number of samples for each frame (e.g., obtained by a sliding window) forces the detector to trade accuracy in favor of speed. Furthermore, misclassification of borderline samples in the detector introduces accumulating errors in tracking. In this study, we propose a co-tracking framework based on the efficient cooperation of two detectors: a rapid adaptive exemplar-based detector and another more sophisticated but slower detector with a long-term memory. The sampling, labeling, and co-learning of the detectors are conducted by an uncertainty sampling unit, which improves the speed and accuracy of the system. We also introduce a budgeting mechanism which prevents the unbounded growth in the number of examples in the first detector to maintain its rapid response. Experiments demonstrate the efficiency and effectiveness of the proposed tracker against its baselines and its superior performance against state-of-the-art trackers on various benchmark videos.

I Introduction

Nowadays, visual tracking is an inseparable component of high-level visual tasks such as human-computer interfaces, human behavior analysis, smart appliances, virtual/augmented reality and surveillance. When applied to video sequences in real-life situations, trackers should cope with challenging appearance changes due to illumination variations, motion blur, non-rigid deformations, rotations, mobile imaging platforms and occlusions [1].
While some successful generative trackers [2, 3, 4, 5] model the target object, they ignore the information hidden in the background. On the other hand, discriminative trackers [6, 7, 8, 9, 10, 11, 12] pose the tracking problem as a classification task. In these models, instead of trying to build a complex model of the object, the algorithms seek a decision boundary that best separates the target and background. This re-formulation sidesteps the inherent issues of generative models such as background clutter and model over-simplification [13]. There are many tracking-by-detection visual trackers, which heavily rely on their detector to handle different tracking challenges [9], such as rotations and scale changes. Such schemes treat tracking as a binary classification problem that separates the object from its local background: a discriminative classifier is trained with samples obtained from the tracking sequence, and its performance is affected by the sampling policy. Most trackers only utilize one positive sample, i.e., the tracking result in the current frame [14]. If the tracked location is not accurate, the classifier will be updated with the contaminated appearance of the target, leading to drift over time. To alleviate this problem, multiple samples in the proximity of the estimated target can be used to train the tracker [7, 9]. However, such algorithms are heavily invested in the efficiency and effectiveness of their detectors. Evaluating a massive number of samples for each input frame forces the detector to trade accuracy in favor of speed to meet real-time processing requirements. While some trackers aim to enhance the detectors' speed while preserving their accuracy using statistical properties of images (e.g., [12]), generally achieving an adjustable balance between speed and accuracy is desired. Furthermore, dealing with rotations and scale changes challenges such mechanisms.
Additionally, misclassification of borderline input samples in the detector (Figure 1) may introduce accumulating errors in the tracker, degrading its performance significantly [7]. Furthermore, the growth of the sample repository in online learning schemes degrades the speed; if not handled properly, the tracker cannot perform long-term tracking [9]. In this study, we propose an efficient co-tracking framework in which an active learning unit orchestrates the information exchange. It consists of a rapid detector with a short-term memory and an uncertainty sampling switcher that queries the labels of the most uncertain samples of the first detector from an accurate detector with a long-term memory (called the "oracle"). An importance sampling scheme combines the results of the two trackers and handles the scale variation of the target. An exemplar-based detector is employed as the rapid detector, and we introduce a budgeting mechanism to prevent the unbounded growth in the number of examples in this detector to maintain its rapid response. In summary, we (i) employed active learning in a co-tracking framework, which increases the speed and generalization power of the tracker, (ii) actively controlled the memory of the tracker by balancing short- and long-term memories, and (iii) introduced an intuitive budgeting method for a nearest neighbor classifier. The difference between the proposed co-tracking framework and that of Tang et al. [13] is four-fold: (i) the classifiers do not exchange all the data they have problems labeling; instead, the most informative samples are selected by uncertainty sampling and exchanged.
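The switching rule just described can be sketched in a few lines. The function names, the score interface, and the fixed confidence band below are our illustrative assumptions, not details taken from the paper:

```python
# Toy sketch of asymmetric co-labeling with uncertainty sampling: the rapid
# detector scores every sample, and only samples whose score falls inside an
# uncertainty band around the decision boundary are sent to the slow oracle.

def co_label(samples, rapid_score, oracle_label, band=0.2):
    """rapid_score: sample -> confidence in [0, 1]; oracle_label: sample -> {0, 1}."""
    labels, queried = {}, []
    for x in samples:
        p = rapid_score(x)
        if abs(p - 0.5) < band:          # borderline sample: uncertainty sampling
            labels[x] = oracle_label(x)  # query the long-term-memory oracle
            queried.append(x)            # keep it to retrain the rapid detector
        else:
            labels[x] = int(p > 0.5)     # confident: rapid detector decides alone
    return labels, queried

# toy usage: scores 0.9 and 0.1 are confident, 0.55 is queried from the oracle
scores = {"a": 0.9, "b": 0.55, "c": 0.1}
labels, queried = co_label(scores, scores.get, lambda x: 1)
# queried == ["b"]; labels == {"a": 1, "b": 1, "c": 0}
```

The design point this illustrates is the asymmetry: the expensive oracle is consulted only for the borderline minority, which is also exactly the set of samples most informative for retraining the rapid detector.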
(ii) the update rate of the classifiers differs, realizing a mixture of short- and long-term memory, (iii) the samples labeled for target localization are re-used for training, so no extra round of sampling and labeling is needed, and (iv) since in the proposed asymmetric co-tracking one classifier scaffolds the other instead of participating in every labeling decision, a more sophisticated classifier with higher computational complexity can be used. II Related Work Many discriminative models have been adopted in object tracking, where a binary classifier is learned online to separate the target from the background. Numerous supervised or semi-supervised classifiers have been employed for object tracking, such as SVM [15], structured-output SVM [9], boosting [14], semi-boosting [16], and online multi-instance boosting [7]. They follow different approaches to foreground-background separation, such as incorporating a trained SVM into an optical-flow tracker [15], using an ensemble of online-learned weak classifiers to decide whether a pixel belongs to the target region or the background [17], or utilizing online boosting to select discriminative features that separate target and background [14]. The multiple instance learning (MIL) tracker puts all of the ambiguous positive and negative samples into bags to learn a discriminative model [7]. In another stream of studies, the most discriminative feature combination is learned online to build a confidence map for foreground detection [18]. Combining multiple supervised and semi-supervised classifiers [6] and governing a learning method with positive and negative constraints [11] are other successful discriminative approaches to tracking. Such methods are specifically designed to resolve the label-noise problem, in which the classifier is confused by even the smallest mistakes in the labeling process. 
Since the classifier uses a self-learning loop, such mistakes can accumulate over time and cause the tracker to drift. One solution to this problem is co-tracking [13], in which the self-learning loop is broken and labeling is done collaboratively. Without model-update schemes, trackers accumulate error during run-time (drift) and typically fail if the object disappears temporarily. To address this issue, online appearance-update models have been proposed, e.g., incremental subspace updates and adaptive sparse-representation updates [19]. For tracking-by-detection approaches, this essentially means that the classifier should be re-trained with the relevant samples: after samples are selected and labeled, they are used to update the classifier. The classifier aims to label data, while the tracker attempts to localize the target, and these two objectives sometimes contradict each other (e.g., in the presence of target-like distractors). Increasing the accuracy of the detector and using unlabeled samples is a typical approach to this problem (e.g., [16, 7]), while trackers such as STRUCK [9] couple the two objectives in a joint learning framework. Another instance was presented in [11], where recent samples are added to the classifier only if their classifier label differs from the label of the constrained classifiers that monitor the performance of the tracker. Combining short- and long-term memory to deal with rapidly changing targets, occlusions, and environmental change is another research avenue for model updating [20]; however, the proposed schemes are hard to integrate into general trackers. Active learning techniques build upon such discriminative models and try to improve their convergence speed and generalization power. For instance, a Fisher information criterion evaluates the uncertainty of the classification model in the MIL tracker [7] to perform active feature selection [21]. 
Reducing the number of necessary labeled samples [22], unifying sample learning and feature selection [23], and reducing sampling bias by controlling the variance [24] are some of the improvements that active learning provides for discriminative trackers. An active learner selects the samples it does not know how to label. Uncertainty sampling [25], one of the most popular forms of active learning, queries from the oracle the sample that minimizes a utility function. The utility function can be classification confidence [26], margin [27], or entropy [28]; uncertainty sampling thus optimizes query selection with respect to the utility function. However, since this approach presumes probabilistic learning models, it must be treated differently for non-probabilistic classifiers. In this regard, decision trees [26] and nearest neighbors [29] have been used with uncertainty sampling, with the class label obtained by voting, and for SVMs [30] proximity to the decision boundary serves as the utility function. III Proposed Method III-A Architecture of the System The proposed tracker (Figure 2) consists of two classifiers, $\theta^{(1)}_{t}$ and $\theta^{(2)}_{t}$, which exchange information via an uncertainty-sampling scheme. The samples are drawn with Gaussian probability from a region-of-interest (ROI) defined by optical flow. Then, the samples are labeled by a collaborative effort of the classifiers. Finally, these samples and their labels are employed to update the classifiers. The first classifier ($\theta^{(1)}_{t}$) is a short-memory, highly adaptive exemplar-based classifier that is updated (subject to a memory budget) with the most informative samples in each tracking episode. 
The second classifier, the oracle ($\theta^{(2)}_{t}$), is a long-term-memory tracker that is updated with all the samples at fixed intervals, granting the tracker robustness against occlusions and temporary target changes. Sampling: In each frame $F_{t},t\in\{1,\ldots,T\}$, a set of $n$ random samples $\mathbf{p}^{j}_{t}$ is generated. These samples are selected from a region-of-interest $\mathcal{R}_{t}$ determined by optical flow [31]. To handle still objects, the last known target area is added to the ROI. New samples are selected from the ROI according to a Gaussian distribution centered on the last target position, $\mathcal{N}(\mathbf{p}_{t-1},\Sigma_{search})$. In each frame, an additional $n^{\prime}$ samples are selected uniformly from distant locations of the frame (distance larger than $3\times\Sigma_{search}$) to sample the global background, and are automatically labeled as background. This sampling scheme strengthens foreground-background separability in the short term by exploiting the locality of the target [9]. Furthermore, it enables the tracker to handle in-scene distractors (e.g., moving non-target objects in the ROI) and potential occluders via the global sampling of negative exemplars. Labeling: In the proposed asymmetric co-tracking framework, one classifier attempts to label each sample and queries the label from the other classifier if a certain condition is met. This is in contrast with using a linear combination of both classifiers based on their general classification accuracy, as adopted in [13]. The proposed tracker can decide for each sample based on classifier confidence, i.e., for sample $\mathbf{p}_{t}^{j}$ we define a score $s^{j}_{t}$ $$s^{j}_{t}=h\big{(}\mathbf{x}_{t}^{\mathbf{p}_{t}^{j}}|\theta_{t}^{(1)}\big{)}$$ (1) that reflects the classification score for the image patch $\mathbf{x}_{t}^{\mathbf{p}_{t}^{j}}$, with values closer to +1 indicating possible targets and values closer to -1 indicating background. 
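The sampling step above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, parameter defaults, and the isotropic Gaussian (a scalar stand-in for $\Sigma_{search}$) are assumptions.

```python
import numpy as np

def draw_samples(prev_pos, sigma_search, frame_size, n=200, n_prime=20, rng=None):
    """Sketch of the sampling step: n local candidates from a Gaussian
    around the last target position, plus n' far-away background samples
    (distance > 3 * sigma_search) that are auto-labeled as negatives."""
    rng = np.random.default_rng() if rng is None else rng
    w, h = frame_size

    # Local candidates ~ N(prev_pos, sigma_search^2 * I), clipped to the frame.
    local = rng.normal(prev_pos, sigma_search, size=(n, 2))
    local = np.clip(local, 0, [w - 1, h - 1])

    # Global background samples: uniform over the frame, rejected if
    # closer than 3 * sigma_search to the last target position.
    background = []
    while len(background) < n_prime:
        cand = rng.uniform([0, 0], [w, h])
        if np.linalg.norm(cand - prev_pos) > 3 * sigma_search:
            background.append(cand)
    return local, np.array(background)
```

The rejection loop implements the "distance larger than $3\times\Sigma_{search}$" rule; the background samples would be stored with negative labels without consulting either classifier.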
Based on uncertainty sampling (elaborated in Section III-D), the samples whose classification score is most uncertain (i.e., $s^{j}_{t}\rightarrow 0$) contain more information for the classifier if they are labeled by the other classifier (i.e., the oracle). Therefore, the scores of all samples are sorted, and the $m$ samples with values closest to 0 are selected to be queried from $\theta_{t}^{(2)}$. To handle situations in which the number of highly uncertain samples exceeds $m$, a range of scores is defined by lower and upper thresholds ($\tau_{l}$ and $\tau_{u}$), and all samples in this range are considered highly uncertain: $$\mathcal{U}_{t}=\{\mathbf{p}^{i}_{t}\,|\,\tau_{l}<s_{t}^{i}<\tau_{u}\;\mathrm{or}\;|\{j\neq i\,|\,|s^{j}_{t}|\leq|s^{i}_{t}|\}|<m\}$$ (2) in which $\mathcal{U}_{t}$ is the set of uncertain samples. The labels of the samples $\ell^{j}_{t}\in\mathcal{L}_{t}$ are then determined by $$\displaystyle\ell^{j}_{t}$$ $$\displaystyle=\begin{cases}sign\Big{(}h\big{(}\mathbf{x}_{t}^{\mathbf{p}_{t}^{j}}|\theta_{t}^{(2)}\big{)}\Big{)}&,\mathbf{p}_{t}^{j}\in\mathcal{U}_{t}\\ sign\Big{(}h\big{(}\mathbf{x}_{t}^{\mathbf{p}_{t}^{j}}|\theta_{t}^{(1)}\big{)}\Big{)}&,\mathbf{p}_{t}^{j}\notin\mathcal{U}_{t}\\ \end{cases}$$ (3) and all image patches $\mathbf{x}_{t}^{\mathbf{p}_{t}^{j}}$ and labels $\ell^{j}_{t}$ are stored in $\mathcal{D}_{t}$. Localizing: To determine the state of the target $\hat{\mathbf{p}}_{t}$, we follow the importance sampling mechanism originally employed by particle-filter trackers, $$\hat{\mathbf{p}}_{t}=\frac{\sum_{j=1}^{n}\pi_{t}^{j}\mathbf{p}_{t}^{j}}{\sum_{j=1}^{n}\pi_{t}^{j}},$$ (4) where $\pi^{j}_{t}=s^{j}_{t}\mathds{1}(\ell^{j}_{t}>0)$ and $\mathds{1}(.)$ is the indicator function, 1 if true and zero otherwise. 
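The labeling and localization steps can be sketched together. In this hedged illustration, `s1` and `s2` stand for the two classifiers' scores on the same candidate patches; the threshold values and the use of the first classifier's score as the importance weight $\pi$ are assumptions consistent with Eq. (1).

```python
import numpy as np

def label_and_localize(positions, s1, s2, m=10, tau_l=-0.2, tau_u=0.2):
    """Sketch of the uncertainty set, labeling, and importance-sampling
    localization: samples with scores near 0 (the m closest, plus anything
    in (tau_l, tau_u)) take the oracle's label; the rest take the sign of
    the first classifier's score; the target is the weighted mean of
    positive samples."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)

    # Uncertain set: m scores closest to 0, plus everything in (tau_l, tau_u).
    order = np.argsort(np.abs(s1))
    uncertain = np.zeros(len(s1), dtype=bool)
    uncertain[order[:m]] = True
    uncertain |= (s1 > tau_l) & (s1 < tau_u)

    # Oracle labels the uncertain samples, classifier 1 labels the rest.
    labels = np.where(uncertain, np.sign(s2), np.sign(s1))

    # Weighted mean of positive samples, weights pi = s * 1[label > 0].
    pi = np.where(labels > 0, s1, 0.0)
    if pi.sum() <= 0:
        return None, labels          # degenerate case (possible occlusion)
    est = (pi[:, None] * np.asarray(positions)).sum(axis=0) / pi.sum()
    return est, labels
```

Returning `None` when no positive weight remains mirrors the occlusion handling described in the next paragraph of the paper.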
This mechanism approximates the state of the target based on the positive samples, with higher-scoring samples pulling the final result more toward themselves. Upon events such as massive occlusion or target loss, this sampling mechanism degenerates [19]: the number of positive samples and their corresponding weights shrink significantly, and the importance sampling becomes prone to outliers, distractors, and occluded patches. To address this issue, two thresholds ($\tau_{p}$ and $\tau_{a}$) are set on the number and the average score of positive samples; if either threshold is not exceeded, the target is deemed occluded, avoiding tracker degeneracy. Updating: In the proposed tracker, the first classifier $\theta_{t}^{(1)}$ is updated using the samples it queried from $\theta_{t}^{(2)}$, i.e., those it was uncertain about and had the oracle label ($\mathcal{U}_{t}$). This uncertainty is due either to the model shortcomings of the classifier (e.g., a simple observation model) or to the intrinsic ambiguity of the sample. Using a sophisticated classifier as the oracle alleviates these issues, and by providing the label back to the first classifier, it scaffolds it for better classification in similar circumstances, potentially improving the speed of future sample evaluation as well as generalization: $$\theta_{t+1}^{(1)}=\psi(\theta_{t}^{(1)},\mathcal{U}_{t},\mathcal{L}_{t})$$ (5) in which $\psi(.)$ is the update function (see Sec. III-C). On the other hand, to realize a dual-memory scheme that handles temporary target changes and occlusions, the oracle is equipped with a non-volatile memory and is updated less frequently (every $\Delta$ frames) with all the data sampled during this period, $$\displaystyle\theta^{(2)}_{t+1}=\begin{cases}u(\theta^{(2)}_{t},\mathcal{D}_{t-\Delta,..,t})&,\mathrm{if}\;t=k\Delta\\ \theta^{(2)}_{t}&,\mathrm{if}\;t\neq k\Delta\end{cases}$$ (6) in which $u(.)$ is a classifier re-training function. 
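The dual-rate update schedule can be sketched as follows. The `psi` and `retrain` callables stand in for the paper's update functions $\psi(.)$ and $u(.)$, and the buffer semantics are an assumption; only the scheduling (every frame for the fast classifier, every $\Delta$ frames for the oracle) follows the text.

```python
def update_step(t, theta1, theta2, uncertain_set, labels, buffer,
                delta=10, psi=None, retrain=None):
    """Sketch of the dual-memory update: theta1 is refreshed each frame
    with the oracle-labeled uncertain samples; theta2 is retrained only
    every `delta` frames from all samples buffered since its last update."""
    theta1 = psi(theta1, uncertain_set, labels)     # fast, per-frame update
    buffer.extend(zip(uncertain_set, labels))       # accumulate D_{t-Delta..t}
    if t % delta == 0:                              # slow, periodic retrain
        theta2 = retrain(theta2, buffer)
        buffer = []
    return theta1, theta2, buffer
```

Emptying the buffer after each oracle retrain keeps its memory window at $\Delta$ frames, matching the "non-volatile memory, updated less frequently" description.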
In summary, the first detector rapidly adapts to the target in order to estimate the target location considering its recent changes. The second detector, however, aims for the best object-detection performance, has a long-term memory, and is robust to noise and temporary occlusions. The co-training of these two detectors balances the desired levels of speed and accuracy for the tracker. This differs from the dual-memory scheme of MUSTer [20], in which the short-term memory is based on the information obtained from the current frame, and the long-term memory has an exponential forgetting curve and is updated only when no occlusion is detected. III-B Realization In this study, the short-term-memory classifier is implemented as a k-nearest-neighbor classifier in which all samples have a short lifetime, realizing the budgeting mechanism. A histogram of colors and a bag of visual words (of SIFT) form the feature vector of every patch $\mathbf{x}^{\mathbf{p}^{j}_{t}}_{t}$, and its dimensionality is reduced to 20 using PCA. The KD-tree-based KNN classifier, its budgeted memory, the lazy update behavior of KNNs, and the reduced dimensionality of the feature space render the KNN suitable for real-time tracking. The oracle in this study is a part-based detector [32]. The features, the part-based detector dictionary, and the parameters ($k$ of KNN; thresholds $\tau_{l},\tau_{u},\tau_{a},\tau_{p}$; numbers of samples $n,n^{\prime}$; search radius $\Sigma_{search}$; and update latency $\Delta$) are trained/tuned via cross-validation. III-C Budgeting Online learning of discriminative trackers has its own challenges. The sample set of most adaptive classifiers grows constantly, making them slower over time. For a KNN classifier, even with a robust KD-tree architecture, the computation cost increases rapidly over time. 
This is similar to the curse of kernelization, in which the number of support vectors in kernel classifiers grows with the amount of training data. To allow real-time operation, the number of support vectors must be controlled. Approaches have recently been proposed for online learning of classification SVMs on a fixed budget, meaning that the number of support vectors is constrained to remain within a specified limit, as employed in [9]. Reducing the dataset for KNN classification has been studied in the literature (e.g., the condensed nearest neighbor [33]), yet such methods are not suitable for tracking, in which the distribution of target and background is non-stationary and samples must be kept or removed based on the temporal properties of the tracking task. We propose a simple accounting method for a sample $\mathbf{x}$ with nearest neighborhood $\eta(\mathbf{x})$ as KNN budgeting rules: 1. Discard a new sample $\mathbf{x}$ for which all of $\eta(\mathbf{x})$ have the same label (absorbed); 2. Attach a timer $\alpha$ to each new $\mathbf{x}$ that counts down upon processing each new frame; if the timer goes off ($\alpha\rightarrow 0$), the sample is flagged; 3. Mark a sample whose neighbors all have the opposite label as an outlier; 4. For each added sample $\mathbf{x}$, increment the timer of all members of $\eta(\mathbf{x})$ whose labels differ from that of the new sample; 5. For each flagged sample, if all of $\eta(\mathbf{x})$ have the same label as $\mathbf{x}$, it is discarded (absorbed); if $\mathbf{x}$ is an outlier and none of $\eta(\mathbf{x})$ has the same label, it is also discarded. The remaining flagged samples are called prototypes. This scheme tends to discard the most futile samples from the sample pool while preserving recent or essential ones. Figure 3 depicts the sample 2D feature space of a KNN classifier and demonstrates how this budgeting mechanism preserves the classification power while reducing the number of samples. 
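The budgeting rules above can be sketched as two routines: one run when a sample is added, one run once per frame. This is an illustrative reading of the rules, not the paper's code; the initial timer value, the Euclidean distance, and the list-based storage are assumptions.

```python
import numpy as np

def budget_update(samples, new_sample, k=5):
    """Rules 1, 2 and 4: each stored sample is [features, label, timer].
    A new sample is discarded if all k nearest neighbors share its label
    (absorbed); otherwise it is stored with a fresh timer, and disagreeing
    neighbors get extra time on their timers."""
    feats, label = new_sample
    if samples:
        X = np.array([s[0] for s in samples])
        nn = np.argsort(np.linalg.norm(X - feats, axis=1))[:k]
        if all(samples[i][1] == label for i in nn):
            return samples                        # rule 1: absorbed, discard
        for i in nn:                              # rule 4: reward neighbors
            if samples[i][1] != label:            # that disagree
                samples[i][2] += 1
    samples.append([feats, label, 10])            # rule 2: attach a timer
    return samples

def tick_and_prune(samples, k=5):
    """Rules 2, 3 and 5: count timers down each frame; an expired sample
    whose whole neighborhood agrees (absorbed) or fully disagrees (outlier)
    is dropped; the rest survive as prototypes."""
    for s in samples:
        s[2] -= 1
    keep = []
    for idx, s in enumerate(samples):
        if s[2] > 0:
            keep.append(s)
            continue
        others = [o for j, o in enumerate(samples) if j != idx]
        X = np.array([o[0] for o in others])
        nn = np.argsort(np.linalg.norm(X - s[0], axis=1))[:k]
        agree = sum(others[i][1] == s[1] for i in nn)
        if agree == len(nn) or agree == 0:
            continue                              # absorbed or outlier: drop
        keep.append(s)                            # prototype: keep
    return keep
```

Rules 1 and 5 together bound the pool: redundant samples never enter, and expired samples survive only if their neighborhood is genuinely mixed.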
This budgeting scheme also serves as a forgetting mechanism for the KNN classifier. If a distractor is very similar to the target in the feature space, the samples obtained from it will be labeled positive and re-added to the KNN classifier, reinforcing the classifier's false belief about the label of such samples. Such cases act as local minima in the feature space of the KNN classifier and can only be resolved if the oracle investigates and disproves them. To rescue the KNN classifier from these local minima, a forgetting mechanism such as the proposed budgeting scheme is required. This concept is illustrated in Figure 4. III-D Uncertainty Sampling When using a probabilistic model for binary classification (target/non-target in tracking-by-detection), uncertainty sampling simply queries the instance whose posterior probability of being the target is closest to 0.5 [25]. Uncertainty-sampling strategies may also be employed with non-probabilistic algorithms such as memory-based classifiers [29]. Inspired by these studies, we calculate the score of a sample as the KNN classifier's confidence: the $k$ nearest neighbors vote on the class label of $\mathbf{p}_{t}^{i}$, and the sum of these votes represents the score. As mentioned earlier, UST queries the $m$ most uncertain samples (those with scores closest to 0) from the oracle. To handle a large number of uncertain samples, we query from the oracle all samples in the range $(\tau_{l},\tau_{u})$. This backup mechanism, along with the forgetting mechanism realized by budgeting, helps the KNN detector escape from local minima induced by objects similar in feature space and by partial occlusions (Figure 4). IV Evaluation This section reports a set of quantitative experiments comparing UST with relevant algorithms. To evaluate the performance of the proposed tracker, experiments are conducted on 100 challenging video sequences from [1]. 
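The KNN vote score described above can be sketched as follows; the Euclidean distance and the mean (rather than sum) normalization to $[-1, 1]$ are assumptions for illustration.

```python
import numpy as np

def knn_confidence(query, X, y, k=10):
    """Sketch of the KNN confidence score used for uncertainty sampling:
    the k nearest stored samples vote with their labels (+1 target,
    -1 background); the mean vote in [-1, 1] is the score. Scores near 0
    flag the sample as uncertain and worth querying from the oracle."""
    d = np.linalg.norm(X - query, axis=1)
    nn = np.argsort(d)[:k]
    return float(np.mean(y[nn]))
```

A query surrounded by agreeing neighbors scores near +1 or -1; a query on the target/background boundary averages to near 0 and would be routed to the oracle.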
These sequences include many of the visual tracking challenges, such as scale variation, fast motion and motion blur, illumination variations, in-plane and out-of-plane rotations, low resolution, shear, background clutter, and various types of occlusion. The performance of a tracker is measured by the area under its success plot. A tracker at time $t$ succeeds in tracking the object if its response $\hat{\mathbf{p}}_{t}$ overlaps with the ground truth $\mathbf{p}_{t}^{*}$ by more than a threshold $\tau_{o}$. The success plot graphs the success rate of the tracker against different values of the threshold $\tau_{o}$, and its AUC is $$AUC=\frac{1}{T}\int_{0}^{1}\sum_{t=1}^{T}\mathds{1}\left[\frac{|\hat{\mathbf{p}}_{t}\cap{\mathbf{p}_{t}^{*}}|}{|\hat{\mathbf{p}}_{t}\cup{\mathbf{p}_{t}^{*}}|}>\tau_{o}\right]d\tau_{o}$$ (7) where $T$ is the length of the sequence, $|.|$ denotes the area of a region, and $\cap$ and $\cup$ stand for the intersection and union of regions, respectively. Since UST has non-deterministic sampling components, we run it 5 times and report the average of the results. IV-A Comparison with Baseline This experiment demonstrates the advantages of the proposed tracker over trackers that consist of either of its detectors in isolation. To this end, we construct several trackers from the components of UST to serve as baselines. In all of these trackers, the ROI detection and input sampling are identical to those of UST. The KNN(10) and KNN(25) trackers use only the feature-based nearest-neighbor detector, with neighborhood sizes $k=10$ and $k=25$, respectively. The KNN+(10) and KNN+(25) trackers additionally incorporate the proposed budgeting mechanism. The SVM tracker employs only the oracle, whereas SVM+ also includes the classifier update in its framework. 
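The overlap measure and the success-plot AUC of Eq. (7) can be sketched as follows; the axis-aligned `(x, y, w, h)` box format, the threshold grid, and approximating the integral by a mean over that grid are illustrative assumptions.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter)

def success_auc(iou_per_frame, n_thresholds=101):
    """Sketch of Eq. (7): the success plot is the fraction of frames whose
    IoU with ground truth exceeds a threshold tau_o; the AUC is approximated
    by averaging that curve over a grid of tau_o in [0, 1]."""
    ious = np.asarray(iou_per_frame, dtype=float)
    taus = np.linspace(0.0, 1.0, n_thresholds)
    success = np.array([(ious > tau).mean() for tau in taus])
    return float(success.mean())
```

Given a list of per-frame IoU values for a sequence, `success_auc` reproduces the single-number score used to rank trackers in Figure 6.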
Figure 5 compares the performance of these baseline trackers with that of UST and demonstrates that the uncertainty-sampling data exchange effectively connects the classifiers into a robust and efficient tracker. IV-B Comparison with state-of-the-art To establish a fair comparison with the state-of-the-art, we select some of the most popular discriminative trackers based on [1] and benchmark them on all videos of the dataset, along with subsets of the dataset sharing a distinguishing attribute, to evaluate tracker performance under different situations. These trackers are BSBT [6], CSK [12], CT [3], CXT [10], DFT [34], FOT [35], FRAG [2], LOT [4], LSHT [36], LSK [37], MIL [7], SBT [8], STRUCK [9], TLD [11], and VR [18]. Figure 6 depicts the performance of all of the investigated trackers. As is evident from this plot, UST outperforms the other trackers with the highest AUC. Interestingly, UST also outperforms these trackers under illumination, scale, and shape changes, showing the resilience of the double appearance model used by the two detectors (Table I). Rotations, shear, and fast motion are well addressed by the proposed tracker, and only STRUCK handled motion blur as well as UST. However, background clutter and low-resolution targets challenged UST. Given that UST is not equipped with specific means to handle low resolution, its results seem acceptable. In the case of background clutter, the context information in CXT and the local sparsity in LSK outperform our proposed tracker, shedding some light on future directions of research. Some examples of tracking results are depicted in Figure 7. 
Finally, it is prudent to note that UST achieved an average speed of 28.3 fps on a Core i7 @ 3.2 GHz with a Matlab/C++ implementation. This experiment demonstrated that with adequate information exchange in co-tracking, it is possible to strike a good trade-off between speed and accuracy while the tracker performs properly under various tracking challenges. V Conclusion We presented UST, whose key component is a co-tracking framework consisting of a frequently updated (KNN-based) classifier and a more conservative (part-based) detector. Built on an uncertainty-sampling foundation, samples deemed uncertain by the KNN classifier are labeled by the part-based detector (the oracle). A memory budgeting mechanism keeps classifier updates tractable. This accounting method, along with optical-flow-based ROI detection, ensures that the proposed tracker, UST, meets the real-time criteria for tracking. Experimental results on challenging video sequences demonstrated that the UST tracker achieves accuracy comparable to state-of-the-art trackers while outperforming them in efficiency and robustness. References [1] Y. Wu, J. Lim, and M.-H. Yang, “Object tracking benchmark,” PAMI, vol. 37, no. 9, pp. 1834–1848, 2015. [2] A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in CVPR’06, 2006. [3] K. Zhang, L. Zhang, and M.-H. Yang, “Real-time compressive tracking,” in ECCV’12. Springer, 2012, pp. 864–877. [4] S. Oron, A. Bar-Hillel, D. Levi, and S. Avidan, “Locally orderless tracking,” IJCV, vol. 111, no. 2, pp. 213–228, 2015. [5] A. Taalimi, H. Qi, and R. Khorsandi, “Online multi-modal task-driven dictionary learning and robust joint sparse representation for visual tracking,” in AVSS’15. IEEE, 2015, pp. 1–6. [6] S. Stalder, H. Grabner, and L. Van Gool, “Beyond semi-supervised tracking: Tracking should be as simple as detection, but not simpler than recognition,” in ICCVw’09. IEEE, 2009, pp. 1409–1416. [7] B. 
Babenko, M.-H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in CVPR’09, 2009. [8] H. Grabner, J. Matas, L. Van Gool, and P. Cattin, “Tracking the invisible: Learning where the object might be,” in CVPR’10, 2010. [9] S. Hare, A. Saffari, and P. H. Torr, “Struck: Structured output tracking with kernels,” in ICCV’11.   IEEE, 2011, pp. 263–270. [10] T. B. Dinh, N. Vo, and G. Medioni, “Context tracker: Exploring supporters and distracters in unconstrained environments,” in CVPR’11. [11] Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-learning-detection,” PAMI, vol. 34, no. 7, pp. 1409–1422, 2012. [12] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” in ECCV’12.   Springer, 2012, pp. 702–715. [13] F. Tang, S. Brennan, Q. Zhao, and H. Tao, “Co-tracking using semi-supervised support vector machines,” in ICCV’07, 2007. [14] H. Grabner, M. Grabner, and H. Bischof, “Real-time tracking via on-line boosting.” in BMVC’06, vol. 1, no. 5, 2006, p. 6. [15] S. Avidan, “Support vector tracking,” PAMI, 2004. [16] H. Grabner, C. Leistner, and H. Bischof, “Semi-supervised on-line boosting for robust tracking,” in ECCV’08, 2008. [17] S. Avidan, “Ensemble tracking,” PAMI, 2007. [18] R. T. Collins, Y. Liu, and M. Leordeanu, “Online selection of discriminative tracking features,” PAMI. [19] C. Bao, Y. Wu, H. Ling, and H. Ji, “Real time robust l1 tracker using accelerated proximal gradient approach,” in CVPR’12, 2012. [20] Z. Hong, Z. Chen, C. Wang, X. Mei, D. Prokhorov, and D. Tao, “Multi-store tracker (muster): A cognitive psychology inspired approach to object tracking,” in CVPR’15, 2015. [21] K. Zhang, L. Zhang, M.-H. Yang, and Q. Hu, “Robust object tracking via active feature selection,” CSVT, 2013. [22] C. H. Lampert and J. Peters, “Active structured learning for high-speed object detection,” in PR.   Springer, 2009, pp. 221–231. [23] C. Li, X. Wang, W. Dong, J. 
Yan, Q. Liu, and H. Zha, “Active sample learning and feature selection: A unified approach,” 2015. [24] A. Beygelzimer, S. Dasgupta, and J. Langford, “Importance weighted active learning,” in ICML’09, 2009. [25] D. D. Lewis and W. A. Gale, “A sequential algorithm for training text classifiers,” in ACM SIGIR’94, 1994, pp. 3–12. [26] D. D. Lewis and J. Catlett, “Heterogeneous uncertainty sampling for supervised learning,” in ICML’94, 1994, pp. 148–156. [27] T. Scheffer, C. Decomain, and S. Wrobel, “Active hidden markov models for information extraction,” in ISIDA’01, 2001. [28] B. Settles and M. Craven, “An analysis of active learning strategies for sequence labeling tasks,” in EMNLP’08, 2008. [29] M. Lindenbaum, S. Markovitch, and D. Rusakov, “Selective sampling for nearest neighbor classifiers,” JML, 2004. [30] S. Tong and D. Koller, “Support vector machine active learning with applications to text classification,” JMLR, vol. 2, pp. 45–66, 2002. [31] T. Brox, C. Bregler, and J. Malik, “Large displacement optical flow,” in CVPR’09.   IEEE, 2009, pp. 41–48. [32] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” PAMI, vol. 32, no. 9, pp. 1627–1645, 2010. [33] F. Angiulli, “Fast condensed nearest neighbor rule,” in ICML’05. [34] L. Sevilla-Lara and E. Learned-Miller, “Distribution fields for tracking,” in CVPR’12.   IEEE, 2012, pp. 1910–1917. [35] J. Matas and T. Vojir, “Robustifying the flock of trackers,” in CVWW’11.   Citeseer, 2011, p. 91. [36] S. He, Q. Yang, R. Lau, J. Wang, and M.-H. Yang, “Visual tracking via locality sensitive histograms,” in CVPR’13, 2013. [37] B. Liu, J. Huang, L. Yang, and C. Kulikowsk, “Robust tracking using local sparse appearance model and k-selection,” in CVPR’11.
Molecular dynamics simulations of shear-induced thermophoresis and non-Newtonian flow in compressible fluids Madhu Priya${}^{*}$ [email protected] Department of Physics and Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat Gan 52900, Israel    Yitzhak Rabin [email protected] Department of Physics and Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat Gan 52900, Israel NYU-ECNU Institutes of Physics and Mathematical Sciences at NYU Shanghai, 3663 Zhongshan Road North, Shanghai, 200062, China (November 26, 2020) Abstract We use molecular dynamics simulations to study the behavior of a compressible Lennard-Jones fluid in simple shear flow in a two-dimensional nanochannel. The system is equilibrated in the fluid phase close to the triple point, at which gas, liquid and solid phases coexist, and is subjected to steady shear in Couette geometry. We observe that at higher shear rates the system develops a density gradient perpendicular to the direction of flow and exhibits solid-like layering near the boundaries. Both the number of solid-like layers and the number of layers that move with the velocity of the neighboring wall increase with the shear rate. We argue that the inhomogeneous density profile develops as a consequence of thermophoresis due to the non-uniform temperature profile produced by shear-induced viscous heating in the simulated flow cell. These phenomena are accompanied by non-Newtonian effects such as nonlinear velocity profiles, inhomogeneous stress distributions, and a shear-rate-dependent viscosity that exhibits shear thinning followed by shear thickening as the shear rate is increased. The connection between these phenomena is discussed. 
I Introduction Molecular dynamics (MD) is an important tool for investigating the properties of a fluid under flow and is often used to explore systems under conditions that are difficult to achieve and control in experiments, e.g., flows at high shear rates in nanochannels Jabbarzadeh2000 . The boundary conditions (BC’s) for such computer experiments are very important, especially for systems of nanoscale dimensions, in which the properties of the wall-fluid interface have a significant effect on the flow. Some of the earlier MD simulations of Couette flow considered smooth walls and reported wall slip Trozzi1984 . However, a more recent study showed that for wetting liquids, slip arises only at very high shear rates, at which the response of the fluid to the applied shear is no longer linear Barrat1994 . Some authors introduced no-slip BC’s explicitly, i.e., they assumed that the fluid layer next to the wall moves with the velocity of the wall, but subsequent studies have shown that imposing such BC’s is incorrect, since whether the fluid particles slip at the wall or not depends on the strength of the wall-fluid interaction Thompson1990 ; Thompson1997 . Along with implementing the correct BC’s, it is important to control the temperature in the system by choosing a thermostat that closely mimics experiment and is computationally efficient. Recent computer simulation studies showed that the choice of thermostat has major effects on fluid flow at high shear rates in confined channels Bernardi2010 ; Yong2013a . The authors considered several scenarios: thermostating (1) the wall (TW), (2) the fluid (TF), and (3) both the wall and the fluid (TWTF). In TW the walls are made up of particles that are tethered by springs to their equilibrium lattice positions. TW simulations can reproduce the temperature profiles of actual experiments in which the extra heat due to shear is dissipated through the walls. 
However, since the wall particles oscillate around their mean positions on some characteristic time scale, the effective roughness of the walls depends on the applied shear rate, and therefore TW simulations fail to describe systems in which wall roughness does not depend on the flow rate. In order to maintain constant wall roughness, a possible choice is to use TF. This can be done either by assuming a linear velocity profile (profile-biased thermostat), or by measuring the actual velocity profile obtained in the simulation and implementing the thermostat using this profile (profile-unbiased thermostat). A profile-biased thermostat is limited in its accuracy, since in many cases the linear-velocity-profile assumption breaks down at high shear rates. While this problem can be solved by using a profile-unbiased thermostat, TF thermostats cannot reproduce experimental conditions in which only the walls are thermostated. Finally, TWTF simulations are computationally expensive and tend to distort the effects of viscous heating on fluid dynamics, which become increasingly important at higher shear rates. In the present work, we use MD simulations to study the effect of simple shear on a fluid of monodisperse particles that interact with each other via a Lennard-Jones (LJ) potential and are confined in a two-dimensional (2D) nanochannel. In order to amplify the effects of shear on the temperature and density profiles in the nanochannel, we study this system in the region of the phase diagram where the compressibility is large, i.e., at the triple-point density and at a temperature slightly higher than that of the liquid-solid transition. Steady shear is applied by moving the upper wall with constant velocity while the lower wall remains at rest throughout the simulation. The walls are made of particles that are fixed with respect to each other, and therefore the roughness of the walls remains constant during the simulation. 
We thermostat two layers of fluid particles next to each wall Rapaport1995 ; RapaportDis , and therefore allow temperature gradients to develop between the walls and the bulk of the fluid at high shear rates where shear-induced heating becomes important. The paper is organized as follows. We provide details of our simulation setup in Sec. II. In Sec. III we present the calculated density, flow and temperature profiles, discuss the connection between these results and thermophoresis in temperature gradients, and analyze the various non-Newtonian characteristics of flow at high shear rates. We conclude the paper by summarizing and discussing our results in Sec. IV.

II Simulation details

The fluid particles interact with each other via the LJ potential, $$U_{LJ}(r)=4\epsilon\Big{[}\Big{(}\frac{\sigma}{r}\Big{)}^{12}-\Big{(}\frac{\sigma}{r}\Big{)}^{6}\Big{]},$$ (1) which is truncated and shifted at $r=r_{\rm cut}=2.5\sigma$, so that the truncated potential $\bar{U}_{LJ}(r)$ is defined as $$\bar{U}_{LJ}(r)=\begin{cases}U_{LJ}(r)-U_{LJ}(r_{\rm cut})&\quad{\text{if}}~{}r<r_{\rm cut}\\ 0&\quad{\text{if}}~{}r\geq r_{\rm cut}.\\ \end{cases}$$ We use reduced LJ units in which the interaction parameter $\epsilon$, the mass $m$ and the length scale $\sigma$ are taken to be unity (the Boltzmann constant is taken as unity as well). The simulations are performed in the $NVT$ ensemble. The dynamics is solved by using a velocity-Verlet integrator with a time step of $\delta t=0.005\tau_{LJ}$, where $\tau_{LJ}=\sigma(m/\epsilon)^{1/2}=1$ is the LJ time unit. Most of the results were obtained at the triple-point density (0.694), which corresponds to $N=1600$ particles in an area of $48\sigma\times 48\sigma$ (other densities were obtained by changing the number of particles at fixed volume of the system). The nanochannel is constructed by placing two parallel solid walls at $y=0$ and $y=L_{y}$, where $L_{y}=48\sigma$ is the distance between the walls.
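As a concrete illustration, the truncated-and-shifted potential of Eq. (1) can be sketched in Python (a hedged sketch in reduced units; the function name and vectorized form are ours, not from the paper):

```python
import numpy as np

def lj_truncated_shifted(r, eps=1.0, sigma=1.0, r_cut=2.5):
    """Truncated-and-shifted LJ potential of Eq. (1), reduced units."""
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    u = 4.0 * eps * (sr6 ** 2 - sr6)            # bare LJ potential
    sr6c = (sigma / r_cut) ** 6
    u_cut = 4.0 * eps * (sr6c ** 2 - sr6c)      # potential value at the cutoff
    return np.where(r < r_cut, u - u_cut, 0.0)  # shift inside r_cut, zero beyond
```

Because of the shift, the well depth at the minimum $r=2^{1/6}\sigma$ is slightly shallower than $\epsilon$, and the potential goes continuously to zero at $r_{\rm cut}$.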
Each of the walls is made of $43$ particles with centers located at $y=0$ and $y=48\sigma$, respectively, such that the horizontal separation between the wall particles is $1.12\sigma$, which corresponds to the minimum of the LJ potential. The wall-fluid particle interactions are the same as between fluid particles. Periodic boundary conditions with period $L_{x}=48\sigma$ along the $x$ direction are imposed. We start the simulation from a configuration in which the fluid particles are placed on a square lattice between the two solid walls. The initial velocity of the $i$th fluid particle is chosen from a Maxwell-Boltzmann distribution at temperature $T$, to which we add a velocity given by the product of the $y$-position of the particle and the shear rate, where the latter is defined by the constant velocity of the upper wall $U\hat{x}$ as $\dot{\gamma}=U/L_{y}$. The distance between the walls is kept fixed during the simulation. The fluid particles obey the following equations of motion (with $m=1$), $$\displaystyle{\textbf{v}}_{i}\Big{(}t+\frac{\Delta t}{2}\Big{)}={\textbf{v}}_{i}(t)+\frac{\Delta t}{2}{\textbf{F}}_{i}(t),$$ (2) $$\displaystyle{\textbf{r}}_{i}(t+\Delta t)={\textbf{r}}_{i}(t)+{\textbf{v}}_{i}\Big{(}t+\frac{\Delta t}{2}\Big{)}\Delta t,$$ (3) $$\displaystyle{\textbf{v}}_{i}\Big{(}t+\Delta t\Big{)}={\textbf{v}}_{i}\Big{(}t+\frac{\Delta t}{2}\Big{)}+\frac{\Delta t}{2}\textbf{F}_{i}(t+\Delta t).$$ (4) In the above set of equations ${\textbf{v}}_{i}(t)$ and ${\textbf{F}}_{i}(t)$ represent the instantaneous velocity of particle $i$ and the instantaneous force acting on it, respectively. In order to define a local instantaneous temperature $T(y,t)$, we divide the system into $40$ bins along the $y$-axis such that each bin contains approximately a single layer of particles.
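The update of Eqs. (2)-(4) is the standard velocity-Verlet scheme; a minimal sketch (with $m=1$ and a caller-supplied force routine, both our assumptions about the interface, not the paper's code) is:

```python
import numpy as np

def velocity_verlet_step(r, v, f, force_fn, dt):
    """One velocity-Verlet update, Eqs. (2)-(4), with m = 1 in LJ units."""
    v_half = v + 0.5 * dt * f          # Eq. (2): half-kick with current forces
    r_new = r + dt * v_half            # Eq. (3): drift with half-step velocity
    f_new = force_fn(r_new)            # forces at the updated positions
    v_new = v_half + 0.5 * dt * f_new  # Eq. (4): second half-kick
    return r_new, v_new, f_new
```

A quick sanity check is a harmonic oscillator ($F=-r$), for which the scheme conserves energy to second order in the time step.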
The average velocity of particles in bin $\alpha$ is defined as $${\textbf{u}}_{\alpha}(t)=\frac{1}{N_{\alpha}}\sum_{i=1}^{N_{\alpha}}{\textbf{v}}_{i}(t),$$ (5) where we sum over the instantaneous velocities of the $N_{\alpha}$ particles in this bin. We define the peculiar velocity of particle $i$ in this bin as Loose1992 $${\textbf{v}_{i}^{p}}(t)={\textbf{v}}_{i}(t)-{\textbf{u}}_{\alpha}(t).$$ (6) Note that the peculiar velocity is defined in the rest frame of the average flow in the bin and can therefore be used to define the local temperature in the $\alpha$th bin as $$T_{\alpha}(t)=\frac{\sum_{i=1}^{N_{\alpha}}[{\textbf{v}}_{i}^{p}(t)]^{2}}{2(N_{\alpha}-1)},$$ (7) where, in the denominator, we subtracted $2$ from the number of degrees of freedom ($2N_{\alpha}$) in the $\alpha$th bin because two degrees of freedom are already absorbed in the definition of the local streaming velocity, Eq. (5). A constant temperature ($T$) is maintained near the walls by thermostating the two liquid layers adjacent to each wall ($\alpha=1,2$ and $\alpha=39,40$, respectively) using velocity rescaling. At each time step the velocities of the particles in the above four bins are rescaled by a factor $\lambda$ defined in terms of the instantaneous temperature $T_{\alpha}(t)$ and the fixed temperature $T$ as $\lambda=\sqrt{T/T_{\alpha}(t)}$. The above procedure allows us to maintain simultaneously constant roughness of the walls and a physically realistic temperature distribution inside the simulation box.

III Results

Typical snapshots of the system in equilibrium ($\dot{\gamma}=0$) and under strong shear ($\dot{\gamma}=0.3$) are shown in Figs. 1(a) and 1(b), respectively. Even though strong density fluctuations are observed in both figures, inspection of Fig. 1(a) shows that (with the exception of $1-2$ fluid layers near the walls where some ordering is visible) the average density is uniform across the system in equilibrium.
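The local-temperature and rescaling steps of Eqs. (5)-(7) above can be sketched as follows. This is a sketch under our assumptions: we rescale only the peculiar part of each velocity (rescaling the full velocities would also alter the streaming profile, which the paper does not intend):

```python
import numpy as np

def bin_temperature(vel):
    """Local temperature of one bin, Eqs. (5)-(7); vel is (N_alpha, 2)."""
    u = vel.mean(axis=0)            # streaming velocity, Eq. (5)
    vp = vel - u                    # peculiar velocities, Eq. (6)
    n = len(vel)
    return (vp ** 2).sum() / (2.0 * (n - 1))   # Eq. (7), k_B = m = 1

def rescale_bin(vel, T_target):
    """Velocity-rescaling thermostat: scale peculiar velocities by
    lambda = sqrt(T_target / T_alpha), leaving the streaming part intact."""
    u = vel.mean(axis=0)
    lam = np.sqrt(T_target / bin_temperature(vel))
    return u + lam * (vel - u)
```

After a single rescaling the bin temperature equals the target exactly, which is why this simple thermostat pins the two boundary layers at $T$.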
This is not the case in the high shear limit where the steady state density is minimal at the center of the system ($y=L/2$) and strongly increases towards the walls, Fig. 1(b). In order to quantify the effect of steady shear on the density profile we divide the system into $400$ bins and average the density in each bin over $x$ and over time. The resulting equilibrium and steady state profiles $\langle\rho(y)\rangle$ are plotted in Fig. 2. While density oscillations are clearly observed in both cases, the amplitudes of the peaks increase and their width and the separation between them decrease with shear rate, a signature of shear-induced solid-like layering near the walls. The ratio of the amplitudes of the corresponding high shear rate and equilibrium peaks increases with distance from the walls, in agreement with the observation of shear-induced broadening of the solid-like boundary layers in Figs. 1(a) and  1(b). Note that the enhancement and broadening of solid-like layering is accompanied by reduction of the bulk density $\rho_{b}$ in the center of the channel, to a lower value ($0.63$) than the average density of the system ($0.694$). Having established the effect of shear on the density profile we turn to examine its effect on the flow by measuring the y-dependence of the average velocity $\langle v_{x}(y)\rangle$ for a range of applied shear rates (the average velocity of the particles in the $y$-direction vanishes, as expected on symmetry grounds). To this end we divide the system into $100$ bins along the $y$ direction (we choose a lower number of bins as compared to the density measurements, to avoid bins with zero particles). As shown in Fig. 3, around shear rate of $0.05$ one begins to observe deviations from a linear velocity profile, $v_{x}(y)=\dot{\gamma}y$. 
These deviations manifest themselves in the formation of a boundary layer that moves together with the neighboring wall (and another boundary layer that remains at rest with respect to the stationary wall). This phenomenon has been previously observed in the case of strong wall-fluid interactions in a three dimensional LJ system and has been referred to as locking Thompson1990 . When the shear rate is further increased, the number of layers moving with the wall velocity increases and the velocity gradients in the bulk of the system increase as well beyond their nominal value ($\dot{\gamma}$). Since we would like to gain insight about the origin of the observed layering and locking phenomena we proceed to examine the temperature profiles that develop in the system with increasing shear rate. We find that the temperature profile is parabolic (in $y$), with a maximum at the center of the system (the height of this maximum increases with shear rate as $\dot{\gamma}^{2}$ - see inset in Fig. 4), and decreases to the nominal temperature $T$ at the two thermostated layers near each wall, as shown in Fig. 4. Similar temperature profiles were also observed in other computer simulations of shear flow Yong2013a ; Yong2013b . This concurs with the expectation that shear-induced viscous heating leads to higher temperature gradients, since the only way to remove excess heat from the system is to increase these gradients in order to enhance the diffusion of heat towards the thermostated walls. In order to check whether the temperature profiles completely determine the corresponding density profiles (at the same shear rates), in Fig. 5 we compare the steady state density profile of a sheared fluid with $\dot{\gamma}=0.3$ to that of a fluid at rest but with an identical temperature profile. 
Since the resulting density profiles are indistinguishable, we conclude that shear-induced layering arises as the result of the coupling between density and temperature gradients in the sheared fluid, and thus the density depends on the shear rate through its effect on the temperature profile, i.e., $\langle\rho(y)\rangle$ is a function of $\langle T(y)\rangle$ only. In the absence of shear, the coupling between local temperature and concentration profiles gives rise to thermophoresis, also known as the Ludwig-Soret effect Ludwig1856 ; Soret1880 ; Groot1984 . Although the Ludwig-Soret effect has been mostly studied in colloidal dispersions and binary mixtures Duhr2006a ; Duhr2006b ; Wuerger2007 , self-thermophoresis in compressible single-component fluids has also been discussed Brenner2010 . Since in our case the temperature profile depends on the shear rate, we expect the Soret coefficient $S_{T}(\dot{\gamma})$ to be a function of $\dot{\gamma}$. In order to calculate $S_{T}(\dot{\gamma})$, we make use of the fact that our compressible fluid can be considered as a binary mixture of particles and vacancies. Defining $\rho_{b}$ as the density at the center of the flow channel, $y/L=0.5$ (note that $\rho_{b}$ is a function of $\dot{\gamma}$), the equation that connects the steady-state distribution of the average density of particles $\langle\rho(y)\rangle$ to the steady-state temperature profile $\langle T(y)\rangle$ is Gans2003 $$\frac{\partial\langle\rho\rangle}{\partial y}=-S_{T}\rho_{b}(1-\rho_{b})\frac{\partial\langle T\rangle}{\partial y}.$$ (8) Note that while the temperature profile $\langle T(y)\rangle$ is always parabolic, the density profile $\langle\rho(y)\rangle$ is not (compare Figs. 4 and 5). Upon some reflection we conclude that the linear response relation Eq. 8 is valid only in the central region of the channel, where the deviations from $\rho_{b}$ are small. In Fig.
6 we show that the measured density profiles in the region $0.2\leq y/L\leq 0.8$ away from the walls are indeed parabolic, and therefore the density and the temperature profiles can be fitted by the quadratic expressions $\langle\rho(y,\dot{\gamma})\rangle=a(\dot{\gamma})\times(y/L-0.5)^{2}+\rho_{b}(\dot{\gamma})$ and $\langle T(y,\dot{\gamma})\rangle=b(\dot{\gamma})\times(y/L-0.5)^{2}+T_{b}(\dot{\gamma})$, respectively. In these expressions, $a$ and $b$ are the constants obtained by fitting the density and temperature profiles with parabolas, and $T_{b}$ is the temperature at $y/L=0.5$. Substituting the above expressions into Eq. 8 we obtain the following expression for the Soret coefficient $S_{T}$: $$S_{T}(\dot{\gamma})=-\frac{a}{b\rho_{b}(\dot{\gamma})[1-\rho_{b}(\dot{\gamma})]}.$$ (9) $S_{T}$ as a function of shear rate $\dot{\gamma}$ is plotted in Fig. 7. The equilibrium Soret coefficient ($S_{T}\approx 8$) can be obtained upon extrapolating the available data points to $\dot{\gamma}=0$. Since at high shear rates large deviations from the linear velocity profile are observed, we expect other signatures of non-Newtonian fluid behavior to appear as well (e.g., a non-uniform distribution of stresses, shear thinning/thickening of the viscosity, etc.). In order to compute the stress tensor from our MD results, we express the $xy$ component of the microscopic stress tensor in terms of instantaneous particle velocities and interparticle forces: $$\sigma_{xy}=\sum_{i=1}^{N}mv_{x}^{i}v_{y}^{i}+\sum_{i=1}^{N}\sum_{j>i}^{N}r_{ijx}F_{ijy}.$$ (10) We then divide the system into $20$ layers (bins) along the $y$-axis and average the stress in each layer over $x$ and over time. As expected, at lower shear rates (e.g., $\dot{\gamma}=0.01$), the shear stress is distributed uniformly perpendicular to the direction of flow. At higher shear rates $\langle\sigma_{xy}\rangle$ becomes a function of $y$, with a minimum at the center of the channel (Fig. 8).
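The parabolic-fit procedure behind Eq. (9) above can be sketched in a few lines (using `numpy.polyfit` is our choice; the paper only states that parabolas are fitted in $0.2\leq y/L\leq 0.8$):

```python
import numpy as np

def soret_coefficient(y_over_L, rho, T):
    """S_T from parabolic fits in the central region, Eqs. (8)-(9).
    Fits rho = a x^2 + ... + rho_b and T = b x^2 + ... with x = y/L - 0.5;
    polyfit returns coefficients highest degree first, and for symmetric
    profiles the constant term is the value at the channel center."""
    m = (y_over_L >= 0.2) & (y_over_L <= 0.8)
    x = y_over_L[m] - 0.5
    a, _, rho_b = np.polyfit(x, rho[m], 2)
    b, _, _ = np.polyfit(x, T[m], 2)
    return -a / (b * rho_b * (1.0 - rho_b))    # Eq. (9)
```

With a density profile that rises toward the walls ($a>0$) and a temperature profile peaked at the center ($b<0$), this yields a positive $S_T$, consistent with the value $S_T\approx 8$ quoted for the equilibrium limit.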
The shear viscosity $\eta_{xy}$ is given by $$\eta_{xy}=\frac{\langle\langle\sigma_{xy}\rangle\rangle}{\dot{\gamma}},$$ (11) where $\langle\langle\cdot\rangle\rangle$ denotes averaging over the volume of the system and over time. In Fig. 9 we present the shear viscosity $\eta_{xy}$ as a function of shear rate $\dot{\gamma}$. Within the accuracy of our simulation, the viscosity remains constant up to $\dot{\gamma}\approx 0.01$ (see inset in Fig. 9). As the shear rate is further increased, there is a gradual transition to a shear-thinning regime in which the viscosity decreases with increasing shear rate. This regime extends up to $\dot{\gamma}\approx 0.1$, at which point shear thinning is replaced by shear thickening and the viscosity increases with shear rate.

IV Discussion

In this paper we used computer simulations to study the dynamics of a compressible fluid in steady shear flow. We found that as the shear rate is increased the fluid develops a highly non-uniform density profile, with pronounced solid-like layering near the walls, the extent of which increases progressively with shear rate. This shear-induced layering originates in viscous heating of the fluid by the imposed shear, which leads to the appearance of large temperature gradients between the bulk of the fluid and the confining walls which are kept at constant temperature (for technical reasons we thermostat the fluid layers near the walls rather than the walls themselves). The coupling between temperature and density gradients gives rise to the Ludwig-Soret effect and has been the subject of numerous studies in the past, but, to the best of our knowledge, the present work is the first to demonstrate that such thermophoretic effects can take place in a homogeneous fluid in shear flow. In addition to the study of the density and the temperature profiles, we also looked at other dynamical properties of the fluid such as its velocity profile, shear stress and viscosity.
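The stress and viscosity measurements of Eqs. (10) and (11) above reduce to a short computation once pairwise separations and forces are available (the array-based interface below is our assumption, with $m=1$ and pairs enumerated once with $j>i$):

```python
import numpy as np

def sigma_xy(vel, rij_x, fij_y):
    """Instantaneous xy stress, Eq. (10): kinetic term plus pair virial.
    vel is (N, 2); rij_x and fij_y run over all pairs j > i; m = 1."""
    kinetic = np.sum(vel[:, 0] * vel[:, 1])
    virial = np.sum(rij_x * fij_y)
    return kinetic + virial

def shear_viscosity(sigma_samples, gamma_dot):
    """Eq. (11): time- and volume-averaged stress over the shear rate."""
    return np.mean(sigma_samples) / gamma_dot
```

In a production run one would accumulate `sigma_xy` per bin over many time steps before dividing by $\dot{\gamma}$, which is how the shear-thinning and shear-thickening branches in Fig. 9 are obtained.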
We observed large deviations from linear velocity profiles, and other non-Newtonian phenomena at high shear rates, such as inhomogeneous stress distributions, shear thinning and shear thickening. While locking has been found in previous computer simulations in the limit of large wall roughness, the observation of shear-enhanced locking has not been reported prior to this work. All the results reported so far were obtained for a particular value of temperature ($0.44$) and density ($0.694$), not far from the triple point of the two dimensional LJ system. We studied the behavior of the system at other temperatures and densities as well (not shown). As expected, we find that if the temperature is raised from $0.44$ to $1.0$ at triple point density (0.694) all the effects reported in this work (e.g., layering and deviations from linear velocity profile) are strongly suppressed. If the temperature is reduced towards the fluid-solid transition temperature of $0.4$, the above effects are strongly enhanced but since under these conditions the lengthscale of density fluctuations becomes comparable to system size, we did not undertake a careful study of this regime. We found that both layering and locking effects can be enhanced by keeping the temperature constant (at $0.44$) and increasing the density to 0.77 which is close to liquid-solid coexistence density at this temperature. Since the results are qualitatively similar to the ones reported in this work, we will not present them here. We would like to conclude with a comment on the experimental relevance of our results. Even though we simulated the flow of compressible fluids in two dimensions, we expect similar behavior to be observed in compressible near-critical fluids in three dimensions as well. While the shear rates reached in the simulations are unrealistically high, we believe that shear-induced heating of the kind described in our work can be experimentally realized under less extreme conditions in real fluids. 
Another interesting and experimentally relevant possibility is that shear-induced thermophoresis can lead to spatial segregation in multicomponent fluid mixtures. MD simulations of such systems are currently under way.

Acknowledgements. The authors gratefully acknowledge fruitful discussions with D. Osmanovic and D. C. Rapaport. YR's work was supported by a grant from the Israel Science Foundation.

References
(1) A. Jabbarzadeh, J. D. Atkinson, and R. I. Tanner, Phys. Rev. E 61, 690 (2000).
(2) C. Trozzi and G. Ciccotti, Phys. Rev. A 29, 916 (1984).
(3) L. Bocquet and J.-L. Barrat, Phys. Rev. E 49, 3079 (1994).
(4) P. A. Thompson and M. O. Robbins, Phys. Rev. A 41, 6830 (1990).
(5) P. A. Thompson and S. M. Troian, Nature (London) 389, 360 (1997).
(6) S. Bernardi, B. D. Todd, and D. J. Searles, J. Chem. Phys. 132, 244706 (2010).
(7) X. Yong and L. T. Zhang, J. Chem. Phys. 138, 084503 (2013).
(8) D. C. Rapaport, The Art of Molecular Dynamics Simulation (Cambridge Univ. Press, 1995).
(9) Private discussions with D. C. Rapaport.
(10) W. Loose and G. Ciccotti, Phys. Rev. A 45, 3859 (1992).
(11) X. Yong and L. T. Zhang, Microfluid. Nanofluid. 14, 299 (2013).
(12) C. Ludwig, Sitzungsber. K. Preuss. Akad. Wiss. 20, 539 (1856).
(13) C. Soret, Compt. Rend. 91, 289 (1880).
(14) S. R. de Groot and P. Mazur, Non-Equilibrium Thermodynamics (Dover Publications, New York, 1984).
(15) S. Duhr and D. Braun, Proc. Natl. Acad. Sci. U.S.A. 103, 19678 (2006).
(16) S. Duhr and D. Braun, Phys. Rev. Lett. 96, 168301 (2006).
(17) A. Würger, Phys. Rev. Lett. 98, 138301 (2007).
(18) H. Brenner, Phys. Rev. E 82, 036325 (2010).
(19) B.-J. de Gans, R. Kita, S. Wiegand, and J. Luettmer-Strathmann, Phys. Rev. Lett. 91, 245501 (2003).
On Radio-Bright Active Galactic Nuclei in a Complete Spectroscopic Redshift Survey

Pietro Reviglio, David J. Helfand
Astronomy Department, Columbia University, New York, NY 10027
[email protected], [email protected]

Abstract

Analysis of the frequency and physical properties of galaxies with star-formation and AGN activity in different environments in the local universe is a cornerstone for understanding structure formation and galaxy evolution. We have built a new multiwavelength catalog for galaxies in a complete redshift survey (the 15R Survey), gathering information on their $H\alpha$, R-band, radio, far-infrared, and X-ray emission, as well as their radio and optical morphologies, and have developed a classification scheme to compare different selection methods and to accurately select samples of radio-emitting galaxies with AGN and star-forming activity. While alternative classification schemes do not lead to major differences for star-forming galaxies, we show that spectroscopic and photometric classifications of AGN lead to incomplete samples. In particular, a large population of AGN-hosting galaxies with absorption-line spectra, and in many cases extended radio structures (jets, lobes), is missed in the standard Baldwin-Phillips-Terlevich emission-line classification of active galaxies. This missed class of objects accounts for roughly half of the radio AGN population. Similarly, for X-ray selected AGN in our sample, we find that absorption-line AGN account for half of the sample. Spectroscopically unremarkable, passive galaxies with AGN activity are not an exception but the norm, and we show that although they exist in all environments, these systems preferentially reside in higher density regions. Because of the existence of this population, the fractional abundance of AGN increases with increasing density, in contrast to the results based on emission-line AGN extracted from the 15R, Sloan and 2DF redshift surveys.
Since emission-line radio AGN are mostly associated with late-type galaxies and absorption-line radio AGN with early-type galaxies, the trends found are connected to the well-known but poorly understood density-morphology relation.

Keywords: galaxies: active — galaxies: star-forming — galaxies: active galactic nuclei — galaxies: radio galaxies — galaxies: large scale structure

1 Introduction

Models of galaxy formation and evolution require a clear understanding of star-formation and AGN activity in galaxies and how these are related to other galaxy properties such as morphological type, gas and stellar content, and age. The link between the presence of these types of activity and the surrounding environment may also shed light on the process of structure formation and its evolution. Emission lines in galaxy spectra are indicative of processes that excite and ionize the neutral hydrogen and metals present in the interstellar medium. Two major processes are known to lead to the formation of emission-line spectra: star formation, where the ionizing radiation is primarily the UV radiation of hot, young, massive stars, and the presence of an active galactic nucleus, where the ionizing radiation is generally attributed to the process of accretion onto a supermassive black hole residing in the galaxy's center. AGN in particular show a variety of features in their emission-line spectra: Seyfert 1 galaxies have very broad emission lines that include both allowed and forbidden lines (HI, HeI, HeII, [OIII]), while Seyfert 2 galaxies have only narrow lines.
Similarly, radio galaxies have been observed to divide into two families, one with broad emission lines and the other exhibiting only narrow lines or even lacking emission lines entirely Antonucci (1993). Baldwin, Phillips and Terlevich (BPT; 1981) first pointed out that, in general, AGN and star-forming galaxies can be distinguished on the basis of the ratios of their emission lines: AGN should have greater ratios of [OIII] 5007 to the $H\beta$ line, because the creation of [OIII] by photoionization demands photons above 35 eV, which are rare in stellar spectra but common in AGN continua; similarly, the [NII] 6583/H$\alpha$ ratio should be higher in AGN. BPT showed that when these two quantities are plotted for emission-line galaxies, samples separate into two families of AGN-dominated and star-forming galaxies. The BPT method and various improvements thereto Veilleux & Osterbrock (1987); Dopita et al. (2000) are now routinely used to classify emission-line galaxies. One major use of this diagnostic is the classification of galaxies in large redshift surveys in order to understand the dependence of star-formation and AGN activity on the physical properties of the host galaxy and the environment in which it resides. For spectroscopically selected samples, it has been established that the fraction of emission-line galaxies decreases with increasing galaxian density, while the fraction of absorption-line galaxies shows an increasing trend in dense environments Carter et al. (2001); Mateus & Sodré (2004). When emission-line galaxies are split into two families using the BPT diagnostic, these studies have shown that the fractional abundance of star-forming galaxies decreases with increasing environmental density. The trend for AGN is unclear, however. The fraction of emission-line-classified AGN does not vary significantly with density in several recent studies based on redshift surveys such as 15R Carter et al. (2001), SDSS Miller et al. (2003), and 2DF Mateus & Sodré (2004).
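In code, a BPT-style separation amounts to comparing the two line ratios against a demarcation curve. The sketch below uses the widely adopted theoretical "maximum starburst" curve of Kewley et al. (2001); that specific curve is our assumption, as the text above describes only the general diagnostic:

```python
def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify an emission-line galaxy on the [NII]/Halpha vs [OIII]/Hbeta
    plane using the Kewley et al. (2001) demarcation:
    log([OIII]/Hbeta) = 0.61 / (log([NII]/Halpha) - 0.47) + 1.19."""
    if log_nii_ha >= 0.47:          # right of the curve's vertical asymptote
        return "AGN"
    boundary = 0.61 / (log_nii_ha - 0.47) + 1.19
    return "AGN" if log_oiii_hb > boundary else "star-forming"
```

Galaxies above the curve require a harder ionizing continuum than star formation can supply, which is the physical argument quoted above for the [OIII]/H$\beta$ ratio.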
Kauffmann et al. (2004) find that AGN with strong [OIII] line emission tend to reside preferentially in low-density environments. An accurate constraint on the environmental dependence of AGN activity can shed light on the complex interrelationship between environment, galaxy morphology, and activity, as well as help to define the properties of the mechanism(s) powering active galactic nuclei. Different models for AGN activity predict different fractions of AGN in different environments. If, for example, the fuel of AGN activity is the cold gas that also fuels star formation, AGN should follow the decreasing trend with increasing density found for star-forming galaxies (e.g., Kauffmann et al. 2004). If instead the critical factor is the supermassive black hole, which is connected to the existence of a bulge, then the distribution of AGN should trace the distribution of bulges. Even less understood is the interplay between AGN activity and star-formation activity in a galaxy. Recent studies of the hosts of AGN have shown that powerful Seyfert 2 galaxies have young stellar populations Kauffmann et al. (2003), suggesting the possibility that strong AGN activity requires host galaxies rich in cold gas. Radio emission at 1.4 GHz is a tracer of both star-formation and AGN activity, and has the great advantage of being unaffected by dust obscuration. It offers an alternative to optical line emission for pursuing studies of the dependence of galactic activity on environment and other factors. Moreover, radio morphologies provide an independent way to classify radio-emitting galaxies as predominantly star-forming galaxies or AGN. Star-forming galaxies show extended radio emission arising through the interaction of cosmic rays accelerated by supernova explosions with interstellar magnetic fields Condon (1992). AGN, on the contrary, have point-like emission or strong radio jets and lobes.
Given the relatively high sensitivity thresholds for radio surveys, only brighter star-forming galaxies and AGN are detectable. Nevertheless, these constitute a well-defined sub-sample of a larger optical galaxy catalog. The analysis presented in this paper has been carried out on a sample of radio-emitting galaxies extracted from the 15R Redshift Survey Carter et al. (2001). This survey has the advantage that the median redshift is sufficiently low that the angular resolution of existing radio surveys provides a strong morphological diagnostic; in addition, it is small enough to allow the visual classification of the morphologies for all selected radio sources and their host galaxies. These points are crucial for studies of radio-selected samples, since the complicated morphology of radio sources, jets in particular, renders problematic automated selection and classification algorithms. The techniques developed in this work have been the basis for extending our work to larger samples of galaxies. The results obtained in this paper have been confirmed using two larger statistical samples drawn from the 2DF and the SDSS 2DR Surveys, the results of which will be presented in a subsequent paper. They are also wholly consistent with the recent results of Best et al.(2005) derived from their SDSS sample of radio-emitting galaxies. The structure of this paper is as follows: in section 2 we briefly summarize the properties of the redshift survey used in this work, while in section 3 we present our multiwavelength database built for the 15R sample. This database gathers information on $H\alpha$, R-band, radio, far-infrared, and X-ray emission as well as radio and optical morphologies. In section 4 we compare the classification of galaxies based on standard spectral classification with a classification system based on the radio and optical morphologies. 
We outline a classification scheme that merges the information from the two approaches and select clean samples of star-forming galaxies and AGN with radio emission. In section 7 we evaluate the dependence on the environment of star-formation and AGN activity for the 15R sample, highlighting differences with similar analyses based on optically selected samples. Our conclusions are summarized in section 8.

2 The optical sample

The 15R-North galaxy redshift survey is a uniform spectroscopic survey (with signal/noise ratio of order 10) covering the range 3650-7400Å for 3149 galaxies with a median redshift of 0.05 within two $2.5^{\circ}$ strips covering a portion of the sky delimited by $8^{h}\leq\alpha\leq 17^{h}$ in right ascension and $26.5^{\circ}\leq\delta\leq 29.0^{\circ}$ or $30.0^{\circ}\leq\delta\leq 32.5^{\circ}$ in declination (B1950). For this survey, 2395 galaxies constitute a magnitude-limited sample 90% complete to a Kron-Cousins R magnitude $R=15.4$. The median slit covering fraction is 24% of the galaxy, apparently sufficient to minimize the effects of aperture bias on the $H\alpha$ equivalent widths Carter et al. (2001). The spectral types of the 15R galaxies have been classified using ratios of strong emission lines (H$\alpha$, [N II] 6583, [O III] 5007, and H$\beta$), according to the prescription of BPT and Veilleux & Osterbrock (1987). The spectra have been divided into HII-like (20% of the sample), AGN-like (17% of the sample) and absorption-line spectra (51%). The remaining 12% of the spectra show unclassifiable emission lines and may include a hybrid population of galaxies with both star formation and AGN activity Carter et al. (2001). This survey has been analyzed previously to evaluate the environmental dependence of star-formation and AGN activity Carter et al. (2001). The results found with this survey have been confirmed by other authors in larger samples such as 2DF and the SDSS DR1 (Miller et al. 2003; Mateus & Sodré 2004).
We therefore can compare our results on the environmental dependence of AGN and star-formation activity in radio-emitting galaxies with those found using optical spectral classification alone.

3 The multiwavelength analysis of the 15R sample

Several biases can affect the standard BPT spectral classification scheme, including aperture bias, dilution of the spectral lines, and dust obscuration. For example, it has been shown Moran, Filippenko, & Chornock (2002) that when the angular size of a Seyfert galaxy is comparable to the slit width, the nuclear spectral features can be overwhelmed by the host galaxy light, leading to the exclusion of these galaxies from the sample of AGN. Martini et al. (2002) showed that in the cluster A2104, five out of six X-ray emitting AGN do not show any typical AGN optical spectral features and would therefore be missed by using spectroscopy as a tool to select AGN. We show here that spectroscopically unremarkable AGN are a substantial fraction of radio-emitting AGN and, as a result, standard methods of classification for AGN may lead to substantial incompleteness, while for star-forming galaxies there is general agreement among classification techniques.

3.1 A multiwavelength database

We cross-correlated the 15R sample with the FIRST Becker, White, & Helfand (1995) and NVSS Condon et al. (1993) radio catalogs, the IRAS Point Source (IPAC 1986) and Faint Source catalogs Moshir et al. (1990), and the ROSAT Bright Source Voges et al. (1999), Faint Source Voges et al. (2000), and WGACAT catalogs White, Giommi, & Angelini (2000). NVSS and FIRST both sample the radio continuum emission at 1.4 GHz. We choose to use both NVSS and FIRST in order to gather information about the radio morphology of our sources, which is better sampled in FIRST thanks to its higher angular resolution ($5^{\prime\prime}$), and to obtain the best estimate for the flux densities, which are more accurate in the lower-resolution NVSS.
Whenever available, NVSS radio flux densities have been used. When radio sources were split into several components, we summed the flux densities from the individual FIRST sources unless there was an unresolved source in NVSS comprised of all components, in which case we used the flux density of the unresolved source as the best estimate. We measured source sizes using the semimajor axis from an elliptical Gaussian fit to the source surface brightness distribution or, in the case of multiple-component sources, measured extents directly from the images. IRAS sources are mostly not resolved, making a comparison with FIRST for the region of the emission within the sampled galaxies impossible. We selected only point-like ROSAT sources. Extended X-ray emission associated with galaxies might arise in a hot intracluster medium surrounding the galaxies which is not directly relevant to galaxy activity. In order to define the optimal search radius in all the cross-correlations, we built for each catalog four false catalogs by offsetting all positions by $1^{\prime}$ in each of the four cardinal directions. We calculated the number of false matches expected for different search radii by cross-correlating these false catalogs with the 15R catalog. The final search radius $R_{s}$ was chosen to strike a balance between having the largest number of real matches and a modest number of spurious ones. Generally speaking, we tended to use larger radii and to check by hand the more distant matches, even when the nominal positional accuracy of the sources was high. When more than one source was associated with an optical galaxy, we checked those matches and assigned the source using the images available. This avoided missing the large extended radio sources which are resolved into multiple radio components by FIRST; it also included sources for which the peak of the radio emission was not centered on the optical galaxy centroid which defines the center of our search radius.
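The false-catalog calibration of the search radius can be illustrated with a short sketch. This is a hypothetical implementation (the function names and the flat-sky small-angle matching are our own simplifications, adequate only for arcsecond-scale separations); the actual cross-correlation was performed on the full catalogs.

```python
import math

def count_matches(survey, catalog, radius_arcsec):
    """Count survey positions with at least one catalog source within
    radius_arcsec. Positions are (ra_deg, dec_deg) tuples; a flat-sky
    approximation is used (illustrative sketch only)."""
    r_deg = radius_arcsec / 3600.0
    n = 0
    for ra0, dec0 in survey:
        for ra1, dec1 in catalog:
            dra = (ra1 - ra0) * math.cos(math.radians(dec0))
            ddec = dec1 - dec0
            if math.hypot(dra, ddec) <= r_deg:
                n += 1
                break
    return n

def false_match_estimate(survey, catalog, radius_arcsec, offset_arcmin=1.0):
    """Estimate the expected number of spurious matches by shifting the
    catalog by offset_arcmin in each of the four cardinal directions and
    averaging the match counts, mirroring the procedure in the text."""
    off = offset_arcmin / 60.0
    shifts = [(off, 0.0), (-off, 0.0), (0.0, off), (0.0, -off)]
    totals = []
    for dra, ddec in shifts:
        shifted = [(ra + dra, dec + ddec) for ra, dec in catalog]
        totals.append(count_matches(survey, shifted, radius_arcsec))
    return sum(totals) / len(shifts)
```

Scanning `radius_arcsec` and comparing the true match count against `false_match_estimate` reproduces the trade-off used to fix $R_{s}$ for each catalog.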
Selecting the radio sample by means of an automated procedure could miss these features and lead to incompleteness. The search radii used were $R_{s}=18^{\prime\prime}$ for the FIRST survey, $R_{s}=28^{\prime\prime}$ for NVSS, $R_{s}=7^{\prime\prime}$ for 2MASS, and $R_{s}=60^{\prime\prime}$ for ROSAT. Based on our matching to false catalogs, these radii were selected to ensure 90% completeness and 90% accuracy in the final catalog. In the case of the IRAS catalogs, we took into account the ellipticity of the position error regions, using ellipses instead of circles to find the IRAS-galaxy matches. The search ellipses vary for each source and the major and minor axes are multiples of the major and minor axes of the error ellipses given in the IRAS catalog. We allowed these ellipses to be stretched by different amounts and checked the false matches expected in each case. We chose 5 as the best stretch factor. For the magnitude-limited sample of 15R we find 315 matches with FIRST, 370 matches with NVSS, 312 matches with IRAS, 108 with ROSAT. The final radio sample is comprised of 520 sources; 79 of them have emission detected only in FIRST, while 166 are detected only in NVSS. FIRST has a lower detection threshold than NVSS for compact sources (1.0 vs. 2.5 mJy), accounting for the FIRST-only detections; on the other hand, FIRST has much higher resolution and tends to miss some low-surface brightness extended radio sources detected in NVSS. The systems detected in only one of the two radio surveys constitute a sizable fraction of the total radio sample (46%); use of both surveys is essential for defining a large radio sample with a flux density limit of $\sim 1-2$ mJy. 3.2 An independent classification of radio sources Radio-emitting galaxies are either star-forming, host an AGN, or are a combination of the two.
Star-forming radio galaxies are typically late-type galaxies and their radio emission is usually aligned with the galaxy’s optical emission, or centered on it for face-on objects. Radio AGN are either point sources in late-type galaxies or radio sources in early-type galaxies. Jets are generally not aligned with the optical emission, being frequently found perpendicular to the galaxy’s major axis. Classification of the type of activity in a sample of radio-emitting galaxies can be done successfully if one has a reliable classification of the morphology of the radio emission and either the host galaxy type (early-type or late-type) or, at least, the relative alignment of the radio and optical emission. We undertook to classify the morphologies of our radio sources and their host galaxies using this information. For each radio source, two cut-outs of the radio images were produced from the cut-out server of the FIRST catalog (White et al. 1997): $5^{\prime}\times 5^{\prime}$ and $1^{\prime}\times 1^{\prime}$. For the radio classification, we examined all cut-outs associated with galaxies in 15R and used the major axis of the radio sources given in the FIRST catalog as an additional indicator to discriminate between extended and point sources (point sources are defined as having a deconvolved major axis smaller than $2.5^{\prime\prime}$). This led to the division of radio sources into five broad categories: FRI (bright nuclear point source and faint radio lobes; Fanaroff & Riley 1974), FRII (bright radio lobes, faint nuclear source), extended sources, point sources, and sources for which the morphology was unclear. In order to explore further the nature of the radio sources with ambiguous morphologies, we used the optical morphological classification of their hosts. The slice of the universe covered by the 15R survey has been widely studied by several authors and many galaxies in this survey have known morphologies.
We therefore searched for the optical morphologies of 15R galaxies using the NED database. We were primarily interested in simply dividing our sample of galaxies into early types and late types, since the two classes differ substantially in terms of ongoing star-formation activity. We were able to find published classifications for 49% of the hosts of our radio sources. Most of these galaxies are listed in the Third Reference Catalogue of Bright Galaxies de Vaucouleurs et al. (1995). In addition, we established the optical morphology for a further 10% of the radio sample by looking at the DSS images. The large majority of this small fraction of galaxies classified by eye are nearby galaxies that show clear spiral structure. In the final sample of 260 galaxies with radio emission and available optical classifications, 29% are classified as early-type (ellipticals or S0) and the rest as late-type. The ratio of early-type to late-type galaxies for our final sample of classified galaxies, 0.41, is in good agreement with the ratio of early-type to late-type objects found in other surveys with systematic classification (e.g., the Century Survey Geller et al. (1997)). These optical morphologies have been used together with the radio morphology to obtain our classification of active galaxies: extended radio sources associated with late-type galaxies are classified as star-forming, point-like radio sources associated with late-type galaxies are classified as Seyferts (AGN), extended radio sources associated with ellipticals are classified as jet-like AGN, and point-like radio sources (or unresolved radio sources from NVSS) associated with ellipticals are also classified as AGN. For extended radio sources with no available host optical classification, we took advantage of the image of the galaxies on the DSS plates and compared the alignment of the radio and optical emission.
If the radio source was extended and the radio emission was aligned with and/or centered on the optical emission, we classified it as a star-forming galaxy. If the radio source showed no alignment with the major axis of the optical emission, we classified it as a jet-type AGN. One potential problem with this classification is that it might bias us towards excluding nuclear starbursts, since spiral galaxies with point-like radio emission are all classified as AGN (Seyferts); however, we show in section 4 that our morphological classification for these galaxies largely agrees with the spectral classification and thus this should not be a significant problem. Furthermore, the implied fraction of galaxies classified as radio-emitting Seyferts in this bright optical sample is roughly consistent with what has been found in previous work. We have 2590 galaxies in 15R with $R<15.4$; the vast majority have absolute magnitudes $-18>R>-22$. We have classified 106 point-like radio sources as AGN, which gives a fraction of $\sim$4%. For comparison, Huchra and Burg (1992) find in the CfA Survey a fraction of 1% and Ulvestad and Ho (2001) find a fraction of $9.6\pm 3\%$ in the Palomar spectroscopic survey. We also checked the optical images for the FRI and FRII sources to make sure that the classification was correct and there were no optical galaxies associated with the putative lobes. Sources with no clear classification have been classified as either extended (EXT) or point-like (PNT) if detected in FIRST and left unclassified (UNC) if detected only in the NVSS. The distribution in radio luminosities for different classes of objects is shown in Fig. 1. 4 Comparison of the Two Classification Schemes We first compared the classification system based on the standard BPT diagram and our classification based on radio (and optical) morphology for these radio-emitting galaxies. Radio-emitting galaxies must be either star-forming or have AGN activity (or a combination of both).
How do the two classification schemes perform in classifying this subsample of active galaxies? Of the 520 galaxies in 15R detected in either NVSS or FIRST, we find 138 AGN, 113 HII, 67 PNT, 52 EXT, and 150 UNC. The classification of these same sources according to their optical spectra is as follows: 69 AGN, 155 HII galaxies, 171 weak or strong emission-line galaxies that are not classifiable based on their optical spectra, and 125 absorption-line galaxies (ABG). Our radio classification system provides a classification for 48% of the sources, while optically only 43% have a definitive class assigned. For emission-line systems the two classifications largely agree: 38% of the spectra-classified AGN are classified as AGN in the radio, 34% are left unclassified (as either PNT, EXT, or UNC) and 18 galaxies have conflicting classifications (we classify them as HII galaxies on the basis of being spirals with extended radio emission roughly aligned with the optical morphology). Likewise, 33% of the spectra-classified HII galaxies are classified by radio morphology as HII, 53% are left unclassified, and 15 galaxies are classified as AGN on the basis of their radio morphology. The large fraction of sources which are classified with one method but not the other suggests that using these two methods of classification together will significantly enhance the number of galaxies with reliable classifications. We describe a joint approach in section 5. A total of 38% of the galaxies we classify as Seyferts (late-type galaxies with a point-like radio source) are classified as AGN according to their spectra, with only 14% classified as HII galaxies (the rest have unclassified spectral types). This shows substantial agreement between the two classification schemes for this class of objects. The 14% of galaxies with conflicting classifications may be nuclear starbursts or composite objects misclassified in our scheme as pure Seyfert galaxies.
We partially rectify this by using the ratio of FIR to radio emission, as discussed in §6. The most interesting result of this comparison is the fact that while the fraction of HII galaxies classified is similar with the two methods, the fraction of AGN differs markedly: with the radio classification scheme, 27% of the radio sources are classified as AGN, while with spectral classification only 13% of the sample is so classified. To understand the origin of this discrepancy, we examined the spectral classifications of our radio AGN: only 19% of them are classified as AGN optically, while 28% are listed as unclassifiable emission-line systems and 15% as HII; the plurality (38%) are classified as ABG. This means that more than one-third of our radio AGN are in ABG systems and are therefore missed by the standard BPT classification scheme. Since the ABG systems represent 24% of our whole sample of radio sources, the radio AGN among this group represent $\sim 10\%$ of our whole sample, a fraction comparable to the 13% contributed by emission-line-classified AGN; i.e., roughly half of the AGN we identify are not recognizable as AGN from their optical spectra. This class of spectroscopically unremarkable objects has distinct morphologies: 52% of these radio sources are classified as FRI, FRII, jet, or simply extended, while this percentage drops to 20% in radio AGN recognized as such from their optical spectra. For those with known optical morphologies, we find that 82% reside in ellipticals or lenticular galaxies, while emission-line AGN, on the contrary, are found preferentially in late-type galaxies (62% of radio AGN with emission-line spectra are associated with spirals or irregulars) and tend to be radio point sources (77%). Roughly 10% of the absorption-line AGN show associated X-ray emission. A similar fraction of X-ray emitting sources is found among the emission-line AGN.
As noted earlier, we independently classified our sources based only on their X-ray emission: galaxies with X-ray luminosities higher than $10^{42}$ erg s${}^{-1}$ are classified as AGN, since no pure starburst galaxies have been identified with higher luminosities (Ranalli et al. 2003); there are, however, AGN below this threshold, but this cut provides a clean sample of AGN which is of principal interest here. In this sample of 67 X-ray-selected AGN, with or without radio emission, 49% have absorption-line spectra and 51% emission-line spectra. The population found by Martini et al. (2002) is representative of the former sample. The nature of the difference between absorption-line AGN and emission-line AGN is unclear. It is possible that this can result from an effect of dilution of the spectral lines caused by the light of the host galaxy, although a large fraction of this kind of system is also found in 2dF and SDSS where better spectra are available (Reviglio 2003; Best 2004; Best et al. 2005). More importantly, these objects represent a class with radio properties which are rare in the emission-line systems, suggesting a physical link between the morphology of the optical galaxy, its radio AGN morphology, and the lack of emission lines. Kauffmann et al. showed that AGN with strong [O III] emission preferentially inhabit massive galaxies with young stellar populations, suggesting the possibility that the existence of these AGN depends on the gas content of the host galaxy. X-ray emitting AGN do not seem to favor either of the two spectral categories. If X-ray emission is to be regarded as the principal signature of the process of accretion onto supermassive black holes that powers AGN, then the fact that X-ray emission can be found in galaxies of both spectral types suggests that spectral lines may be just a signature of the galactic environment in which the AGN happens to reside (gas-rich or gas-depleted) and therefore of the morphology of the host.
A reservoir of cold gas in the host galaxies, as traced by the presence of emission-lines, does not seem to be required for powering an AGN, since radio- and X-ray-emitting AGN are clearly present in line-free systems. This is similar to the case for radio quasars where the radio-loudness is unrelated to the amount of gas present in the host galaxy Dunlop et al. (2003). Another possibility is that these systems are heavily obscured. However, this would pose the question of why high obscuration should be at work in ellipticals and not in spirals, where absorption-line AGN systems are rarely found. Furthermore, if strong dust obscuration is at work in systems lacking emission lines, one would expect a higher fraction of radio AGN with absorption lines to have FIR emission from hot dust reradiation than emission-line systems. On the contrary we find that only 3% of the absorption-line radio AGN have detected FIR emission at $60\mu$m, while the emission-line systems have a far-IR-detected fraction of 25%. 4.1 Below the faint limit For any study examining the properties of galaxies, it is essential to know if the sample of galaxies selected is representative of the underlying population. We have shown that emission-line subsamples exclude a large fraction of AGN and thus introduce a potential bias. It is appropriate to ask whether our radio-selection introduces a similar bias, since it is clear that our subsample is much smaller than the emission-line fraction of the 15R Survey; i.e., are galaxies with emission-line or absorption-line spectra that are undetected in our radio surveys physically different, or is their radio emission just too faint to be detected? While a definitive answer must await the next generation of radio surveys, we have taken a first step toward understanding the properties of galaxies below the limit of the current surveys by employing a stacking procedure developed by Glikman et al. 
(2004) which derives the mean radio flux density for any class of objects with undetected radio emission in the FIRST Survey. Applying this procedure to all 1078 absorption-line galaxies in 15R with no radio emission detected, we obtain a $3\sigma$ detection of a point-like radio source with a flux density of 0.02 mJy. A similar analysis of the 74 undetected, optically classified AGN yields a mean flux density of $0.41\pm 0.04$ mJy, while the 197 stacked HII galaxies reveal, as expected, an extended source with a lower mean flux density of $0.19\pm 0.04$ mJy. It is worth noting that if a galaxy is star-forming, it must have radio emission. The exceptions might be objects that have lost their population of cosmic rays which are not being resupplied by ongoing supernovae, such as might occur at the end of a starburst; however, this phase will be short-lived, and such instances should be rare. It is less clear if all galaxies with AGN activity must also have radio emission. The radio-quiet/radio-loud AGN dichotomy has been a matter of debate for some time (cf. Cirasuolo et al. 2003 and Ivezić et al. 2004 for a recent debate on this issue). While the precise shape of the radio luminosity distribution and the presence in the sample of truly radio-silent objects cannot be determined from the stacking procedure, it is clear that the mean radio flux density of undetected AGN candidates is only a factor of $\sim 2$ below the FIRST survey threshold. 5 A merged classification scheme We classify our sample of 520 galaxies by integrating the information from the different classification schemes. We classify all sources as HII, AGN or unclear (UNC). When the radio and spectral classifications agree, we classify the source as either AGN or HII. When only one scheme provides a definite classification (as either AGN or HII) and the other has no classification, we adopt the available classification.
When the two classifications disagree, or in the cases for which neither scheme suggests a source type, we classify the source as unclear. We also added a further AGN indicator: galaxies with X-ray luminosities higher than $10^{42}$ erg s${}^{-1}$ have been classified as AGN, since only active galactic nuclei can power such strong X-ray emission; these represent a small fraction of our sample. The result of this merging is as follows: 212 AGN, 186 HII, and 122 UNC. We note that now 77% of our sample has a classification, a much larger fraction than the 48% obtained solely with the radio classification and the 43% obtained with the spectral classification alone. Half of the UNC objects (12% of the total radio sample) result from conflicting classifications; many of these might be hybrid systems exhibiting both star-formation and AGN activity. 6 Further classifications based on photometry In order to reduce further the number of objects with the UNC classification, we examined two methods developed in the past by Machalski and Condon (1999) which rely solely on radio, FIR and optical photometry. According to these authors, the separation of star-forming from AGN galaxies can be made by calculating one of two parameters, $R$ and $Q$. $R$ represents the radio-to-optical ratio defined as $R=\log(S_{1.4}/f_{R})$, where $S_{1.4}$ is the radio flux density at 1.4 GHz expressed in mJy and $f_{R}$ is the flux density at 0.70 microns (photometric R-band) given by $f_{R}=2.87\times 10^{6-0.4R}~{\rm mJy}$. Radio-loud objects are defined as having higher $R$ parameters. $Q$ compares the total far-infrared emission to the radio emission and is defined (Helou et al. 1985) as $Q=\log\left(\frac{FIR}{3.75\times 10^{12}~{\rm Hz}}\right)-\log(S_{1.4}\times 10^{-29})$ where $FIR=1.26\times 10^{-17}(2.58S_{60\mu m}+S_{100\mu m})~{\rm W~m^{-2}}$, with the IRAS flux densities in mJy. Radio-loud objects tend to have lower $Q$ parameters.
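The two photometric discriminants follow directly from the formulas just quoted. The sketch below uses our own function names and assumes all flux densities are given in mJy, with $FIR$ coming out in W m$^{-2}$ and $S_{1.4}$ converted to W m$^{-2}$ Hz$^{-1}$ by the factor $10^{-29}$:

```python
import math

def r_parameter(s14_mjy, r_mag):
    """Radio-to-optical ratio R = log10(S_1.4 / f_R), with the R-band
    flux density f_R = 2.87e6 * 10**(-0.4 R) mJy, following the
    formulas of Machalski & Condon (1999) as quoted in the text."""
    f_r = 2.87e6 * 10 ** (-0.4 * r_mag)
    return math.log10(s14_mjy / f_r)

def q_parameter(s60_mjy, s100_mjy, s14_mjy):
    """FIR-to-radio ratio Q (Helou et al. 1985). Inputs in mJy; the
    combined FIR flux is in W m^-2 and the 1.4 GHz flux density is
    converted to W m^-2 Hz^-1 via the factor 1e-29."""
    fir = 1.26e-17 * (2.58 * s60_mjy + s100_mjy)  # W m^-2
    return math.log10(fir / 3.75e12) - math.log10(s14_mjy * 1e-29)
```

Higher $R$ (or lower $Q$) indicates a radio-loud, AGN-like source; the thresholds used for classification are discussed next.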
According to the R-parameter criterion, AGN have $R>1$ while star-forming galaxies have $R<1$; likewise, $Q>1.8$ indicates a star-forming galaxy, while $Q<1.7$ indicates an AGN-dominated source. We have assessed the completeness and accuracy of these criteria for the selection of AGN by applying them to our sample of classified galaxies. Figure 2 shows the distribution of the $R$ parameter for the three classes of radio sources: HII, AGN and UNC. From these plots it is clear that AGN and HII galaxies do not divide cleanly into two classes with different $R$ parameters, largely as a consequence of the many AGN classified as point-like radio sources in spiral galaxies. For these sources the radio emission is fainter than for the extended AGN such as FRI or FRII galaxies, and is comparable to that of star-forming galaxies. The distributions of the $R$ parameter for these low-luminosity AGN and for star-forming galaxies are similar. The radio-to-FIR emission ratio given by the Q parameter yields the distribution presented in figure 2. The populations of low-luminosity AGN and star-forming galaxies do not differ significantly in this case either. However, we note that the HII galaxies decidedly outnumber the AGN: most radio AGN lack strong FIR emission. Therefore, in order to classify additional galaxies in the unclear class, we have adopted the FIR emission as another indicator of ongoing star-formation. The contamination from FIR-emitting AGN that we introduce is only $\sim$20% of the UNC galaxies with FIR emission, or about a dozen objects ($\sim 7\%$ of the total HII galaxies). We obtain a final sample of radio-emitting galaxies for the analysis of the environmental dependence of the star-formation and AGN activity which has 223 HII, 212 AGN and 85 unclassified galaxies. In figure 3 we display the optical luminosity ($L_{R}$) and redshift distributions for the 15R survey as a whole and compare them to the radio-detected subsamples.
Unsurprisingly, the optical luminosities of the radio sources are biased high compared to the distribution of the whole sample, reflecting the general correlation of radio and optical luminosities and the flux-limited nature of the radio survey. The redshift distributions are similar; the larger fraction of detections at the highest redshifts reflects the small number of radio-loud AGN at the highest luminosities. In figure 4, we show the same distributions for the galaxies by class, distinguishing between the optical (unshaded) and radio (shaded) classification schemes. The radio-emitting ABG galaxies, concentrated at the highest optical luminosities, are ultimately included in our radio AGN sample as discussed above. 7 Environmental Dependence of Radio-emitting AGN and Star-forming Galaxies Previous studies have claimed that the fraction of emission-line AGN remains constant across several orders of magnitude in local galaxy density Carter et al. (2001); Miller et al. (2003) or even decreases with increasing environmental density. We have shown here the existence of a large fraction of radio-emitting AGN which lack emission lines, all of which are missed in work based on optical spectral classification. Since these sources are most frequently associated with early-type galaxies, we expect them to populate denser environments. In order to investigate the correlation of star-formation and AGN activity with the density of the environment, we use a magnitude-limited sample with galaxy redshifts in the range $0.0033<z<0.075$ in order to avoid the Virgo infall region, where redshift-based distance estimates are unreliable, and to avoid the distant, poorly sampled regions of the survey. The final sample of classified radio sources in this redshift range contains 372 sources. To associate a volume density with each galaxy, we have implemented the procedure outlined in Carter et al. (2001) for magnitude-limited samples.
The idea of this method is to calculate the relative densities of different environments by considering the volume $V_{j}$ enclosing the ten nearest observed neighbors of a given galaxy $j$ and dividing the number of galaxies actually present in that volume by the volume itself. In order to find the number of galaxies $N_{j}$ actually present in $V_{j}$, it is necessary to correct the number of galaxies observed in the volume (ten, by definition) to account for those which fall below the magnitude limit of the sample. Assuming that the Schechter optical luminosity function is indeed universal (changes in the functional form in different environments are not too large; e.g. Christlein 2000), the correction can be obtained by considering that in order to have one galaxy with absolute magnitude M in a unit volume, there must be a certain number of galaxies, as defined by the luminosity function, which are less bright in the same volume. The estimate of the environment density for the $j^{th}$ galaxy we used is therefore: $$\rho_{j}=\frac{N_{j}}{V_{j}}$$ (1) where $N_{j}=\frac{\int_{-\infty}^{+\infty}\phi(M)\,dM}{\int_{-\infty}^{M_{i}}\phi(M)\,dM}$, $M_{i}$ is the faintest absolute magnitude detectable in the survey at the $j^{th}$ position, and $\phi(M)$ is the Schechter luminosity function. Since the absolute magnitudes of galaxies do not actually range between plus and minus infinity but between a minimum and a maximum value, we have adopted the maximum and minimum absolute magnitudes in the survey to calculate the integrals. For galaxies close to the survey borders, we have corrected the estimate of the density by evaluating the number density of galaxies inside the fraction of the volume given by the intersection of their spheres with the survey borders.
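The density estimator of equation (1) can be sketched numerically. In the code below the Schechter parameters are the Century Survey values quoted later in the text; the magnitude integration bounds, the trapezoidal integration, and the explicit factor of ten (one luminosity-function correction factor applied per observed neighbor, which is how we read the prescription) are our own illustrative choices, not the paper's implementation.

```python
import math

def schechter(m, phi_star=0.025, alpha=-1.17, m_star=-20.73):
    """Schechter luminosity function expressed in absolute magnitudes
    (Century Survey parameters)."""
    x = 10 ** (-0.4 * (m - m_star))
    return 0.4 * math.log(10) * phi_star * x ** (alpha + 1) * math.exp(-x)

def density_estimate(r10_mpc, m_limit, m_min=-23.0, m_max=-16.0, steps=2000):
    """Volume density around a galaxy: the ten observed neighbors inside
    the sphere of radius r10_mpc (distance to the tenth neighbor) are
    corrected for galaxies fainter than the survey limit m_limit via the
    ratio of luminosity-function integrals, then divided by the volume."""
    def integral(lo, hi):
        # Trapezoidal rule over the adopted magnitude range.
        h = (hi - lo) / steps
        s = 0.5 * (schechter(lo) + schechter(hi))
        s += sum(schechter(lo + i * h) for i in range(1, steps))
        return s * h
    correction = integral(m_min, m_max) / integral(m_min, min(m_limit, m_max))
    n_j = 10.0 * correction
    v_j = 4.0 / 3.0 * math.pi * r10_mpc ** 3
    return n_j / v_j
```

A brighter (more restrictive) magnitude limit at a given position yields a larger correction and hence a larger inferred density, as expected.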
Since 15R photometry has been calibrated with the Century Survey, we used the Century Survey luminosity function, with Schechter’s parameters $\phi_{*}=0.025~\rm{Mpc^{-3}}$, $\alpha=-1.17$, $M_{*}=-20.73$ (Geller et al. 1997). Distances have been calculated from redshifts using the luminosity distance $D_{Mpc}$ for a universe described by $\Omega_{0}=1$ and $H_{0}=70~\rm{km~s^{-1}~Mpc^{-1}}$. The R-band absolute magnitudes used have been K-corrected and also corrected for Galactic absorption. After obtaining the density surrounding each galaxy, we grouped the galaxies in logarithmic bins of width 0.5 and determined the fraction of galaxies in each bin with star-forming or AGN activity. The results are displayed in figure 5. The errors are assigned assuming a Poisson distribution; statistical errors are larger than any other source of error. The points in the highest and lowest bins should be regarded with caution since they are calculated with just a few galaxies, although we have excluded from all of the plots the points derived from fewer than three galaxies. Figure 5 suggests that the star-forming fraction decreases as the galaxy density increases, while the opposite is true for the AGN fraction, which increases as the density increases. These two trends are such that the fraction of radio sources mildly decreases with density. For the star-forming galaxies, the trend shown by our data is consistent with the results of Carter et al. (2001): the fraction of star-forming galaxies decreases when we move from the field into denser regions such as galaxy clusters. Since star formation is typical of spiral galaxies, and spiral galaxies tend to inhabit the low-density regions, this result can be regarded as a consequence of the morphology-density relation. These results are also in agreement with those of other authors (e.g., Mateus & Sodré 2004). For the AGN sample, on the contrary, our results disagree with those found by Carter et al.
(2001) using 15R, as well as with more recent studies by Miller et al. (2003) using the Sloan Digital Sky Survey and Mateus & Sodré (2004) using the 2dF Survey. All these studies showed the fraction of AGN to be independent of local galaxy density. Even more discrepant with our results are those by Kauffmann et al. (2004), who find a decreasing fraction of AGN with increasing galaxy density. Our observed increase in the fractional abundance of radio AGN is not caused by a general enhancement of the radio luminosity in denser regions, which would raise more sources above the detectability threshold. Indeed, the median radio luminosity in different density bins for the radio AGN does not vary significantly (see Fig. 6). The same statement is true for HII galaxies; since their median radio luminosity does not vary with density, the decreasing fraction of star-forming galaxies is not a result of differences in strength of their radio emission. The difference leading to the altered trend for our AGN-classified sources is the inclusion of the large fraction of absorption-line systems with associated radio AGN, which preferentially reside in denser environments. This is shown in Fig. 8. These sources can be found at all densities, but their fractional abundance is higher in denser ones. These conclusions, first described in Reviglio (2003), are in agreement with the recent SDSS study of Best et al. (2005). 8 Conclusion By employing a nearby galaxy sample and matching it to the best available radio surveys (along with surveys in other bands), we have shown that emission-line galaxy samples are substantially incomplete for identifying galaxies hosting AGN. Since the absorption-line systems hosting AGN are found preferentially in dense environments, the inclusion of this missed population of AGN in spectrally defined samples reverses the trend noted earlier: the total population of AGN increases with increasing local galaxy density.
Examination of the stacked radio images for the optical objects not detected at radio wavelengths shows consistency with the objects in the radio subsample. It is clear that such properties as radio morphology and X-ray emission are valuable adjuncts when classifying objects from large optical surveys. 9 Acknowledgments We want to express our gratitude to Margaret Geller for very helpful suggestions in developing this project and for allowing us to use her data prior to publication. We want also to thank Eilat Glikman for enabling us to use her stacking procedure. This work also strongly benefited from discussions with Jacqueline van Gorkom and Antonaldo Diaferio. This work was part of the XVI Cycle of the Dottorato di Ricerca in Fisica at the University of Torino from which P.R. gratefully acknowledges support. References Antonucci (1993) Antonucci, R. 1993, ARA&A, 31, 473 Baldwin, Phillips, & Terlevich (1981) Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5 Barton, Geller, & Kenyon (2000) Barton, E. J., Geller, M. J., & Kenyon, S. J. 2000, ApJ, 530, 660 Becker, White, & Helfand (1995) Becker, R. H., White, R. L., & Helfand, D. J. 1995, ApJ, 450, 559 Best (2004) Best, P. N. 2004, MNRAS, 351, 70 Best et al. (2005) Best, P. N., Kauffmann, G., Heckman, T. M., & Ivezić, Ž. 2005, MNRAS, 362, 9 Carter et al. (2001) Carter, B. J., Fabricant, D. G., Geller, M. J., & Kurtz, M. J. 2001, ApJ, 559, 606 Condon (1992) Condon, J. J. 1992, ARA&A, 30, 575 Christlein (2000) Christlein, D. 2000, ApJ, 545, 145 Cirasuolo et al. (2003) Cirasuolo, M., Magliocchetti, M., Celotti, A., & Danese, L. 2003, MNRAS, 341, 993 Condon et al. (1993) Condon, J. J., Cotton, W. D., Greisen, E. W., Perley, R. A., Yin, Q. F., & Broderick, J. J. 1993, Bulletin of the American Astronomical Society, 25, 1389 de Vaucouleurs et al. (1995) de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., Buta, R. J., Paturel, G., & Fouque, P. 1995, VizieR Online Data Catalog, 7155, 0 Dopita et al.
(2000) Dopita, M. A., Kewley, L. J., Heisler, C. A., & Sutherland, R. S. 2000, ApJ, 542, 224 Dunlop et al. (2003) Dunlop, J. S., McLure, R. J., Kukula, M. J., Baum, S. A., O’Dea, C. P., & Hughes, D. H. 2003, MNRAS, 340, 1095 Fanaroff & Riley (1974) Fanaroff, B. L., & Riley, J. M. 1974, MNRAS, 167, 31P Geller et al. (1997) Geller, M. J., et al. 1997, AJ, 114, 2205 Glikman et al. (2004) Glikman, E., Helfand, D. J., Becker, R. H., & White, R. L. 2004, ASP Conference Series 311: AGN Physics with the Sloan Digital Sky Survey 311, 351 Gómez et al. (2003) Gómez, P. L., et al. 2003, ApJ, 584, 210 Ivezić et al. (2004) Ivezić, Z., et al. 2004, ASP Conf. Ser. 311: AGN Physics with the Sloan Digital Sky Survey, 311, 347 Helou, Soifer, & Rowan-Robinson (1985) Helou, G., Soifer, B. T., & Rowan-Robinson, M. 1985, ApJ, 298, L7 Huchra & Burg (1992) Huchra, J., & Burg, R. 1992, ApJ, 393, 90 Kauffmann et al. (2003) Kauffmann, G., et al. 2003, MNRAS, 346, 1055 Kauffmann et al. (2004) Kauffmann, G., White, S. D. M., Heckman, T. M., Ménard, B., Brinchmann, J., Charlot, S., Tremonti, C., & Brinkmann, J. 2004, MNRAS, 353, 713 Martini et al. (2002) Martini, P., Kelson, D. D., Mulchaey, J. S., & Trager, S. C.,2002, ApJ, 576, 109L Mateus & Sodré (2004) Mateus, A. & Sodré, L. 2004, MNRAS, 349, 1251 Miller et al. (2003) Miller, C. J., Nichol, R. C., Gómez, P. L., Hopkins, A. M., & Bernardi, M. 2003, ApJ, 597, 142 Moran, Filippenko, & Chornock (2002) Moran, E. C., Filippenko, A. V., & Chornock, R. 2002, ApJ, 579, L71 Moshir et al. (1990) Moshir, M., et al.  1990, BAAS, 22, 1325 Ranalli et al. (2003) Ranalli, P., Comastri, A., & Setti, G. 2003, A&A, 399, 39 Reviglio (2003) Reviglio P., “Multiwavelength Analysis of Star-Forming Galaxies and Active Galactic Nuclei in a Complete Redshift Survey”. Ph.D. Thesis, Università di Torino, XVI Ciclo, 2001-2003 Ulvestad & Ho (2001) Ulvestad, J. S., & Ho, L. C. 2001, ApJ, 558, 561 Veilleux & Osterbrock (1987) Veilleux, S. & Osterbrock, D. E. 
1987, ApJS, 63, 295 Voges et al. (1999) Voges, W., et al. 1999, A&A, 349, 389 Voges et al. (2000) Voges, W., et al. 2000, VizieR Online Data Catalog, 9029, 0 Wegner et al. (2001) Wegner, G., et al.  2001, AJ, 122, 2893 White, Giommi, & Angelini (2000) White, N. E., Giommi, P., & Angelini, L. 2000, VizieR Online Data Catalog, 9031, 0
On the fourth order accuracy of the finite difference implementation of $C^{0}$-$Q^{2}$ finite element method for elliptic equations Hao Li and Xiangxiong Zhang Department of Mathematics, Purdue University, 150 N. University Street, West Lafayette, IN 47907-2067, USA Emails: [email protected], [email protected] Abstract The classical continuous finite element method with Lagrangian $Q^{2}$ basis reduces to a finite difference scheme when all the integrals are replaced by the $3\times 3$ Gauss-Lobatto quadrature. By deriving an explicit representation of the quadrature error, we prove that this finite difference scheme is fourth order accurate in the discrete 2-norm for an elliptic equation $-\nabla\cdot(\mathbf{a}\nabla u)+\mathbf{b}\cdot\nabla u+cu=f$ with Dirichlet boundary conditions, which is a superconvergence result for function values. Keywords: superconvergence, fourth order accurate discrete Laplacian, elliptic equations, finite difference implementation of finite element method, $3\times 3$ Gauss-Lobatto quadrature. 1 Introduction 1.1 Motivation In this paper we consider solving a two-dimensional elliptic equation with smooth coefficients on a rectangular domain by the continuous finite element method using tensor product polynomials of degree two on a rectangular mesh. Consider the following model problem as an example: a variable coefficient Poisson equation $-\nabla\cdot(a(\mathbf{x})\nabla u)=f$, $a(\mathbf{x})>0$, on a square domain $\Omega=(0,1)\times(0,1)$ with homogeneous Dirichlet boundary conditions.
The variational form is to find $u\in H_{0}^{1}(\Omega)=\{v\in H^{1}(\Omega):v|_{\partial\Omega}=0\}$ satisfying $$A(u,v)=(f,v),\quad\forall v\in H_{0}^{1}(\Omega),$$ where $A(u,v)=\iint_{\Omega}a\nabla u\cdot\nabla v\,dxdy$ and $(f,v)=\iint_{\Omega}fv\,dxdy.$ Let $h$ be the mesh size of a uniform rectangular mesh and let $V_{0}^{h}\subseteq H^{1}_{0}(\Omega)$ be the continuous finite element space consisting of piecewise $Q^{k}$ polynomials (i.e., tensor products of piecewise polynomials of degree $k$); then the $C^{0}$-$Q^{k}$ finite element solution is defined as $u_{h}\in V_{0}^{h}$ satisfying $$A(u_{h},v_{h})=(f,v_{h}),\quad\forall v_{h}\in V_{0}^{h}.$$ (1) Standard error estimates for (1) are $\|u-u_{h}\|_{1}\leq Ch^{k}\|u\|_{k+1}$ and $\|u-u_{h}\|_{0}\leq Ch^{k+1}\|u\|_{k+1}$, where $\|\cdot\|_{k}$ denotes the $H^{k}(\Omega)$-norm, see Ciarlet (1991). For $k\geq 2$, $\mathcal{O}(h^{k+1})$ superconvergence for the gradient at Gauss quadrature points and $\mathcal{O}(h^{k+2})$ superconvergence for function values at Gauss-Lobatto quadrature points were proven for the one-dimensional case in Lesaint & Zlamal (1979); Chen (1979); Bakker (1982) and for the two-dimensional case in Douglas et al. (1974); Wahlbin (2006); Chen (2001); Lin & Yan (1996). To implement the scheme (1), integrals are usually approximated by quadrature. In practice the most convenient choice of quadrature for the $Q^{2}$ element is the $3\times 3$ Gauss-Lobatto quadrature rule, since the quadrature points are exactly the degrees of freedom representing the Lagrangian $Q^{2}$ basis, see Figure 1. Such a quadrature scheme can be denoted as finding $u_{h}\in V_{0}^{h}$ satisfying $$A_{h}(u_{h},v_{h})=\langle f,v_{h}\rangle_{h},\quad\forall v_{h}\in V_{0}^{h},$$ (2) where $A_{h}(u_{h},v_{h})$ and $\langle f,v_{h}\rangle_{h}$ denote using the tensor product of the $3$-point Gauss-Lobatto quadrature for the integrals $A(u_{h},v_{h})$ and $(f,v_{h})$ respectively.
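The fourth order accuracy of scheme (2) can be observed numerically in a minimal sketch. The following code is our own illustration, not the paper's implementation, and treats only the one-dimensional analogue of (2): $P^{2}$ elements for $-u''=f$ on $(0,1)$, where the stiffness integrals are exact under the 3-point Gauss-Lobatto rule and only the load is affected by the quadrature. The function name `solve_p2_lobatto` and the manufactured solution $u=\sin(\pi x)$ are our choices.

```python
import numpy as np

def solve_p2_lobatto(N):
    """P^2 finite elements on (0,1) for -u'' = f, u(0) = u(1) = 0.
    The stiffness matrix is exact; the load vector is integrated by the
    3-point Gauss-Lobatto rule. Returns the max error at all nodes
    (vertices and midpoints) against the solution u(x) = sin(pi*x)."""
    H = 1.0 / N                           # element length
    x = np.linspace(0.0, 1.0, 2 * N + 1)  # vertices and midpoints
    A = np.zeros((2 * N + 1, 2 * N + 1))
    b = np.zeros(2 * N + 1)
    # exact local stiffness matrix of a quadratic element for -u''
    Ke = np.array([[7.0, -8.0, 1.0],
                   [-8.0, 16.0, -8.0],
                   [1.0, -8.0, 7.0]]) / (3.0 * H)
    # Gauss-Lobatto weights on an element: H/6, 2H/3, H/6
    w = np.array([H / 6.0, 2.0 * H / 3.0, H / 6.0])
    f = lambda t: np.pi ** 2 * np.sin(np.pi * t)
    for e in range(N):
        idx = [2 * e, 2 * e + 1, 2 * e + 2]
        A[np.ix_(idx, idx)] += Ke
        b[idx] += w * f(x[idx])
    # impose the homogeneous Dirichlet boundary conditions
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 0.0
    u = np.linalg.solve(A, b)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# halving h should cut the error by about 2^4 = 16
print(solve_p2_lobatto(8) / solve_p2_lobatto(16))
```

In this 1D setting one can check that eliminating the midpoint unknowns condenses the system to the vertex equations $(-u_{j-1}+2u_{j}-u_{j+1})/H^{2}=(f_{j-1/2}+f_{j}+f_{j+1/2})/3$, which shows concretely how the quadrature scheme reads as a finite difference method.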
It is well-known that many classical finite difference schemes are exactly finite element methods with a specific quadrature scheme, see Ciarlet (1991). The scheme (2) becomes a finite difference scheme, which will be explained in Section 7. On the one hand, such a finite difference implementation provides an efficient way of assembling the stiffness matrix, especially for a variable coefficient problem. On the other hand, (2) is the variational approach to constructing a high order accurate finite difference scheme with advantages inherited from the variational formulation, such as symmetry of the stiffness matrix and ease of handling boundary conditions in high order schemes. Classical quadrature error estimates imply that standard finite element error estimates still hold for (2), see Ciarlet & Raviart (1972); Ciarlet (1991). The focus of this paper is to prove that the superconvergence of function values at Gauss-Lobatto points still holds. To be more specific, for Dirichlet type boundary conditions, we will show that (2) is a fourth order accurate finite difference scheme in the discrete 2-norm under suitable smoothness assumptions on the exact solution and the coefficients. 1.2 Related work and difficulty in using standard tools The finite element method with Lagrangian quadratic polynomial basis for solving $-\Delta u=f$ on a regular triangular mesh (two adjacent triangles form a rectangle) is equivalent to a finite difference scheme (Whiteman (1975)), since the quadrature using the three edge centers and three vertices of a triangle is exact for integrating quadratic polynomials over this triangle, thus the quadrature is exact for the bilinear form in the finite element method. Superconvergence of function values in the $C^{0}$-$P^{2}$ finite element method at the three vertices and three edge centers can also be proven for solving $-\Delta u=f$ (Chen (2001); Wahlbin (2006)). See also Huang & Xu (2008) for superconvergence of the $P^{2}$ finite element method.
Thus one can also construct a fourth order accurate finite difference scheme by using the $P^{2}$ finite element method discussed in Whiteman (1975) for solving $-\Delta u=f$. Since the quadrature is exact for the bilinear form, the superconvergence results for the $C^{0}$-$P^{2}$ finite element method hold trivially after using the quadrature in the bilinear form, but only for solving $-\Delta u=f$. For a variable coefficient Poisson equation or a general elliptic problem, since such a quadrature is only third order accurate, we do not expect fourth order accuracy in the corresponding finite difference scheme. For computing the bilinear form in the scheme (1), another convenient implementation is to replace the smooth coefficient $a(x,y)$ by a piecewise $Q^{2}$ polynomial $a_{I}(x,y)$ obtained by interpolating $a(x,y)$ at the quadrature points in each cell shown in Figure 1. Then one can compute the integrals in the bilinear form exactly since the integrand is a polynomial. Superconvergence of function values for such an approximated coefficient scheme was proven in Li & Zhang (2019b), and the proof can be easily extended to higher order polynomials and three-dimensional cases. This result might seem surprising since the interpolation error $a(x,y)-a_{I}(x,y)$ is only of third order. On the other hand, all the tools used in Li & Zhang (2019b) are standard in the literature. From a practical point of view, (2) is more interesting since it gives a genuine finite difference scheme. It is straightforward to use standard tools in the literature to show that superconvergence still holds for a sufficiently accurate quadrature. However, even though the $3\times 3$ Gauss-Lobatto quadrature is fourth order accurate, the standard quadrature error estimates cannot be used to establish the fourth order accuracy of (2).
To be specific, in order to extend the standard superconvergence proof to the scheme (2), it is necessary to establish the following consistency estimate: $$A(u,v_{h})-A_{h}(u,v_{h})=\mathcal{O}(h^{4})\|u\|_{5}\|v_{h}\|_{2}.$$ As will be explained in Remark 3.15 in Section 3.3, such an estimate cannot be obtained by the standard quadrature error estimation tool, i.e., the Bramble-Hilbert Lemma. The Bramble-Hilbert Lemma gives a sharp quadrature error estimate for each cell but not for the whole bilinear form, since it does not account for the cancellation of quadrature errors between neighboring cells. In order to obtain a sharp estimate of $A(u,v_{h})-A_{h}(u,v_{h})$, we will derive an explicit error term of the Gauss-Lobatto quadrature, with which the standard superconvergence proof can be applied. We can also rewrite (2) as a standard finite difference scheme and try to apply traditional finite difference approaches to analyze its convergence order. However, the local truncation error is only second order, as will be shown in Section 7.4. The phenomenon that the convergence order exceeds the order of the truncation error is named supraconvergence in the literature. The second order local truncation error makes it extremely difficult to establish the fourth order accuracy following any traditional finite difference analysis approach. 1.3 Contributions and organization of the paper The main contribution of this paper is to establish the fourth order accuracy of the simple scheme (2) for a general elliptic equation $-\nabla\cdot(\mathbf{a}\nabla u)+\mathbf{b}\cdot\nabla u+cu=f$ with Dirichlet boundary conditions. The same proof also applies to Neumann type boundary conditions, but then only an accuracy order of $3.5$ can be proven, even though fourth order accuracy holds in numerical tests. This paper is organized as follows. In Section 2, we introduce our notations and assumptions. In Section 3, standard quadrature estimates are reviewed and an explicit 3-point Gauss-Lobatto quadrature error is derived.
Superconvergence of bilinear forms with quadrature is shown in Section 4. Then we prove the main result for homogeneous Dirichlet boundary conditions in Section 5 and for nonhomogeneous Dirichlet boundary conditions in Section 6. Section 7 provides a simple finite difference implementation of the discussed scheme. Section 8 contains numerical tests. Concluding remarks are given in Section 9. 2 Notations and assumptions 2.1 Notations and basic tools We will use the same notations as in Li & Zhang (2019b): • We only consider a rectangular domain $\Omega=(0,1)\times(0,1)$ with its boundary denoted as $\partial\Omega$. • Only for convenience, we assume $\Omega_{h}$ is a uniform rectangular mesh for $\bar{\Omega}$ and $e=[x_{e}-h,x_{e}+h]\times[y_{e}-h,y_{e}+h]$ denotes any cell in $\Omega_{h}$ with cell center $(x_{e},y_{e})$. The assumption of a uniform mesh is not essential to the discussion of superconvergence. • $Q^{k}(e)=\left\{p(x,y)=\sum\limits_{i=0}^{k}\sum\limits_{j=0}^{k}p_{ij}x^{i}y^{j},(x,y)\in e\right\}$ is the set of tensor product polynomials of degree $k$ on a cell $e$. • $V^{h}=\{p(x,y)\in C^{0}(\Omega_{h}):p|_{e}\in Q^{2}(e),\quad\forall e\in\Omega_{h}\}$ denotes the continuous piecewise $Q^{2}$ finite element space on $\Omega_{h}$. • $V^{h}_{0}=\{v_{h}\in V^{h}:v_{h}=0\quad\mbox{on}\quad\partial\Omega\}.$ • The norm and seminorms for $W^{k,p}(\Omega)$ and $1\leq p<+\infty$, with the standard modification for $p=+\infty$: $$\|u\|_{k,p,\Omega}=\left(\sum\limits_{i+j\leq k}\iint_{\Omega}|\partial_{x}^{i}\partial_{y}^{j}u(x,y)|^{p}dxdy\right)^{1/p},$$ $$|u|_{k,p,\Omega}=\left(\sum\limits_{i+j=k}\iint_{\Omega}|\partial_{x}^{i}\partial_{y}^{j}u(x,y)|^{p}dxdy\right)^{1/p},$$ $$[u]_{k,p,\Omega}=\left(\iint_{\Omega}|\partial_{x}^{k}u(x,y)|^{p}dxdy+\iint_{\Omega}|\partial_{y}^{k}u(x,y)|^{p}dxdy\right)^{1/p}.$$ Notice that $[u]_{k+1,p,\Omega}=0$ if $u$ is a $Q^{k}$ polynomial.
• For simplicity, sometimes we may use $\|u\|_{k,\Omega}$, $|u|_{k,\Omega}$ and $[u]_{k,\Omega}$ to denote the norm and seminorms for $H^{k}(\Omega)=W^{k,2}(\Omega)$. • When there is no confusion, $\Omega$ may be dropped in the norm and seminorms, e.g., $\|u\|_{k}=\|u\|_{k,2,\Omega}$. • For any $v_{h}\in V^{h}$, $1\leq p<+\infty$ and $k\geq 1$, $$\|v_{h}\|_{k,p,\Omega}:=\left(\sum_{e}\|v_{h}\|_{k,p,e}^{p}\right)^{\frac{1}{p}},\quad|v_{h}|_{k,p,\Omega}:=\left(\sum_{e}|v_{h}|_{k,p,e}^{p}\right)^{\frac{1}{p}},\quad[v_{h}]_{k,p,\Omega}:=\left(\sum_{e}[v_{h}]_{k,p,e}^{p}\right)^{\frac{1}{p}}.$$ • Let $Z_{0,e}$ denote the set of $3\times 3$ Gauss-Lobatto points on a cell $e$. • $Z_{0}=\bigcup_{e}Z_{0,e}$ denotes all Gauss-Lobatto points in the mesh $\Omega_{h}$. • Let $\|u\|_{2,Z_{0}}$ and $\|u\|_{\infty,Z_{0}}$ denote the discrete 2-norm and the maximum norm over $Z_{0}$ respectively: $$\|u\|_{2,Z_{0}}=\left[h^{2}\sum_{(x,y)\in Z_{0}}|u(x,y)|^{2}\right]^{\frac{1}{2}},\quad\|u\|_{\infty,Z_{0}}=\max_{(x,y)\in Z_{0}}|u(x,y)|.$$ • For a continuous function $f(x,y)$, let $f_{I}(x,y)$ denote its piecewise $Q^{2}$ Lagrange interpolant at $Z_{0,e}$ on each cell $e$, i.e., $f_{I}\in V^{h}$ satisfies: $$f(x,y)=f_{I}(x,y),\quad\forall(x,y)\in Z_{0}.$$ • $P^{k}(t)$ denotes a polynomial of degree $k$ in the variable $t$. • $(f,v)_{e}$ denotes the inner product in $L^{2}(e)$ and $(f,v)$ denotes the inner product in $L^{2}(\Omega)$: $$(f,v)_{e}=\iint_{e}fv\,dxdy,\quad(f,v)=\iint_{\Omega}fv\,dxdy=\sum_{e}(f,v)_{e}.$$ • $\langle f,v\rangle_{e,h}$ denotes the approximation to $(f,v)_{e}$ by the $3\times 3$-point Gauss-Lobatto quadrature for integration over the cell $e$. • $\langle f,v\rangle_{h}$ denotes the approximation to $(f,v)$ by the $3\times 3$-point Gauss-Lobatto quadrature for integration over each cell $e$. • $\hat{K}=[-1,1]\times[-1,1]$ denotes the reference cell. • For $f(x,y)$ defined on $e$, consider $\hat{f}(s,t)=f(sh+x_{e},th+y_{e})$ defined on $\hat{K}$.
Let $\hat{f}_{I}$ denote the $Q^{2}$ Lagrange interpolation of $\hat{f}$ at the $3\times 3$ Gauss-Lobatto quadrature points on $\hat{K}$. • $(\hat{f},\hat{v})_{\hat{K}}=\iint_{\hat{K}}\hat{f}\hat{v}\,dsdt.$ • $\langle\hat{f},\hat{v}\rangle_{\hat{K}}$ denotes the approximation to $(\hat{f},\hat{v})_{\hat{K}}$ by the $3\times 3$-point Gauss-Lobatto quadrature. • On the reference cell $\hat{K}$, for convenience we put the superscript $h$ on $ds$ or $dt$ to denote that the 3-point Gauss-Lobatto quadrature is used for the corresponding variable. For example, $$\iint_{\hat{K}}\hat{f}d^{h}sdt=\frac{1}{3}\int_{-1}^{1}[\hat{f}(-1,t)+4\hat{f}(0,t)+\hat{f}(1,t)]dt.$$ Since $(\hat{f}\hat{v})_{I}$ coincides with $\hat{f}\hat{v}$ at the quadrature points, we have $$\iint_{\hat{K}}(\hat{f}\hat{v})_{I}dsdt=\iint_{\hat{K}}(\hat{f}\hat{v})_{I}d^{h}sd^{h}t=\iint_{\hat{K}}\hat{f}\hat{v}d^{h}sd^{h}t=\langle\hat{f},\hat{v}\rangle_{\hat{K}}.$$ The following are commonly used tools and facts: • For two-dimensional problems, $$h^{k-2/p}|v|_{k,p,e}=|\hat{v}|_{k,p,\hat{K}},\quad h^{k-2/p}[v]_{k,p,e}=[\hat{v}]_{k,p,\hat{K}},\quad 1\leq p\leq\infty.$$ • Inverse estimates for polynomials: $$\|v_{h}\|_{k+1,e}\leq Ch^{-1}\|v_{h}\|_{k,e},\quad\forall v_{h}\in V^{h},k\geq 0.$$ (3) • Sobolev’s embedding in two and three dimensions: $H^{2}(\hat{K})\hookrightarrow C^{0}(\hat{K})$.
• The embedding implies $$\|\hat{f}\|_{0,\infty,\hat{K}}\leq C\|\hat{f}\|_{k,2,\hat{K}},\quad\forall\hat{f}\in H^{k}(\hat{K}),k\geq 2,$$ $$\|\hat{f}\|_{1,\infty,\hat{K}}\leq C\|\hat{f}\|_{k+1,2,\hat{K}},\quad\forall\hat{f}\in H^{k+1}(\hat{K}),k\geq 2.$$ • Cauchy-Schwarz inequalities in two dimensions: $$\sum_{e}\|u\|_{k,e}\|v\|_{k,e}\leq\left(\sum_{e}\|u\|^{2}_{k,e}\right)^{\frac{1}{2}}\left(\sum_{e}\|v\|^{2}_{k,e}\right)^{\frac{1}{2}},\quad\|u\|_{k,1,e}=\mathcal{O}(h)\|u\|_{k,2,e}.$$ • Poincaré inequality: let $\bar{u}$ be the average of $u\in H^{1}(\Omega)$ on $\Omega$, then $$|u-\bar{u}|_{0,p,\Omega}\leq C|\nabla u|_{0,p,\Omega},\quad p\geq 1.$$ If $\bar{u}$ is the average of $u\in H^{1}(e)$ on a cell $e$, we have $$|u-\bar{u}|_{0,p,e}\leq Ch|\nabla u|_{0,p,e},\quad p\geq 1.$$ • For $k\geq 2$, the $(k+1)\times(k+1)$ Gauss-Lobatto quadrature is exact for integrating polynomials of degree $2k-1\geq k+1$ on $\hat{K}$. • Define the projection operator $\hat{\Pi}_{1}:\hat{u}\in L^{1}(\hat{K})\rightarrow\hat{\Pi}_{1}\hat{u}\in Q^{1}(\hat{K})$ by $$\iint_{\hat{K}}(\hat{\Pi}_{1}\hat{u})wdsdt=\iint_{\hat{K}}\hat{u}wdsdt,\quad\forall w\in Q^{1}(\hat{K}).$$ (4) Notice that all degrees of freedom of $\hat{\Pi}_{1}\hat{u}$ can be represented as linear combinations of $\iint_{\hat{K}}\hat{u}(s,t)p(s,t)dsdt$ for $p(s,t)=1,s,t,st$; thus $\hat{\Pi}_{1}$ is a continuous linear mapping from $L^{2}(\hat{K})$ to $H^{1}(\hat{K})$ (or $H^{2}(\hat{K})$) by the Cauchy-Schwarz inequality $|\iint_{\hat{K}}\hat{u}(s,t)\hat{p}(s,t)dsdt|\leq\|\hat{u}\|_{0,2,\hat{K}}\|\hat{p}\|_{0,2,\hat{K}}\leq C\|\hat{u}\|_{0,2,\hat{K}}$.
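The exactness fact in the list above can be checked mechanically for $k=2$: the one-dimensional 3-point Gauss-Lobatto rule has nodes $\{-1,0,1\}$ and weights $\{1/3,4/3,1/3\}$, and it integrates every polynomial of degree $\leq 2k-1=3$ exactly on $[-1,1]$, while a quartic already shows an error. The small sketch below is ours (the helper names `lobatto` and `exact` are not from the paper); exact rational arithmetic makes the check unambiguous.

```python
from fractions import Fraction

# 3-point Gauss-Lobatto rule on [-1, 1]: nodes -1, 0, 1; weights 1/3, 4/3, 1/3
nodes = [Fraction(-1), Fraction(0), Fraction(1)]
weights = [Fraction(1, 3), Fraction(4, 3), Fraction(1, 3)]

def lobatto(coeffs):
    """Apply the rule to the polynomial sum(c_m * t**m)."""
    return sum(w * sum(c * x**m for m, c in enumerate(coeffs))
               for x, w in zip(nodes, weights))

def exact(coeffs):
    """Exact integral of sum(c_m * t**m) over [-1, 1]."""
    return sum(c * (1 - (-1)**(m + 1)) * Fraction(1, m + 1)
               for m, c in enumerate(coeffs))

# exact for all monomials up to degree 2k-1 = 3 ...
for m in range(4):
    mono = [0] * m + [1]
    assert lobatto(mono) == exact(mono)

# ... but not for degree 4: the error 2/3 - 2/5 = 4/15 is nonzero,
# consistent with an error term proportional to the fourth derivative
print(lobatto([0, 0, 0, 0, 1]) - exact([0, 0, 0, 0, 1]))
```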
2.2 Elliptic regularity and $V^{h}$ ellipticity We consider the elliptic variational problem of finding $u\in H_{0}^{1}(\Omega)$ satisfying $$A(u,v):=\iint_{\Omega}(\nabla v^{T}\mathbf{a}\nabla u+(\mathbf{b}\cdot\nabla u)v+cuv)\,dxdy=(f,v),\quad\forall v\in H^{1}_{0}(\Omega),$$ (5) where $\mathbf{a}=\begin{pmatrix}a^{11}&a^{12}\\ a^{21}&a^{22}\end{pmatrix}$ is positive definite and $\mathbf{b}=[b^{1}\quad b^{2}]$. Assume the coefficients $\mathbf{a}$, $\mathbf{b}$ and $c$ are smooth with uniform upper bounds, so that $A(u,v)\leq C\|u\|_{1}\|v\|_{1}$ for any $u,v\in H^{1}_{0}(\Omega)$. Assume the eigenvalues of $\mathbf{a}$ have a uniform positive lower bound and $\nabla\cdot\mathbf{b}\leq 2c$, so that coercivity of the bilinear form can be easily achieved. Since $$(\mathbf{b}\cdot\nabla u,v)=\int_{\partial\Omega}uv\mathbf{b}\cdot\mathbf{n}ds-(\nabla\cdot(v\mathbf{b}),u)=\int_{\partial\Omega}uv\mathbf{b}\cdot\mathbf{n}ds-(\mathbf{b}\cdot\nabla v,u)-(v\nabla\cdot\mathbf{b},u),$$ we have $$2(\mathbf{b}\cdot\nabla v,v)+2(cv,v)=\int_{\partial\Omega}v^{2}\mathbf{b}\cdot\mathbf{n}ds+((2c-\nabla\cdot\mathbf{b})v,v)\geq 0,\quad\forall v\in H^{1}_{0}(\Omega).$$ (6) By the equivalence of the two norms $|\cdot|_{1}$ and $\|\cdot\|_{1}$ on the space $H^{1}_{0}(\Omega)$ (see Ciarlet (1991)), we conclude that the bilinear form $A(u,v)=(\mathbf{a}\nabla u,\nabla v)+(\mathbf{b}\cdot\nabla u,v)+(cu,v)$ satisfies the coercivity $A(v,v)\geq C\|v\|_{1}^{2}$ for any $v\in H^{1}_{0}(\Omega)$. We need to make two additional assumptions for the general elliptic operator (5): 1. The elliptic regularity holds for the dual problem. Let $A^{*}$ be the dual operator of $A$, i.e., $A^{*}(u,v)=A(v,u)$. We assume the elliptic regularity $\|w\|_{2}\leq C\|f\|_{0}$ holds for the dual problem of finding $w\in H^{1}_{0}(\Omega)$ satisfying $A^{*}(w,v)=(f,v),\quad\forall v\in H_{0}^{1}(\Omega)$.
See Savaré (1998) and Grisvard (2011) for the elliptic regularity with Lipschitz continuous coefficients on a Lipschitz domain. 2. The bilinear form $A_{h}$ satisfies the $V_{h}$-ellipticity: $$\forall v_{h}\in V^{h}_{0},\quad C\|v_{h}\|^{2}_{1}\leq A_{h}(v_{h},v_{h}).$$ (7) In Section 5.1, we will show that the $V_{h}$-ellipticity holds if $h$ is small enough. 3 Quadrature error estimates In the following, we will use $\hat{\quad}$ on a function to emphasize that the function is defined on, or transformed to, the reference cell $\hat{K}=[-1,1]\times[-1,1]$ from a mesh cell. 3.1 Standard estimates The Bramble-Hilbert Lemma for $Q^{k}$ polynomials can be stated as follows, see Exercise 3.1.1 and Theorem 4.1.3 in Ciarlet (2002): Theorem 3.1. If a continuous linear mapping $\hat{\Pi}:H^{k+1}(\hat{K})\rightarrow H^{k+1}(\hat{K})$ satisfies $\hat{\Pi}\hat{v}=\hat{v}$ for any $\hat{v}\in Q^{k}(\hat{K})$, then $$\|\hat{u}-\hat{\Pi}\hat{u}\|_{k+1,\hat{K}}\leq C[\hat{u}]_{k+1,\hat{K}},\quad\forall\hat{u}\in H^{k+1}(\hat{K}).$$ (8) Thus if $l(\cdot)$ is a continuous linear form on the space $H^{k+1}(\hat{K})$ satisfying $l(\hat{v})=0,\forall\hat{v}\in Q^{k}(\hat{K}),$ then $$|l(\hat{u})|\leq C\|l\|^{\prime}_{k+1,\hat{K}}[\hat{u}]_{k+1,\hat{K}},\quad\forall\hat{u}\in H^{k+1}(\hat{K}),$$ where $\|l\|^{\prime}_{k+1,\hat{K}}$ is the norm in the dual space of $H^{k+1}(\hat{K})$. By applying the Bramble-Hilbert Lemma, we have the following standard quadrature estimates. See Li & Zhang (2019b) for the detailed proof. Theorem 3.2. For a sufficiently smooth function $a(x,y)$ and its $Q^{2}$ interpolation $a_{I}$, let $m$ be an integer satisfying $2\leq m\leq 4$; then $$\iint_{e}a(x,y)dxdy-\iint_{e}a_{I}(x,y)dxdy=\mathcal{O}(h^{m+1})[a]_{m,e}=\mathcal{O}(h^{m+2})[a]_{m,\infty,e}.$$ Theorem 3.3. If $f\in H^{4}(\Omega)$, then $(f,v_{h})-\langle f,v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|f\|_{4}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}.$ Remark 3.4.
By the theorems above, on the reference cell $\hat{K}$ we have $$\left|\iint_{\hat{K}}(\hat{a}(s,t)-\hat{a}_{I}(s,t))dsdt\right|\leq C[\hat{a}]_{4,\hat{K}}\leq C[\hat{a}]_{4,\infty,\hat{K}},$$ (9) and $$|\hat{a}-\hat{a}_{I}|_{3,\hat{K}}\leq C[\hat{a}]_{3,\hat{K}}.$$ (10) The following two results are also standard estimates obtained by applying the Bramble-Hilbert Lemma. Lemma 3.5. If $f\in H^{2}(\Omega)$ or $f\in V^{h}$, we have $(f,v_{h})-\langle f,v_{h}\rangle_{h}=\mathcal{O}(h^{2})|f|_{2}\|v_{h}\|_{0},\quad\forall v_{h}\in V^{h}.$ Proof 3.6. For simplicity, we ignore the subscript in $v_{h}$. Let $E(f)$ denote the quadrature error for integrating $f(x,y)$ on $e$. Let $\hat{E}(\hat{f})$ denote the quadrature error for integrating $\hat{f}(s,t)=f(x_{e}+sh,y_{e}+th)$ on the reference cell $\hat{K}$. Due to the embedding $H^{2}(\hat{K})\hookrightarrow C^{0}(\hat{K})$, we have $$|\hat{E}(\hat{f}\hat{v})|\leq C|\hat{f}\hat{v}|_{0,\infty,\hat{K}}\leq C|\hat{f}|_{0,\infty,\hat{K}}|\hat{v}|_{0,\infty,\hat{K}}\leq C\|\hat{f}\|_{2,\hat{K}}\|\hat{v}\|_{0,\hat{K}}.$$ Thus the mapping $\hat{f}\rightarrow\hat{E}(\hat{f}\hat{v})$ is a continuous linear form on $H^{2}(\hat{K})$ and its norm is bounded by $C\|\hat{v}\|_{0,\hat{K}}$. If $\hat{f}\in Q^{1}(\hat{K})$, then $\hat{E}(\hat{f}\hat{v})=0$. By applying the Bramble-Hilbert Lemma (Theorem 3.1) to this continuous linear form, we get $$|\hat{E}(\hat{f}\hat{v})|\leq C[\hat{f}]_{2,\hat{K}}\|\hat{v}\|_{0,\hat{K}}.$$ So on a cell $e$, we get $$E(fv)=h^{2}\hat{E}(\hat{f}\hat{v})\leq Ch^{2}[\hat{f}]_{2,\hat{K}}\|\hat{v}\|_{0,\hat{K}}\leq Ch^{2}|f|_{2,e}\|v\|_{0,e}.$$ (11) Summing over all elements and using the Cauchy-Schwarz inequality, we get the desired result. Theorem 3.7. Assume all coefficients of (5) are in $W^{2,\infty}(\Omega)$. We have $$A(z_{h},v_{h})-A_{h}(z_{h},v_{h})=\mathcal{O}(h)\|v_{h}\|_{2}\|z_{h}\|_{1},\quad\forall v_{h},z_{h}\in V^{h}.$$ Proof 3.8.
By setting $f=a^{11}(v_{h})_{x}$ in (11), we get $$|(a^{11}(z_{h})_{x},(v_{h})_{x})-\langle a^{11}(z_{h})_{x},(v_{h})_{x}\rangle_{h}|\leq Ch^{2}\|a^{11}(v_{h})_{x}\|_{2}\|(z_{h})_{x}\|_{0}\leq Ch^{2}\|a^{11}\|_{2,\infty}\|v_{h}\|_{3}|z_{h}|_{1}\leq Ch\|a^{11}\|_{2,\infty}\|v_{h}\|_{2}|z_{h}|_{1},$$ where the inverse estimate (3) is used in the last inequality. Similarly, we have $$(a^{12}(z_{h})_{x},(v_{h})_{y})-\langle a^{12}(z_{h})_{x},(v_{h})_{y}\rangle_{h}=\mathcal{O}(h)\|a^{12}\|_{2,\infty}\|v_{h}\|_{2}|z_{h}|_{1},$$ $$(a^{22}(z_{h})_{y},(v_{h})_{y})-\langle a^{22}(z_{h})_{y},(v_{h})_{y}\rangle_{h}=\mathcal{O}(h)\|a^{22}\|_{2,\infty}\|v_{h}\|_{2}|z_{h}|_{1},$$ $$(b^{1}(z_{h})_{x},v_{h})-\langle b^{1}(z_{h})_{x},v_{h}\rangle_{h}=\mathcal{O}(h)\|b^{1}\|_{2,\infty}\|v_{h}\|_{2}|z_{h}|_{0},$$ $$(b^{2}(z_{h})_{y},v_{h})-\langle b^{2}(z_{h})_{y},v_{h}\rangle_{h}=\mathcal{O}(h)\|b^{2}\|_{2,\infty}\|v_{h}\|_{2}|z_{h}|_{0},$$ $$(cz_{h},v_{h})-\langle cz_{h},v_{h}\rangle_{h}=\mathcal{O}(h)\|c\|_{2,\infty}\|v_{h}\|_{1}|z_{h}|_{0},$$ which implies $$A(z_{h},v_{h})-A_{h}(z_{h},v_{h})=\mathcal{O}(h)\|v_{h}\|_{2}|z_{h}|_{1}.$$ 3.2 Explicit quadrature error terms Define $p(t)=\frac{1}{24}t^{4}-\frac{1}{9}t^{3}+\frac{1}{12}t^{2}-\frac{1}{72}$ and let $\tilde{p}(t)$ denote its even extension: $$\tilde{p}(t)=\begin{cases}\frac{1}{24}t^{4}-\frac{1}{9}t^{3}+\frac{1}{12}t^{2}-\frac{1}{72},&t\geq 0,\\ \frac{1}{24}t^{4}+\frac{1}{9}t^{3}+\frac{1}{12}t^{2}-\frac{1}{72},&t<0.\end{cases}$$ Lemma 3.9. If $\hat{g}\in W^{4,1}([0,1])$ with $\hat{g}^{\prime}(0)=\hat{g}^{(3)}(0)=0$, then $$\int_{0}^{1}\hat{g}(t)dt=\frac{1}{3}\hat{g}(1)+\frac{2}{3}\hat{g}(0)+\int_{0}^{1}p(t)\hat{g}^{(4)}(t)dt.$$ (12) Proof 3.10. First we assume that $\hat{g}\in C^{4}([0,1])$; then the identity can be shown through integration by parts.
It is straightforward to check that $$\int_{0}^{1}\hat{g}(t)dt=\frac{1}{3}\hat{g}(1)+\frac{2}{3}\hat{g}(0)-\int_{0}^{1}(t-\frac{2}{3})\hat{g}^{\prime}(t)dt.$$ And we have $$-\int_{0}^{1}(t-\frac{2}{3})\hat{g}^{\prime}(t)dt=-\int_{0}^{1}\hat{g}^{\prime}(t)d(\frac{t^{2}}{2}-\frac{2t}{3}+\frac{1}{6})=-\left.\left[(\frac{t^{2}}{2}-\frac{2t}{3}+\frac{1}{6})\hat{g}^{\prime}(t)\right]\right|_{0}^{1}+\int_{0}^{1}(\frac{t^{2}}{2}-\frac{2t}{3}+\frac{1}{6})\hat{g}^{\prime\prime}(t)dt=\int_{0}^{1}\hat{g}^{\prime\prime}(t)d(\frac{t^{3}}{6}-\frac{t^{2}}{3}+\frac{t}{6})=\left.\left[(\frac{t^{3}}{6}-\frac{t^{2}}{3}+\frac{t}{6})\hat{g}^{\prime\prime}(t)\right]\right|_{0}^{1}-\int_{0}^{1}(\frac{t^{3}}{6}-\frac{t^{2}}{3}+\frac{t}{6})\hat{g}^{(3)}(t)dt=-\int_{0}^{1}\hat{g}^{(3)}(t)d(\frac{1}{24}t^{4}-\frac{1}{9}t^{3}+\frac{1}{12}t^{2}-\frac{1}{72})=-\left.\left[(\frac{1}{24}t^{4}-\frac{1}{9}t^{3}+\frac{1}{12}t^{2}-\frac{1}{72})\hat{g}^{(3)}(t)\right]\right|_{0}^{1}+\int_{0}^{1}p(t)\hat{g}^{(4)}(t)dt=\int_{0}^{1}p(t)\hat{g}^{(4)}(t)dt,$$ where all the boundary terms vanish because the antiderivatives are chosen to be zero at $t=1$ and $\hat{g}^{\prime}(0)=\hat{g}^{(3)}(0)=0$. By standard approximation of $\hat{g}$ by smooth functions, the result also holds for $\hat{g}\in W^{4,1}([0,1])$. By Lemma 3.9, it is straightforward to show the following result. Lemma 3.11. Suppose $\hat{f}\in W^{4,1}([-1,1])$; then $$\int_{-1}^{1}\hat{f}(t)dt=\frac{1}{3}\left[\hat{f}(-1)+4\hat{f}(0)+\hat{f}(1)\right]+\int_{-1}^{1}\tilde{p}(t)\hat{f}^{(4)}(t)dt$$ (13) $$=\int_{-1}^{1}\hat{f}(t)d^{h}t+\int_{-1}^{1}\tilde{p}(t)\hat{f}^{(4)}(t)dt.$$ (14) Proof 3.12. Let $\hat{g}(t)=\hat{f}(t)+\hat{f}(-t)$; then $\hat{g}^{\prime}(0)=\hat{g}^{(3)}(0)=0$.
Applying Lemma 3.9 to $\hat{g}(t)$, we have $$\int_{-1}^{1}\hat{f}(t)dt=\int_{0}^{1}\hat{f}(-t)dt+\int_{0}^{1}\hat{f}(t)dt=\int_{0}^{1}\hat{g}(t)dt=\frac{1}{3}\hat{g}(1)+\frac{2}{3}\hat{g}(0)+\int_{0}^{1}p(t)\hat{g}^{(4)}(t)dt,$$ $$\int_{0}^{1}p(t)\hat{g}^{(4)}(t)dt=\int_{0}^{1}p(t)[\hat{f}^{(4)}(t)+\hat{f}^{(4)}(-t)]dt=\int_{-1}^{1}\tilde{p}(t)\hat{f}^{(4)}(t)dt.$$ Remark 3.13. For a function $u(x)\in W^{4,1}([x_{e}-h,x_{e}+h])$, if we map it from the cell $e=[x_{e}-h,x_{e}+h]$ to the reference cell $\hat{K}=[-1,1]$, apply Lemma 3.11 and map it back, we get the error representation of the 3-point Gauss-Lobatto quadrature: $$\int_{x_{e}-h}^{x_{e}+h}u(x)dx=\frac{h}{3}\left[u(x_{e}-h)+4u(x_{e})+u(x_{e}+h)\right]+h^{4}\int_{x_{e}-h}^{x_{e}+h}\tilde{p}(\frac{x-x_{e}}{h})u^{(4)}(x)dx.$$ (15) 3.3 A refined consistency error In this subsection, we will show how to establish the desired consistency error estimate for smooth enough coefficients: $$A(u,v_{h})-A_{h}(u,v_{h})=\begin{cases}\mathcal{O}(h^{4})\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}_{0},\\ \mathcal{O}(h^{3.5})\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}.\end{cases}$$ Theorem 3.14.
Assume $a(x,y)\in W^{4,\infty}(\Omega)$ and $u\in H^{5}(\Omega)$; then $$(a\partial_{x}u,\partial_{x}v_{h})-\langle a\partial_{x}u,\partial_{x}v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}_{0},$$ (16a) $$(a\partial_{x}u,\partial_{x}v_{h})-\langle a\partial_{x}u,\partial_{x}v_{h}\rangle_{h}=\mathcal{O}(h^{3.5})\|a\|_{4,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h},$$ (16b) $$(a\partial_{x}u,\partial_{y}v_{h})-\langle a\partial_{x}u,\partial_{y}v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}_{0},$$ (17a) $$(a\partial_{x}u,\partial_{y}v_{h})-\langle a\partial_{x}u,\partial_{y}v_{h}\rangle_{h}=\mathcal{O}(h^{3.5})\|a\|_{4,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h},$$ (17b) $$(a\partial_{x}u,v_{h})-\langle a\partial_{x}u,v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}_{0},$$ (18) $$(au,v_{h})-\langle au,v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{4}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}_{0}.$$ (19) Remark 3.15. We emphasize that Theorem 3.14 cannot be proven by applying the Bramble-Hilbert Lemma.
Consider the constant coefficient case $a(x,y)\equiv 1$ as an example: $$(\partial_{x}u,\partial_{x}v_{h})-\langle\partial_{x}u,\partial_{x}v_{h}\rangle_{h}=\sum_{e}\left(\iint_{e}u_{x}(v_{h})_{x}dxdy-\iint_{e}u_{x}(v_{h})_{x}d^{h}xd^{h}y\right).$$ Since the $3\times 3$ Gauss-Lobatto quadrature is exact for integrating $Q^{3}$ polynomials, by Theorem 3.1 we have $$\left|\iint_{e}u_{x}(v_{h})_{x}dxdy-\iint_{e}u_{x}(v_{h})_{x}d^{h}xd^{h}y\right|=\left|\iint_{\hat{K}}\hat{u}_{s}(\hat{v}_{h})_{s}dsdt-\iint_{\hat{K}}\hat{u}_{s}(\hat{v}_{h})_{s}d^{h}sd^{h}t\right|\leq C[\hat{u}_{s}(\hat{v}_{h})_{s}]_{4,\hat{K}}.$$ Notice that $\hat{v}_{h}$ is a $Q^{2}$ polynomial, thus $(\hat{v}_{h})_{stt}$ does not vanish and the seminorm $[\hat{u}_{s}(\hat{v}_{h})_{s}]_{4,\hat{K}}$ can only be bounded through $|\hat{v}_{h}|_{3,\hat{K}}$. So by the Bramble-Hilbert Lemma for $Q^{k}$ polynomials, we can only get $$\iint_{e}u_{x}(v_{h})_{x}dxdy-\iint_{e}u_{x}(v_{h})_{x}d^{h}xd^{h}y=\mathcal{O}(h^{4})\|u\|_{5,e}\|v_{h}\|_{3,e}.$$ Thus by the Cauchy-Schwarz inequality after summing over $e$, we only have $$(\partial_{x}u,\partial_{x}v_{h})-\langle\partial_{x}u,\partial_{x}v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|u\|_{5}\|v_{h}\|_{3}.$$ In order to get the desired estimate involving only the $H^{2}$-norm of $v_{h}$, we propose to derive the explicit error term of the Gauss-Lobatto quadrature. Proof 3.16. For simplicity, we drop the subscript ${}_{h}$ of $v_{h}$ in this proof; all the following $v$ are in $V^{h}$, i.e., $Q^{2}$ polynomials in each cell. First, by Theorem 3.3, we easily obtain (18) and (19): $$(au_{x},v)-\langle au_{x},v\rangle_{h}=\mathcal{O}(h^{4})\|au_{x}\|_{4}\|v\|_{2}=\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{5}\|v\|_{2},$$ $$(au,v)-\langle au,v\rangle_{h}=\mathcal{O}(h^{4})\|au\|_{4}\|v\|_{2}=\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{4}\|v\|_{2}.$$ We will only discuss $(au_{x},v_{x})-\langle au_{x},v_{x}\rangle_{h}$; the same discussion also applies to derive (17a) and (17b).
Since we have $$\displaystyle(au_{x},v_{x})-\langle au_{x},v_{x}\rangle_{h}=\sum_{e}\left(\iint_{e}au_{x}v_{x}dxdy-\iint_{e}au_{x}v_{x}d^{h}xd^{h}y\right)=\sum_{e}\left(\iint_{\hat{K}}\hat{a}\hat{u}_{s}\hat{v}_{s}dsdt-\iint_{\hat{K}}\hat{a}\hat{u}_{s}\hat{v}_{s}d^{h}sd^{h}t\right)=\sum_{e}\left(\iint_{\hat{K}}\hat{a}\hat{u}_{s}\hat{v}_{s}dsdt-\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}d^{h}sd^{h}t\right).$$ For fixed $t$, $(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}$ is a polynomial of degree $3$ w.r.t. the variable $s$. Thus on $\hat{K}$ the 3-point Gauss-Lobatto quadrature is exact for the $s$-integration: $$\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}d^{h}sd^{h}t=\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}dsd^{h}t.$$ We apply Lemma 3.11 to $\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}dsdt$ on the $t$-integration: $$\displaystyle\iint_{\hat{K}}\hat{a}\hat{u}_{s}\hat{v}_{s}dsdt-\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}d^{h}sd^{h}t=\iint_{\hat{K}}\hat{a}\hat{u}_{s}\hat{v}_{s}dsdt-\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}dsd^{h}t=\iint_{\hat{K}}\hat{a}\hat{u}_{s}\hat{v}_{s}dsdt-\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}dsdt+\iint_{\hat{K}}\tilde{p}(t)\partial_{t}^{4}((\hat{a}\hat{u}_{s})_{I}\hat{v}_{s})dsdt.$$ (20) Let $\overline{\hat{v}_{s}}$ be the cell average of $\hat{v}_{s}$ on $\hat{K}$, then for the first two terms in (20) we have $$\displaystyle\iint_{\hat{K}}\hat{a}\hat{u}_{s}\hat{v}_{s}dsdt-\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}dsdt=\iint_{\hat{K}}\left(\hat{a}\hat{u}_{s}-(\hat{a}\hat{u}_{s})_{I}\right)\overline{\hat{v}_{s}}dsdt+\iint_{\hat{K}}\left(\hat{a}\hat{u}_{s}-(\hat{a}\hat{u}_{s})_{I}\right)(\hat{v}_{s}-\overline{\hat{v}_{s}})dsdt.$$ By (9), we have $$\left|\iint_{\hat{K}}\left(\hat{a}\hat{u}_{s}-(\hat{a}\hat{u}_{s})_{I}\right)\overline{\hat{v}_{s}}dsdt\right|\leq C[\hat{a}\hat{u}_{s}]_{4,\hat{K}}\left|\overline{\hat{v}_{s}}\right|=\mathcal{O}(h^{4})\|a\|_{4,\infty,e}\|u\|_{5,e}\|v\|_{1,e}.$$ By the Cauchy-Schwarz inequality, the Bramble-Hilbert Lemma on the interpolation error and the Poincaré inequality, we have $$\displaystyle\left|\iint_{\hat{K}}\left(\hat{a}\hat{u}_{s}-(\hat{a}\hat{u}_{s})_{I}\right)(\hat{v}_{s}-\overline{\hat{v}_{s}})dsdt\right|\leq|\hat{a}\hat{u}_{s}-(\hat{a}\hat{u}_{s})_{I}|_{0,\hat{K}}|\hat{v}_{s}-\overline{\hat{v}_{s}}|_{0,\hat{K}}\leq C[\hat{a}\hat{u}_{s}]_{3,\hat{K}}|\hat{v}|_{2,\hat{K}}=\mathcal{O}(h^{4})\|a\|_{3,\infty,e}\|u\|_{4,e}\|v\|_{2,e}.$$ Thus we have $$\iint_{\hat{K}}\hat{a}\hat{u}_{s}\hat{v}_{s}dsdt-\iint_{\hat{K}}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}dsdt=\mathcal{O}(h^{4})\|a\|_{4,\infty,e}\|u\|_{5,e}\|v\|_{2,e}.$$ For the last term in (20), notice that $v$ is a $Q^{2}$ polynomial on $e$, thus some of its high order derivatives vanish. With the product rule and integration by parts in $s$, we get $$\displaystyle\iint_{\hat{K}}\tilde{p}(t)\partial_{t}^{4}\left((\hat{a}\hat{u}_{s})_{I}\hat{v}_{s}\right)dsdt=6\iint_{\hat{K}}\tilde{p}(t)\partial_{t}^{2}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{stt}dsdt=6\left.\int_{-1}^{1}\tilde{p}(t)\partial_{t}^{2}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{tt}dt\right|_{s=-1}^{s=1}-6\iint_{\hat{K}}\tilde{p}(t)\partial_{s}\partial_{t}^{2}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{tt}dsdt.$$ By the Bramble-Hilbert Lemma (Theorem 3.1), we have $$\displaystyle\iint_{\hat{K}}\tilde{p}(t)\partial_{s}\partial_{t}^{2}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{tt}dsdt=\iint_{\hat{K}}\tilde{p}(t)\partial_{s}\partial_{t}^{2}\left((\hat{a}\hat{u}_{s})_{I}-\hat{a}\hat{u}_{s}\right)\hat{v}_{tt}dsdt+\iint_{\hat{K}}\tilde{p}(t)\partial_{s}\partial_{t}^{2}(\hat{a}\hat{u}_{s})\hat{v}_{tt}dsdt\leq C|(\hat{a}\hat{u}_{s})_{I}-\hat{a}\hat{u}_{s}|_{3,\hat{K}}|\hat{v}|_{2,\hat{K}}+C|\hat{a}\hat{u}_{s}|_{3,\hat{K}}|\hat{v}|_{2,\hat{K}}$$ $$\displaystyle\leq$$
$$\displaystyle C[\hat{a}\hat{u}_{s}]_{3,\hat{K}}|\hat{v}|_{2,\hat{K}}+C|\hat{a}\hat{u}_{s}|_{3,\hat{K}}|\hat{v}|_{2,\hat{K}}=\mathcal{O}(h^{4})\|a\|_{3,\infty,e}\|u\|_{4,e}\|v\|_{2,e}.$$ Now we only need to discuss the line integral term. $$\displaystyle\left.\int_{-1}^{1}\tilde{p}(t)\partial_{t}^{2}(\hat{a}\hat{u}_{s})_{I}\hat{v}_{tt}dt\right|_{s=-1}^{s=1}=\left.\int_{-1}^{1}\tilde{p}(t)\partial_{t}^{2}\left[(\hat{a}\hat{u}_{s})_{I}-\hat{a}\hat{u}_{s}\right]\hat{v}_{tt}dt\right|_{s=-1}^{s=1}+\left.\int_{-1}^{1}\tilde{p}(t)\partial_{t}^{2}(\hat{a}\hat{u}_{s})\hat{v}_{tt}dt\right|_{s=-1}^{s=1}.$$ Notice that $\hat{v}_{tt}^{2}(s,t)$ is a quartic polynomial in $s$, so the $4$-point Gauss-Lobatto quadrature in the $s$-variable integrates $\iint_{\hat{K}}\hat{v}_{tt}^{2}dsdt$ exactly; since this rule has positive weights and its nodes include the endpoints $s=\pm 1$, we obtain $$\int_{-1}^{1}\hat{v}_{tt}^{2}(\pm 1,t)dt\leq C\int_{-1}^{1}\int_{-1}^{1}\hat{v}_{tt}^{2}(s,t)dsdt.$$ (21) Thus by the Cauchy-Schwarz inequality, the trace inequality and Theorem 3.1, we have $$\displaystyle\left.\int_{-1}^{1}\tilde{p}(t)\partial_{t}^{2}\left[(\hat{a}\hat{u}_{s})_{I}-\hat{a}\hat{u}_{s}\right]\hat{v}_{tt}dt\right|_{s=-1}^{s=1}\leq C|\hat{v}_{tt}|_{0,\hat{K}}|\partial_{t}^{2}\left[(\hat{a}\hat{u}_{s})_{I}-\hat{a}\hat{u}_{s}\right]|_{0,\partial\hat{K}}\leq C|\hat{v}|_{2,\hat{K}}\|\partial_{t}^{2}\left[(\hat{a}\hat{u}_{s})_{I}-\hat{a}\hat{u}_{s}\right]\|_{1,\hat{K}}\leq C|\hat{v}|_{2,\hat{K}}\|(\hat{a}\hat{u}_{s})_{I}-\hat{a}\hat{u}_{s}\|_{3,\hat{K}}\leq C|\hat{v}|_{2,\hat{K}}[\hat{a}\hat{u}_{s}]_{3,\hat{K}}=\mathcal{O}(h^{4})\|a\|_{3,\infty,e}\|u\|_{4,e}\|v\|_{2,e}.$$ After mapping back to the cell $e$, we have $$\displaystyle\left.\int_{-1}^{1}\tilde{p}(t)\partial_{t}^{2}(\hat{a}\hat{u}_{s})\hat{v}_{tt}dt\right|_{s=-1}^{s=1}=h^{4}\left.\int_{y_{e}-h}^{y_{e}+h}\tilde{p}(\frac{y-y_{e}}{h})\left[\partial_{y}^{2}(au_{x})v_{yy}\right]dy\right|_{x=x_{e}-h}^{x=x_{e}+h}.$$ Let $L_{2}$ and $L_{4}$ denote the left and right boundaries of $\Omega$ and let $l_{2}$ and $l_{4}$ denote the left and right edges of element $e$. Since $\partial_{y}^{2}(au_{x})$ and $v_{yy}$ are continuous across $l_{2}$ and $l_{4}$, after summing over all elements $e$, the line integrals along the inner edges cancel out and only the line integrals on $L_{2}$ and $L_{4}$ remain: $$\displaystyle\sum_{e}\left.h^{4}\int_{y_{e}-h}^{y_{e}+h}\tilde{p}(\frac{y-y_{e}}{h})\partial_{y}^{2}(au_{x})v_{yy}dy\right|_{x=x_{e}-h}^{x=x_{e}+h}=\sum_{e\cap L_{4}\neq\emptyset}h^{4}\int_{y_{e}-h}^{y_{e}+h}\tilde{p}(\frac{y-y_{e}}{h})\partial_{y}^{2}(au_{x})v_{yy}(1,y)dy-\sum_{e\cap L_{2}\neq\emptyset}h^{4}\int_{y_{e}-h}^{y_{e}+h}\tilde{p}(\frac{y-y_{e}}{h})\partial_{y}^{2}(au_{x})v_{yy}(0,y)dy.$$ Since $(v_{yy})^{2}$ is a quartic polynomial, by considering the $4$-point Gauss-Lobatto quadrature for the $x$-integration in $\iint_{e}v^{2}_{yy}dxdy$, we get $$h\int_{y_{e}-h}^{y_{e}+h}(v_{yy})^{2}(x_{e}-h,y)dy\leq C\iint_{e}(v_{yy})^{2}(x,y)dxdy.$$ For the line integrals along $L_{2}$, we have $$\displaystyle\sum_{e\cap L_{2}\neq\emptyset}h^{4}\int_{y_{e}-h}^{y_{e}+h}\tilde{p}(\frac{y-y_{e}}{h})(au_{x})_{yy}v_{yy}(0,y)dy=\mathcal{O}(h^{4})\sum_{e\cap L_{2}\neq\emptyset}\sqrt{\int_{y_{e}-h}^{y_{e}+h}((au_{x})_{yy})^{2}(0,y)dy}\sqrt{\int_{y_{e}-h}^{y_{e}+h}v_{yy}^{2}(0,y)dy}=\mathcal{O}(h^{3.5})\|a\|_{2,\infty}\sum_{e\cap L_{2}\neq\emptyset}\|u\|_{3,2,l_{2}}\sqrt{h\int_{y_{e}-h}^{y_{e}+h}v_{yy}^{2}(0,y)dy}=\mathcal{O}(h^{3.5})\|a\|_{2,\infty}\sum_{e\cap L_{2}\neq\emptyset}\|u\|_{3,2,l_{2}}|v|_{2,e}$$ $$\displaystyle=$$
$$\displaystyle\mathcal{O}(h^{3.5})\|a\|_{2,\infty}\|u\|_{3,2,L_{2}}\|v\|_{2,\Omega}=\mathcal{O}(h^{3.5})\|a\|_{2,\infty}\|u\|_{4}\|v\|_{2},$$ where the trace inequality $\|u\|_{3,\partial\Omega}\leq C\|u\|_{4,\Omega}$ is applied. With the same argument we have $$h^{4}\sum_{e\cap L_{4}\neq\emptyset}\int_{y_{e}-h}^{y_{e}+h}\tilde{p}(\frac{y-y_{e}}{h})(au_{x})_{yy}v_{yy}(1,y)dy=\mathcal{O}(h^{3.5})\|a\|_{2,\infty}\|u\|_{4}\|v\|_{2}.$$ Combining all the estimates above, we get (16b). The $\frac{1}{2}$-order loss is due only to the line integrals along the boundary $\partial\Omega$: if $v\in V_{0}^{h}$, then $v_{yy}=0$ on $L_{2}$ and $L_{4}$, so we have (16a). 4 Superconvergence of bilinear forms The M-type projection in Chen (1981, 2001) is a very convenient tool for discussing the superconvergence of function values. Let $u_{p}$ be the M-type $Q^{2}$ projection of the smooth exact solution $u$; its definition will be given in the following subsection. To establish the superconvergence of the original finite element method (1) for a generic elliptic problem (5) with smooth coefficients, one can show the following superconvergence of bilinear forms, see Chen (2001); Lin & Yan (1996) (see also Li & Zhang (2019b) for a detailed proof): $$A(u-u_{p},v_{h})=\begin{cases}\mathcal{O}(h^{3.5})\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h},\\ \mathcal{O}(h^{4})\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V_{0}^{h}.\end{cases}$$ In this section we will show the superconvergence of the bilinear form $A_{h}$: $$\displaystyle A_{h}(u-u_{p},v_{h})=\mathcal{O}(h^{3.5})\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h},$$ (22a) $$\displaystyle A_{h}(u-u_{p},v_{h})=\mathcal{O}(h^{4})\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V_{0}^{h}.$$ (22b) 4.1 Definition of M-type projection We first recall the definition of the M-type projection. A more detailed definition can be found in Li & Zhang (2019b).
Legendre polynomials on the reference interval $[-1,1]$ are given as $$l_{k}(t)=\frac{1}{2^{k}k!}\frac{d^{k}}{dt^{k}}(t^{2}-1)^{k}:\quad l_{0}(t)=1,\ l_{1}(t)=t,\ l_{2}(t)=\frac{1}{2}(3t^{2}-1),\cdots,$$ which are $L^{2}$-orthogonal to one another. Define their antiderivatives as the M-type polynomials: $$M_{k+1}(t)=\frac{1}{2^{k}k!}\frac{d^{k-1}}{dt^{k-1}}(t^{2}-1)^{k}:\quad M_{0}(t)=1,\ M_{1}(t)=t,\ M_{2}(t)=\frac{1}{2}(t^{2}-1),\ M_{3}(t)=\frac{1}{2}(t^{3}-t),\cdots.$$ Since the Legendre polynomials form a complete orthogonal basis of $L^{2}([-1,1])$, for any $\hat{f}(t)\in H^{1}([-1,1])$ its derivative $\hat{f}^{\prime}(t)$ can be expressed as a Fourier-Legendre series $$\hat{f}^{\prime}(t)=\sum_{j=0}^{\infty}\hat{b}_{j+1}l_{j}(t),\quad\hat{b}_{j+1}=(j+\frac{1}{2})\int_{-1}^{1}\hat{f}^{\prime}(t)l_{j}(t)dt.$$ The one-dimensional M-type projection is defined as $$\hat{f}_{k}(t)=\sum_{j=0}^{k}\hat{b}_{j}M_{j}(t),$$ where $\hat{b}_{1}=\frac{\hat{f}(1)-\hat{f}(-1)}{2}$ as given by the formula above, and $\hat{b}_{0}=\frac{\hat{f}(1)+\hat{f}(-1)}{2}$ is chosen so that $\hat{f}_{k}(\pm 1)=\hat{f}(\pm 1)$.
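The relations between the Legendre and M-type polynomials quoted above can be checked symbolically. The sketch below is an illustration only (the helper names `legendre` and `mtype` are my own); it verifies the Rodrigues formulas, the antiderivative property $M_{m}^{\prime}=l_{m-1}$, and the vanishing of $M_{m}$ at $t=\pm 1$ for $m\geq 2$ (with $M_{3}$ also vanishing at the interior Gauss-Lobatto node $t=0$, the fact used later in Lemma 4.4):

```python
import sympy as sp

t = sp.symbols('t')

def legendre(k):
    # Rodrigues formula: l_k(t) = 1/(2^k k!) d^k/dt^k (t^2 - 1)^k
    return sp.expand(sp.diff((t**2 - 1)**k, t, k) / (2**k * sp.factorial(k)))

def mtype(m):
    # M-type polynomials: M_0 = 1, M_1 = t, and for m >= 2
    # M_m(t) = 1/(2^{m-1} (m-1)!) d^{m-2}/dt^{m-2} (t^2 - 1)^{m-1}
    if m == 0:
        return sp.Integer(1)
    if m == 1:
        return t
    return sp.expand(sp.diff((t**2 - 1)**(m - 1), t, m - 2)
                     / (2**(m - 1) * sp.factorial(m - 1)))

# M_2 and M_3 match the closed forms quoted in the text.
assert sp.expand(mtype(2) - (t**2 - 1) / 2) == 0
assert sp.expand(mtype(3) - (t**3 - t) / 2) == 0

# M_m is an antiderivative of l_{m-1} ...
assert all(sp.expand(sp.diff(mtype(m), t) - legendre(m - 1)) == 0 for m in range(1, 6))

# ... and vanishes at t = -1, 1 for m >= 2; M_3 also vanishes at t = 0.
assert all(mtype(m).subs(t, 1) == 0 and mtype(m).subs(t, -1) == 0 for m in range(2, 6))
assert mtype(3).subs(t, 0) == 0

# Legendre polynomials are L^2-orthogonal on [-1, 1].
assert sp.integrate(legendre(2) * legendre(3), (t, -1, 1)) == 0
```

Since $M_{j}(\pm 1)=0$ for $j\geq 2$, only $\hat{b}_{0}$ and $\hat{b}_{1}$ contribute at the endpoints, which is why the two boundary values $\hat{f}(\pm 1)$ determine them.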
We have $\hat{f}(t)=\lim\limits_{k\to\infty}\hat{f}_{k}(t)=\sum\limits_{j=0}^{\infty}\hat{b}_{j}M_{j}(t).$ The remainder $\hat{R}[\hat{f}]_{k}(t)$ of the one-dimensional M-type projection is $$\hat{R}[\hat{f}]_{k}(t)=\hat{f}(t)-\hat{f}_{k}(t)=\sum_{j=k+1}^{\infty}\hat{b}_{j}M_{j}(t).$$ For a function $\hat{f}(s,t)\in H^{2}(\hat{K})$ on the reference cell $\hat{K}=[-1,1]\times[-1,1]$, its two-dimensional M-type expansion is given as $$\hat{f}(s,t)=\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\hat{b}_{i,j}M_{i}(s)M_{j}(t),$$ where $$\displaystyle\hat{b}_{0,0}=\frac{1}{4}[\hat{f}(-1,-1)+\hat{f}(-1,1)+\hat{f}(1,-1)+\hat{f}(1,1)],$$ $$\displaystyle\hat{b}_{0,j},\hat{b}_{1,j}=\frac{2j-1}{4}\int_{-1}^{1}[\hat{f}_{t}(1,t)\pm\hat{f}_{t}(-1,t)]l_{j-1}(t)dt,\quad j\geq 1,$$ $$\displaystyle\hat{b}_{i,0},\hat{b}_{i,1}=\frac{2i-1}{4}\int_{-1}^{1}[\hat{f}_{s}(s,1)\pm\hat{f}_{s}(s,-1)]l_{i-1}(s)ds,\quad i\geq 1,$$ $$\displaystyle\hat{b}_{i,j}=\frac{(2i-1)(2j-1)}{4}\iint_{\hat{K}}\hat{f}_{st}(s,t)l_{i-1}(s)l_{j-1}(t)dsdt,\quad i,j\geq 1$$ (the $+$ sign giving $\hat{b}_{0,j}$ and $\hat{b}_{i,0}$, the $-$ sign giving $\hat{b}_{1,j}$ and $\hat{b}_{i,1}$). The M-type $Q^{2}$ projection of $\hat{f}$ on $\hat{K}$ and its remainder are defined as $$\hat{f}_{2,2}(s,t)=\sum_{i=0}^{2}\sum_{j=0}^{2}\hat{b}_{i,j}M_{i}(s)M_{j}(t),\quad\hat{R}[\hat{f}]_{2,2}(s,t)=\hat{f}(s,t)-\hat{f}_{2,2}(s,t).$$ The M-type $Q^{k}$ projection is equivalent to the point-line-plane interpolation used in Lin et al. (1991); Lin & Yan (1996). See Li & Zhang (2019b) for the proof of the following fact: Theorem 4.1. The M-type $Q^{k}$ projection is equivalent to the $Q^{k}$ point-line-plane projection $\Pi$ defined as follows: 1. $\Pi\hat{u}=\hat{u}$ at the four corners of $\hat{K}=[-1,1]\times[-1,1]$. 2. $\Pi\hat{u}-\hat{u}$ is orthogonal to polynomials of degree $k-2$ on each edge of $\hat{K}$. 3. $\Pi\hat{u}-\hat{u}$ is orthogonal to any $\hat{v}\in Q^{k-2}(\hat{K})$ on $\hat{K}$.
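The coefficient formulas above can be exercised directly. The following sketch (an illustration only; the helper names and the test integrands are my own, and the $\pm$ signs are resolved as described above) assembles the M-type $Q^{2}$ projection of a sample function symbolically and checks property 1 of Theorem 4.1, together with the fact that a $Q^{2}$ polynomial is reproduced by its own projection:

```python
import sympy as sp

s, t = sp.symbols('s t')

def leg(x, k):
    # Legendre polynomial l_k via the Rodrigues formula
    return sp.diff((x**2 - 1)**k, x, k) / (2**k * sp.factorial(k))

def M(x, k):
    # M-type polynomials: antiderivatives of the Legendre polynomials
    if k == 0:
        return sp.Integer(1)
    if k == 1:
        return x
    return sp.diff((x**2 - 1)**(k - 1), x, k - 2) / (2**(k - 1) * sp.factorial(k - 1))

def b(f, i, j):
    # M-type expansion coefficients b_{i,j} of f on the reference cell [-1,1]^2
    fs, ft, fst = sp.diff(f, s), sp.diff(f, t), sp.diff(f, s, t)
    if i == 0 and j == 0:
        return sum(f.subs({s: p, t: q}) for p in (-1, 1) for q in (-1, 1)) / 4
    if i <= 1 and j >= 1:   # '+' for b_{0,j}, '-' for b_{1,j}
        g = ft.subs(s, 1) + (-1)**i * ft.subs(s, -1)
        return sp.Rational(2*j - 1, 4) * sp.integrate(g * leg(t, j - 1), (t, -1, 1))
    if j <= 1 and i >= 1:   # '+' for b_{i,0}, '-' for b_{i,1}
        g = fs.subs(t, 1) + (-1)**j * fs.subs(t, -1)
        return sp.Rational(2*i - 1, 4) * sp.integrate(g * leg(s, i - 1), (s, -1, 1))
    return sp.Rational((2*i - 1) * (2*j - 1), 4) * sp.integrate(
        fst * leg(s, i - 1) * leg(t, j - 1), (s, -1, 1), (t, -1, 1))

def project(f):
    # M-type Q^2 projection f_{2,2}
    return sp.expand(sum(b(f, i, j) * M(s, i) * M(t, j)
                         for i in range(3) for j in range(3)))

f = s**4 * t**3 + s**3 * t          # a sample function beyond Q^2
fp = project(f)

# Property 1 of Theorem 4.1: f_{2,2} interpolates f at the four corners.
corners_ok = all((fp - f).subs({s: p, t: q}) == 0 for p in (-1, 1) for q in (-1, 1))
assert corners_ok

# A Q^2 polynomial is its own M-type Q^2 projection.
g = 1 + s * t + s**2 * t**2
assert sp.expand(project(g) - g) == 0
```

Note that the edge formula for $\hat{b}_{1,1}$ and the interior formula agree, so the overlap between the coefficient cases is harmless.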
For $f(x,y)$ on $e=[x_{e}-h,x_{e}+h]\times[y_{e}-h,y_{e}+h]$, let $\hat{f}(s,t)=f(sh+x_{e},th+y_{e})$; then the M-type $Q^{k}$ projection of $f$ on $e$ and its remainder are defined as $$f_{k,k}(x,y)=\hat{f}_{k,k}(\frac{x-x_{e}}{h},\frac{y-y_{e}}{h}),\quad R[f]_{k,k}(x,y)=f(x,y)-f_{k,k}(x,y).$$ Now consider a function $u(x,y)\in H^{k+2}(\Omega)$ and let $u_{p}(x,y)$ denote its piecewise M-type $Q^{k}$ projection on each element $e$ in the mesh $\Omega_{h}$. The first two properties in Theorem 4.1 imply that $u_{p}(x,y)$ on each edge of $e$ is uniquely determined by $u(x,y)$ along that edge. So $u_{p}(x,y)$ is a piecewise continuous $Q^{k}$ polynomial on $\Omega_{h}$. The M-type projection has the following properties; see Li & Zhang (2019b) for the proof. Theorem 4.2. $$\|u-u_{p}\|_{2,Z_{0}}=\mathcal{O}(h^{k+2})\|u\|_{k+2},\quad\forall u\in H^{k+2}(\Omega).$$ $$\|u-u_{p}\|_{\infty,Z_{0}}=\mathcal{O}(h^{k+2})\|u\|_{k+2,\infty},\quad\forall u\in W^{k+2,\infty}(\Omega).$$ Lemma 4.3. For $\hat{f}\in H^{k+1}(\hat{K})$, $k\geq 2$, 1. $|\hat{R}[\hat{f}]_{k,k}|_{0,\infty,\hat{K}}\leq C[\hat{f}]_{k+1,\hat{K}},\quad|\partial_{s}\hat{R}[\hat{f}]_{k,k}|_{0,\infty,\hat{K}}\leq C[\hat{f}]_{k+1,\hat{K}}.$ 2. $\hat{R}[\hat{f}]_{k,k}-\hat{R}[\hat{f}]_{k+1,k+1}=M_{k+1}(t)\sum_{i=0}^{k}\hat{b}_{i,k+1}M_{i}(s)+M_{k+1}(s)\sum_{j=0}^{k+1}\hat{b}_{k+1,j}M_{j}(t).$ 3. $|\hat{b}_{i,k+1}|\leq C_{k}|\hat{f}|_{k+1,2,\hat{K}},\quad|\hat{b}_{k+1,i}|\leq C_{k}|\hat{f}|_{k+1,2,\hat{K}},\quad 0\leq i\leq k+1.$ 4. If $\hat{f}\in H^{k+2}(\hat{K})$, then $|\hat{b}_{i,k+1}|\leq C_{k}|\hat{f}|_{k+2,2,\hat{K}},\quad 1\leq i\leq k+1.$ 4.2 Estimates of M-type projection with quadrature Lemma 4.4. Assume $\hat{f}(s,t)\in H^{5}(\hat{K})$, then $$\langle\hat{R}[\hat{f}]_{3,3}-\hat{R}[\hat{f}]_{2,2},1\rangle_{\hat{K}}=0,\quad|\langle\partial_{s}\hat{R}[\hat{f}]_{3,3},1\rangle_{\hat{K}}|\leq C|\hat{f}|_{5,\hat{K}}.$$ Proof 4.5.
First, we have $$\displaystyle\langle\hat{R}[\hat{f}]_{3,3}-\hat{R}[\hat{f}]_{2,2},1\rangle_{\hat{K}}=-\langle M_{3}(t)\sum_{i=0}^{2}\hat{b}_{i,3}M_{i}(s)+M_{3}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),1\rangle_{\hat{K}}=0$$ due to the fact that $M_{3}(0)=M_{3}(\pm 1)=0$. We have $$\displaystyle\langle\partial_{s}\hat{R}[\hat{f}]_{3,3},1\rangle_{\hat{K}}=\langle\partial_{s}\hat{R}[\hat{f}]_{4,4},1\rangle_{\hat{K}}+\langle\partial_{s}(\hat{R}[\hat{f}]_{3,3}-\hat{R}[\hat{f}]_{4,4}),1\rangle_{\hat{K}}=\langle\partial_{s}\hat{R}[\hat{f}]_{4,4},1\rangle_{\hat{K}}+\langle M_{4}(t)\sum_{i=0}^{3}\hat{b}_{i,4}M_{i}^{\prime}(s)+M_{4}^{\prime}(s)\sum_{j=0}^{4}\hat{b}_{4,j}M_{j}(t),1\rangle_{\hat{K}}=\langle\partial_{s}\hat{R}[\hat{f}]_{4,4},1\rangle_{\hat{K}}+\langle M_{4}(t)\sum_{i=0}^{2}\hat{b}_{i+1,4}l_{i}(s),1\rangle_{\hat{K}}+\langle l_{3}(s)\sum_{j=0}^{4}\hat{b}_{4,j}M_{j}(t),1\rangle_{\hat{K}}.$$ Then by Lemma 4.3, $$|\langle\partial_{s}\hat{R}[\hat{f}]_{4,4},1\rangle_{\hat{K}}|\leq C|\hat{f}|_{5,\hat{K}}.$$ Notice that we have $\langle l_{3}(s)\sum_{j=0}^{4}\hat{b}_{4,j}M_{j}(t),1\rangle_{\hat{K}}=0$ since the 3-point Gauss-Lobatto quadrature for the $s$-integration is exact and $l_{3}(s)$ is orthogonal to $1$. Lemma 4.3 implies $|\hat{b}_{i+1,4}|\leq C[\hat{f}]_{5,\hat{K}}$ for $i=0,1,2$; thus we have $|\langle M_{4}(t)\sum_{i=0}^{2}\hat{b}_{i+1,4}l_{i}(s),1\rangle_{\hat{K}}|\leq C[\hat{f}]_{5,\hat{K}}$. Lemma 4.6. Assume $a(x,y)\in W^{2,\infty}(\Omega).$ Then $$\langle a(u-u_{p})_{x},(v_{h})_{x}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{2,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}.$$ Proof 4.7. As before, we ignore the subscript of $v_{h}$ for simplicity.
We have $$\langle a(u-u_{p})_{x},v_{x}\rangle_{h}=\sum_{e}\langle a(u-u_{p})_{x},v_{x}\rangle_{e,h},$$ and on each cell $e$, $$\displaystyle\langle a(u-u_{p})_{x},v_{x}\rangle_{e,h}=\langle(R[u]_{2,2})_{x},av_{x}\rangle_{e,h}=\langle(\hat{R}[\hat{u}]_{2,2})_{s},\hat{a}\hat{v}_{s}\rangle_{\hat{K}}=\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{s}\rangle_{\hat{K}}+\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{s}\rangle_{\hat{K}}.$$ (23) For the first term in (23), we have $$\displaystyle\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{s}\rangle_{\hat{K}}=\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\overline{\hat{v}_{s}}\rangle_{\hat{K}}+\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}(\hat{v}_{s}-\overline{\hat{v}_{s}})\rangle_{\hat{K}}.$$ By Lemma 4.4, $$\langle(\hat{R}[\hat{u}]_{3,3})_{s},\overline{\hat{a}}\,\overline{\hat{v}_{s}}\rangle_{\hat{K}}\leq C|\hat{a}|_{0,\infty}|\hat{u}|_{5,\hat{K}}|\hat{v}|_{1,\hat{K}}.$$ By Lemma 4.3, $$|(\hat{R}[\hat{u}]_{3,3})_{s}|_{0,\infty,\hat{K}}\leq C[\hat{u}]_{4,2,\hat{K}}.$$ By the Bramble-Hilbert Lemma (Theorem 3.1) we have $$\displaystyle\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\overline{\hat{v}_{s}}\rangle_{\hat{K}}=\langle(\hat{R}[\hat{u}]_{3,3})_{s},\overline{\hat{a}}\,\overline{\hat{v}_{s}}\rangle_{\hat{K}}+\langle(\hat{R}[\hat{u}]_{3,3})_{s},(\hat{a}-\overline{\hat{a}})\overline{\hat{v}_{s}}\rangle_{\hat{K}}\leq C(|\hat{a}|_{0,\infty}|\hat{u}|_{5,\hat{K}}|\hat{v}|_{1,\hat{K}}+|\hat{a}-\overline{\hat{a}}|_{0,\infty}|\hat{u}|_{4,\hat{K}}|\hat{v}|_{1,\hat{K}})\leq C(|\hat{a}|_{0,\infty}|\hat{u}|_{5,\hat{K}}|\hat{v}|_{1,\hat{K}}+|\hat{a}|_{1,\infty}|\hat{u}|_{4,\hat{K}}|\hat{v}|_{1,\hat{K}})=\mathcal{O}(h^{4})\|a\|_{1,\infty,e}\|u\|_{5,e}\|v\|_{1,e},$$ and $$\displaystyle\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}(\hat{v}_{s}-\overline{\hat{v}_{s}})\rangle_{\hat{K}}\leq C[\hat{u}]_{4,2,\hat{K}}|\hat{a}|_{0,\infty,\hat{K}}|\hat{v}_{s}-\overline{\hat{v}_{s}}|_{0,\infty,\hat{K}}\leq C[\hat{u}]_{4,2,\hat{K}}|\hat{a}|_{0,\infty,\hat{K}}|\hat{v}_{s}-\overline{\hat{v}_{s}}|_{0,2,\hat{K}}=\mathcal{O}(h^{4})[u]_{4,2,e}|a|_{0,\infty,e}|v|_{2,2,e}.$$ Thus, $$\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{s}\rangle_{\hat{K}}=\mathcal{O}(h^{4})\|a\|_{1,\infty,e}|u|_{5,2,e}\|v\|_{2,e}.$$ (24) For the second term in (23), we have $$\displaystyle\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{s}\rangle_{\hat{K}}=\langle(M_{3}(t)\sum_{i=0}^{2}\hat{b}_{i,3}M_{i}(s)+M_{3}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t))_{s},\hat{a}\hat{v}_{s}\rangle_{\hat{K}}=\langle M_{3}(t)\sum_{i=0}^{1}\hat{b}_{i+1,3}l_{i}(s)+l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}_{s}\rangle_{\hat{K}}=\langle M_{3}(t)\sum_{i=0}^{1}\hat{b}_{i+1,3}l_{i}(s),\hat{a}\hat{v}_{s}\rangle_{\hat{K}}+\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}_{s}\rangle_{\hat{K}}.$$ (25) Since $M_{3}(t)=\frac{1}{2}(t^{3}-t)$ vanishes at $t=0,\pm 1$, we have $$\langle M_{3}(t)\sum_{i=0}^{1}\hat{b}_{i+1,3}l_{i}(s),\hat{a}\hat{v}_{s}\rangle_{\hat{K}}=0.$$ For the second term in (25), $$\displaystyle\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}_{s}\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\overline{\hat{v}_{s}}\rangle_{\hat{K}}+\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}(\hat{v}_{s}-\overline{\hat{v}_{s}})\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),(\hat{a}-\hat{\Pi}_{1}\hat{a})\overline{\hat{v}_{s}}\rangle_{\hat{K}}+\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),(\hat{\Pi}_{1}\hat{a})\overline{\hat{v}_{s}}\rangle_{\hat{K}}+\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),(\hat{a}-\overline{\hat{a}})(\hat{v}_{s}-\overline{\hat{v}_{s}})\rangle_{\hat{K}}+\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\overline{\hat{a}}(\hat{v}_{s}-\overline{\hat{v}_{s}})\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),(\hat{a}-\hat{\Pi}_{1}\hat{a})\overline{\hat{v}_{s}}\rangle_{\hat{K}}+\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),(\hat{a}-\overline{\hat{a}})(\hat{v}_{s}-\overline{\hat{v}_{s}})\rangle_{\hat{K}},$$ where the last step is due to the fact that $\hat{\Pi}_{1}\hat{a}(s,t)$ and $\hat{v}_{s}-\overline{\hat{v}_{s}}$ are linear functions with respect to the variable $s$, the 3-point Gauss-Lobatto quadrature for the $s$-integration is exact for polynomials of degree 3, and $l_{2}(s)$ is orthogonal to linear functions. By Lemma 4.3, we have $$\displaystyle\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}_{s}\rangle_{\hat{K}}\leq C|\hat{u}|_{3,2,\hat{K}}(|\hat{a}|_{2,\infty}|\hat{v}|_{1,\hat{K}}+|\hat{a}|_{1,\infty}|\hat{v}|_{2,\hat{K}})=\mathcal{O}(h^{4})\|a\|_{2,\infty}\|u\|_{3,e}\|v\|_{2,e}.$$ (26) Combined with (24), we have proved the estimate. Lemma 4.8. Assume $a(x,y)\in W^{2,\infty}(\Omega).$ Then $$\langle a(u-u_{p}),v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{2,\infty}\|u\|_{4}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}.$$ Proof 4.9.
As before, we ignore the subscript of $v_{h}$ for simplicity, and $$\langle a(u-u_{p}),v\rangle_{h}=\sum_{e}\langle a(u-u_{p}),v\rangle_{e,h}.$$ On each cell $e$ we have $$\displaystyle\langle a(u-u_{p}),v\rangle_{e,h}=\langle R[u]_{2,2},av\rangle_{e,h}=h^{2}\langle\hat{R}[\hat{u}]_{2,2},\hat{a}\hat{v}\rangle_{\hat{K}}=h^{2}\langle\hat{R}[\hat{u}]_{2,2},\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}+h^{2}\langle\hat{R}[\hat{u}]_{2,2},\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}.$$ (27) For the first term in (27), due to the embedding $H^{2}(\hat{K})\hookrightarrow C^{0}(\hat{K})$, the Bramble-Hilbert Lemma (Theorem 3.1) and Lemma 4.3, we have $$\displaystyle h^{2}\langle\hat{R}[\hat{u}]_{2,2},\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}\leq Ch^{2}|\hat{R}[\hat{u}]_{2,2}|_{\infty}|\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}|_{\infty}\leq Ch^{2}|\hat{u}|_{3,\hat{K}}\|\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}\|_{2,\hat{K}}\leq Ch^{2}|\hat{u}|_{3,\hat{K}}(\|\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}\|_{L^{2}(\hat{K})}+|\hat{a}\hat{v}|_{1,\hat{K}}+|\hat{a}\hat{v}|_{2,\hat{K}})\leq Ch^{2}|\hat{u}|_{3,\hat{K}}(|\hat{a}\hat{v}|_{1,\hat{K}}+|\hat{a}\hat{v}|_{2,\hat{K}})=\mathcal{O}(h^{4})\|a\|_{2,\infty,e}\|u\|_{3,e}\|v\|_{2,e}.$$ For the second term in (27), we have $$\displaystyle h^{2}\langle\hat{R}[\hat{u}]_{2,2},\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}=h^{2}\langle\hat{R}[\hat{u}]_{3,3},\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}-h^{2}\langle\hat{R}[\hat{u}]_{3,3}-\hat{R}[\hat{u}]_{2,2},\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}.$$ By Lemma 4.3 and Lemma 4.4 we have $$h^{2}\langle\hat{R}[\hat{u}]_{3,3},\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}\leq Ch^{2}|\hat{u}|_{4,\hat{K}}|\hat{a}\hat{v}|_{0,\hat{K}}=\mathcal{O}(h^{4})\|a\|_{0,\infty,e}\|u\|_{4,e}\|v\|_{0,e},$$ and $$h^{2}\langle\hat{R}[\hat{u}]_{3,3}-\hat{R}[\hat{u}]_{2,2},\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}=0.$$ Thus, we have
$\langle a(u-u_{p}),v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{2,\infty}\|u\|_{4}\|v_{h}\|_{2}.$ Lemma 4.10. Assume $a(x,y)\in W^{2,\infty}(\Omega).$ Then $$\langle a(u-u_{p})_{x},v_{h}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{2,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h}.$$ Proof 4.11. As before, we ignore the subscript in $v_{h}$ and we have $$\langle a(u-u_{p})_{x},v\rangle_{h}=\sum_{e}\langle a(u-u_{p})_{x},v\rangle_{e,h}.$$ On each cell $e$, we have $$\displaystyle\langle a(u-u_{p})_{x},v\rangle_{e,h}=\langle(R[u]_{2,2})_{x},av\rangle_{e,h}=h\langle(\hat{R}[\hat{u}]_{2,2})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}=h\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}-h\langle(\hat{R}[\hat{u}]_{3,3}-\hat{R}[\hat{u}]_{2,2})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}.$$ (28) For the first term in (28), we have $$\displaystyle\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}=\langle(\hat{R}[\hat{u}]_{3,3})_{s},\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}+\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}.$$ Due to Lemma 4.4, $$h\langle(\hat{R}[\hat{u}]_{3,3})_{s},\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}\leq Ch\|a\|_{0,\infty}|\hat{u}|_{5,\hat{K}}\|v\|_{0,\hat{K}}=\mathcal{O}(h^{4})\|a\|_{0,\infty}\|u\|_{5,e}\|v\|_{0,e},$$ and by the same arguments as in the proof of Lemma 4.8 we have $$\displaystyle h\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}\rangle_{\hat{K}}\leq Ch|(\hat{R}[\hat{u}]_{3,3})_{s}|_{\infty}|\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}|_{\infty}\leq Ch|\hat{u}|_{4,\hat{K}}\|\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}\|_{2,\hat{K}}\leq Ch|\hat{u}|_{4,\hat{K}}(\|\hat{a}\hat{v}-\overline{\hat{a}\hat{v}}\|_{L^{2}(\hat{K})}+|\hat{a}\hat{v}|_{1,\hat{K}}+|\hat{a}\hat{v}|_{2,\hat{K}})\leq Ch|\hat{u}|_{4,\hat{K}}(|\hat{a}\hat{v}|_{1,\hat{K}}+|\hat{a}\hat{v}|_{2,\hat{K}})=\mathcal{O}(h^{4})\|a\|_{2,\infty}\|u\|_{4,e}\|v\|_{2,e}.$$ Thus $$h\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}=\mathcal{O}(h^{4})\|a\|_{2,\infty}\|u\|_{5,e}\|v\|_{2,e}.$$ (29) For the second term in (28), we have $$\displaystyle\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}=\langle(M_{3}(t)\sum_{i=0}^{2}\hat{b}_{i,3}M_{i}(s)+M_{3}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t))_{s},\hat{a}\hat{v}\rangle_{\hat{K}}=\langle M_{3}(t)\sum_{i=0}^{1}\hat{b}_{i+1,3}l_{i}(s)+l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}\rangle_{\hat{K}}=\langle M_{3}(t)\sum_{i=0}^{1}\hat{b}_{i+1,3}l_{i}(s),\hat{a}\hat{v}\rangle_{\hat{K}}+\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}\rangle_{\hat{K}},$$ where the last step is due to the fact that $M_{3}(t)=\frac{1}{2}(t^{3}-t)$ vanishes at $t=0,\pm 1$. Then $$\displaystyle\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}-\hat{\Pi}_{1}(\hat{a}\hat{v})\rangle_{\hat{K}}+\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{\Pi}_{1}(\hat{a}\hat{v})\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}-\hat{\Pi}_{1}(\hat{a}\hat{v})\rangle_{\hat{K}},$$ where the last step is due to the facts that $\hat{\Pi}_{1}(\hat{a}\hat{v})$ is linear in $s$, so the 3-point Gauss-Lobatto quadrature in the $s$-variable is exact for the resulting polynomial of degree 3, and $l_{2}(s)$ is orthogonal to linear functions.
By Lemma 4.3 and Theorem 3.1, we have $$\displaystyle\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}-\hat{\Pi}_{1}(\hat{a}\hat{v})\rangle_{\hat{K}}\leq C|\hat{u}|_{3,\hat{K}}|\hat{a}\hat{v}|_{2,\hat{K}}\leq C|\hat{u}|_{3,\hat{K}}(|\hat{a}|_{2,\infty,\hat{K}}|\hat{v}|_{0,\hat{K}}+|\hat{a}|_{1,\infty,\hat{K}}|\hat{v}|_{1,\hat{K}}+|\hat{a}|_{0,\infty}|\hat{v}|_{2,\hat{K}}).$$ Thus $$h\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}\rangle_{\hat{K}}=\mathcal{O}(h^{4})\|a\|_{2,\infty}\|u\|_{3,e}\|v\|_{2,e}.$$ (30) Summing (29) and (30) over all the cells, we get the desired estimate. Lemma 4.12. Assume $a(x,y)\in W^{4,\infty}(\Omega).$ Then $$\displaystyle\langle a(u-u_{p})_{x},(v_{h})_{y}\rangle_{h}=\mathcal{O}(h^{3.5})\|a\|_{4,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V^{h},$$ (31a) $$\displaystyle\langle a(u-u_{p})_{x},(v_{h})_{y}\rangle_{h}=\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V_{0}^{h}.$$ (31b) Proof 4.13.
We ignore the subscript in $v_{h}$ and we have $$\langle a(u-u_{p})_{x},v_{y}\rangle_{h}=\sum_{e}\langle a(u-u_{p})_{x},v_{y}\rangle_{e,h},$$ and on each cell $e$ $$\displaystyle\langle a(u-u_{p})_{x},v_{y}\rangle_{e,h}=\langle(R[u]_{2,2})_{x},av_{y}\rangle_{e,h}=\langle(\hat{R}[\hat{u}]_{2,2})_{s},\hat{a}\hat{v}_{t}\rangle_{\hat{K}}=\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{t}\rangle_{\hat{K}}+\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{t}\rangle_{\hat{K}}.$$ (32) By the same arguments as in the proof of Lemma 4.6, we have $$\langle(\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{t}\rangle_{\hat{K}}=\mathcal{O}(h^{4})\|a\|_{1,\infty}|u|_{5,2,e}\|v\|_{2,e},$$ (33) and $$\displaystyle\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{t}\rangle_{\hat{K}}=\langle l_{2}(s)\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t),\hat{a}\hat{v}_{t}\rangle_{\hat{K}}.$$ For simplicity, define $$\hat{b}_{3}(t):=\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(t);$$ then by the third and fourth estimates in Lemma 4.3, we have $$|\hat{b}_{3}(t)|\leq C\sum_{j=0}^{3}|\hat{b}_{3,j}|\leq C|\hat{u}|_{3,\hat{K}},\quad|\hat{b}_{3}^{\prime}(t)|\leq C\sum_{j=1}^{3}|\hat{b}_{3,j}|\leq C|\hat{u}|_{4,\hat{K}},$$ $$|\hat{b}_{3}^{(2)}(t)|\leq C\sum_{j=2}^{3}|\hat{b}_{3,j}|\leq C|\hat{u}|_{4,\hat{K}},\quad|\hat{b}_{3}^{(3)}(t)|\leq C|\hat{b}_{3,3}|\leq C|\hat{u}|_{4,\hat{K}}.$$ We use the same technique as in the proof of Theorem 3.14: $$\displaystyle\langle(\hat{R}[\hat{u}]_{2,2}-\hat{R}[\hat{u}]_{3,3})_{s},\hat{a}\hat{v}_{t}\rangle_{\hat{K}}=\langle l_{2}(s)\hat{b}_{3}(t),\hat{a}\hat{v}_{t}\rangle_{\hat{K}}=\iint_{\hat{K}}l_{2}(s)\hat{b}_{3}(t)\hat{a}\hat{v}_{t}d^{h}sd^{h}t=\iint_{\hat{K}}(l_{2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{t}d^{h}sd^{h}t=\iint_{\hat{K}}(l_{2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{t}d^{h}sdt.$$ By Lemma 3.11 we have
$$\displaystyle\iint_{\hat{K}}(l_{2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{t}d^{h}sdt=\iint_{\hat{K}}(l_{2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{t}dsdt-\iint_{\hat{K}}\tilde{p}(s)\partial_{s}^{4}[(l_{2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{t}]dsdt.$$ By Theorem 3.1 and the estimate (9), we have $$\displaystyle\iint_{\hat{K}}[(l_{2}\hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}]\hat{v}_{t}dsdt=\iint_{\hat{K}}[(l_{2}\hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}]\overline{\hat{v}_{t}}+\iint_{\hat{K}}[(l_{2}\hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}](\hat{v}_{t}-\overline{\hat{v}_{t}})\leq C[l_{2}\hat{b}_{3}\hat{a}]_{4,\hat{K}}|\hat{v}|_{1,\hat{K}}+C[l_{2}\hat{b}_{3}\hat{a}]_{3,\hat{K}}|\hat{v}|_{2,\hat{K}}\leq C\left(\sum_{k=0}^{4}|\hat{a}|_{k,\infty,\hat{K}}\max_{t\in[-1,1]}|\hat{b}_{3}^{(4-k)}(t)|\right)|\hat{v}|_{1,\hat{K}}+C\left(\sum_{k=0}^{3}|\hat{a}|_{k,\infty,\hat{K}}\max_{t\in[-1,1]}|\hat{b}_{3}^{(3-k)}(t)|\right)|\hat{v}|_{2,\hat{K}}\leq C\left(\sum_{k=1}^{3}|\hat{a}|_{k,\infty,\hat{K}}|\hat{u}|_{4,\hat{K}}+|\hat{a}|_{4,\infty,\hat{K}}|\hat{u}|_{3,\hat{K}}\right)|\hat{v}|_{1,\hat{K}}+C\left(\sum_{k=0}^{2}|\hat{a}|_{k,\infty,\hat{K}}|\hat{u}|_{4,\hat{K}}+|\hat{a}|_{3,\infty,\hat{K}}|\hat{u}|_{3,\hat{K}}\right)|\hat{v}|_{2,\hat{K}}=\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{4,e}\|v\|_{2,e},$$ where the fact $\hat{b}_{3}^{(4)}(t)=0$ is used. After integration by parts with respect to the variable $s$, we have $$\displaystyle\iint_{\hat{K}}l_{2}(s)\hat{b}_{3}(t)\hat{a}\hat{v}_{t}dsdt=-\iint_{\hat{K}}M_{3}(s)\hat{b}_{3}(t)(\hat{a}_{s}\hat{v}_{t}+\hat{a}\hat{v}_{st})dsdt,$$ which is exactly the same integral estimated in the proof of Lemma 3.7 in Li & Zhang (2019b).
By the same proof of Lemma 3.7 in Li & Zhang (2019b), after summing over all elements, we have the estimate for the term $\iint_{\hat{K}}l_{2}(s)\hat{b}_{3}(t)\hat{a}\hat{v}_{t}dsdt$: $$\sum_{e}\iint_{\hat{K}}l_{2}(s)\hat{b}_{3}(t)\hat{a}\hat{v}_{t}dsdt=\begin{% cases}\mathcal{O}(h^{3.5})\|a\|_{4,\infty}\|u\|_{5}\|v\|_{2},&\forall v\in V^{% h},\\ \mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{5}\|v\|_{2},&\forall v\in V_{0}^{h}.% \end{cases}$$ So we have finished estimating $$\iint_{\hat{K}}(l_{2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{t}dsdt=\iint_{\hat{K}}[(l% _{2}\hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}]\hat{v}_{t}dsdt+\iint_{% \hat{K}}l_{2}\hat{b}_{3}\hat{a}\hat{v}_{t}dsdt.$$ We only need to estimate the term $-\iint_{\hat{K}}\tilde{p}(s)\partial_{s}^{4}[(l_{2}\hat{b}_{3}\hat{a})_{I}\hat% {v}_{t}]dsdt$ now. By product rule of derivatives on the polynomial $(l_{2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{t}$ and integration by parts, we have $$\displaystyle-\iint_{\hat{K}}\tilde{p}(s)\partial_{s}^{4}[(l_{2}\hat{b}_{3}% \hat{a})_{I}\hat{v}_{t}]dsdt=-6\iint_{\hat{K}}\tilde{p}(s)\partial_{s}^{2}(l_{% 2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{sst}dsdt$$ $$\displaystyle=$$ $$\displaystyle 6\iint_{\hat{K}}\tilde{p}(s)\partial_{t}\partial_{s}^{2}(l_{2}% \hat{b}_{3}\hat{a})_{I}\hat{v}_{ss}dsdt-\left.6\int_{-1}^{1}\tilde{p}(s)% \partial_{s}^{2}(l_{2}\hat{b}_{3}\hat{a})_{I}\hat{v}_{ss}ds\right|_{t=-1}^{t=1}.$$ By Cauchy-Schwarz inequality and the estimate (10), we have $$\displaystyle\iint_{\hat{K}}\tilde{p}(s)\partial_{t}\partial_{s}^{2}(l_{2}\hat% {b}_{3}\hat{a})_{I}\hat{v}_{ss}dsdt=\iint_{\hat{K}}\tilde{p}(s)\partial_{t}% \partial_{s}^{2}\left[(l_{2}\hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}% \right]\hat{v}_{ss}dsdt+\iint_{\hat{K}}\tilde{p}(s)\partial_{t}\partial_{s}^{2% }(l_{2}\hat{b}_{3}\hat{a})\hat{v}_{ss}dsdt$$ $$\displaystyle\leq$$ $$\displaystyle C[l_{2}\hat{b}_{3}\hat{a}]_{3,\hat{K}}|\hat{v}|_{2,\hat{K}}+C|% 
\partial_{t}\partial_{s}^{2}(l_{2}\hat{b}_{3}\hat{a})|_{0,\hat{K}}|\hat{v}|_{2% ,\hat{K}}\leq C\left(|\hat{u}|_{3,\hat{K}}\sum_{k=1}^{3}|\hat{a}|_{k,\infty,% \hat{K}}+|\hat{u}|_{4,\hat{K}}\sum_{k=0}^{2}|\hat{a}|_{k,\infty,\hat{K}}\right% )|\hat{v}|_{2,\hat{K}},$$ which gives the estimate $\mathcal{O}(h^{4})\|a\|_{3,\infty,e}\|u\|_{4,e}\|v\|_{2,e}$. Now we only need to discuss the line integral term. $$\displaystyle\left.\int_{-1}^{1}\tilde{p}(s)\partial_{s}^{2}(l_{2}\hat{b}_{3}% \hat{a})_{I}\hat{v}_{ss}ds\right|_{t=-1}^{t=1}=\left.\int_{-1}^{1}\tilde{p}(s)% \partial_{s}^{2}\left[(l_{2}\hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}% \right]\hat{v}_{ss}ds\right|_{t=-1}^{t=1}+\left.\int_{-1}^{1}\tilde{p}(s)% \partial_{s}^{2}(l_{2}\hat{b}_{3}\hat{a})\hat{v}_{ss}ds\right|_{t=-1}^{t=1}.$$ By (21), trace inequality and Theorem 3.1, we have $$\displaystyle\left|\left.\int_{-1}^{1}\tilde{p}(s)\partial_{s}^{2}\left[(l_{2}% \hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}\right]\hat{v}_{ss}ds\right|_{t% =-1}^{t=1}\right|\leq C|\hat{v}_{ss}|_{0,\hat{K}}\left|\partial_{s}^{2}\left[(% l_{2}\hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}\right]\right|_{0,\partial% \hat{K}}$$ $$\displaystyle\leq$$ $$\displaystyle C|\hat{v}|_{2,\hat{K}}\left\|\partial_{s}^{2}\left[(l_{2}\hat{b}% _{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}\right]\right\|_{1,\hat{K}}\leq C|\hat% {v}|_{2,\hat{K}}\|(l_{2}\hat{b}_{3}\hat{a})_{I}-l_{2}\hat{b}_{3}\hat{a}\|_{3,% \hat{K}}$$ $$\displaystyle\leq$$ $$\displaystyle C|\hat{v}|_{2,\hat{K}}[l_{2}\hat{b}_{3}\hat{a}]_{3,\hat{K}}=% \mathcal{O}(h^{4})\|a\|_{3,\infty,e}\|u\|_{4,e}\|v\|_{2,e}.$$ After mapping back to original cell $e$, we have $$\displaystyle\left.\int_{-1}^{1}\tilde{p}(s)\partial_{s}^{2}\left(l_{2}(s)\hat% {b}_{3}(t)\hat{a}\right)\hat{v}_{ss}ds\right|_{t=-1}^{t=1}=\left.h^{3}\int_{x_% {e}-h}^{x_{e}+h}\tilde{p}(\frac{x-x_{e}}{h})\partial_{x}^{2}\left(l_{2}(\frac{% x-x_{e}}{h})\hat{b}_{3}(\frac{y-y_{e}}{h})a\right)v_{xx}dx\right|_{y=y_{e}-h}^% 
{y=y_{e}+h}.$$ Notice that we have $$\displaystyle\hat{b}_{3}(1)=\sum_{j=0}^{3}\hat{b}_{3,j}M_{j}(1)=\hat{b}_{3,0}+\hat{b}_{3,1}=\frac{5}{2}\int_{-1}^{1}\partial_{s}\hat{u}(s,1)l_{2}(s)ds=\frac{5}{2}\int_{x_{e}-h}^{x_{e}+h}\partial_{x}u(x,y_{e}+h)l_{2}(\frac{x-x_{e}}{h})dx,$$ and similarly we get $$\hat{b}_{3}(-1)=\frac{5}{2}\int_{x_{e}-h}^{x_{e}+h}\partial_{x}u(x,y_{e}-h)l_{2}(\frac{x-x_{e}}{h})dx.$$ Thus the term $l_{2}(\frac{x-x_{e}}{h})\hat{b}_{3}(\frac{y-y_{e}}{h})a(x,y)$ is continuous across the top/bottom edges of cells, and so is the term $\partial_{x}^{2}\left(l_{2}(\frac{x-x_{e}}{h})\hat{b}_{3}(\frac{y-y_{e}}{h})a\right)$. Therefore, when summing over all elements $e$, the line integrals on the interior edges cancel out. Let $L_{1}$ and $L_{3}$ denote the top and bottom boundary edges of $\Omega$. Then the line integral after summing over $e$ consists of two line integrals along $L_{1}$ and $L_{3}$. We only need to discuss one of them. Let $l_{1}$ and $l_{3}$ denote the top and bottom edges of $e$, respectively.
First, after integration by parts twice, we get $$\displaystyle\hat{b}_{3}(1)=\frac{5}{2}\int_{-1}^{1}\partial_{s}\hat{u}(s,1)l_{2}(s)ds=\frac{5}{2}\int_{-1}^{1}\frac{\partial^{3}}{\partial s^{3}}\hat{u}(s,1)\frac{1}{8}(s^{2}-1)^{2}ds,$$ thus by the Cauchy-Schwarz inequality we get $$|\hat{b}_{3}(1)|\leq C\sqrt{\int_{-1}^{1}\left[\frac{\partial^{3}}{\partial s^{3}}\hat{u}(s,1)\right]^{2}ds}\leq Ch^{2.5}|u|_{3,2,l_{1}}.$$ Second, by the Cauchy-Schwarz inequality and (21), we get $$\int_{-1}^{1}|\hat{v}_{ss}(s,1)|ds\leq\sqrt{2}\left(\int_{-1}^{1}|\hat{v}_{ss}(s,1)|^{2}ds\right)^{\frac{1}{2}}\leq C|\hat{v}_{ss}|_{0,\hat{K}}.$$ The line integral along $L_{1}$ can be estimated by considering each $e$ adjacent to $L_{1}$ in the reference cell: $$\displaystyle\sum_{e\cap L_{1}\neq\emptyset}\left|\int_{-1}^{1}\tilde{p}(s)\partial_{s}^{2}\left(l_{2}(s)\hat{b}_{3}(1)\hat{a}\right)\hat{v}_{ss}(s,1)ds\right|=\sum_{e\cap L_{1}\neq\emptyset}\left|\int_{-1}^{1}\tilde{p}(s)\hat{b}_{3}(1)\left(l_{2}(s)\hat{a}\right)_{ss}\hat{v}_{ss}(s,1)ds\right|$$ $$\displaystyle\leq$$ $$\displaystyle\sum_{e\cap L_{1}\neq\emptyset}C\|\hat{a}\|_{2,\infty,\hat{K}}|\hat{b}_{3}(1)|\int_{-1}^{1}|\hat{v}_{ss}(s,1)|ds\leq\sum_{e\cap L_{1}\neq\emptyset}C\|\hat{a}\|_{2,\infty,\hat{K}}|\hat{b}_{3}(1)||\hat{v}_{ss}|_{0,\hat{K}}$$ $$\displaystyle=$$ $$\displaystyle\mathcal{O}(h^{3.5})\sum_{e\cap L_{1}\neq\emptyset}\|a\|_{2,\infty}|u|_{3,l_{1}}|v|_{2,e}=\mathcal{O}(h^{3.5})\|a\|_{2,\infty}\|u\|_{3,L_{1}}\|v\|_{2,\Omega}=\mathcal{O}(h^{3.5})\|a\|_{2,\infty}\|u\|_{4,\Omega}\|v\|_{2,\Omega},$$ where the trace inequality $\|u\|_{3,\partial\Omega}\leq C\|u\|_{4,\Omega}$ is used. Combining all the estimates above, we get (31a). Since the $\frac{1}{2}$ order loss is only due to the line integrals along $L_{1}$ and $L_{3}$, on which $v_{xx}=0$ if $v\in V^{h}_{0}$, we get (31b). By all the discussions in this subsection, we have proven (22a) and (22b).
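The discrete inner products $\langle\cdot,\cdot\rangle_{h}$ used throughout are built on the 3-point Gauss-Lobatto (Simpson) rule. As a quick numerical sanity check, not part of any proof, one can verify its exactness for cubics and the $Q^{2}$-unisolvence of the $3\times 3$ tensor Gauss-Lobatto points; a minimal sketch in Python (assuming numpy is available):

```python
# Sanity check: the 3-point Gauss-Lobatto rule on [-1,1] has nodes {-1,0,1}
# and positive weights {1/3, 4/3, 1/3} (Simpson's rule), and the 3x3 tensor
# nodes form a Q^2-unisolvent set.
import numpy as np

nodes = np.array([-1.0, 0.0, 1.0])
weights = np.array([1.0, 4.0, 1.0]) / 3.0

def gl_quad(f):
    """Apply the 3-point Gauss-Lobatto rule on [-1, 1]."""
    return float(np.dot(weights, f(nodes)))

# Exactness for monomials up to degree 3 (= 2n - 3 with n = 3 nodes).
exact = {0: 2.0, 1: 0.0, 2: 2.0 / 3.0, 3: 0.0}  # integrals of s^k over [-1,1]
errors = [abs(gl_quad(lambda s, k=k: s ** k) - exact[k]) for k in exact]

# Q^2-unisolvence: the 9x9 Vandermonde matrix of the monomials s^a t^b
# (0 <= a, b <= 2) at the tensor Gauss-Lobatto points is nonsingular.
pts = [(s, t) for s in nodes for t in nodes]
V = np.array([[s ** a * t ** b for a in range(3) for b in range(3)]
              for (s, t) in pts])
det_V = np.linalg.det(V)
```

The rule is exact exactly through degree $2n-3=3$; the quartic $s^{4}$ is the first monomial it misses.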
5 Homogeneous Dirichlet Boundary Conditions 5.1 $V^{h}$-ellipticity In order to discuss the scheme (2), we need to show $A_{h}$ satisfies $V^{h}$-ellipticity (7). We first consider $V^{h}$-ellipticity for the case $\mathbf{b}\equiv 0$. Lemma 5.1. Assume the coefficients in (5) satisfy $\mathbf{b}\equiv 0$, and both $c(x,y)$ and the eigenvalues of $\mathbf{a}(x,y)$ have a uniform upper bound and a uniform positive lower bound; then there exist two constants $C_{1},C_{2}>0$ independent of the mesh size $h$ such that $$\forall v_{h}\in V_{0}^{h},\quad C_{1}\|v_{h}\|^{2}_{1}\leq A_{h}(v_{h},v_{h})\leq C_{2}\|v_{h}\|^{2}_{1}.$$ Proof 5.2. Let $Z_{0,\hat{K}}$ denote the set of $3\times 3$ Gauss-Lobatto points on the reference cell $\hat{K}$. First we notice that the set $Z_{0,\hat{K}}$ is a $Q^{2}(\hat{K})$-unisolvent subset. Since the Gauss-Lobatto quadrature weights are strictly positive, we have $$\forall\hat{p}\in Q^{2}(\hat{K}),\,\sum_{i=1}^{n}\langle\partial_{i}\hat{p},\partial_{i}\hat{p}\rangle_{\hat{K}}=0\Longrightarrow\partial_{i}\hat{p}=0\textrm{ at the quadrature points},$$ where $\partial_{i}$ denotes the spatial derivative with respect to the variable $x_{i}$, $i=1,\dots,n$. Since $\partial_{i}\hat{p}\in Q^{2}(\hat{K})$ and it vanishes on a $Q^{2}(\hat{K})$-unisolvent subset, we have $\partial_{i}\hat{p}\equiv 0$. As a consequence, $\sqrt{\sum_{i=1}^{n}\langle\partial_{i}\hat{p},\partial_{i}\hat{p}\rangle_{h}}$ defines a norm over the quotient space $Q^{2}(\hat{K})/Q^{0}(\hat{K})$.
Since $|\cdot|_{1,\hat{K}}$ is also a norm over the same quotient space, by the equivalence of norms over a finite dimensional space, we have $$\forall\hat{p}\in Q^{2}(\hat{K}),\quad C_{1}|\hat{p}|^{2}_{1,\hat{K}}\leq\sum_{i=1}^{n}\langle\partial_{i}\hat{p},\partial_{i}\hat{p}\rangle_{\hat{K}}\leq C_{2}|\hat{p}|^{2}_{1,\hat{K}}.$$ On the reference cell $\hat{K}$, by the assumption on the coefficients, we have $$C_{1}|\hat{v}_{h}|^{2}_{1,\hat{K}}\leq C_{1}\sum_{i=1}^{n}\langle\partial_{i}\hat{v}_{h},\partial_{i}\hat{v}_{h}\rangle_{\hat{K}}\leq\sum_{i,j=1}^{n}\left(\langle\hat{a}_{ij}\partial_{i}\hat{v}_{h},\partial_{j}\hat{v}_{h}\rangle_{\hat{K}}+\langle\hat{c}\hat{v}_{h},\hat{v}_{h}\rangle_{\hat{K}}\right)\leq C_{2}\|\hat{v}_{h}\|^{2}_{1,\hat{K}}.$$ Mapping these back to the original cell $e$ and summing over all elements, by the equivalence of the two norms $|\cdot|_{1}$ and $\|\cdot\|_{1}$ for the space $H^{1}_{0}(\Omega)\supset V^{h}_{0}$ (see Ciarlet (1991)), we get $C_{1}\|v_{h}\|^{2}_{1}\leq A_{h}(v_{h},v_{h})\leq C_{2}\|v_{h}\|^{2}_{1}.$ Remark 5.3. To discuss the continuity of $A_{h}(\cdot,\cdot)$ when $\mathbf{b}$ is nonzero: if $\mathbf{b}$ has a uniform upper bound, we have $$\sum_{i=1}^{n}\langle b_{i}\partial_{i}\hat{v}_{h},\hat{v}_{h}\rangle_{\hat{K}}\leq\sum_{i=1}^{n}|b_{i}|(\langle\partial_{i}\hat{v}_{h},\partial_{i}\hat{v}_{h}\rangle_{\hat{K}})^{\frac{1}{2}}(\langle\hat{v}_{h},\hat{v}_{h}\rangle_{\hat{K}})^{\frac{1}{2}}\leq C\|v_{h}\|^{2}_{1}.$$ Lemma 5.4. Assume $\mathbf{b}\in W^{4,\infty}(\bar{\Omega})$; then $$(\mathbf{b}\cdot\nabla v_{h},v_{h})+(cv_{h},v_{h})-\langle\mathbf{b}\cdot\nabla v_{h},v_{h}\rangle_{h}-\langle cv_{h},v_{h}\rangle_{h}=\mathcal{O}(h)\|v_{h}\|_{1}^{2},\quad\forall v_{h}\in V_{0}^{h}.$$ Proof 5.5.
By the Bramble-Hilbert Lemma (Theorem 3.1) and the inverse estimates (3), on each cell $e$ we have $$\displaystyle(b^{1}\partial_{x}v_{h},v_{h})_{e}+(cv_{h},v_{h})_{e}-\langle b^{1}\partial_{x}v_{h},v_{h}\rangle_{e,h}-\langle cv_{h},v_{h}\rangle_{e,h}$$ $$\displaystyle=$$ $$\displaystyle\mathcal{O}(h^{4})[b^{1}(\partial_{x}v_{h})v_{h}]_{4}+\mathcal{O}(h^{4})[cv_{h}v_{h}]_{4}=\mathcal{O}(h^{4})\|v_{h}\|_{3}\|v_{h}\|_{2}+\mathcal{O}(h^{4})\|v_{h}\|_{2}\|v_{h}\|_{2}=\mathcal{O}(h)\|v_{h}\|_{1}\|v_{h}\|_{1},$$ where we have used the fact that certain high order derivatives of $v_{h}$ vanish since $v_{h}$ is a $Q^{2}$ polynomial. Summing over all elements $e$, we get the desired estimate. Since $V^{h}_{0}\subset H_{0}^{1}(\Omega)$, (6) still holds for $v=v_{h}$, thus $(\mathbf{b}\cdot\nabla v_{h},v_{h})+(cv_{h},v_{h})\geq 0,\forall v_{h}\in V_{0}^{h}$. Combining Lemma 5.1 and Lemma 5.4, we get $A_{h}(v_{h},v_{h})\geq C_{3}\|v_{h}\|^{2}_{1}-C_{4}h\|v_{h}\|^{2}_{1}$, where $C_{3}>0$ comes from the uniform lower bound on the eigenvalues of $\mathbf{a}$ and $C_{4}>0$ comes from Lemma 5.4. Thus if $h$ is small enough, (7) still holds for the variable coefficient $\mathbf{b}$. Whether $V^{h}$-ellipticity holds for a variable coefficient $\mathbf{b}$ at arbitrary mesh size $h$ depends on whether (6) still holds with quadrature; we do not pursue this matter in this paper. 5.2 Standard estimates for the dual problem In order to apply the Aubin-Nitsche duality argument for establishing superconvergence of function values, we need certain estimates for a suitable dual problem. Define $\theta_{h}:=u_{h}-u_{p}$. Then we consider the dual problem: find $w\in H_{0}^{1}(\Omega)$ satisfying $$A^{*}(w,v)=(\theta_{h},v),\quad\forall v\in H_{0}^{1}(\Omega),$$ (34) where $A^{*}(\cdot,\cdot)$ is the adjoint bilinear form of $A(\cdot,\cdot)$ such that $A^{*}(u,v)=A(v,u)$.
Let $w_{h}\in V_{0}^{h}$ be the solution to $$A^{*}_{h}(w_{h},v_{h})=(\theta_{h},v_{h}),\quad\forall v_{h}\in V_{0}^{h}.$$ (35) Notice that the right hand side of (35) is different from the right hand side of the scheme (2). We need the following standard estimates on $w_{h}$ for the dual problem. Theorem 5.6. Assume all coefficients in (5) are in $W^{2,\infty}(\Omega)$ and that elliptic regularity and $V^{h}$-ellipticity hold. Then $$\|w-w_{h}\|_{1}\leq Ch\|w\|_{2},$$ $$\|w_{h}\|_{2}\leq C\|\theta_{h}\|_{0}.$$ Proof 5.7. By $V^{h}$-ellipticity, we have $C_{1}\|w_{h}-v_{h}\|_{1}^{2}\leq A^{*}_{h}(w_{h}-v_{h},w_{h}-v_{h})$. By the definition of the dual problem, we have $$A^{*}_{h}(w_{h},w_{h}-v_{h})=(\theta_{h},w_{h}-v_{h})=A^{*}(w,w_{h}-v_{h}),\quad\forall v_{h}\in V_{0}^{h}.$$ Thus for any $v_{h}\in V_{0}^{h}$, by Theorem 3.7, we have $$\displaystyle C_{1}\|w_{h}-v_{h}\|_{1}^{2}\leq A^{*}_{h}(w_{h}-v_{h},w_{h}-v_{h})$$ $$\displaystyle=$$ $$\displaystyle A^{*}(w-v_{h},w_{h}-v_{h})+[A_{h}^{*}(w_{h},w_{h}-v_{h})-A^{*}(w,w_{h}-v_{h})]+[A^{*}(v_{h},w_{h}-v_{h})-A^{*}_{h}(v_{h},w_{h}-v_{h})]$$ $$\displaystyle=$$ $$\displaystyle A^{*}(w-v_{h},w_{h}-v_{h})+[A(w_{h}-v_{h},v_{h})-A_{h}(w_{h}-v_{h},v_{h})]$$ $$\displaystyle\leq$$ $$\displaystyle C\|w-v_{h}\|_{1}\|w_{h}-v_{h}\|_{1}+Ch\|v_{h}\|_{2}\|w_{h}-v_{h}\|_{1}.$$ Thus $$\|w-w_{h}\|_{1}\leq\|w-v_{h}\|_{1}+\|w_{h}-v_{h}\|_{1}\leq C\|w-v_{h}\|_{1}+Ch\|v_{h}\|_{2}.$$ (36) Now consider $\Pi_{1}w\in V^{h}_{0}$, where $\Pi_{1}$ is the piecewise $Q^{1}$ projection whose definition on each cell is given through (4) on the reference cell. By the Bramble-Hilbert Lemma (Theorem 3.1) applied to the projection error, we have $$\|w-\Pi_{1}w\|_{1}\leq Ch\|w\|_{2},\quad\|w-\Pi_{1}w\|_{2}\leq C\|w\|_{2},$$ (37) thus $\|\Pi_{1}w\|_{2}\leq\|w\|_{2}+\|w-\Pi_{1}w\|_{2}\leq C\|w\|_{2}$.
By setting $v_{h}=\Pi_{1}w$, from (36) we have $$\|w-w_{h}\|_{1}\leq C\|w-\Pi_{1}w\|_{1}+Ch\|\Pi_{1}w\|_{2}\leq Ch\|w\|_{2}.$$ (38) By the inverse estimate on the piecewise polynomial $w_{h}-\Pi_{1}w$, we get $$\|w_{h}\|_{2}\leq\|w_{h}-\Pi_{1}w\|_{2}+\|\Pi_{1}w-w\|_{2}+\|w\|_{2}\leq Ch^{-% 1}\|w_{h}-\Pi_{1}w\|_{1}+C\|w\|_{2}.$$ (39) By (37) and (38), we also have $$\displaystyle\|w_{h}-\Pi_{1}w\|_{1}\leq\|w-\Pi_{1}w\|_{1}+\|w-w_{h}\|_{1}\leq Ch% \|w\|_{2}.$$ (40) With (39), (40) and the elliptic regularity $\|w\|_{2}\leq C\|\theta_{h}\|_{0}$, we get $$\|w_{h}\|_{2}\leq C\|w\|_{2}\leq C\|\theta_{h}\|_{0}.$$ 5.3 Superconvergence of function values Theorem 5.8. Assume $a_{ij},b_{i},c\in W^{4,\infty}(\Omega)$ and $u(x,y)\in H^{5}(\Omega)$, $f(x,y)\in H^{4}(\Omega)$. Assume $h$ is small enough so that $V^{h}$ ellipticity holds. Then $u_{h}$ is a fourth order accurate approximation to $u$ in the discrete 2-norm over all the $3\times 3$ Gauss-Lobatto points: $$\|u_{h}-u\|_{2,Z_{0}}=\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4}).$$ Proof 5.9. By Theorem 3.14, Theorem 3.3, for any $v_{h}\in V^{h}_{0}$ $$\begin{array}[]{cl}&A_{h}(u-u_{h},v_{h})=[A(u,v_{h})-A_{h}(u_{h},v_{h})]+[A_{h% }(u,v_{h})-A(u,v_{h})]\\ =&A(u,v_{h})-A_{h}(u_{h},v_{h})+\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{5}\|v% _{h}\|_{2}\\ =&[(f,v_{h})-\langle f,v_{h}\rangle_{h}]+\mathcal{O}(h^{4})\|u\|_{5}\|v_{h}\|_% {2}=\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4})\|v_{h}\|_{2}.\end{array}$$ Let $\theta_{h}=u_{h}-u_{p}$, then $\theta_{h}\in V_{0}^{h}$ due to the properties of the M-type projection. 
So by (22b) and Theorem 5.6, we get $$\displaystyle\|\theta_{h}\|_{0}^{2}=(\theta_{h},\theta_{h})=A_{h}(\theta_{h},w% _{h})=A_{h}(u_{h}-u,w_{h})+A_{h}(u-u_{p},w_{h})$$ $$\displaystyle=$$ $$\displaystyle A_{h}(u-u_{p},w_{h})+\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4})\|w_% {h}\|_{2}=\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4})\|w_{h}\|_{2}=\mathcal{O}(h^{% 4})(\|u\|_{5}+\|f\|_{4})\|\theta_{h}\|_{0},$$ thus $$\|u_{h}-u_{p}\|_{0}=\|\theta_{h}\|_{0}=\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4}).$$ Finally, by the equivalence of the discrete 2-norm on $Z_{0}$ and the $L^{2}(\Omega)$ norm in finite-dimensional space $V^{h}$ and Theorem 4.2, we obtain $$\displaystyle\|u_{h}-u\|_{2,Z_{0}}\leq\|u_{h}-u_{p}\|_{2,Z_{0}}+\|u_{p}-u\|_{2% ,Z_{0}}\leq C\|u_{h}-u_{p}\|_{0}+\|u_{p}-u\|_{2,Z_{0}}=\mathcal{O}(h^{4})(\|u% \|_{5}+\|f\|_{4}).$$ Remark 5.10. To extend the discussions to Neumann type boundary conditions, due to (22a) and Lemma 3.14, we can only prove $3.5$-th order accuracy: $$\|u_{h}-u\|_{2,Z_{0}}=\mathcal{O}(h^{3.5})(\|u\|_{5}+\|f\|_{4}).$$ Remark 5.11. All key discussions can be extended to three-dimensional cases. 6 Nonhomogeneous Dirichlet Boundary Conditions We consider a two-dimensional elliptic problem on $\Omega=[0,1]^{2}$ with nonhomogeneous Dirichlet boundary condition, $$\begin{array}[]{cl}-\nabla(\mathbf{a}\nabla u)+\mathbf{b}\cdot\nabla u+cu&=f% \textrm{ on }\Omega\\ u&=g\textrm{ on }\partial\Omega.\end{array}$$ (41) Assume there is a function $\bar{g}\in H^{1}(\Omega)$ as a smooth extension of $g$ so that $\bar{g}|_{\partial\Omega}=g$. The variational form is to find $\tilde{u}=u-\bar{g}\in H_{0}^{1}(\Omega)$ satisfying $$A(\tilde{u},v)=(f,v)-A(\bar{g},v),\quad\forall v\in H_{0}^{1}(\Omega).$$ (42) In practice, $\bar{g}$ is not used explicitly. 
Abusing notation, the most convenient implementation is to consider $$g(x,y)=\begin{cases}0,&\mbox{if}\quad(x,y)\in(0,1)\times(0,1),\\ g(x,y),&\mbox{if}\quad(x,y)\in\partial\Omega,\\ \end{cases}$$ and $g_{I}\in V^{h}$, which is defined as the $Q^{2}$ Lagrange interpolation of $g(x,y)$ at the $3\times 3$ Gauss-Lobatto points of each cell on $\Omega$. Namely, $g_{I}\in V^{h}$ is the piecewise quadratic interpolation of $g$ along the boundary grid points and $g_{I}=0$ at the interior grid points. The numerical scheme is to find $\tilde{u}_{h}\in V_{0}^{h}$ such that $$A_{h}(\tilde{u}_{h},v_{h})=\langle f,v_{h}\rangle-A_{h}(g_{I},v_{h}),\quad\forall v_{h}\in V_{0}^{h}.$$ (43) Then $u_{h}=\tilde{u}_{h}+g_{I}$ will be our numerical solution for (41). Notice that (43) is not a straightforward approximation to (42) since $\bar{g}$ is never used. Assuming elliptic regularity and $V^{h}$-ellipticity hold, we will show that the error $u_{h}-u$ is of fourth order in the discrete 2-norm over all $3\times 3$ Gauss-Lobatto points. 6.1 Auxiliary schemes In order to discuss the superconvergence of (43), we need to prove the superconvergence of two auxiliary schemes. Notice that we discuss these two auxiliary schemes only for proving the accuracy of (43). In practice one should not implement the auxiliary schemes, since (43) is much more convenient to implement and all three schemes have the same accuracy. The first auxiliary scheme is to find $\tilde{u}^{**}_{h}\in V_{0}^{h}$ satisfying $$A_{h}(\tilde{u}^{**}_{h},v_{h})=\langle f,v_{h}\rangle-A_{h}(\bar{g}_{p},v_{h}),\quad\forall v_{h}\in V_{0}^{h},$$ (44) where $\bar{g}_{p}\in V^{h}$ is the piecewise M-type $Q^{2}$ projection of the smooth extension function $\bar{g}$. Then $u^{**}_{h}=\tilde{u}^{**}_{h}+\bar{g}_{p}$ is the numerical solution of scheme (44) for problem (42). Define $\theta_{h}=u^{**}_{h}-u_{p}$; then by Theorem 4.1 we have $\theta_{h}\in V_{0}^{h}$.
Following Section 5.2, define the following dual problem: find $w\in H_{0}^{1}(\Omega)$ satisfying $$A^{*}(w,v)=(\theta_{h},v),\quad\forall v\in H_{0}^{1}(\Omega).$$ (45) Let $w_{h}\in V_{0}^{h}$ be the solution to $$A^{*}_{h}(w_{h},v_{h})=(\theta_{h},v_{h}),\quad\forall v_{h}\in V_{0}^{h}.$$ (46) Notice that the dual problem has homogeneous Dirichlet boundary conditions. By Theorem 3.14, Theorem 3.3, for any $v_{h}\in V^{h}_{0}$, $$\begin{array}[]{cl}&A_{h}(u-u^{**}_{h},v_{h})=[A(u,v_{h})-A_{h}(u^{**}_{h},v_{% h})]+[A_{h}(u,v_{h})-A(u,v_{h})]\\ =&A(u,v_{h})-A_{h}(u^{**}_{h},v_{h})+\mathcal{O}(h^{4})\|a\|_{4,\infty}\|u\|_{% 5}\|v_{h}\|_{2}\\ =&[(f,v_{h})-\langle f,v_{h}\rangle_{h}]+\mathcal{O}(h^{4})\|u\|_{5}\|v_{h}\|_% {2}=\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4})\|v_{h}\|_{2}.\end{array}$$ By (22b) and Theorem 5.6, we get $$\displaystyle\|\theta_{h}\|_{0}^{2}=(\theta_{h},\theta_{h})=A_{h}(\theta_{h},w% _{h})=A_{h}(u^{**}_{h}-u,w_{h})+A_{h}(u-u_{p},w_{h})$$ $$\displaystyle=$$ $$\displaystyle A_{h}(u-u_{p},w_{h})+\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4})\|w_% {h}\|_{2}$$ $$\displaystyle=$$ $$\displaystyle\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4})\|w_{h}\|_{2}=\mathcal{O}(% h^{4})(\|u\|_{5}+\|f\|_{4})\|\theta_{h}\|_{0},$$ thus $\|u_{h}^{**}-u_{p}\|_{0}=\|\theta_{h}\|_{0}=\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|% _{4}).$ So Theorem 5.8 still holds for the first auxiliary scheme (44). Next define $g_{p}\in V^{h}$ as $g_{p}=\bar{g}_{p}$ on $\partial\Omega$ and $g_{p}=0$ at all the inner grids. The second auxiliary scheme is to find $\tilde{u}^{*}_{h}\in V_{0}^{h}$ satisfying $$A_{h}(\tilde{u}^{*}_{h},v_{h})=\langle f,v_{h}\rangle-A_{h}(g_{p},v_{h}),\quad% \forall v_{h}\in V_{0}^{h}.$$ (47) Then $u^{*}_{h}=\tilde{u}^{*}_{h}+g_{p}$ is the numerical solution. 
We have $$A_{h}(u^{**}_{h}-u^{*}_{h},v_{h})=0,\quad\forall v_{h}\in V_{0}^{h}.$$ Since $u^{**}_{h}-u^{*}_{h}\in V_{0}^{h}\subset H_{0}^{1}(\Omega)$, by $V^{h}$-ellipticity we have $$\|u^{**}_{h}-u^{*}_{h}\|^{2}_{1,\Omega}\leq CA_{h}(u^{**}_{h}-u^{*}_{h},u^{**}% _{h}-u^{*}_{h})=0.$$ Thus we get $u^{**}_{h}=u^{*}_{h}.$ So numerical solutions from (44) and (47) are the same. Thus Theorem 5.8 also holds for $u^{*}_{h}$: $$\|u^{*}_{h}-u\|_{2,Z_{0}}=\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4}).$$ (48) 6.2 The main result In order to extend Theorem 5.8 to (43), we only need to prove $$\|u_{h}-u^{*}_{h}\|=\mathcal{O}(h^{4}).$$ The difference between (47) and (43) is $$A_{h}(\tilde{u}^{*}_{h}-\tilde{u}_{h},v_{h})=A_{h}(g_{I}-g_{p},v_{h}),\quad% \forall v_{h}\in V_{0}^{h}.$$ (49) We need the following Lemma. Lemma 6.1. $$A_{h}(g_{I}-g_{p},v_{h})=\mathcal{O}(h^{4})\|u\|_{5,\Omega}\|v_{h}\|_{2,\Omega% },\quad\forall v_{h}\in V_{0}^{h}.$$ (50) Proof 6.2. For simplicity, we ignore the subscript ${}_{h}$ of $v_{h}$ in this proof and all the following $v$ are in $V^{h}$. Notice that $g_{I}-g_{p}\equiv 0$ in interior cells thus we only need to consider cells adjacent to $\partial\Omega$. Let $L_{1},L_{2},L_{3}$ and $L_{4}$ denote the top, left, bottom and right boundary edges of $\bar{\Omega}=[0,1]\times[0,1]$ respectively. Let $l_{1},l_{2},l_{3}$ and $l_{4}$ denote the top, left, bottom and right boundary edges of $e$ respectively. Without loss of generality, we only consider a cell $e=[x_{e}-h,x_{e}+h]\times[y_{e}-h,y_{e}+h]$ adjacent to the left boundary $L_{2}$, i.e., $x_{e}-h=0$. 
On $l_{2}\subset L_{2}$, we have $g_{p}(0,y_{e}+h)=g(0,y_{e}+h)$, $g_{p}(0,y_{e}-h)=g(0,y_{e}-h)$ and $$\int_{y_{e}-h}^{y_{e}+h}g\,dy=\int_{y_{e}-h}^{y_{e}+h}g_{p}\,dy=\frac{h}{3}\left[g_{p}(0,y_{e}-h)+4g_{p}(0,y_{e})+g_{p}(0,y_{e}+h)\right].$$ Thus $$g_{p}(0,y_{e})=\frac{3}{4h}\int_{y_{e}-h}^{y_{e}+h}g\,dy-\frac{1}{4}g(0,y_{e}+h)-\frac{1}{4}g(0,y_{e}-h).$$ By (15), we have $$\displaystyle g_{p}(0,y_{e})-g(0,y_{e})=\frac{3}{4h}\left[\int_{y_{e}-h}^{y_{e}+h}g\,dy-\frac{h}{3}g(0,y_{e}-h)-\frac{4h}{3}g(0,y_{e})-\frac{h}{3}g(0,y_{e}+h)\right]$$ $$\displaystyle=\mathcal{O}(h^{3})\|g\|_{4,1,l_{2}}=\mathcal{O}(h^{3})\|u\|_{4,1,l_{2}}=\mathcal{O}(h^{3.5})\|u\|_{4,2,l_{2}},$$ (51) where the last step is by the Cauchy-Schwarz inequality. We first consider the case where the cell $e$ is not adjacent to $L_{1}$ or $L_{3}$. For this case, $g_{I}-g_{p}$ is nonzero at $(0,y_{e})$ and $g_{I}-g_{p}=0$ at the other $3\times 3$ Gauss-Lobatto points. Let $\lambda=g_{I}(0,y_{e})-g_{p}(0,y_{e})$; then $(g_{I}-g_{p})|_{e}=\lambda q(x,y)$, where $q(x,y)$ is a $Q^{2}$ polynomial on cell $e$ satisfying $q(0,y_{e})=1$ and $q(x,y)=0$ at the other $3\times 3$ Gauss-Lobatto points.
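The moment condition above recovers the edge-midpoint value $g_{p}(0,y_{e})$ from the exact boundary integral and the two endpoint values. Its accuracy can be illustrated numerically; a minimal sketch in Python (the sample boundary datum $g(0,y)=\sin y$ is our own illustrative choice, not from the paper). For smooth $g$ the pointwise error is the scaled Simpson quadrature error, so one expects roughly fourth-order decay, consistent with the $\mathcal{O}(h^{3})\|g\|_{4,1,l_{2}}$ bound:

```python
# Illustration: recover g_p(0, y_e) from the Simpson/Gauss-Lobatto moment
# condition and compare with the exact midpoint value g(0, y_e).
import math

def gp_mid(g, G, ye, h):
    # G is an antiderivative of y -> g(0, y), so the integral is exact.
    integral = G(ye + h) - G(ye - h)
    return 0.75 / h * integral - 0.25 * g(ye + h) - 0.25 * g(ye - h)

g = math.sin                  # illustrative boundary datum g(0, y) = sin(y)
G = lambda y: -math.cos(y)    # its antiderivative
ye = 0.7

err = [abs(gp_mid(g, G, ye, h) - g(ye)) for h in (0.1, 0.05)]
ratio = err[0] / err[1]       # roughly 2**4 = 16 for a smooth datum
```

For a quadratic datum the Simpson rule is exact and the recovered midpoint value coincides with $g(0,y_{e})$.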
Next we estimate $\langle a(g_{I}-g_{p})_{x},v_{x}\rangle_{e}$, $$\displaystyle\langle a(g_{I}-g_{p})_{x},v_{x}\rangle_{e}=\langle a\lambda q_{x% },v_{x}\rangle_{e}=\lambda\langle\hat{a}\hat{q}_{s},\hat{v}_{s}\rangle_{\hat{K% }}=\lambda\iint_{\hat{K}}\hat{a}\hat{q}_{s}\hat{v}_{s}d^{h}sd^{h}t$$ $$\displaystyle=$$ $$\displaystyle\lambda\iint_{\hat{K}}(\hat{a}-\overline{\hat{a}})\hat{q}_{s}\hat% {v}_{s}d^{h}sd^{h}t+\lambda\iint_{\hat{K}}\overline{\hat{a}}\hat{q}_{s}\hat{v}% _{s}d^{h}sd^{h}t.$$ By Theorem 3.1 and the equivalence of norms on finite-dimensional space, we have $$\displaystyle\iint_{\hat{K}}(\hat{a}-\overline{\hat{a}})\hat{q}_{s}\hat{v}_{s}% d^{h}sd^{h}t\leq C|\hat{a}-\overline{\hat{a}}|_{\infty,\hat{K}}|\hat{q}_{s}|_{% \infty,\hat{K}}|\hat{v}_{s}|_{\infty,\hat{K}}\leq C|\hat{a}|_{1,\infty,\hat{K}% }|\hat{v}_{s}|_{0,\hat{K}}=\mathcal{O}(h)|a|_{1,\infty}|v|_{1,e}.$$ (52) We have $$\displaystyle\iint_{\hat{K}}\overline{\hat{a}}\hat{q}_{s}\hat{v}_{s}d^{h}sd^{h% }t=\overline{\hat{a}}\iint_{\hat{K}}\hat{q}_{s}\hat{v}_{s}dsd^{h}t=\left.% \overline{\hat{a}}\int_{-1}^{1}\hat{q}\hat{v}_{s}d^{h}t\right|_{s=-1}^{s=1}-% \overline{\hat{a}}\iint_{\hat{K}}\hat{q}\hat{v}_{ss}dsd^{h}t$$ $$\displaystyle=$$ $$\displaystyle\overline{\hat{a}}\int_{-1}^{1}\hat{q}(-1,t)\hat{v}_{s}(-1,t)d^{h% }t-\overline{\hat{a}}\iint_{\hat{K}}\hat{q}\hat{v}_{ss}dsd^{h}t$$ $$\displaystyle\leq$$ $$\displaystyle C|\overline{\hat{a}}|(|\hat{v}_{s}(-1,0)|+|\hat{v}_{ss}|_{0,\hat% {K}})\leq C|{\hat{a}}|_{\infty,\hat{K}}(|\hat{v}_{s}(-1,0)|+|\hat{v}_{ss}|_{0,% \hat{K}}).$$ By equivalence of norms for the finite dimensional Banach space consisting of all quadratic polynomials on $[-1,1]$, we have $$\displaystyle|\hat{v}_{s}(-1,0)|\leq\max_{t\in[-1,1]}|\hat{v}_{s}(-1,t)|\leq C% \left(\int_{-1}^{1}\hat{v}_{s}(-1,t)^{2}dt\right)^{\frac{1}{2}}.$$ So we have $$\displaystyle\iint_{\hat{K}}\overline{\hat{a}}\hat{q}_{s}\hat{v}_{s}d^{h}sd^{h}t$$ $$\displaystyle\leq 
C|{\hat{a}}|_{\infty,\hat{K}}\left[\left(\int_{-1}^{1}\hat{v}_{s}(-1,t)^{2}dt\right)^{\frac{1}{2}}+|\hat{v}|_{2,\hat{K}}\right]\leq C|{a}|_{\infty,e}(h^{\frac{1}{2}}|v_{x}|_{0,l_{2}}+h|v|_{2,e}).$$ (53) From (52) and (53), we get $$\displaystyle\langle a(g_{I}-g_{p})_{x},v_{x}\rangle_{e}=\mathcal{O}(h^{0.5})\lambda\|{a}\|_{1,\infty}(|v|_{1,l_{2}}+h^{0.5}\|v\|_{2,e})=\mathcal{O}(h^{4})\|{a}\|_{1,\infty}\|u\|_{4,l_{2}}(|v|_{1,l_{2}}+h^{0.5}\|v\|_{2,e}).$$ Next consider the case where the cell $e$ is also adjacent to $L_{1}$ or $L_{3}$. Without loss of generality, assume $e$ is adjacent to $L_{3}$; then $y_{e}-h=0$ and $g_{I}-g_{p}$ is nonzero only at two of the nine Gauss-Lobatto points, $(x_{e}-h,y_{e})=(0,y_{e})$ and $(x_{e},y_{e}-h)=(x_{e},0)$. Let $\lambda=g_{I}(0,y_{e})-g_{p}(0,y_{e})$ and $\mu=g_{I}(x_{e},0)-g_{p}(x_{e},0)$. Then $(g_{I}-g_{p})|_{e}=\lambda q(x,y)+\mu p(x,y)$, where $p(x,y)$ is a $Q^{2}$ polynomial on cell $e$ satisfying $p(x_{e},0)=1$ and $p(x,y)=0$ at the other $3\times 3$ Gauss-Lobatto points. Similarly to (51), we can derive $\mu=\mathcal{O}(h^{3.5})\|u\|_{4,2,l_{3}}.$ We have $$\displaystyle\langle a(g_{I}-g_{p})_{x},v_{x}\rangle_{e}=\langle a(\lambda q_{x}+\mu p_{x}),v_{x}\rangle_{e}=\lambda\langle\hat{a}\hat{q}_{s},\hat{v}_{s}\rangle_{\hat{K}}+\mu\langle\hat{a}\hat{p}_{s},\hat{v}_{s}\rangle_{\hat{K}}.$$ We only need to estimate $\langle\hat{a}\hat{p}_{s},\hat{v}_{s}\rangle_{\hat{K}}=\langle(\hat{a}-\bar{\hat{a}})\hat{p}_{s},\hat{v}_{s}\rangle_{\hat{K}}+\langle\bar{\hat{a}}\hat{p}_{s},\hat{v}_{s}\rangle_{\hat{K}}$.
By similar arguments as above, we have $$\iint_{\hat{K}}(\hat{a}-\overline{\hat{a}})\hat{p}_{s}\hat{v}_{s}d^{h}sd^{h}t=\mathcal{O}(h)|a|_{1,\infty}|v|_{1,e},$$ and $$\iint_{\hat{K}}\overline{\hat{a}}\hat{p}_{s}\hat{v}_{s}d^{h}sd^{h}t=\overline{\hat{a}}\iint_{\hat{K}}\hat{p}_{s}\hat{v}_{s}dsd^{h}t=-\overline{\hat{a}}\iint_{\hat{K}}\hat{p}\hat{v}_{ss}dsd^{h}t\leq C|{\hat{a}}|_{\infty,\hat{K}}|\hat{v}_{ss}|_{0,\hat{K}},$$ where the fact that $\hat{p}(-1,t)=\hat{p}(1,t)=0$ is used. Thus for the lower-left corner cell $e$, we have $$\langle a(g_{I}-g_{p})_{x},v_{x}\rangle_{e}=\mathcal{O}(h^{4})\|{a}\|_{1,\infty}\|u\|_{4,l_{2}}(|v|_{1,l_{2}}+h^{0.5}\|v\|_{2,e})+\mathcal{O}(h^{4.5})\|{a}\|_{1,\infty}\|u\|_{4,l_{3}}\|v\|_{2,e}.$$ We can get similar estimates for all boundary cells. Summing over all the boundary elements, by the Cauchy-Schwarz inequality we have $$\langle a(g_{I}-g_{p})_{x},v_{x}\rangle_{h}=\mathcal{O}(h^{4})\|{a}\|_{1,\infty}\|u\|_{4,\partial\Omega}(|v|_{1,\Omega}+h^{0.5}\|v\|_{2,\Omega}).$$ With the trace inequality $\|u\|_{4,\partial\Omega}\leq C\|u\|_{5,\Omega}$, we get $$\langle a(g_{I}-g_{p})_{x},v_{x}\rangle_{h}=\mathcal{O}(h^{4})\|{a}\|_{1,\infty}\|u\|_{5}\|v\|_{2},\quad\forall v\in V^{h}_{0}.$$ (54) Similarly, for any $v\in V^{h}_{0}$, we have $$\displaystyle\langle a(g_{I}-g_{p})_{y},v_{y}\rangle_{h}=\mathcal{O}(h^{4})\|{a}\|_{1,\infty}\|u\|_{5}\|v\|_{2},$$ $$\displaystyle\langle a(g_{I}-g_{p})_{x},v_{y}\rangle_{h}=\mathcal{O}(h^{4})\|{a}\|_{1,\infty}\|u\|_{5}\|v\|_{2},$$ $$\displaystyle\langle\textbf{b}\cdot\nabla(g_{I}-g_{p}),v\rangle_{h}=\mathcal{O}(h^{4})\|\mathbf{b}\|_{1,\infty}\|u\|_{5}\|v\|_{2},$$ $$\displaystyle\langle c(g_{I}-g_{p}),v\rangle_{h}=\mathcal{O}(h^{4})\|{c}\|_{1,\infty}\|u\|_{5}\|v\|_{2}.$$ Thus we conclude that $$A_{h}(g_{I}-g_{p},v_{h})=\mathcal{O}(h^{4})\|u\|_{5}\|v_{h}\|_{2},\quad\forall v_{h}\in V_{0}^{h}.$$ By (49) and Lemma 6.1, we have
$$A_{h}(\tilde{u}^{*}_{h}-\tilde{u}_{h},v_{h})=\mathcal{O}(h^{4})\|u\|_{5}\|v_{h% }\|_{2},\quad\forall v_{h}\in V_{0}^{h}.$$ (55) Let $\theta_{h}=\tilde{u}^{*}_{h}-\tilde{u}_{h}\in V_{0}^{h}$. Following Section 5.2, define the following dual problem: find $w\in H_{0}^{1}(\Omega)$ satisfying $$A^{*}(w,v)=(\theta_{h},v),\quad\forall v\in H_{0}^{1}(\Omega).$$ (56) Let $w_{h}\in V_{0}^{h}$ be the solution to $$A^{*}_{h}(w_{h},v_{h})=(\theta_{h},v_{h}),\quad\forall v_{h}\in V_{0}^{h}.$$ (57) By (55) and Theorem 5.6, we get $$\|\theta_{h}\|_{0}^{2}=(\theta_{h},\theta_{h})=A_{h}^{*}(w_{h},\theta_{h})=A_{% h}(\tilde{u}^{*}_{h}-\tilde{u}_{h},w_{h})=\mathcal{O}(h^{4})\|u\|_{5}\|w_{h}\|% _{2}=\mathcal{O}(h^{4})\|u\|_{5}\|\theta_{h}\|_{0},$$ thus $\|\tilde{u}^{*}_{h}-\tilde{u}_{h}\|_{0}=\|\theta_{h}\|_{0}=\mathcal{O}(h^{4})% \|u\|_{5}.$ By equivalence of norms for polynomials, we have $$\|\tilde{u}^{*}_{h}-\tilde{u}_{h}\|_{2,Z_{0}}\leq C\|\tilde{u}^{*}_{h}-\tilde{% u}_{h}\|_{0}=\mathcal{O}(h^{4})\|u\|_{5,\Omega}.$$ (58) Notice that both $\tilde{u}_{h}$ and $\tilde{u}^{*}_{h}$ are constant zero along $\partial\Omega$, and $u_{h}|_{\partial\Omega}=g_{I}$ is the Lagrangian interpolation of $g$ along $\partial\Omega$. With (48), we have proven the following main result. Theorem 6.3. For a nonhomogeneous Dirichlet boundary problem (41), with suitable smoothness assumptions $a_{ij},b_{i},c\in W^{4,\infty}(\Omega)$, $u(x,y)\in H^{5}(\Omega)$ and $f(x,y)\in H^{4}(\Omega)$, the numerical solution $u_{h}$ by scheme (43) is a fourth order accurate approximation to $u$ in the discrete 2-norm over all the $3\times 3$ Gauss-Lobatto points: $$\|u_{h}-u\|_{2,Z_{0}}=\mathcal{O}(h^{4})(\|u\|_{5}+\|f\|_{4}).$$ 7 Finite difference implementation In this section we present the finite difference implementation of the scheme (43) on a uniform mesh. 
The finite difference implementation of the nonhomogeneous Dirichlet boundary value problem is based on a homogeneous Neumann boundary value problem, which will be discussed first. We demonstrate how it is derived in the one-dimensional case and then give the two-dimensional implementation. This approach allows efficient assembly of the stiffness matrix and is easy to implement in MATLAB. 7.1 One-dimensional case Consider a homogeneous Neumann boundary value problem $-(au^{\prime})^{\prime}=f$ on $[0,1]$ with $u^{\prime}(0)=u^{\prime}(1)=0$; its variational form is to seek $u\in H^{1}([0,1])$ satisfying $$\displaystyle(au^{\prime},v^{\prime})=(f,v),\quad\forall v\in H^{1}([0,1]).$$ (59) Consider a uniform mesh $x_{i}=ih$, $i=0,1,\dots,n+1$, $h=\frac{1}{n+1}$. Assume $n$ is odd and let $N=\frac{n+1}{2}$. Define the intervals $I_{k}=[x_{2k},x_{2k+2}]$ for $k=0,\dots,N-1$ as a finite element mesh for the $P^{2}$ basis. Define $$V^{h}=\{v\in C^{0}([0,1]):v|_{I_{k}}\in P^{2}(I_{k}),k=0,\dots,N-1\}.$$ Let $\{v_{i}\}_{i=0}^{n+1}\subset V^{h}$ be a basis of $V^{h}$ such that $v_{i}(x_{j})=\delta_{ij},\,i,j=0,1,\dots,n+1$. With the $3$-point Gauss-Lobatto quadrature, the $C^{0}$-$P^{2}$ finite element method for (59) is to seek $u_{h}\in V^{h}$ satisfying $$\displaystyle\langle au_{h}^{\prime},v_{i}^{\prime}\rangle_{h}=\langle f,v_{i}\rangle_{h},\quad i=0,1,\dots,n+1.$$ (60) Let $u_{j}=u_{h}(x_{j})$, $a_{j}=a(x_{j})$ and $f_{j}=f(x_{j})$; then $u_{h}(x)=\sum\limits_{j=0}^{n+1}u_{j}v_{j}(x)$.
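Scheme (60) can be assembled and solved directly. A minimal sketch in Python (assuming numpy; the manufactured solution $u=\cos(\pi x)$ and the mean-value closure of the singular Neumann system are our own illustrative choices, not prescriptions from the paper):

```python
# Assemble and solve scheme (60) with a = 1 for the manufactured Neumann
# problem u(x) = cos(pi*x), f(x) = pi^2*cos(pi*x), u'(0) = u'(1) = 0.
import numpy as np

def solve_neumann(n, a=lambda x: 1.0 + 0.0 * x):
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)
    N = (n + 1) // 2                      # number of P^2 elements (n odd)
    # Derivative values of the 3 local basis functions at the 3 local nodes:
    # row = evaluation node, column = basis function.
    G = np.array([[-3.0, 4.0, -1.0],
                  [-1.0, 0.0, 1.0],
                  [1.0, -4.0, 3.0]]) / (2 * h)
    w = np.array([1.0, 4.0, 1.0]) / 3.0   # Gauss-Lobatto (Simpson) weights
    S = np.zeros((n + 2, n + 2))
    m = np.zeros(n + 2)                   # diagonal of the lumped mass matrix
    for k in range(N):
        idx = [2 * k, 2 * k + 1, 2 * k + 2]
        aq = a(x[idx])                    # coefficient at the local nodes
        S[np.ix_(idx, idx)] += h * (G.T * (w * aq)) @ G
        m[idx] += h * w
    f = np.pi ** 2 * np.cos(np.pi * x)
    rhs = m * f
    rhs -= (rhs.sum() / m.sum()) * m      # enforce discrete compatibility
    # Close the rank-one-deficient system with the zero-discrete-mean row.
    A = np.vstack([S, m])
    b = np.append(rhs, 0.0)
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.max(np.abs(u - np.cos(np.pi * x)))
```

For $a\equiv 1$ the integrand $av_{i}^{\prime}v_{j}^{\prime}$ is a quadratic on each element, so the 3-point rule integrates it exactly and the assembled $\bar{S}$ coincides with the exactly integrated $P^{2}$ stiffness matrix.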
We have $$\sum_{j=0}^{n+1}u_{j}\langle av_{j}^{\prime},v_{i}^{\prime}\rangle_{h}=\langle au_{h}^{\prime},v_{i}^{\prime}\rangle_{h}=\langle f,v_{i}\rangle_{h}=\sum_{j=0}^{n+1}f_{j}\langle v_{j},v_{i}\rangle_{h},\quad i=0,1,\dots,n+1.$$ The matrix form of this scheme is $\bar{S}\bar{\mathbf{u}}=\bar{M}\bar{\mathbf{f}}$, where $$\displaystyle\bar{\textbf{u}}=\begin{bmatrix}u_{0},u_{1},\dots,u_{n},u_{n+1}\end{bmatrix}^{T},\quad\bar{\textbf{f}}=\begin{bmatrix}f_{0},f_{1},\dots,f_{n},f_{n+1}\end{bmatrix}^{T},$$ the stiffness matrix $\bar{S}$ has size $(n+2)\times(n+2)$ with $(i,j)$-th entry $\langle av_{i}^{\prime},v_{j}^{\prime}\rangle_{h}$, and the lumped mass matrix $\bar{M}$ is a $(n+2)\times(n+2)$ diagonal matrix with diagonal entries $h\begin{pmatrix}\frac{1}{3},\frac{4}{3},\frac{2}{3},\frac{4}{3},\frac{2}{3},\dots,\frac{2}{3},\frac{4}{3},\frac{1}{3}\end{pmatrix}$. Next we derive an explicit representation of the matrix $\bar{S}$. Since the basis functions $v_{i}\in V^{h}$ and $u_{h}(x)$ are not $C^{1}$ at the knots $x_{2k}$ ($k=1,2,\dots,N-1$), their derivatives at the knots are double-valued. We will use superscripts $+$ and $-$ to denote derivatives obtained from the right and from the left respectively; e.g., $v^{\prime+}_{2k}$ and $v^{\prime-}_{2k+2}$ denote the derivatives of $v_{2k}$ and $v_{2k+2}$ respectively in the interval $I_{k}=[x_{2k},x_{2k+2}]$.
Then in the interval $I_{k}=[x_{2k},x_{2k+2}]$ we have the following representation of derivatives $$\begin{bmatrix}v^{\prime+}_{2k}(x)\\ v^{\prime}_{2k+1}(x)\\ v^{\prime-}_{2k+2}(x)\end{bmatrix}=\frac{1}{2h}\begin{bmatrix}-3&4&-1\\ -1&0&1\\ 1&-4&3\end{bmatrix}\begin{bmatrix}v_{2k}(x)\\ v_{2k+1}(x)\\ v_{2k+2}(x)\end{bmatrix}.$$ (61) Abusing notation, we use $(v_{i})^{\prime}_{2k}$ to denote the average of the two derivatives of $v_{i}$ at the knot $x_{2k}$: $$(v_{i})^{\prime}_{2k}=\frac{1}{2}[(v_{i}^{\prime})_{2k}^{-}+(v_{i}^{\prime})^{+}_{2k}].$$ Let $[v_{i}^{\prime}]$ denote the difference between the right derivative and the left derivative: $$[v_{i}^{\prime}]_{0}=[v_{i}^{\prime}]_{n+1}=0,\quad[v_{i}^{\prime}]_{2k}:=(v_{i}^{\prime})^{+}_{2k}-(v_{i}^{\prime})^{-}_{2k},\quad k=1,2,\dots,N-1.$$ Then at the knots, we have $$(v_{i}^{\prime})^{-}_{2k}(v_{j}^{\prime})^{-}_{2k}+(v_{i}^{\prime})^{+}_{2k}(v_{j}^{\prime})^{+}_{2k}=2(v_{i}^{\prime})_{2k}(v_{j}^{\prime})_{2k}+\frac{1}{2}[v_{i}^{\prime}]_{2k}[v_{j}^{\prime}]_{2k}.$$ (62) We also have $$\langle av_{j}^{\prime},v_{i}^{\prime}\rangle_{I_{k}}=h\left[\frac{1}{3}a_{2k}(v_{j}^{\prime})^{+}_{2k}(v_{i}^{\prime})^{+}_{2k}+\frac{4}{3}a_{2k+1}(v_{j}^{\prime})_{2k+1}(v_{i}^{\prime})_{2k+1}+\frac{1}{3}a_{2k+2}(v_{j}^{\prime})^{-}_{2k+2}(v_{i}^{\prime})^{-}_{2k+2}\right].$$ (63) Let $\mathbf{v}_{i}$ denote the column vector of size $n+2$ consisting of the grid point values of $v_{i}(x)$.
Plugging (62) into (63), with (61), we get $$\langle av_{j}^{\prime},v_{i}^{\prime}\rangle_{h}=\sum_{k=0}^{N-1}\langle av_{j}^{\prime},v_{i}^{\prime}\rangle_{I_{k}}=\frac{1}{h}\textbf{v}_{i}^{T}(D^{T}WAD+E^{T}WAE)\textbf{v}_{j},$$ where $W$ is the diagonal matrix of Gauss-Lobatto quadrature weights with diagonal entries $\begin{pmatrix}\frac{1}{3},\frac{4}{3},\frac{2}{3},\frac{4}{3},\frac{2}{3},\dots,\frac{2}{3},\frac{4}{3},\frac{1}{3}\end{pmatrix}$ (so that $\bar{M}=hW$), $A$ is a diagonal matrix with diagonal entries $a_{0},a_{1},\dots,a_{n},a_{n+1}$, and $$\displaystyle D=\frac{1}{2}\left(\begin{smallmatrix}-3&4&-1&&&&&&\\ -1&0&1&&&&&&\\ \frac{1}{2}&-2&0&2&-\frac{1}{2}&&&&&\\ &&-1&0&1&&&\\ &&\frac{1}{2}&-2&0&2&-\frac{1}{2}&&\\ &&&&-1&0&1&\\ &&&&&\ddots&\ddots&\ddots&&\\ &&&&&-1&0&1&\\ &&&&&\frac{1}{2}&-2&0&2&-\frac{1}{2}\\ &&&&&&&-1&0&1\\ &&&&&&&1&-4&3\end{smallmatrix}\right)_{(n+2)\times(n+2)},E=\frac{1}{2}\left(\begin{smallmatrix}0&0&0&&&&&&\\ 0&0&0&&&&&&\\ -\frac{1}{2}&2&-3&2&-\frac{1}{2}&&&&&\\ &&0&0&0&&&\\ &&-\frac{1}{2}&2&-3&2&-\frac{1}{2}&&\\ &&&&0&0&0&\\ &&&&&\ddots&\ddots&\ddots&&\\ &&&&&0&0&0&\\ &&&&&-\frac{1}{2}&2&-3&2&-\frac{1}{2}\\ &&&&&&&0&0&0\\ &&&&&&&0&0&0\end{smallmatrix}\right)_{(n+2)\times(n+2)}.$$ Since $\{v_{i}\}_{i=0}^{n+1}$ is the Lagrangian basis for $V^{h}$, we have $$\bar{S}=\frac{1}{h}(D^{T}WAD+E^{T}WAE).$$ (64) Now consider the one-dimensional Dirichlet boundary value problem: $$\displaystyle-(au^{\prime})^{\prime}=$$ $$\displaystyle f\textrm{ on }[0,1],$$ $$\displaystyle u(0)=\sigma_{0},$$ $$\displaystyle u(1)=\sigma_{1}.$$ Consider the same mesh as above and define $$V^{h}_{0}=\{v\in C^{0}([0,1]):v|_{I_{k}}\in P^{2}(I_{k}),k=0,\dots,N-1;v(0)=v(1)=0\}.$$ Then $\{v_{i}\}_{i=1}^{n}$, with the basis functions $v_{i}$ defined above, is a basis of $V^{h}_{0}$.
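Formula (64) translates directly into code. The following is a minimal sketch in Python/NumPy rather than MATLAB; the function names are ours, not from the paper:

```python
import numpy as np

def build_D_E_W(n, h):
    """Averaged-derivative matrix D, half-jump matrix E and Gauss-Lobatto
    weight matrix W for the C0-P2 element on [0,1], following (61)-(64).
    n (number of interior points) must be odd."""
    m = n + 2
    D = np.zeros((m, m)); E = np.zeros((m, m))
    D[0, 0:3] = [-1.5, 2.0, -0.5]            # one-sided derivative at x_0
    D[m-1, m-3:m] = [0.5, -2.0, 1.5]         # one-sided derivative at x_{n+1}
    for i in range(1, m - 1):
        if i % 2 == 1:                       # cell center: central difference
            D[i, i-1:i+2] = [-0.5, 0.0, 0.5]
        else:                                # interior knot: average and jump
            D[i, i-2:i+3] = [0.25, -1.0, 0.0, 1.0, -0.25]
            E[i, i-2:i+3] = [-0.25, 1.0, -1.5, 1.0, -0.25]
    w = np.full(m, 2.0/3.0); w[1::2] = 4.0/3.0; w[0] = w[-1] = 1.0/3.0
    return D, E, np.diag(w)

def stiffness_neumann(a_vals, h):
    """Assemble S-bar = (D^T W A D + E^T W A E)/h as in (64);
    a_vals are the grid point values of the coefficient a."""
    n = len(a_vals) - 2
    D, E, W = build_D_E_W(n, h)
    A = np.diag(a_vals)
    return (D.T @ W @ A @ D + E.T @ W @ A @ E) / h
```

For any positive grid function $a$ the assembled $\bar{S}$ is symmetric and annihilates constant vectors, as expected for the Neumann problem.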
The one-dimensional version of (43) is to seek $u_{h}\in V^{h}_{0}$ satisfying $$\begin{split}\displaystyle\langle au_{h}^{\prime},v_{i}^{\prime}\rangle_{h}&\displaystyle=\langle f,v_{i}\rangle_{h}-\langle ag_{I}^{\prime},v_{i}^{\prime}\rangle_{h},\quad i=1,2,\dots,n,\\ \displaystyle g_{I}(x)&\displaystyle=\sigma_{0}v_{0}(x)+\sigma_{1}v_{n+1}(x).\end{split}$$ (65) Notice that we can obtain (65) by simply setting $u_{h}(0)=\sigma_{0}$ and $u_{h}(1)=\sigma_{1}$ in (60). So the finite difference implementation of (65) is given as follows: 1. Assemble the $(n+2)\times(n+2)$ stiffness matrix $\bar{S}$ for the homogeneous Neumann problem as in (64). 2. Let $S$ denote the $n\times n$ submatrix $\bar{S}(2:n+1,2:n+1)$, i.e., $[\bar{S}_{ij}]$ for $i,j=2,\cdots,n+1$. 3. Let $\mathbf{l}$ denote the $n\times 1$ submatrix $\bar{S}(2:n+1,1)$ and $\mathbf{r}$ denote the $n\times 1$ submatrix $\bar{S}(2:n+1,n+2)$, which correspond to $v_{0}(x)$ and $v_{n+1}(x)$. 4. Let $\mathbf{u}=\begin{bmatrix}u_{1}&u_{2}&\cdots&u_{n}\end{bmatrix}^{T}$ and $\mathbf{f}=\begin{bmatrix}f_{1}&f_{2}&\cdots&f_{n}\end{bmatrix}^{T}$. Define $\mathbf{w}=\begin{bmatrix}\frac{4}{3},\frac{2}{3},\frac{4}{3},\frac{2}{3},\dots,\frac{2}{3},\frac{4}{3}\end{bmatrix}$ as a column vector of size $n$. The scheme (65) can be implemented as $$S\mathbf{u}=h\,\mathrm{diag}(\mathbf{w})\mathbf{f}-\sigma_{0}\mathbf{l}-\sigma_{1}\mathbf{r}.$$ 7.2 Notations and tools for the two-dimensional case We will need two operators: • Kronecker product of two matrices: if $A$ is $m\times n$ and $B$ is $p\times q$, then $A\otimes B$ is the $mp\times nq$ matrix given by $$A\otimes B=\begin{pmatrix}a_{11}B&\cdots&a_{1n}B\\ \vdots&\ddots&\vdots\\ a_{m1}B&\cdots&a_{mn}B\end{pmatrix}.$$ • For an $m\times n$ matrix $X$, $vec(X)$ denotes the vectorization of the matrix $X$, rearranging $X$ into a vector column by column. The following properties will be used: 1. $(A\otimes B)(C\otimes D)=AC\otimes BD$. 2. $(A\otimes B)^{-1}=A^{-1}\otimes B^{-1}$. 3.
$(B^{T}\otimes A)vec(X)=vec(AXB)$. 4. $(A\otimes B)^{T}=A^{T}\otimes B^{T}.$ Consider a uniform grid $(x_{i},y_{j})$ for a rectangular domain $\bar{\Omega}=[0,1]\times[0,1]$ where $x_{i}=ih_{x}$, $i=0,1,\dots,n_{x}+1$, $h_{x}=\frac{1}{n_{x}+1}$ and $y_{j}=jh_{y}$, $j=0,1,\dots,n_{y}+1$, $h_{y}=\frac{1}{n_{y}+1}$. Assume $n_{x}$ and $n_{y}$ are odd and let $N_{x}=\frac{n_{x}+1}{2}$ and $N_{y}=\frac{n_{y}+1}{2}$. We consider rectangular cells $e_{kl}=[x_{2k},x_{2k+2}]\times[y_{2l},y_{2l+2}]$ for $k=0,\dots,N_{x}-1$ and $l=0,\dots,N_{y}-1$ as a finite element mesh for the $Q^{2}$ basis. Define $$V^{h}=\{v\in C^{0}(\Omega):v|_{e_{kl}}\in Q^{2}(e_{kl}),k=0,\dots,N_{x}-1,l=0,\dots,N_{y}-1\},$$ $$V^{h}_{0}=\{v\in C^{0}(\Omega):v|_{e_{kl}}\in Q^{2}(e_{kl}),k=0,\dots,N_{x}-1,l=0,\dots,N_{y}-1;v|_{\partial\Omega}\equiv 0\}.$$ For the coefficients $\mathbf{a}(x,y)=\begin{pmatrix}a^{11}&a^{12}\\ a^{21}&a^{22}\end{pmatrix}$, $\mathbf{b}=[b^{1}\quad b^{2}]$ and $c$ in the elliptic operator (5), consider their grid point values in the following form: $$\displaystyle A^{kl}=\begin{pmatrix}a_{00}&a_{01}&\dots&a_{0,n_{x}+1}\\ a_{10}&a_{11}&\dots&a_{1,n_{x}+1}\\ \vdots&\vdots&&\vdots\\ a_{n_{y}+1,0}&a_{n_{y}+1,1}&\dots&a_{n_{y}+1,n_{x}+1}\end{pmatrix}_{(n_{y}+2)\times(n_{x}+2)},\quad a_{ij}=a^{kl}(x_{j},y_{i}),\quad k,l=1,2,$$ $$\displaystyle B^{m}=\begin{pmatrix}b_{00}&b_{01}&\dots&b_{0,n_{x}+1}\\ b_{10}&b_{11}&\dots&b_{1,n_{x}+1}\\ \vdots&\vdots&&\vdots\\ b_{n_{y}+1,0}&b_{n_{y}+1,1}&\dots&b_{n_{y}+1,n_{x}+1}\end{pmatrix}_{(n_{y}+2)\times(n_{x}+2)},\quad b_{ij}=b^{m}(x_{j},y_{i}),\quad m=1,2,$$ $$\displaystyle C=\begin{pmatrix}c_{00}&c_{01}&\dots&c_{0,n_{x}+1}\\ c_{10}&c_{11}&\dots&c_{1,n_{x}+1}\\ \vdots&\vdots&&\vdots\\ c_{n_{y}+1,0}&c_{n_{y}+1,1}&\dots&c_{n_{y}+1,n_{x}+1}\end{pmatrix}_{(n_{y}+2)\times(n_{x}+2)},\quad c_{ij}=c(x_{j},y_{i}).$$ Let $diag(\mathbf{x})$ denote a diagonal matrix with the vector $\mathbf{x}$ as diagonal entries and define
$$\bar{W}_{x}=diag\begin{pmatrix}\frac{1}{3},\frac{4}{3},\frac{2}{3},\frac{4}{3}% ,\frac{2}{3},\dots,\frac{2}{3},\frac{4}{3},\frac{1}{3}\end{pmatrix}_{(n_{x}+2)% \times(n_{x}+2)},$$ $$\bar{W}_{y}=diag\begin{pmatrix}\frac{1}{3},\frac{4}{3},\frac{2}{3},\frac{4}{3}% ,\frac{2}{3},\dots,\frac{2}{3},\frac{4}{3},\frac{1}{3}\end{pmatrix}_{(n_{y}+2)% \times(n_{y}+2)},$$ $${W}_{x}=diag\begin{pmatrix}\frac{4}{3},\frac{2}{3},\frac{4}{3},\frac{2}{3},% \dots,\frac{2}{3},\frac{4}{3}\end{pmatrix}_{n_{x}\times n_{x}},{W}_{y}=diag% \begin{pmatrix}\frac{4}{3},\frac{2}{3},\frac{4}{3},\frac{2}{3},\dots,\frac{2}{% 3},\frac{4}{3}\end{pmatrix}_{n_{y}\times n_{y}}.$$ Let $s=x$ or $y$, we define the $D$ and $E$ matrices with dimension ${(n_{s}+2)\times(n_{s}+2)}$ for each variable: $$\displaystyle D_{s}=\frac{1}{2}\left(\begin{smallmatrix}-3&4&-1&&&&&&\\ -1&0&1&&&&&&\\ \frac{1}{2}&-2&0&2&-\frac{1}{2}&&&&&\\ &&-1&0&1&&&\\ &&\frac{1}{2}&-2&0&2&-\frac{1}{2}&&\\ &&&&-1&0&1&\\ &&&&&\ddots&\ddots&\ddots&&\\ &&&&&-1&0&1&\\ &&&&&\frac{1}{2}&-2&0&2&-\frac{1}{2}\\ &&&&&&&-1&0&1\\ &&&&&&&1&-4&3\end{smallmatrix}\right),\quad E_{s}=\frac{1}{2}\left(\begin{% smallmatrix}0&0&0&&&&&&\\ 0&0&0&&&&&&\\ -\frac{1}{2}&2&-3&2&-\frac{1}{2}&&&&&\\ &&0&0&0&&&\\ &&-\frac{1}{2}&2&-3&2&-\frac{1}{2}&&\\ &&&&0&0&0&\\ &&&&&\ddots&\ddots&\ddots&&\\ &&&&&0&0&0&\\ &&&&&-\frac{1}{2}&2&-3&2&-\frac{1}{2}\\ &&&&&&&0&0&0\\ &&&&&&&0&0&0\end{smallmatrix}\right).$$ Define an inflation operator $Infl:\mathbbm{R}^{n_{y}\times n_{x}}\longrightarrow\mathbbm{R}^{(n_{y}+2)% \times(n_{x}+2)}$ by adding zeros: $$Infl(U)=\begin{pmatrix}0&\cdots&0\\ \vdots&U&\vdots\\ 0&\cdots&0\\ \end{pmatrix}_{(n_{y}+2)\times(n_{x}+2)}$$ and its matrix representation is given as $\tilde{I}_{x}\otimes\tilde{I}_{y}$ where $$\tilde{I}_{x}=\begin{pmatrix}\mathbf{0}\\ I_{n_{x}\times n_{x}}\\ \mathbf{0}\end{pmatrix}_{(n_{x}+2)\times n_{x}},\tilde{I}_{y}=\begin{pmatrix}% \mathbf{0}\\ I_{n_{y}\times n_{y}}\\ \mathbf{0}\end{pmatrix}_{(n_{y}+2)\times n_{y}}.$$ Its 
adjoint is a restriction operator $Res:\mathbbm{R}^{(n_{y}+2)\times(n_{x}+2)}\longrightarrow\mathbbm{R}^{n_{y}\times n_{x}}$ given by $$Res(X)=X(2:n_{y}+1,2:n_{x}+1),\quad\forall X\in\mathbbm{R}^{(n_{y}+2)\times(n_{x}+2)},$$ and its matrix representation is $\tilde{I}_{x}^{T}\otimes\tilde{I}_{y}^{T}.$ 7.3 Two-dimensional case For $\bar{\Omega}=[0,1]^{2}$ we first consider an elliptic equation with homogeneous Neumann boundary condition: $$\displaystyle-\nabla\cdot(\mathbf{a}\nabla u)+\mathbf{b}\cdot\nabla u+cu=$$ $$\displaystyle f\textrm{ on }\Omega,$$ (66) $$\displaystyle\mathbf{a}\nabla u\cdot\mathbf{n}=$$ $$\displaystyle 0\textrm{ on }\partial\Omega.$$ (67) The variational form is to find $u\in H^{1}(\Omega)$ satisfying $$A(u,v)=(f,v),\quad\forall v\in H^{1}(\Omega).$$ (68) The $C^{0}$-$Q^{2}$ finite element method with $3\times 3$ Gauss-Lobatto quadrature is to find $u_{h}\in V^{h}$ satisfying $$\langle\mathbf{a}\nabla u_{h},\nabla v_{h}\rangle_{h}+\langle\mathbf{b}\cdot\nabla u_{h},v_{h}\rangle_{h}+\langle cu_{h},v_{h}\rangle_{h}=\langle f,v_{h}\rangle_{h},\quad\forall v_{h}\in V^{h}.$$ (69) Let $\bar{U}$ be an $(n_{y}+2)\times(n_{x}+2)$ matrix such that its $(j,i)$-th entry is $\bar{U}(j,i)=u_{h}(x_{i-1},y_{j-1})$, $i=1,\dots,n_{x}+2$, $j=1,\dots,n_{y}+2$. Let $\bar{F}$ be an $(n_{y}+2)\times(n_{x}+2)$ matrix such that its $(j,i)$-th entry is $\bar{F}(j,i)=f(x_{i-1},y_{j-1})$.
Then the matrix form of (69) is $$\bar{S}vec(\bar{U})=\bar{M}vec(\bar{F}),\quad\bar{M}=h_{x}h_{y}\bar{W}_{x}\otimes\bar{W}_{y},\quad\bar{S}=\sum_{k,l=1}^{2}S_{a}^{kl}+\sum_{m=1}^{2}S_{b}^{m}+S_{c},$$ (70) where $$\displaystyle S_{a}^{11}=\frac{h_{y}}{h_{x}}(D_{x}^{T}\otimes I_{y})diag(vec(\bar{W}_{y}A^{11}\bar{W}_{x}))(D_{x}\otimes I_{y})+\frac{h_{y}}{h_{x}}(E_{x}^{T}\otimes I_{y})diag(vec(\bar{W}_{y}A^{11}\bar{W}_{x}))(E_{x}\otimes I_{y}),$$ $$\displaystyle S_{a}^{12}=(D_{x}^{T}\otimes I_{y})diag(vec(\bar{W}_{y}A^{12}\bar{W}_{x}))(I_{x}\otimes D_{y})+(E_{x}^{T}\otimes I_{y})diag(vec(\bar{W}_{y}A^{12}\bar{W}_{x}))(I_{x}\otimes E_{y}),$$ $$\displaystyle S_{a}^{21}=(I_{x}\otimes D_{y}^{T})diag(vec(\bar{W}_{y}A^{21}\bar{W}_{x}))(D_{x}\otimes I_{y})+(I_{x}\otimes E_{y}^{T})diag(vec(\bar{W}_{y}A^{21}\bar{W}_{x}))(E_{x}\otimes I_{y}),$$ $$\displaystyle S_{a}^{22}=\frac{h_{x}}{h_{y}}(I_{x}\otimes D_{y}^{T})diag(vec(\bar{W}_{y}A^{22}\bar{W}_{x}))(I_{x}\otimes D_{y})+\frac{h_{x}}{h_{y}}(I_{x}\otimes E_{y}^{T})diag(vec(\bar{W}_{y}A^{22}\bar{W}_{x}))(I_{x}\otimes E_{y}),$$ $$\displaystyle S_{b}^{1}=h_{y}diag(vec(\bar{W}_{y}B^{1}\bar{W}_{x}))(D_{x}\otimes I_{y}),\quad S_{b}^{2}=h_{x}diag(vec(\bar{W}_{y}B^{2}\bar{W}_{x}))(I_{x}\otimes D_{y}),$$ $$\displaystyle S_{c}=h_{x}h_{y}diag(vec(\bar{W}_{y}C\bar{W}_{x})).$$ Now consider the scheme (43) for nonhomogeneous Dirichlet boundary conditions. Its numerical solution can be represented as a matrix $U$ of size $n_{y}\times n_{x}$ with $(j,i)$-th entry $U(j,i)=u_{h}(x_{i},y_{j})$ for $i=1,\cdots,n_{x}$; $j=1,\cdots,n_{y}$. Similar to the one-dimensional case, its stiffness matrix can be obtained as a submatrix of $\bar{S}$ in (70).
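The Kronecker product and vectorization identities of Section 7.2, on which the derivation of (70) rests, can be checked numerically. A small sketch in Python/NumPy (note that $vec$ corresponds to column-major flattening, `order='F'`; the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)); B = rng.standard_normal((5, 6))
C = rng.standard_normal((4, 2)); K = rng.standard_normal((6, 7))
X = rng.standard_normal((4, 5))
vec = lambda M: M.flatten(order='F')   # vec() stacks columns, as in Section 7.2

# 1. (A ⊗ B)(C ⊗ K) = AC ⊗ BK
assert np.allclose(np.kron(A, B) @ np.kron(C, K), np.kron(A @ C, B @ K))
# 2. (P ⊗ Q)^{-1} = P^{-1} ⊗ Q^{-1} for invertible square P, Q
P = rng.standard_normal((3, 3)); Q = rng.standard_normal((4, 4))
assert np.allclose(np.linalg.inv(np.kron(P, Q)),
                   np.kron(np.linalg.inv(P), np.linalg.inv(Q)))
# 3. (B^T ⊗ A) vec(X) = vec(A X B)
assert np.allclose(np.kron(B.T, A) @ vec(X), vec(A @ X @ B))
# 4. (A ⊗ B)^T = A^T ⊗ B^T
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
```

Property 3 is the one used repeatedly below: with column-major $vec$, a Kronecker factor acting on the $x$ index becomes right-multiplication on the matrix of grid values.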
Let $\bar{G}$ be an $(n_{y}+2)\times(n_{x}+2)$ matrix with $(j,i)$-th entry $\bar{G}(j,i)=\bar{g}(x_{i-1},y_{j-1})$, where $\bar{g}$ extends the boundary data $g$ by zero: $$\bar{g}(x,y)=\begin{cases}0,&\mbox{if}\quad(x,y)\in(0,1)\times(0,1),\\ g(x,y),&\mbox{if}\quad(x,y)\in\partial\Omega.\\ \end{cases}$$ In particular, $\bar{G}(j+1,i+1)=0$ for $j=1,\dots,n_{y}$, $i=1,\dots,n_{x}$. Let $F$ be a matrix of size $n_{y}\times n_{x}$ with $(j,i)$-th entry $F(j,i)=f(x_{i},y_{j})$ for $i=1,\cdots,n_{x}$; $j=1,\cdots,n_{y}$. Then the scheme (43) becomes $$(\tilde{I}_{x}^{T}\otimes\tilde{I}_{y}^{T})\bar{S}(\tilde{I}_{x}\otimes\tilde{I}_{y})vec(U)=h_{x}h_{y}(W_{x}\otimes W_{y})vec(F)-(\tilde{I}_{x}^{T}\otimes\tilde{I}_{y}^{T})\bar{S}vec(\bar{G}).$$ (71) Even though the stiffness matrix is given as $S=(\tilde{I}_{x}^{T}\otimes\tilde{I}_{y}^{T})\bar{S}(\tilde{I}_{x}\otimes\tilde{I}_{y})$, $S$ should be implemented as a linear operator in iterative linear system solvers. For example, the matrix vector multiplication $(\tilde{I}_{x}^{T}\otimes\tilde{I}_{y}^{T})S^{11}_{a}(\tilde{I}_{x}\otimes\tilde{I}_{y})vec(U)$ is equivalent to the following linear operator from $\mathbbm{R}^{n_{y}\times n_{x}}$ to $\mathbbm{R}^{n_{y}\times n_{x}}$: $$\frac{h_{y}}{h_{x}}\tilde{I}_{y}^{T}\left\{I_{y}\left([\bar{W}_{y}A^{11}\bar{W}_{x}]\circ[I_{y}(\tilde{I}_{y}U\tilde{I}^{T}_{x})D_{x}^{T}]\right)D_{x}+I_{y}\left([\bar{W}_{y}A^{11}\bar{W}_{x}]\circ[I_{y}(\tilde{I}_{y}U\tilde{I}^{T}_{x})E_{x}^{T}]\right)E_{x}\right\}\tilde{I}_{x},$$ where $\circ$ is the Hadamard product (i.e., entrywise multiplication).
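The matrix-free rewriting above rests on the identity $diag(vec(M))(D_{x}\otimes I_{y})vec(U)=vec(M\circ(UD_{x}^{T}))$, applied with $M=\bar{W}_{y}A^{11}\bar{W}_{x}$. A small numerical check of this building block (a sketch; the matrix sizes and entries are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 4, 6
M = rng.standard_normal((ny, nx))     # plays the role of W̄_y A^{11} W̄_x
U = rng.standard_normal((ny, nx))     # grid values of the unknown
Dx = rng.standard_normal((nx, nx))    # any differentiation matrix in x
vec = lambda Z: Z.flatten(order='F')  # column-major flattening = vec()

lhs = np.diag(vec(M)) @ np.kron(Dx, np.eye(ny)) @ vec(U)
rhs = vec(M * (U @ Dx.T))             # Hadamard product with U D_x^T
assert np.allclose(lhs, rhs)
```

Hence each term of $\bar{S}$ can be applied in $\mathcal{O}(n_{x}n_{y}(n_{x}+n_{y}))$ operations without ever forming a Kronecker product.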
7.4 The Laplacian case For the one-dimensional constant-coefficient case with homogeneous Dirichlet boundary condition, the scheme can be written as a classical finite difference scheme $H\mathbf{u}=\mathbf{f}$ with $$H=M^{-1}S=\frac{1}{h^{2}}\left(\begin{smallmatrix}2&-1&&&&&\\ -2&\frac{7}{2}&-2&\frac{1}{4}&&&\\ &-1&2&-1&&&\\ &\frac{1}{4}&-2&\frac{7}{2}&-2&\frac{1}{4}&\\ &&&-1&2&-1&\\ &&&&\ddots&\ddots&\\ &&&\frac{1}{4}&-2&\frac{7}{2}&-2\\ &&&&&-1&2\\ \end{smallmatrix}\right).$$ In other words, if $x_{i}$ is a cell center, the scheme is $$\frac{-u_{i-1}+2u_{i}-u_{i+1}}{h^{2}}=f_{i},$$ and if $x_{i}$ is a knot away from the boundary, the scheme is $$\frac{u_{i-2}-8u_{i-1}+14u_{i}-8u_{i+1}+u_{i+2}}{4h^{2}}=f_{i}.$$ It is straightforward to verify that the local truncation error is only second order. For the two-dimensional Laplacian case with homogeneous Dirichlet boundary condition, the scheme can be rewritten as $$[(H_{x}\otimes I_{y})+(I_{x}\otimes H_{y})]vec(U)=vec(F),$$ where $H_{x}$ and $H_{y}$ are the same $H$ matrix above with size $n_{x}\times n_{x}$ and $n_{y}\times n_{y}$ respectively. The inverse of $(H_{x}\otimes I_{y})+(I_{x}\otimes H_{y})$ can be efficiently constructed via the eigen-decomposition of the small matrices $H_{x}$ and $H_{y}$: 1. Compute the eigen-decompositions $H_{x}=T_{x}\Lambda_{x}T_{x}^{-1}$ and $H_{y}=T_{y}\Lambda_{y}T_{y}^{-1}$. 2. The properties of the Kronecker product imply that $$(H_{x}\otimes I_{y})+(I_{x}\otimes H_{y})=(T_{x}\otimes T_{y})(\Lambda_{x}\otimes I_{y}+I_{x}\otimes\Lambda_{y})(T_{x}^{-1}\otimes T_{y}^{-1}),$$ thus $$[(H_{x}\otimes I_{y})+(I_{x}\otimes H_{y})]^{-1}=(T_{x}\otimes T_{y})(\Lambda_{x}\otimes I_{y}+I_{x}\otimes\Lambda_{y})^{-1}(T_{x}^{-1}\otimes T_{y}^{-1}).$$ 3. It is nontrivial to determine whether $H$ is diagonalizable. In all our numerical tests, $H$ has no repeated eigenvalues.
Assuming $\Lambda_{x}$ and $\Lambda_{y}$ are diagonal matrices, the matrix vector multiplication $[(H_{x}\otimes I_{y})+(I_{x}\otimes H_{y})]^{-1}vec(F)$ can be implemented as a linear operator on $F$: $$T_{y}([T_{y}^{-1}F(T_{x}^{-1})^{T}]./\Lambda)T_{x}^{T},$$ (72) where $\Lambda$ is a $n_{y}\times n_{x}$ matrix with $(i,j)$-th entry $\Lambda(i,j)=\Lambda_{y}(i,i)+\Lambda_{x}(j,j)$ and $./$ denotes entrywise division for two matrices of the same size. For the 3D Laplacian, the matrix can be represented as $H_{x}\otimes I_{y}\otimes I_{z}+I_{x}\otimes H_{y}\otimes I_{z}+I_{x}\otimes I_{y}\otimes H_{z}$, thus it can be efficiently inverted through the eigen-decomposition of the small matrices $H_{x},H_{y}$ and $H_{z}$ as well. Since the eigen-decompositions of the small matrices $H_{x}$ and $H_{y}$ can be precomputed, and (72) costs only $\mathcal{O}(n^{3})$ for a 2D problem on an $n\times n$ mesh, in practice (72) can be used as a simple preconditioner in conjugate gradient solvers for the following linear system equivalent to (71): $$\frac{1}{h_{x}h_{y}}(W_{x}^{-1}\otimes W_{y}^{-1})(\tilde{I}_{x}^{T}\otimes\tilde{I}_{y}^{T})\bar{S}(\tilde{I}_{x}\otimes\tilde{I}_{y})vec(U)=vec(F)-\frac{1}{h_{x}h_{y}}(W_{x}^{-1}\otimes W_{y}^{-1})(\tilde{I}_{x}^{T}\otimes\tilde{I}_{y}^{T})\bar{S}vec(\bar{G}),$$ even though the multigrid method as reviewed in Xu & Zikatanov (2017) is the optimal solver in terms of computational complexity. 8 Numerical results In this section we show a few numerical tests verifying the accuracy of the scheme (43) implemented as a finite difference scheme on a uniform grid.
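Before turning to the tests, the fast solver (72) can be sketched and verified in isolation. The following Python/NumPy sketch (function names are ours; it assumes the spectrum of $H$ is real and simple, as observed in the numerical tests above) builds the one-dimensional $H$ of Section 7.4 and solves the two-dimensional Laplacian system:

```python
import numpy as np

def h_matrix(n, h):
    """1D constant-coefficient matrix H of Section 7.4 (homogeneous
    Dirichlet, unknowns u_1..u_n, n odd): 3-point scheme at cell centers,
    (u_{i-2}-8u_{i-1}+14u_i-8u_{i+1}+u_{i+2})/(4h^2) at interior knots."""
    H = np.zeros((n, n))
    for i in range(n):                       # row i corresponds to x_{i+1}
        if (i + 1) % 2 == 1:                 # cell center
            sten = {i - 1: -1.0, i: 2.0, i + 1: -1.0}
        else:                                # interior knot
            sten = {i - 2: 0.25, i - 1: -2.0, i: 3.5, i + 1: -2.0, i + 2: 0.25}
        for j, v in sten.items():
            if 0 <= j < n:                   # drop boundary values (u = 0)
                H[i, j] = v
    return H / h**2

def fast_poisson(F, Hx, Hy):
    """Solve (Hx ⊗ Iy + Ix ⊗ Hy) vec(U) = vec(F), i.e. Hy U + U Hx^T = F,
    via the eigen-decomposition formula (72)."""
    lx, Tx = np.linalg.eig(Hx)
    ly, Ty = np.linalg.eig(Hy)
    Lam = ly[:, None] + lx[None, :]          # Λ(i,j) = Λy(i,i) + Λx(j,j)
    V = (np.linalg.solve(Ty, F) @ np.linalg.inv(Tx).T) / Lam
    return np.real(Ty @ V @ Tx.T)
```

For $-\Delta u=f$ with $u=\sin(\pi x)\sin(\pi y)$ the residual of $H_{y}U+UH_{x}^{T}=F$ is at machine level, and the grid error is consistent with fourth order accuracy.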
We first consider the following two dimensional elliptic equation: $$-\nabla\cdot(\mathbf{a}\nabla u)+cu=f\quad\textrm{on }[0,1]\times[0,2],$$ where $\mathbf{a}=\left({\begin{array}[]{cc}a_{11}&a_{12}\\ a_{21}&a_{22}\\ \end{array}}\right)$, $a_{11}=10+30y^{5}+x\cos{y}+y$, $a_{12}=a_{21}=2+0.5(\sin(\pi x)+x^{3})(\sin(\pi y)+y^{3})+\cos(x^{4}+y^{3})$, $a_{22}=10+x^{5}$, $c=1+x^{4}y^{3}$, with the exact solution $$u(x,y)=0.1(\sin(\pi x)+x^{3})(\sin(\pi y)+y^{3})+\cos(x^{4}+y^{3}).$$ The errors at grid points are listed in Table 1 for a purely Dirichlet boundary condition and in Table 2 for a purely Neumann boundary condition. We observe fourth order accuracy in the discrete 2-norm for both tests, even though only $\mathcal{O}(h^{3.5})$ can be proven for the Neumann boundary condition, as discussed in Remark 5.10. Regarding the maximum norm of the superconvergence of function values at Gauss-Lobatto points, one can only prove $\mathcal{O}(h^{3}\log h)$ even for the full finite element scheme (1), since the discrete Green’s function is used; see Chen (2001). Next we consider a three-dimensional problem $-\Delta u=f$ with homogeneous Dirichlet boundary conditions on the cube $[0,1]^{3}$ with the following exact solution: $$u(x,y,z)=\sin(\pi x)\sin(2\pi y)\sin(3\pi z)+(x-x^{3})(y^{2}-y^{4})(z-z^{2}).$$ See Table 3 for the performance of the finite difference scheme. There is no essential difficulty in extending the proof to three dimensions, even though the details are not entirely straightforward; we observe that the scheme is indeed fourth order accurate. The linear system is solved by the eigenvector method shown in Section 7.4. The discrete 2-norm over the set of all grid points $Z_{0}$ is defined as $\|u\|_{2,Z_{0}}=\left[h^{3}\sum_{(x,y,z)\in Z_{0}}|u(x,y,z)|^{2}\right]^{\frac{1}{2}}$.
Last we consider a two dimensional elliptic equation with a convection term, where the coefficient $\mathbf{b}$ is divergence-free, $\nabla\cdot\mathbf{b}=0$: $$-\nabla\cdot(\mathbf{a}\nabla u)+\mathbf{b}\cdot\nabla u+cu=f\quad\textrm{on }[0,1]\times[0,2],$$ where $\mathbf{a}=\left({\begin{array}[]{cc}a_{11}&a_{12}\\ a_{21}&a_{22}\\ \end{array}}\right)$, $a_{11}=100+30y^{5}+x\cos{y}+y$, $a_{12}=a_{21}=2+0.5(\sin(\pi x)+x^{3})(\sin(\pi y)+y^{3})+\cos(x^{4}+y^{3})$, $a_{22}=100+x^{5}$, $\mathbf{b}=\left({\begin{array}[]{c}b_{1}\\ b_{2}\\ \end{array}}\right)$, $b_{1}=\psi_{y}$, $b_{2}=-\psi_{x}$, $\psi=x\exp(x^{2}+y)$, $c=1+x^{4}y^{3}$, with the exact solution $$u(x,y)=0.1(\sin(\pi x)+x^{3})(\sin(\pi y)+y^{3})+\cos(x^{4}+y^{3}).$$ The errors at grid points are listed in Table 4 for Dirichlet boundary conditions. 9 Concluding remarks In this paper we have proven the superconvergence of function values in the simplest finite difference implementation of the $C^{0}$-$Q^{2}$ finite element method for elliptic equations. In particular, the scheme (43) can be easily implemented as a fourth order accurate finite difference scheme, as shown in Section 7. It provides not only a convenient approach for constructing fourth order accurate finite difference schemes but also an efficient implementation of the $C^{0}$-$Q^{2}$ finite element method without losing the superconvergence of function values. In a follow-up paper Li & Zhang (2019a), we will show that a discrete maximum principle can be proven for the scheme (43) solving a variable coefficient Poisson equation. Acknowledgments H. Li and X. Zhang were supported by the NSF grant DMS-1522593. References Bakker (1982) Bakker, M. (1982) A note on $C^{0}$ Galerkin methods for two-point boundary problems. Numerische Mathematik, 38, 447–453. Chen (1979) Chen, C. (1979) Superconvergent points of Galerkin’s method for two point boundary value problems. Numerical Mathematics A Journal of Chinese Universities, 1, 73–79. Chen (1981) Chen, C.
(1981) Superconvergence of finite element solutions and their derivatives. Numerical Mathematics A Journal of Chinese Universities, 3, 118–125. Chen (2001) Chen, C. (2001) Structure theory of superconvergence of finite elements (In Chinese). Hunan Science and Technology Press, Changsha. Ciarlet (1991) Ciarlet, P. G. (1991) Basic error estimates for elliptic problems. Handbook of Numerical Analysis, 2, 17–351. Ciarlet (2002) Ciarlet, P. G. (2002) The Finite Element Method for Elliptic Problems. Society for Industrial and Applied Mathematics. Ciarlet & Raviart (1972) Ciarlet, P. G. & Raviart, P.-A. (1972) The combined effect of curved boundaries and numerical integration in isoparametric finite element methods. The mathematical foundations of the finite element method with applications to partial differential equations. Elsevier, pp. 409–474. Douglas et al. (1974) Douglas, J., Dupont, T. & Wheeler, M. F. (1974) An $L^{\infty}$ estimate and a superconvergence result for a Galerkin method for elliptic equations based on tensor products of piecewise polynomials. Grisvard (2011) Grisvard, P. (2011) Elliptic problems in nonsmooth domains, vol. 69. SIAM. Huang & Xu (2008) Huang, Y. & Xu, J. (2008) Superconvergence of quadratic finite elements on mildly structured grids. Mathematics of computation, 77, 1253–1268. Lesaint & Zlamal (1979) Lesaint, P. & Zlamal, M. (1979) Superconvergence of the gradient of finite element solutions. RAIRO. Analyse numérique, 13, 139–166. Li & Zhang (2019a) Li, H. & Zhang, X. (2019a) On the monotonicity and discrete maximum principle of the finite difference implementation of $C^{0}$-$Q^{2}$ finite element method. in preparation. Li & Zhang (2019b) Li, H. & Zhang, X. (2019b) Superconvergence of $C^{0}$-$Q^{k}$ finite element method for elliptic equations with approximated coefficients. arXiv preprint arXiv:1902.00945. Lin et al. (1991) Lin, Q., Yan, N. & Zhou, A. (1991) A rectangle test for interpolated finite elements. Proc. Sys. Sci. and Sys. 
Eng. (Hong Kong), Great Wall Culture Publ. Co., pp. 217–229. Lin & Yan (1996) Lin, Q. & Yan, N. (1996) Construction and Analysis for Efficient Finite Element Method (In Chinese). Hebei University Press. Savaré (1998) Savaré, G. (1998) Regularity results for elliptic equations in Lipschitz domains. Journal of Functional Analysis, 152, 176–201. Wahlbin (2006) Wahlbin, L. (2006) Superconvergence in Galerkin finite element methods. Springer. Whiteman (1975) Whiteman, J. (1975) Lagrangian finite element and finite difference methods for Poisson problems. Numerische Behandlung von Differentialgleichungen. Springer, pp. 331–355. Xu & Zikatanov (2017) Xu, J. & Zikatanov, L. (2017) Algebraic multigrid methods. Acta Numerica, 26, 591–721.
Perturbative expansions of Rényi relative divergences and holography Tomonori Ugajin Okinawa Institute of Science and Technology, Tancha, Kunigami gun, Onna son, Okinawa 1919-1 In this paper, we develop a novel way to perturbatively calculate Rényi relative divergences $D_{\gamma}(\rho||\sigma)={\rm tr}\rho^{\gamma}\sigma^{1-\gamma}$ and related quantities without using the replica trick or analytic continuation. We explicitly determine the form of the perturbative term at any order by an integral along the modular flow of the unperturbed state. By applying the prescription to a class of reduced density matrices in conformal field theory, we find that the second order term of a certain linear combination of the divergences has a holographic expression in terms of the bulk symplectic form, which is a one-parameter generalization of the statement “Fisher information = bulk canonical energy”. Contents 1 Introduction 2 New expansion formula using the resolvent trick 3 Some explicit checks 3.1 First order term $T^{(1)}_{\gamma}(\delta\rho)$ 3.2 Second order term $T^{(2)}_{\gamma}(\delta\rho)$ 3.2.1 Checks 4 Expressions of perturbative terms in terms of the vacuum modular flow 4.1 Doing the Fourier transformation 4.2 Choice of the integration contour: The quadratic $n=2$ term 4.3 Contour choice: $n\geq 3$ terms 5 Applications to conformal field theory 5.1 Set up 5.2 The perturbative expression of $T_{\gamma}(\rho)$ 5.3 Bringing the $n=2$ term to the standard form 6 Expansion of Petz’s quasi entropy $D_{\gamma}(\rho||\sigma)$ 6.1 Expressing $X_{\gamma}(\delta\rho)$ and $Y_{\gamma}(\delta\rho)$ by modular flow integrals 6.2 Holographic expressions of $X_{\gamma}(\delta\rho)$ and $Y_{\gamma}(\delta\rho)$ 6.2.1 Set up 6.2.2 Holographic rewritings 7 Conclusions A The calculation of $\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$ B Fixing the contour of the $n=2$ term C Simplifying $T^{(2)}_{\gamma}(\delta\rho)$ D Direct Fourier transformation E Details of the holographic rewriting 1
Introduction The concept of entanglement is one of the keys to understanding how holography works. This idea is supported by the Ryu-Takayanagi formula [1, 2] and its covariant generalization [3], which relate the area of particular extremal surfaces in the bulk to the entanglement entropies in the dual conformal field theory (CFT). As a concrete and quantitative application of this entanglement vs gravity program, recently it has been shown that bulk gravitational dynamics can be read off from the entanglement structure of states in the dual CFT. In this line of developments, it was initially observed that the so-called first law of entanglement [4] is related to the linearized Einstein equations in the bulk [5, 6]. Consider starting from the vacuum reduced density matrix $\rho_{0}$ and perturbing it slightly, $\rho_{0}\rightarrow\rho_{0}+\delta\rho$, in a CFT. The change of the entanglement entropy $\delta S$ obeys the first law of entanglement, $\delta S={\rm tr}\left[K\delta\rho\right]$, where $K=-\log\rho_{0}$ is called the modular Hamiltonian of $\rho_{0}$. For subsystems of a special type, the vacuum modular Hamiltonian has a local expression given by an integral of the energy density over the subsystem. There is a natural bulk counterpart of the vacuum modular Hamiltonian, namely, the generator of time translation of a topological black hole with hyperbolic horizon, whose Bekenstein-Hawking entropy gives the CFT vacuum entanglement entropy [7]. The first law of entanglement is related to the first law of thermodynamics applied to the topological black hole, and this enabled us to read off the linearized equations of motion. Recently this nice story at first order in the perturbation $\delta\rho$ has been generalized to quadratic order.
It was noticed that, in CFT, the second order change of the entanglement entropy can be concisely summarized as an integral of correlation functions along the flow generated by the vacuum modular Hamiltonian $K=-\log\rho_{0}$ on the subsystem [8, 9, 10]. This was further extended to arbitrary order in $\delta\rho$, and some technical issues were pointed out [11]. It was also recognized that by rewriting the CFT answer in terms of bulk variables, we naturally identify it with the bulk canonical energy [12], which was first found holographically in [13]. This makes it possible to read off the bulk equations of motion beyond the linearized level. Given these developments, it is now natural to generalize this story to other quantum information theoretic quantities. In particular we would like to find a quantity which admits a nice perturbative expansion in CFT and has a dual holographic expression. Natural candidates with these properties are those involving powers of reduced density matrices, for example ${\rm tr}\;\rho^{\gamma}$, which is related to the Rényi entropy. Conventionally, a Rényi type quantity like ${\rm tr}\;\rho^{\gamma}$ has been computed by the replica trick. In this trick, we first regard the Rényi index as a positive integer $\gamma=n$, and represent the quantity as a path integral on a branched space $\Sigma_{n}$ which is prepared by gluing $n$ copies of the original space with cuts along the subsystems. After the computation of the path integral, we then analytically continue the integer $n$ to an arbitrary number $\gamma$. However, this trick has several disadvantages, even when we compute the quantity perturbatively. First of all, the analytic continuation is usually difficult to perform.
For example, when we perturbatively expand ${\rm tr}\;\rho^{n}$ for $\rho=\rho_{0}+\delta\rho$, at quadratic order we encounter the following sum: $$\sum_{k,m}{\rm tr}\left[\rho_{0}^{k-1}\delta\rho\rho_{0}^{m-k-1}\delta\rho\rho_{0}^{n-m}\right].$$ (1) In order to analytically continue it in $n$ we first need to perform this sum to get a closed expression. Although for special cases we can do this, in general it is difficult. In addition to this, we do not know how to perform the analogous sums for the cubic and higher order terms. Second, there are ambiguities in the analytic continuation. According to Carlson’s theorem, we need to correctly specify the behavior of ${\rm tr}\;\rho^{n}$ on a certain region of the complex $n$ plane in order to fix the ambiguities. In order to overcome these difficulties, in this paper we develop a new way to perturbatively calculate Rényi type quantities without using the replica trick or analytic continuation. The idea we employ is simple, namely writing ${\rm tr}\;\rho^{\gamma}$ as a contour integral, $${\rm tr}\;\rho^{\gamma}=\int_{C}\frac{dz}{2\pi i}\;z^{\gamma}\;{\rm tr}\frac{1}{z-\rho},$$ (2) where the contour $C$ is chosen so that it encloses all the poles of the integrand but avoids the branch cut coming from $z^{\gamma}$. We refer to [14, 15] for discussions of this representation. By expanding the denominator of the integrand for perturbative states $\rho=\rho_{0}+\delta\rho$, we can systematically write each term of the perturbative expansion as an integral along the modular flow of the reference state $\rho_{0}$. If we apply this expansion to a class of perturbative excited states above the vacuum in a $d$ dimensional CFT, we can write each term as an integral of a correlation function $\langle\cdots\rangle_{\Sigma_{\gamma}}$ on the branched space $\Sigma_{\gamma}=S^{1}_{\gamma}\times H^{d-1}$ along the modular flow generated by $\rho_{0}$.
Here, $S^{1}_{\gamma}$ denotes the Euclidean time circle with $2\pi\gamma$ periodicity, and $H^{d-1}$ is $d-1$ dimensional hyperbolic space. Of course, the CFT correlation functions on $\Sigma_{\gamma}$ are difficult to calculate when $d>2$, as the branched space $\Sigma_{\gamma}$ is not conformally related to $d$ dimensional flat space, and even two point functions are highly theory dependent. However, by the same trick, we can similarly expand Petz's quasi entropy [16], defined by $$D_{\gamma}(\rho||\sigma)={\rm tr}\;\rho^{\gamma}\sigma^{1-\gamma}.$$ (3) This quantity can be regarded as a one parameter generalization of relative entropy, $$\frac{d}{d\gamma}D_{\gamma}(\rho||\sigma)\big{|}_{\gamma=1}=S(\rho||\sigma)={\rm tr}\rho\log\rho-{\rm tr}\rho\log\sigma.$$ (4) We also refer to recent studies on Rényi generalizations of relative entropy [17, 18, 19, 20, 21, 22] as well as perturbative calculations of relative entropy [23, 11, 24, 25, 26, 27, 28, 29]. One notable feature of this Rényi relative divergence is that each term of its perturbative expansion involves a correlator on the regular space $\Sigma_{1}$, which is conformally related to flat space. This implies that the first few terms of the expansion are almost fixed by conformal symmetry and are independent of the CFT we consider. In particular, this property enables us to holographically write the quadratic terms of certain linear combinations of $D_{\gamma}(\rho||\sigma)$, which we will denote by $X_{\gamma}(\delta\rho),Y_{\gamma}(\delta\rho)$, in terms of the bulk symplectic form, without the details of the bulk to boundary dictionary. This generalizes the statement “quantum Fisher information = bulk canonical energy”. See also [30, 31] for recent discussions of the bulk symplectic form. This paper is organized as follows. In section 2, we explain how to expand $T_{\gamma}(\rho)={\rm tr}\rho^{\gamma}$ using the formula (2).
We first derive expressions of the perturbative terms as integrals with respect to the entanglement spectrum of the unperturbed state. In section 3, we check these expressions against known results. In section 4, we express each term of the perturbative expansion as an integral along the modular flow of the unperturbed state, by Fourier transforming the spectral representation of the kernel derived in section 2. In section 5, we apply the formalism to reduced density matrices in conformal field theory and write these perturbative terms in terms of correlation functions in CFT. In section 6, we discuss a similar expansion of Petz's quasi entropy and derive a holographic expression of the second order term. 2 New expansion formula using the resolvent trick In the first few sections we focus on the Rényi type quantity $$T_{\gamma}(\rho)={\rm tr}\;\rho^{\gamma}.$$ (5) In the discussion we do not assume the index $\gamma$ to be a positive integer $\gamma\in\mathbb{Z}_{+}$, for which one could use the replica trick. Although we will apply the prescription developed here to conformal field theory, the discussions in this section and the next few ones are applicable to any density matrix of any theory. When the density matrix $\rho$ is sufficiently close to the reference state $\rho_{0}$, i.e., $\rho=\rho_{0}+\delta\rho$, we can expand $T_{\gamma}(\rho)$ as a power series in $\delta\rho$, $$T_{\gamma}(\rho)=T_{\gamma}(\rho_{0})+\sum^{\infty}_{n=1}T^{(n)}_{\gamma}(\delta\rho),$$ (6) and decompose each term in the perturbative expansion in terms of the spectrum of the reference state $\rho_{0}$. Let us first do this. We begin the discussion by writing $T_{\gamma}(\rho)$ using the resolvent of $\rho$, $${\rm tr}\;\rho^{\gamma}=\int_{C}\frac{dz}{2\pi i}\;z^{\gamma}\;{\rm tr}\frac{1}{z-\rho},$$ (7) where the contour $C$ encircles the interval $[\rho_{\min},1]$ in the $z$ plane, but not $z=0$, so that it picks up the contributions of all eigenvalues of $\rho$.
$\rho_{\min}$ is the smallest eigenvalue of the density matrix $\rho$ (see figure 1). When $\rho$ is a reduced density matrix of a quantum field theory, we need to put a UV cutoff $\varepsilon$ so that the density matrix $\rho$ has a minimum eigenvalue; after the calculation we send $\varepsilon\rightarrow 0$. We will explicitly see that only the unperturbed term $T_{\gamma}(\rho_{0})$ depends on the UV cutoff and the remaining terms do not. Therefore we can uniquely fix the form of $T^{(n)}_{\gamma}(\delta\rho),\;n\geq 1$. When $\rho=\rho_{0}+\delta\rho$, the resolvent can be easily expanded, $$\frac{1}{z-\rho}=\sum^{\infty}_{n=0}R_{n}(\delta\rho),\quad R_{n}(\delta\rho)=\left(\frac{1}{(z-\rho_{0})}\delta\rho\right)^{n}\frac{1}{(z-\rho_{0})}.$$ (8) By inserting the complete set of eigenstates $|\omega_{i}\rangle$ of the reference state $\rho_{0}$, $$\int d\omega_{i}|\omega_{i}\rangle\langle\omega_{i}|=1,\quad\rho_{0}|\omega_{i}\rangle=e^{-2\pi\omega_{i}}|\omega_{i}\rangle,$$ (9) to the left of the $i$-th term, taking the trace, and evaluating $1/(z-\rho_{0})$ from the left, we have $$\displaystyle{\rm tr}\left[R_{n}(\delta\rho)\right]$$ $$\displaystyle=\int\prod^{n}_{i=1}d\omega_{i}\prod^{n-1}_{i=1}\frac{1}{z-e^{-2\pi\omega_{i}}}\prod^{n-1}_{k=1}\langle\omega_{k}|\delta\rho|\omega_{k+1}\rangle\langle\omega_{n}|\frac{1}{(z-\rho_{0})}\delta\rho\frac{1}{(z-\rho_{0})}|\omega_{1}\rangle$$ $$\displaystyle=\int\prod^{n}_{i=1}d\omega_{i}\frac{1}{(z-e^{-2\pi\omega_{1}})^{2}}\prod^{n}_{i=2}\frac{1}{z-e^{-2\pi\omega_{i}}}\prod^{n}_{k=1}\langle\omega_{k}|\delta\rho|\omega_{k+1}\rangle,$$ (10) where in the last line $\omega_{n+1}\equiv\omega_{1}$ is understood.
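Because (7) and (8) are statements in plain linear algebra, they can be sanity checked directly in finite dimensions. The following Python sketch (not part of the derivation; the matrix size, the random state, and the point $z$ are arbitrary choices) verifies that the partial sums of the geometric series (8) converge to the exact resolvent:

```python
import numpy as np

rng = np.random.default_rng(0)

# A reference state rho0 and a small Hermitian perturbation drho
# (traceless, so that rho0 + drho still has unit trace).
dim = 6
lam = rng.random(dim) + 0.1
rho0 = np.diag(lam / lam.sum())            # rho0 written in its eigenbasis
h = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
drho = 0.01 * (h + h.conj().T)
drho -= (np.trace(drho).real / dim) * np.eye(dim)

z = 1.5 + 0.3j                             # a point well away from the spectrum
exact = np.linalg.inv(z * np.eye(dim) - (rho0 + drho))

# Partial sums of the series (8):
# 1/(z - rho) = sum_n [(z - rho0)^{-1} drho]^n (z - rho0)^{-1}
r0 = np.linalg.inv(z * np.eye(dim) - rho0)
approx = np.zeros_like(exact)
term = r0.copy()                           # R_0
for n in range(30):
    approx += term
    term = r0 @ drho @ term                # R_{n+1} = (z - rho0)^{-1} drho R_n

err = np.abs(approx - exact).max()
assert err < 1e-12
```

The convergence is geometric in the norm of $(z-\rho_{0})^{-1}\delta\rho$, which is why the expansion is only meaningful for perturbatively small $\delta\rho$.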
In summary, we have expanded $T_{\gamma}(\rho)$ with respect to $\delta\rho$, as in (6), and seen that the $n$-th order term of the expansion $T^{(n)}_{\gamma}(\delta\rho)$ is given by $$T^{(n)}_{\gamma}(\delta\rho)=\int\prod^{n}_{i=1}d\omega_{i}\left[\int_{C}\frac{dz}{2\pi i}z^{\gamma}\frac{1}{(z-e^{-2\pi\omega_{1}})^{2}}\prod^{n}_{i=2}\frac{1}{z-e^{-2\pi\omega_{i}}}\right]\prod^{n}_{k=1}\langle\omega_{k}|\delta\rho|\omega_{k+1}\rangle.$$ (11) By defining the kernel function, $$K^{(n)}(\omega_{1},\cdots\omega_{n})\equiv\int_{C}\frac{dz}{2\pi i}\frac{z^{\gamma}}{(z-e^{-2\pi\omega_{1}})^{2}}\prod^{n}_{i=2}\frac{1}{z-e^{-2\pi\omega_{i}}},$$ (12) we write $$T^{(n)}_{\gamma}(\delta\rho)=\int\prod^{n}_{i=1}d\omega_{i}K^{(n)}(\omega_{1},\cdots\omega_{n})\prod^{n}_{k=1}\langle\omega_{k}|\delta\rho|\omega_{k+1}\rangle.$$ (13) 3 Some explicit checks We have obtained the perturbative expansion using the spectrum of the reference state $\rho_{0}$. To gain some insight, in this section we explicitly write down the first few terms of the expansion and check them against known results. 3.1 First order term $T^{(1)}_{\gamma}(\delta\rho)$ The first order term of the series is given by $$\displaystyle T^{(1)}_{\gamma}(\delta\rho)$$ $$\displaystyle=\int d\omega\;\langle\omega|\delta\rho|\omega\rangle\int_{C}\frac{dz}{2\pi i}\frac{z^{\gamma}}{(z-e^{-2\pi\omega})^{2}}$$ $$\displaystyle=\gamma{\rm tr}\left[\rho_{0}^{\gamma-1}\delta\rho\right],$$ (14) as it should be. 3.2 Second order term $T^{(2)}_{\gamma}(\delta\rho)$ Let us move on to the second order term $T^{(2)}_{\gamma}(\delta\rho)$.
It is given by $$\displaystyle T^{(2)}_{\gamma}(\delta\rho)=\int d\omega d\omega^{\prime}\langle\omega|\delta\rho|\omega^{\prime}\rangle\langle\omega^{\prime}|\delta\rho|\omega\rangle\;K(\omega,\omega^{\prime}).$$ (15) The precise form of $K(\omega,\omega^{\prime})$ can be derived from the contour integral, $$\displaystyle K(\omega,\omega^{\prime})$$ $$\displaystyle=\int_{C}\frac{dz}{2\pi i}\;\frac{z^{\gamma}}{(z-e^{-2\pi\omega})^{2}(z-e^{-2\pi\omega^{\prime}})}$$ $$\displaystyle=\frac{1}{(e^{-2\pi\omega^{\prime}}-e^{-2\pi\omega})^{2}}\left[(\gamma-1)e^{-2\pi\gamma\omega}+e^{-2\pi\gamma\omega^{\prime}}-\gamma e^{-2\pi(\gamma-1)\omega}e^{-2\pi\omega^{\prime}}\right].$$ (16) 3.2.1 Checks for $\gamma=n\in\mathbb{Z}_{+}$ When the index $\gamma$ is a positive integer, the kernel $K(\omega,\omega^{\prime})$ is decomposed into the sum $$\displaystyle K(\omega,\omega^{\prime})=\left[\sum^{\gamma-2}_{l=0}\left((\gamma-1)-l\right)\left(e^{-2\pi\omega}\right)^{\gamma-2-l}\left(e^{-2\pi\omega^{\prime}}\right)^{l}\right].$$ (17) Plugging this into (15) and undoing the spectral decomposition, we recover the expansion (1) which we frequently encounter in replica calculations. The kernel thus avoids the difficulties of the replica trick by automatically performing both the summation and the analytic continuation in $n$.
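Beyond the integer-$\gamma$ check above, the formulas (14) and (15), (16) can also be tested numerically at non-integer $\gamma$ against finite-difference Taylor coefficients of $t\mapsto{\rm tr}(\rho_{0}+t\,\delta\rho)^{\gamma}$. The following finite-dimensional Python sketch (the eigenvalues, the random perturbation, and $\gamma=0.7$ are arbitrary choices; the kernel is written in the eigenvalue variables $x=e^{-2\pi\omega}$, $y=e^{-2\pi\omega^{\prime}}$) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
gam = 0.7                                    # a non-integer Renyi index

# rho0 diagonal with distinct eigenvalues (its eigenbasis); Hermitian drho
lam = np.array([0.05, 0.1, 0.2, 0.3, 0.35])
rho0 = np.diag(lam)
h = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
drho = 0.1 * (h + h.conj().T)

def tr_pow(rho, g):
    """tr rho^g through the eigenvalues of a Hermitian matrix."""
    return np.sum(np.linalg.eigvalsh(rho) ** g)

def kernel(x, y, g):
    """Second order kernel (16), in the eigenvalue variables x = e^{-2 pi w}."""
    if abs(x - y) < 1e-9:                    # removable singularity at x = y
        return 0.5 * g * (g - 1) * x ** (g - 2)
    return ((g - 1) * x**g + y**g - g * x**(g - 1) * y) / (y - x) ** 2

# Perturbative predictions (14) and (15)-(16)
T1 = gam * np.trace(np.diag(lam ** (gam - 1)) @ drho).real
T2 = sum(kernel(lam[i], lam[j], gam) * (drho[i, j] * drho[j, i]).real
         for i in range(5) for j in range(5))

# Finite-difference Taylor coefficients of t -> tr (rho0 + t drho)^gamma
eps = 1e-4
fp = tr_pow(rho0 + eps * drho, gam)
fm = tr_pow(rho0 - eps * drho, gam)
f0 = tr_pow(rho0, gam)
T1_fd = (fp - fm) / (2 * eps)                # linear coefficient
T2_fd = 0.5 * (fp - 2 * f0 + fm) / eps**2    # quadratic coefficient

assert abs(T1 - T1_fd) < 1e-5 * (1 + abs(T1))
assert abs(T2 - T2_fd) < 1e-3 * (1 + abs(T2))
```

The diagonal value $K(\omega,\omega)=\frac{\gamma(\gamma-1)}{2}e^{-2\pi(\gamma-2)\omega}$ used in the sketch follows from expanding (16) around coincident eigenvalues.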
3.3 The von Neumann entropy limit $T_{\gamma}(\rho)$ is related to the von Neumann entropy $S(\rho)$ by $$S(\rho)=-{\rm tr}\rho\log\rho=-\frac{\partial}{\partial\gamma}T_{\gamma}(\rho)\big{|}_{\gamma=1}.$$ (18) From (16) we derive the $\gamma$-derivative of the kernel, whose negative governs the quadratic part of the von Neumann entropy, $$\frac{\partial K_{\gamma}}{\partial\gamma}\big{|}_{\gamma=1}=\frac{e^{4\pi\omega}}{(1-e^{2\pi(\omega-\omega^{\prime})})^{2}}\left[(e^{-2\pi\omega}-e^{-2\pi\omega^{\prime}})+2\pi(\omega-\omega^{\prime})e^{-2\pi\omega^{\prime}}\right].$$ (19) In [8], a perturbative expansion of the von Neumann entropy $S(\rho_{0}+\delta\rho)$ was discussed by expanding the modular Hamiltonian $K_{\rho}=-\log(\rho_{0}+\delta\rho)$ using the identity $$\log\rho=\int^{\infty}_{0}d\beta\left(\frac{1}{\rho+\beta}-\frac{1}{\beta+1}\right);$$ (20) the quadratic order kernel obtained in [8] agrees with (19). 4 Expressions of perturbative terms in terms of the vacuum modular flow The $\omega$ integrals on the right hand side of (13) are of course hard to perform, as we do not know the precise form of the eigenvalue distribution of $\rho_{0}$. To proceed, we now express each term of the perturbative series $T^{(n)}_{\gamma}(\delta\rho)$ as an integral along the modular flow of $\rho_{0}$, by Fourier transforming the kernel $\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})$. This process is closely analogous to the treatment of the von Neumann entropy perturbation, carried out in [8] for the quadratic order term and generalized to higher order terms in [11].
It is convenient to introduce the rescaled kernel, defined by $$\displaystyle\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})$$ $$\displaystyle\equiv e^{2\pi\gamma\omega_{1}-2\pi\sum_{k=1}^{n}\omega_{k}}K^{(n)}(\omega_{1},\cdots\omega_{n}),$$ (21) $$\displaystyle=\int_{C}\frac{dz}{2\pi i}z^{\gamma}\frac{e^{2\pi(\gamma-1)\omega_{1}}}{(z-e^{-2\pi\omega_{1}})^{2}}\prod^{n}_{i=2}\frac{e^{-2\pi\omega_{i}}}{z-e^{-2\pi\omega_{i}}}.$$ (22) Using this function, we get $$\displaystyle T^{(n)}_{\gamma}(\delta\rho)$$ $$\displaystyle=\int\prod^{n}_{i=1}d\omega_{i}K^{(n)}(\omega_{1},\cdots\omega_{n})\prod^{n}_{k=1}\langle\omega_{k}|\delta\rho|\omega_{k+1}\rangle$$ $$\displaystyle=\int\prod^{n}_{i=1}d\omega_{i}\;e^{-2\pi\gamma\omega_{1}+2\pi\sum_{k=1}^{n}\omega_{k}}\;\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})\prod^{n}_{k=1}\langle\omega_{k}|\delta\rho|\omega_{k+1}\rangle$$ $$\displaystyle=\int\prod^{n}_{i=1}d\omega_{i}\;\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})\;\langle\omega_{1}|e^{-2\pi\gamma K}\tilde{\delta}\rho|\omega_{2}\rangle\prod^{n}_{k=2}\langle\omega_{k}|\tilde{\delta}\rho|\omega_{k+1}\rangle,$$ (23) where $2\pi K=-\log\rho_{0}$ is the modular Hamiltonian of $\rho_{0}$, and $\tilde{\delta}\rho=e^{\pi K}\delta\rho\;e^{\pi K}$.
It can be easily shown that the new kernel $\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})$ is invariant under the shifts $\omega_{i}\rightarrow\omega_{i}+\alpha$, $$\mathcal{K}^{(n)}_{\gamma}(\omega_{1}+\alpha,\cdots\omega_{n}+\alpha)=\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n}).$$ (24) So if we change the variables to $\{a_{i},b\}$, $$a_{i}=\omega_{i}-\omega_{i+1},\quad i=1\cdots n-1,\quad b=\sum_{i=1}^{n}\omega_{i},$$ (25) then $\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})$ only depends on the $n-1$ variables $\{a_{i}\}_{i=1\cdots n-1}$, $$\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})=\mathcal{K}^{(n)}_{\gamma}(a_{1},a_{2},\cdots a_{n-1}).$$ (26) Thanks to this property, $\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})$ has a nice Fourier representation, $$\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})=\int_{C}ds_{1}\cdots ds_{n-1}e^{i\sum_{k=1}^{n-1}s_{k}a_{k}}\;\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1}).$$ (27) The $\{s_{i}\}_{i=1\cdots n-1}$ are variables dual to the spectrum of $\rho_{0}$; therefore they have a geometric interpretation, i.e., they parameterize the modular flow of $\rho_{0}$. Also, as we will see later, we need to choose the integration contours $C$ properly in order for the Fourier transformation (27) to correctly reproduce the kernel $\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})$.
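The shift invariance (24) can be made concrete for $n=2$, where $\mathcal{K}^{(2)}_{\gamma}$ is built from the explicit second order kernel (16) via the rescaling (21). The short Python sketch below (the numerical values of $\gamma$, $\omega_{1,2}$, and the shifts are arbitrary choices) confirms that the rescaled kernel depends only on the difference $a=\omega_{1}-\omega_{2}$, which is precisely what makes the Fourier representation (27) possible:

```python
import numpy as np

gam = 0.6

def K2(w1, w2, g):
    """Second order kernel (16) in the spectral variables w, with x = e^{-2 pi w}."""
    x, y = np.exp(-2 * np.pi * w1), np.exp(-2 * np.pi * w2)
    return ((g - 1) * x**g + y**g - g * x**(g - 1) * y) / (y - x) ** 2

def K2_rescaled(w1, w2, g):
    """Rescaled kernel (21): exp(2 pi g w1 - 2 pi (w1 + w2)) K(w1, w2)."""
    return np.exp(2 * np.pi * g * w1 - 2 * np.pi * (w1 + w2)) * K2(w1, w2, g)

rng = np.random.default_rng(2)
w1, w2 = 0.13, -0.27
vals = [K2_rescaled(w1 + a, w2 + a, gam) for a in rng.normal(size=10)]
spread = max(vals) - min(vals)
assert spread < 1e-8 * max(abs(v) for v in vals)
```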
Using this and undoing the spectral decompositions (9), we can write $T^{(n)}_{\gamma}(\delta\rho)$ as an integral over the real time variables $\{s_{i}\}$, $$\displaystyle T^{(n)}_{\gamma}(\delta\rho)$$ $$\displaystyle=\int\prod^{n}_{i=1}d\omega_{i}\;\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})\;\langle\omega_{1}|e^{-2\pi\gamma K}\tilde{\delta}\rho|\omega_{2}\rangle\prod^{n}_{k=2}\langle\omega_{k}|\tilde{\delta}\rho|\omega_{k+1}\rangle$$ $$\displaystyle=\int_{C}ds_{1}\cdots ds_{n-1}\;\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})\;{\rm tr}\left[e^{-2\pi\gamma K}\prod^{n-1}_{k=1}e^{iKs_{k}}\tilde{\delta}\rho\;e^{-iKs_{k}}\;\tilde{\delta}\rho\right].$$ (28) In the actual CFT computations, this undoing is a bit tricky and needs special care. We will discuss this in later sections. 4.1 Doing the Fourier transformation Let us first specify the form of the real time kernel $\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$. The task is to perform the inverse Fourier transformation, $$\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})=\int\frac{da_{1}\cdots da_{n-1}}{(2\pi)^{n-1}}e^{-i\sum_{k=1}^{n-1}s_{k}a_{k}}\mathcal{K}^{(n)}_{\gamma}(a_{1},\cdots a_{n-1}).$$ (29) The trick we use is very similar to the one developed in our previous paper [11]. By inserting a delta function, $$\delta(q)=\frac{1}{2\pi}\int db\,e^{-iqb},$$ (30) we can disentangle the multiple integral into a product of integrals over the single variables $\{\omega_{i}\}$, $$\displaystyle\delta(q)\;\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$$ $$\displaystyle=\frac{1}{(2\pi)^{n}}\int db\,e^{-iqb}\int da_{1}\cdots da_{n-1}\;e^{-i\sum_{k=1}^{n-1}s_{k}a_{k}}\;\mathcal{K}^{(n)}_{\gamma}(a_{1},\cdots a_{n-1})$$ $$\displaystyle=\frac{n}{(2\pi)^{n}}\int d\omega_{1}\cdots d\omega_{n}\;e^{-iqb}e^{-i\sum_{k=1}^{n-1}s_{k}a_{k}}\;\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n}),$$ (31) where in the second line we used the relations (25).
Now the integral is $$\displaystyle\delta(q)\;\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$$ $$\displaystyle=\frac{n}{(2\pi)^{n}}\int d\omega_{1}\cdots d\omega_{n}\;e^{-iqb}e^{-i\sum_{k=1}^{n-1}s_{k}a_{k}}\;\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})$$ $$\displaystyle=\frac{n}{(2\pi)^{n}}\int_{C}\frac{dz}{2\pi i}z^{\gamma}\int d\omega_{1}\frac{e^{-\omega_{1}\left[-2\pi(\gamma-1)+i(s_{1}+q)\right]}}{(z-e^{-2\pi\omega_{1}})^{2}}$$ $$\displaystyle\times\prod^{n-1}_{i=2}\int d\omega_{i}\frac{e^{-\omega_{i}\left[2\pi+i(s_{i}-s_{i-1}+q)\right]}}{z-e^{-2\pi\omega_{i}}}$$ $$\displaystyle\times\int d\omega_{n}\frac{e^{-\omega_{n}\left[2\pi-i(s_{n-1}-q)\right]}}{z-e^{-2\pi\omega_{n}}}$$ $$\displaystyle\equiv\frac{n}{(2\pi)^{n}}\int_{C}\frac{dz}{2\pi i}J(z).$$ (32) The strategy for computing this complicated integral is to first compute each $\omega_{i}$ integral and express $J(z)$ as a function of the modular times $\{s_{i}\}_{i=1\cdots n-1}$. We then perform the $z$ integral by choosing the contour along the real axis, $$\delta(q)\;\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})=\frac{n}{(2\pi)^{n}}\int^{\infty}_{0}\frac{d\beta}{2\pi i}\left(J(\beta-i\epsilon)-J(\beta+i\epsilon)\right),\quad\epsilon\rightarrow 0_{+}.$$ (33) The details of the calculation can be found in Appendix A; here we only present the final result for the kernel $\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$, $$\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})=\frac{i}{8\pi^{2}}\left(\frac{-i}{4\pi}\right)^{n-2}\frac{(s_{1}+2\pi i\gamma)\sin\pi\gamma}{\sinh\left(\frac{s_{1}+2\pi i\gamma}{2}\right)\prod^{n-1}_{k=2}\sinh\left(\frac{s_{k}-s_{k-1}}{2}\right)\sinh\left(\frac{s_{n-1}}{2}\right)}.$$ (34) 4.2 Choice of the integration contour: the quadratic $n=2$ term In the previous subsection we derived the expression (34) for the real time kernel $\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$.
In order to complete the discussion, we need to properly fix the contours $C$ of the real time integrals in (28). We can do so by demanding that the Fourier transformation can be correctly inverted, $$\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots,\omega_{n})=\int_{C_{k}}\prod_{k=1}^{n-1}ds_{k}\;e^{i\sum_{k=1}^{n-1}s_{k}a_{k}}\;\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1}).$$ (35) We first consider the contour for the quadratic $n=2$ term, $$\int_{C_{s}}ds\;\mathcal{K}^{(2)}_{\gamma}(s)\;e^{ias}=\frac{i\sin\pi\gamma}{8\pi^{2}}\int_{C_{s}}ds\;\frac{s+2\pi i\gamma}{\sinh\frac{s}{2}\sinh\frac{s+2\pi i\gamma}{2}}\;e^{ias},$$ (36) which is a bit trickier than the higher order terms. When $a>0$ we close the contour in the upper half plane. The real time kernel $\mathcal{K}^{(2)}_{\gamma}(s)$ has two types of poles, $$s^{n}_{1}=2\pi in,\quad s^{k}_{2}=2\pi i(k-\gamma),\quad n,k\in\mathbb{Z},\quad k\neq 0.$$ (37) We can easily see that if one chooses the contour $C_{s}$ which contains $s^{n}_{1},n\geq 0$, and $s^{k}_{2},k\geq 1$ (as in figure 2), then the Fourier transformation is correctly inverted, $$\mathcal{K}^{(2)}_{\gamma}(a)=\int_{C_{s}}ds\;\mathcal{K}^{(2)}_{\gamma}(s)\;e^{ias}.$$ (38) We explicitly check this in Appendix B. It is useful to write the integral as follows.
Since we can write the integrand as $$\frac{(s+2\pi i\gamma)\sin\pi\gamma}{\sinh\frac{s}{2}\sinh\frac{s+2\pi i\gamma}{2}}=-2i\left[\frac{s+2\pi i\gamma}{1-e^{-s}}-\frac{s+2\pi i\gamma}{1-e^{-(s+2\pi i\gamma)}}\right],$$ (39) the contour integral is naturally split into two parts, $$\int_{C_{s}}ds\frac{(s+2\pi i\gamma)\sin\pi\gamma}{\sinh\frac{s}{2}\sinh\frac{s+2\pi i\gamma}{2}}G(s)=-2i\int^{\infty-i\epsilon}_{-\infty-i\epsilon}ds\left[\frac{s+2\pi i\gamma}{1-e^{-s}}\right]G(s)+2i\int^{\infty-2\pi i(\gamma-\epsilon)}_{-\infty-2\pi i(\gamma-\epsilon)}ds\left[\frac{s+2\pi i\gamma}{1-e^{-(s+2\pi i\gamma)}}\right]G(s)$$ (40) for any function $G(s)$ which is holomorphic on the strip $-2\pi\gamma<{\rm Im}\,s<0$ when $\gamma>0$. It is also worth emphasizing that when $-1<\gamma<1$, the contour simplifies, $$\int_{C_{s}}ds\;\mathcal{K}^{(2)}_{\gamma}(s)\;G(s)=\int^{\infty-i\epsilon}_{-\infty-i\epsilon}ds\;\mathcal{K}^{(2)}_{\gamma}(s)\;G(s).$$ (41) 4.3 Contour choice: $n\geq 3$ terms Now we fix all the contours $C_{k}$ in the integral (35). In the above derivation we have used the following formula, $$I_{1}(\xi,\beta+i\epsilon)=\int^{\infty}_{-\infty}d\omega\frac{e^{-\omega\xi}}{(\beta+i\epsilon)-e^{-2\pi\omega}}=\beta^{\left(\frac{\xi}{2\pi}-1\right)}\left(\frac{e^{i\frac{\xi}{2}}}{2\sin\frac{\xi}{2}}\right).$$ (42) Notice that $\xi=p+it$, where $p$ is a real number. In order for the integral to have an inverse, we need to choose the contour $C_{t}$ such that $$\int_{C_{t}}dt\;I(p+it,\beta)e^{i\omega t}=\frac{e^{-\omega p}}{\beta-e^{-2\pi\omega}}.$$ (43) The integrand has poles at $t_{n}=ip+2\pi n$.
By an explicit calculation, we recognize that we need to pick up the poles with $n\geq 1$, thus $$\int_{C}dt\equiv\int^{\infty+i(p+\epsilon)}_{-\infty+i(p+\epsilon)}dt.$$ (44) This in particular means that $$\displaystyle\mathcal{K}^{(n)}_{\gamma}(\omega_{1},\cdots\omega_{n})$$ $$\displaystyle=\int_{C}\frac{dz}{2\pi i}z^{\gamma}\frac{e^{2\pi(\gamma-1)\omega_{1}}}{(z-e^{-2\pi\omega_{1}})^{2}}\prod^{n}_{i=2}\frac{e^{-2\pi\omega_{i}}}{z-e^{-2\pi\omega_{i}}}$$ $$\displaystyle=\prod^{n}_{k=1}\int^{\infty+i(p_{k}+\epsilon)}_{-\infty+i(p_{k}+\epsilon)}dt_{k}e^{i\omega_{k}t_{k}}\int_{C}\frac{dz}{2\pi i}z^{\gamma}\prod^{n}_{k=1}I(it_{k}+p_{k},z)$$ $$\displaystyle=\prod^{n-1}_{k=1}\int_{C_{k}}ds_{k}\;e^{i\sum_{k=1}^{n-1}s_{k}a_{k}}\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1}).$$ (45) Therefore we need to choose the following contours, $${\rm Im}\,s_{1}=-2\pi(\gamma-\epsilon),\quad{\rm Im}\,s_{k}-{\rm Im}\,s_{k-1}=\epsilon,\quad{\rm Im}\,s_{n-1}=-\epsilon.$$ (46) In particular, when $\gamma<0$ there is no consistent contour choice for the $n\geq 3$ terms. 5 Applications to conformal field theory The discussion so far is quite general and applicable to any density matrix of any theory. From now on, we would like to apply the formula to a special type of reduced density matrix in conformal field theory (CFT). For this purpose, we first briefly summarize the construction of these reduced density matrices. For detailed discussions we refer to [11]. 5.1 Set up We start from a CFT on the $d$ dimensional cylinder $\mathbb{R}\times S^{d-1}$, $$ds^{2}=dt^{2}+d\theta^{2}+\sin^{2}\theta d\Omega_{d-2}^{2}.$$ (47) We consider a ball shaped subsystem $A$, which is given by $$A:[0,\theta_{0}]\times S^{d-2},\quad t=0,$$ (48) and the reduced density matrix $\rho_{V}$ of a globally excited state $|V\rangle$ on the region $A$, $$\rho_{V}={\rm tr}_{A^{c}}|V\rangle\langle V|.$$ (49) The reduced density matrix has a path integral representation on the cylinder with a branch cut along $A$.
The branched cylinder is mapped to $S^{1}\times H^{d-1}$ with the metric [7], $$ds^{2}=d\tau^{2}+du^{2}+\sinh^{2}u\,d\Omega_{d-2}^{2},\quad\tau\sim\tau+2\pi.$$ (50) We find that in this frame $\rho_{V}$ has the following expression [11], $$\rho_{V}=\frac{e^{-\pi K}V(\theta_{0})V(-\theta_{0})e^{-\pi K}}{\langle V(\theta_{0})V(-\theta_{0})\rangle},$$ (51) where $K$ is the generator of translations along the $\tau$ direction, which can be identified with the modular Hamiltonian of $\rho_{0}$, and $V(\pm\theta_{0})$ are the local operators corresponding to the excited state through the state–operator correspondence, located at $\tau=\pm\theta_{0},u=0$. In the small subsystem limit $\theta_{0}\rightarrow 0$, the two insertion points approach each other, $V(\theta_{0})\rightarrow V(-\theta_{0})$. In this limit we can split the density matrix into the vacuum part $\rho_{0}=e^{-2\pi K}$ and the rest, $\rho_{V}=\rho_{0}+\delta\rho$. We do so by taking the operator product expansion (OPE) of the two local operators, $$\rho_{V}=\rho_{0}+e^{-\pi K}\left[\sum_{\mathcal{O}:{\rm primaries}}C^{\mathcal{O}}_{VV}B_{\mathcal{O}}(\theta_{0},-\theta_{0})\right]e^{-\pi K},$$ (52) where the index $\mathcal{O}$ labels non-identity primaries, and $C^{\mathcal{O}}_{VV}$, $B_{\mathcal{O}}(\theta_{0},-\theta_{0})$ are the OPE coefficient and the OPE block of $\mathcal{O}$, respectively. 5.2 The perturbative expression of $T_{\gamma}(\rho)$ Now we determine the perturbative expression of $T_{\gamma}(\rho)$ in CFT from (28). We write $$\displaystyle{\rm tr}\rho^{\gamma}={\rm tr}\rho_{0}^{\gamma}+\sum_{n}T^{(n)}_{\gamma}(\delta\rho),$$ (53) and for convenience we reproduce the expression of $T^{(n)}_{\gamma}$ explicitly.
$$T^{(n)}_{\gamma}(\delta\rho)=\int ds_{1}\cdots ds_{n-1}\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1}){\rm tr}\left[e^{-2\pi\gamma K}\prod^{n-1}_{k=1}e^{iKs_{k}}\tilde{\delta}\rho\,e^{-iKs_{k}}\;\tilde{\delta}\rho\right].$$ (54) Since $\tilde{\delta}\rho=e^{\pi K}\delta\rho e^{\pi K}$, in our case we have $$e^{iKs}\tilde{\delta}\rho e^{-iKs}=\sum_{\mathcal{O}:{\rm primaries}}C^{\mathcal{O}}_{VV}\;B_{\mathcal{O}}(is+\theta_{0},is-\theta_{0}).$$ (55) For our $\delta\rho$, the trace in (54) can be regarded as a correlation function of the OPE blocks on the covering space $\Sigma_{\gamma}=S^{1}_{\gamma}\times H^{d-1}$, with the metric (50) but with the periodicity of the Euclidean time direction changed to $\tau\sim\tau+2\pi\gamma$, $$\langle\cdots\rangle_{\Sigma_{\gamma}}\equiv\frac{1}{Z_{\gamma}}{\rm tr}\left[e^{-2\pi\gamma K}\cdots\right],$$ (56) where $Z_{\gamma}$ is the CFT partition function on this space. Combining these, we can write each term of $T_{\gamma}(\rho)$ as an integral of correlation functions of OPE blocks on $\Sigma_{\gamma}$ along the modular flow of the vacuum $\rho_{0}$, $$\frac{1}{Z_{\gamma}}T^{(n)}_{\gamma}(\delta\rho)=\sum_{\{\mathcal{O}_{l}\}}\;\prod^{n}_{l=1}\;C^{\mathcal{O}_{l}}_{VV}\int ds_{1}\cdots ds_{n-1}\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})\langle\prod^{n-1}_{k=1}B_{\mathcal{O}_{k}}(is_{k}+\theta_{0},is_{k}-\theta_{0})B_{\mathcal{O}_{n}}(\theta_{0},-\theta_{0})\rangle_{\Sigma_{\gamma}}.$$ (57) 5.3 Bringing the $n=2$ term to the standard form We have seen that the $n=2$ term is given by $$\frac{1}{Z_{\gamma}}T^{(2)}_{\gamma}(\delta\rho)=\sum_{\mathcal{O}:{\rm primaries}}\left(C^{\mathcal{O}}_{VV}\right)^{2}\int_{C}ds\;\mathcal{K}^{(2)}_{\gamma}(s)\langle B_{\mathcal{O}}(is+\theta_{0},is-\theta_{0})B_{\mathcal{O}}(\theta_{0},-\theta_{0})\rangle_{\Sigma_{\gamma}}.$$ (58) We can simplify this expression when $0<\gamma<1$ and compare it with known results.
In order to do so, let us focus on the contribution $T^{(2)}_{\gamma,\mathcal{O}}(\delta\rho)$ of a particular primary $\mathcal{O}$ to the $n=2$ term. Since the OPE block $B_{\mathcal{O}}$ sums up the descendants of the primary $\mathcal{O}$, we can write it as $$B_{\mathcal{O}}(\theta_{0},-\theta_{0})=C(\theta_{0},\partial_{a})\mathcal{O}(\tau_{a})\big{|}_{\tau_{a}=0},$$ (59) where $C(\theta_{0},\partial_{a})$ is a differential operator and $\tau_{a}$ is the coordinate along the Euclidean time direction. In the above we did not make explicit the dependence of $\mathcal{O}$ on the coordinates of the hyperbolic space. The main ingredient of the formula is the integral of the two point function, $$I_{ab}=\frac{i}{8\pi^{2}}\int^{\infty-i\epsilon}_{-\infty-i\epsilon}ds\frac{s+2\pi i\gamma}{\sinh\frac{s}{2}\sinh\frac{s+2\pi i\gamma}{2}}G_{ab}(s),\quad G_{ab}(s)=\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{\gamma}},$$ (60) and we can write $$T^{(2)}_{\gamma,\mathcal{O}}(\delta\rho)=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})I_{ab}.$$ (61) As we explain in Appendix C, we can obtain a simpler expression for $T^{(2)}_{\gamma,\mathcal{O}}(\delta\rho)$, $$T^{(2)}_{\gamma,\mathcal{O}}(\delta\rho)=\frac{\gamma\sin\pi\gamma}{4\pi}C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int^{\infty}_{-\infty}\frac{ds}{\sinh\frac{s-\pi i\gamma}{2}\sinh\frac{s+\pi i\gamma}{2}}G_{ab}(s-\pi i\gamma).$$ (62) Notice that in the $\gamma\rightarrow 1$ limit, its derivative recovers the contribution of $\mathcal{O}$ to the second order term $S^{(2)}(\delta\rho)$ of the entanglement entropy [8], $$S_{\mathcal{O}}^{(2)}(\delta\rho)=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int^{\infty}_{-\infty}ds\frac{-1}{4\sinh^{2}\left(\frac{s-i\epsilon}{2}\right)}\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{1}}.$$ (63) 6 Expansion of Petz's quasi entropy $D_{\gamma}(\rho||\sigma)$ In this section, we consider a similar perturbative
expansion for Petz's quasi entropy [16], defined by $$D_{\gamma}(\rho||\sigma)={\rm tr}\;\rho^{\gamma}\sigma^{1-\gamma}.$$ (64) We consider the case where one of the density matrices is the vacuum one, $\sigma=\rho_{0}$. We then write $\rho=\rho_{0}+\delta\rho$, $$D_{\gamma}(\rho||\rho_{0})=1+\sum_{n=2}^{\infty}D^{(n)}_{\gamma}(\delta\rho),$$ (65) where the zeroth order term is ${\rm tr}\,\rho_{0}=1$ and the first order term vanishes because $\delta\rho$ is traceless. The derivation of the perturbative series is very similar to that of $T_{\gamma}(\rho)$. We first write $$D_{\gamma}(\rho||\rho_{0})=\int_{C}\frac{dz}{2\pi i}\;z^{\gamma}\;{\rm tr}\;\frac{\rho_{0}^{1-\gamma}}{z-\rho},$$ (66) and then by expanding the denominator we obtain a similar perturbative series. One notable difference is that the $\rho_{0}^{\gamma}$ factor appearing in the expansion (54) is canceled by the $\rho_{0}^{1-\gamma}$ factor appearing in the definition (64). The explicit expression of $D^{(n)}_{\gamma}(\delta\rho)$ is given by $$D^{(n)}_{\gamma}(\delta\rho)=\sum_{\{\mathcal{O}_{k}\}}\;\prod^{n}_{k=1}\;C^{\mathcal{O}_{k}}_{VV}\int\prod^{n-1}_{k=1}ds_{k}\;\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})\langle\prod^{n-1}_{k=1}B_{\mathcal{O}_{k}}(is_{k}+\theta_{0},is_{k}-\theta_{0})B_{\mathcal{O}_{n}}(\theta_{0},-\theta_{0})\rangle_{\Sigma_{1}},$$ (67) with the kernel $\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$ defined in (34). One advantage of this quantity is that it is expanded in terms of correlation functions on the space without a branch cut, $\Sigma_{1}$, in contrast to the Rényi entropy itself, which is expanded in correlators $\langle\cdots\rangle_{\Sigma_{\gamma}}$ on the branched space $\Sigma_{\gamma}$, and these are highly theory dependent quantities. This implies that the first few terms of $D_{\gamma}(\rho||\sigma)$ are theory independent, which allows us to write them holographically. We also emphasize that the expressions (67) are only valid in some range of $\gamma$.
In particular, the higher order terms $D^{(n)}_{\gamma}(\delta\rho),\;n\geq 3$, have an expression in terms of a modular flow integral only in the range $0<\gamma<1$. The limitation again comes from the fact that a consistent contour choice (46) for the modular flow integrals exists only in this range. However, the $n=2$ term is still computable by the modular flow integral for any value of $\gamma$. Below, we will be focusing on the following quantity, $$Z_{\gamma}(\rho||\sigma)\equiv D_{-\gamma}(\rho||\sigma)-D_{\gamma}(\rho||\sigma),$$ (68) its quadratic part, $$Y_{\gamma}(\delta\rho)\equiv\frac{d^{2}}{dt^{2}}Z_{\gamma}(\rho_{0}+t\delta\rho\;||\rho_{0})\big{|}_{t=0},$$ (69) as well as its derivative with respect to the index $\gamma$, $$X_{\gamma}(\delta\rho)=\frac{d}{d\gamma}Y_{\gamma}(\delta\rho).$$ (70) Notice that at $\gamma=0$, $\partial_{\gamma}Z_{\gamma}(\rho||\sigma)$ reduces to the relative entropy, $$\partial_{\gamma}Z_{\gamma}(\rho||\sigma)\big{|}_{\gamma=0}=2S(\sigma||\rho),$$ (71) in which the order of the two density matrices is flipped, $\rho\leftrightarrow\sigma$, and $X_{\gamma}(\delta\rho)$ reduces to the Fisher information, which is symmetric under this exchange, $$X_{\gamma}(\delta\rho)\big{|}_{\gamma=0}=F(\rho||\sigma).$$ (72) 6.1 Expressing $X_{\gamma}(\delta\rho)$ and $Y_{\gamma}(\delta\rho)$ by modular flow integrals Below we will focus on the range of the Rényi index $-1<\gamma<1$ for $D_{\gamma}^{(2)}(\delta\rho)$, or equivalently $0<\gamma<1$ for $X_{\gamma}(\delta\rho)$ and $Y_{\gamma}(\delta\rho)$.
When the Rényi index is in this window, $Y_{\gamma}(\delta\rho)$ has the following simple modular flow integral representation, $$Y_{\gamma}(\delta\rho)=\int^{\infty-i\epsilon}_{-\infty-i\epsilon}\left[\mathcal{K}^{(2)}_{-\gamma}(s)-\mathcal{K}^{(2)}_{\gamma}(s)\right]{\rm tr}\left[e^{-2\pi K}\tilde{\delta}\rho(s)\;\tilde{\delta}\rho\right]ds,$$ (73) where $\tilde{\delta}\rho(s)=e^{iKs}\tilde{\delta}\rho\,e^{-iKs}$, and $\mathcal{K}^{(2)}_{\gamma}(s)$ is given by $$\mathcal{K}^{(2)}_{\gamma}(s)=\frac{i\sin\pi\gamma}{8\pi^{2}}\frac{(s+2\pi i\gamma)}{\sinh\frac{s}{2}\sinh\frac{s+2\pi i\gamma}{2}}.$$ (74) For the class of $\delta\rho$ we are interested in, we have $$\displaystyle Y_{\gamma}(\delta\rho)$$ $$\displaystyle=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int^{\infty-i\epsilon}_{-\infty-i\epsilon}\left[\mathcal{K}^{(2)}_{-\gamma}(s)-\mathcal{K}^{(2)}_{\gamma}(-s-2\pi i)\right]\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{1}}ds$$ $$\displaystyle=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int^{\infty}_{-\infty}ds\left[\frac{-(\sin\pi\gamma)/4\pi}{\sinh\left(\frac{s-i\epsilon}{2}\right)\sinh\left(\frac{s-2\pi i\gamma}{2}\right)}\right]\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{1}}.$$ (75) In the second term of the first line, we used another expression for $D^{(2)}_{\gamma}(\delta\rho)$, $$D^{(2)}_{\gamma}(\delta\rho)=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})I_{ba},\quad I_{ba}=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}ds\,\mathcal{K}^{(2)}_{\gamma}(s-2\pi i)\langle\mathcal{O}(is+\tau_{b})\mathcal{O}(\tau_{a})\rangle_{\Sigma_{1}},\quad\tau_{a}>\tau_{b},$$ (76) and flipped the sign of the integration variable $s\rightarrow-s$. The derivation of this expression is the same as that of (129) in Appendix C.
By taking the derivative of (75) with respect to $\gamma$, we obtain an expression for $X_{\gamma}(\delta\rho)$, $$X_{\gamma}(\delta\rho)=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int^{\infty}_{-\infty}ds\frac{-1}{4\sinh^{2}\left(\frac{s-2\pi i\gamma}{2}\right)}\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{1}}.$$ (77) 6.2 Holographic expressions of $X_{\gamma}(\delta\rho)$ and $Y_{\gamma}(\delta\rho)$ So far we have expressed the quadratic term $Y_{\gamma}(\delta\rho)$ of the combination $Z_{\gamma}(\sigma+t\delta\rho\,||\rho_{0})$ of Rényi relative divergences, as well as its derivative $X_{\gamma}(\delta\rho)$, in terms of modular flow integrals, (75) and (77). As we will see below, through the AdS/CFT correspondence they have simple bulk expressions. The derivations are parallel to the argument of [10], where the holographic expression of the quadratic term $S^{(2)}(\delta\rho)$ of the entanglement entropy was obtained. 6.2.1 Set up To explain this, let us first recall the corresponding bulk set up.
Our reference state is the vacuum reduced density matrix $\rho_{0}$, and since we take the subsystem $A$ to be a ball-shaped region, the corresponding Ryu-Takayanagi surface can be regarded as the bifurcation surface $r_{B}=1$ of the topological black hole, $$ds^{2}=-(r_{B}^{2}-1)ds_{B}^{2}+\frac{dr_{B}^{2}}{(r_{B}^{2}-1)}+r_{B}^{2}dH_{d-1}^{2},$$ (78) where $dH_{d-1}^{2}$ denotes the metric of the $d-1$ dimensional hyperbolic space, $$dH_{d-1}^{2}=du^{2}+\sinh^{2}u\;d\Omega_{d-2}^{2}.$$ (79) In [10] it was shown that the CFT two point function in (75), (77) can be written in terms of the bulk symplectic form $\omega_{\phi}$ of the bulk field $\phi$ dual to the CFT primary $\mathcal{O}$, $$\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{1}}=-\int dX_{B}\;\omega_{\phi}\left(K_{E}(X_{B}|\tau_{ba}),K_{R}(X_{B}|s)\right).$$ (80) We evaluate the integral on a fixed $r_{B}=r_{0}$ surface of the topological black hole (78), and collectively denote the coordinates of the surface by $X_{B}$. The bulk symplectic form is given by $$\omega_{\phi}(\delta\phi_{1},\delta\phi_{2})=n^{M}\left(\delta\phi_{1}\partial_{M}\delta\phi_{2}-\delta\phi_{2}\partial_{M}\delta\phi_{1}\right),$$ (81) where $n^{M}$ is the normal vector of the $r_{B}=r_{0}$ surface. $K_{E}(X_{B}|\tau_{ba})$ and $K_{R}(X_{B}|s)$ are the Euclidean and retarded bulk-to-boundary propagators of the bulk field $\phi$, respectively. The primary operators in the CFT two point function are located at the origin of the hyperbolic space, $u=0$. We omit this information in the bulk-to-boundary propagators. 6.2.2 Holographic rewritings By plugging (80) into (75), and evaluating the remaining $s$ integral by picking up poles of the kernel, we get (the argument here is very similar to the one in [10]; see Appendix E for the details)
$$Y_{\gamma}(\delta\rho)=i\;C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int dX_{B}\;\omega_{\phi}\left(K_{E}(X_{B}|\tau_{ba}),\;K_{E}(X_{B}|-2\pi\gamma)-K_{E}(X_{B}|0)\right).$$ (82) By shifting the time coordinate $s_{B}\rightarrow s_{B}+i\tau_{a}$, and using the relation between the Euclidean bulk-to-boundary propagator and the expectation value of the bulk scalar field operator $\phi(X_{B})$, $$C(\theta_{0},\partial_{a})K_{E}(X_{B}|\tau_{a})=\langle V|\phi(X_{B})|V\rangle\equiv\langle\phi(X_{B})\rangle_{V},$$ (83) we get $$Y_{\gamma}(\delta\rho)=i\int dX_{B}\;\omega_{\phi}\left(\langle\phi(0)\rangle_{V},\;\langle\phi(2\pi\gamma)\rangle_{V}-\langle\phi(0)\rangle_{V}\right),$$ (84) where $\langle\phi(2\pi\gamma)\rangle_{V}$ is the expectation value of the bulk field rotated by $2\pi\gamma$ along the Euclidean time direction, $$\langle\phi(2\pi\gamma)\rangle_{V}\equiv{\rm tr}\left[\rho_{V}\;e^{-2\pi\gamma K}\;\phi\;e^{2\pi\gamma K}\right].$$ (85) In the argument of the bulk local field $\phi$, we display only the Euclidean time coordinate, $$\phi(\tau)\equiv\phi(r_{B},\tau+is_{B},u,\Omega_{d-2}).$$ (86) We can obtain a similar expression for $X_{\gamma}(\delta\rho)$ just by taking a derivative of $Y_{\gamma}(\delta\rho)$, $$X_{\gamma}(\delta\rho)=-2\pi\int dX_{B}\;\omega_{\phi}\left(\langle\phi(0)\rangle_{V},\;\partial_{s}\langle\phi(2\pi\gamma)\rangle_{V}\right),$$ (87) where we used the relation $\partial_{\gamma}=-i\partial_{s}$. This integral is invariant under deformations of the surface on which we evaluate it.
In particular, we can choose the fixed time slice $s_{B}=0$; then the integral can be written as $$X_{\gamma}(\delta\rho)=-2\pi\int_{\Sigma}d\Sigma^{a}\;\xi^{b}\;T_{ab}(\langle\phi(0)\rangle_{V},\;\langle\phi(2\pi\gamma)\rangle_{V}),$$ (88) where $\Sigma$ is the bulk region on the time slice $s_{B}=0$ which is enclosed by the boundary subsystem $A$ and the bifurcation surface of the topological black hole (i.e., the RT surface). Also, $d\Sigma^{a}$ is the volume element of $\Sigma$, and $\xi^{b}$ is the timelike Killing vector of the black hole. $T_{ab}$ is a quadratic form of $\phi$ related to the stress energy tensor of the bulk field, $$T_{ab}(\phi_{1},\phi_{2})=\partial_{a}\phi_{1}\partial_{b}\phi_{2}-m^{2}g_{ab}\phi_{1}\phi_{2}.$$ (89) There is another way to derive this result. Let us come back to the CFT formula, $$X_{\gamma}(\delta\rho)=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int^{\infty}_{-\infty}ds\frac{-1}{4\sinh^{2}\left(\frac{s-2\pi i\gamma}{2}\right)}\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{1}}.$$ (90) By changing the integration variable to $t=s-2\pi i\gamma$ and shifting the contour we get $$\displaystyle X_{\gamma}(\delta\rho)$$ $$\displaystyle=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int^{\infty}_{-\infty}dt\frac{-1}{4\sinh^{2}\left(\frac{t-2\pi i\epsilon}{2}\right)}\langle\mathcal{O}(i(t+2\pi i\gamma)+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{1}}$$ $$\displaystyle=\int^{\infty}_{-\infty}ds\frac{-1}{4\sinh^{2}\left(\frac{s-2\pi i\epsilon}{2}\right)}{\rm tr}\left[\tilde{\delta}\rho(s)\;e^{2\pi\gamma K}\;\tilde{\delta}\rho\;e^{-2\pi\gamma K}\right].$$ (91) In [11] it was shown that, when the modular Hamiltonian $K_{\rho}$ of the excited state $\rho$ is expanded in $\delta\rho$, the leading order correction to the vacuum modular Hamiltonian $K$ is given by $$K_{\rho}=K+\int^{\infty}_{-\infty}\frac{ds}{\sinh^{2}\frac{s}{2}}\tilde{\delta}\rho(s)\equiv K+\delta K.$$ (92) It was also shown that the
contribution of a primary operator $\mathcal{O}$ to the correction $\delta K$ has a bulk expression, $$\delta K=2\pi\int_{\Sigma}d\Sigma^{a}\;\xi^{b}\;T_{ab}(\langle\phi(0)\rangle_{V},\;\hat{\phi}),$$ (93) where $\hat{\phi}$ is the bulk field operator dual to $\mathcal{O}$. By plugging this into (91), we recover the result. 7 Conclusions In this paper we developed a novel way to perturbatively expand Rényi-type quantities involving powers of reduced density matrices. We then obtained a holographic expression for the quadratic parts of Rényi relative divergences, $X_{\gamma}(\delta\rho)$ and $Y_{\gamma}(\delta\rho)$, in terms of the bulk symplectic form, starting from the CFT calculations. It would be interesting to find a bulk derivation of this result. One difficulty in doing so comes from the fact that in general there is no nice path integral representation of the Rényi relative divergence. This is because even if the reduced density matrices $\rho,\sigma$ can be written by path integrals, $\rho^{\gamma}$ and $\sigma^{1-\gamma}$ cannot. If we could find such a representation, then we could map the CFT path integral calculations to bulk on-shell action calculations. Indeed, in a special case where the Rényi relative divergence can be represented by a path integral, the corresponding holographic calculation is known [21]. However, in order to derive a bulk formula for the Rényi relative divergence between two generic bulk configurations, we need to take a different approach. A possible approach would be to first go back to the replica trick [32], compute ${\rm tr}\rho^{n}\sigma^{m}$ for positive integers $n,m$, and then analytically continue the result $n\rightarrow\gamma,m\rightarrow 1-\gamma$. Furthermore, it would be nice if we could read off finer information about bulk geometries using the Rényi relative divergence. It has been shown that, using relative entropy, we can read off in particular the first nonlinear part of the Einstein equations [8, 10].
Since the Rényi relative divergence is a one parameter generalization of relative entropy, and knows about the details of the eigenvalue distribution of excited state reduced density matrices, it is natural to expect that it probes finer details of the bulk geometry. Another interesting direction would be to calculate correlation functions with insertions of modular flows of excited states, by using the technique developed in this paper. For example, in [33, 34, 35] the two point function with an insertion of a modular flow, $\langle\mathcal{O}(x)\Delta^{it}\mathcal{O}(y)\rangle$, was considered. There, it was also argued that this is useful for extracting information about the corresponding bulk geometry. Naively speaking, we can perturbatively compute them by Wick rotating the Rényi index $\gamma$ to the imaginary value $\gamma\rightarrow it$ in our result. The task would be to check that there is no obstacle to doing this. Acknowledgments We thank Alex Belin, Tom Faulkner, Sudip Ghosh, Norihiro Iizuka, Robert Myers, Tatsuma Nishioka, Jonathan Oppenheim, Gábor Sárosi, Tadashi Takayanagi and Kotaro Tamaoka for discussions. Appendix A The calculation of $\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$ In this appendix, we explain the details of the calculation of the kernel $\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$, starting from (32).
In order to do this, we first decompose $J(z)$ in (32) as $$J(z)=z^{\gamma}I_{2}(\xi_{1},z)\prod^{n-1}_{k=2}I_{1}(\xi_{k},z)I_{1}(\xi_{n},z),$$ (94) where $$\xi_{1}=-2\pi(\gamma-1)+i(s_{1}+q),\quad\xi_{n}=2\pi-(s_{n-1}-q)i,$$ (95) $$\xi_{k}=2\pi+(s_{k}-s_{k-1}+q)i,\quad 2\leq k\leq n-1,$$ (96) and $$\displaystyle I_{1}(\xi,z)$$ $$\displaystyle=\int^{\infty}_{-\infty}d\omega\frac{e^{-\omega\xi}}{z-e^{-2\pi\omega}},\qquad I_{2}(\xi,z)=\int^{\infty}_{-\infty}d\omega\frac{e^{-\omega\xi}}{(z-e^{-2\pi\omega})^{2}}.$$ (97) For $I_{1}(\xi,z)$, by carefully picking up the contributions of the relevant poles we have $$I_{1}(\xi,\beta+i\epsilon)=\beta^{\left(\frac{\xi}{2\pi}-1\right)}\left(\frac{e^{-i\frac{\xi}{2}}}{2\sin\frac{\xi}{2}}\right),\quad I_{1}(\xi,\beta-i\epsilon)=\beta^{\left(\frac{\xi}{2\pi}-1\right)}\left(\frac{e^{i\frac{\xi}{2}}}{2\sin\frac{\xi}{2}}\right).$$ (98) One way to check these is to use $$\frac{1}{z+i\epsilon}-\frac{1}{z-i\epsilon}=-2\pi i\delta(z).$$ (99) Then $$\displaystyle{\rm Disc}\;I_{1}$$ $$\displaystyle=\lim_{\epsilon\rightarrow 0_{+}}\left[I_{1}(\xi,\beta+i\epsilon)-I_{1}(\xi,\beta-i\epsilon)\right]$$ $$\displaystyle=-2\pi i\int^{\infty}_{-\infty}d\omega e^{-\xi\omega}\;\delta(\beta-e^{-2\pi\omega})=-i\beta^{\left(\frac{\xi}{2\pi}-1\right)}.$$ (100) This is consistent with (98).
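Formula (98) can also be checked by direct numerical quadrature. For finite ${\rm Im}\,z$ the integral $I_{1}(\xi,z)$ is a standard Mellin-type integral, $\frac{1}{2\pi}\int_{0}^{\infty}\frac{u^{p-1}}{z-u}du=-\frac{(-z)^{p-1}}{2\sin\pi p}$ with $p=\xi/2\pi$ (principal branch), which reduces to (98) as $z\rightarrow\beta\pm i0$. The sketch below, with arbitrary sample values of $\xi$ and $z$, compares this closed form against a trapezoidal evaluation of the defining $\omega$-integral:

```python
import numpy as np

# I1(xi, z) = \int dw e^{-w*xi} / (z - e^{-2*pi*w}) converges for 0 < xi < 2*pi
# and z off the positive real axis.
xi = 2.0
z = 0.7 + 0.3j                       # beta = 0.7, regulated by a finite "epsilon" = 0.3
p = xi / (2 * np.pi)

w = np.linspace(-25.0, 25.0, 400001)  # both tails decay exponentially
f = np.exp(-w * xi) / (z - np.exp(-2 * np.pi * w))
I1_num = np.sum((f[1:] + f[:-1]) / 2) * (w[1] - w[0])   # trapezoidal rule

# Closed form via the Mellin integral, principal branch of the complex power.
I1_exact = -(-z)**(p - 1) / (2 * np.sin(xi / 2))
print(I1_num, I1_exact)
assert abs(I1_num - I1_exact) < 1e-8
```

Sending ${\rm Im}\,z\rightarrow 0^{\pm}$ in the closed form gives $\arg(-z)\rightarrow\mp\pi$ and hence the two phases $e^{\mp i\xi/2}$ of (98).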
We can evaluate $I_{2}(\xi,z)$ just by taking the derivative of $I_{1}(\xi,z)$ with respect to $\beta$, $$I_{2}(\xi,\beta+i\epsilon)=-\left(\frac{\xi}{2\pi}-1\right)\beta^{\left(\frac{\xi}{2\pi}-2\right)}\left(\frac{e^{-i\frac{\xi}{2}}}{2\sin\frac{\xi}{2}}\right),\quad I_{2}(\xi,\beta-i\epsilon)=-\left(\frac{\xi}{2\pi}-1\right)\beta^{\left(\frac{\xi}{2\pi}-2\right)}\left(\frac{e^{i\frac{\xi}{2}}}{2\sin\frac{\xi}{2}}\right).$$ (101) Combining these, we obtain the relevant expressions of $J(z)$, $$J(\beta+i\epsilon)=-\beta^{\left(\gamma+\sum^{n}_{k=1}\frac{\xi_{k}}{2\pi}-(n+1)\right)}\frac{\left(\frac{\xi_{1}}{2\pi}-1\right)}{\prod^{n}_{k=1}2\sin\frac{\xi_{k}}{2}}e^{-\frac{i}{2}\sum^{n}_{k=1}\xi_{k}},$$ (102) and $$J(\beta-i\epsilon)=-\beta^{\left(\gamma+\sum^{n}_{k=1}\frac{\xi_{k}}{2\pi}-(n+1)\right)}\frac{\left(\frac{\xi_{1}}{2\pi}-1\right)}{\prod^{n}_{k=1}2\sin\frac{\xi_{k}}{2}}e^{\frac{i}{2}\sum^{n}_{k=1}\xi_{k}}.$$ (103) Since $$\gamma+\sum^{n}_{k=1}\frac{\xi_{k}}{2\pi}-(n+1)=-1+\frac{iqn}{2\pi},$$ (104) the $\beta$ integral produces the delta function, $$\displaystyle\int^{\infty}_{0}\frac{d\beta}{2\pi i}\;\beta^{-1+\frac{iqn}{2\pi}}=\frac{2\pi}{ni}\delta(q).$$ (105) By picking up the discontinuity across the real line, we get $$\displaystyle\delta(q)\;\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})$$ $$\displaystyle=\frac{n}{(2\pi)^{n}}\int^{\infty}_{0}\frac{d\beta}{2\pi i}\left(J(\beta-i\epsilon)-J(\beta+i\epsilon)\right),\quad\epsilon\rightarrow 0_{+},$$ $$\displaystyle=-\frac{n}{(2\pi)^{n}}\left(\frac{2\pi}{ni}\delta(q)\right)\frac{\left(\frac{\xi_{1}}{2\pi}-1\right)}{\prod^{n}_{k=1}2\sin\frac{\xi_{k}}{2}}\left(e^{\frac{i}{2}\sum^{n}_{k=1}\xi_{k}}-e^{-\frac{i}{2}\sum^{n}_{k=1}\xi_{k}}\right).$$ (106) Notice that $$\displaystyle e^{-\frac{i}{2}\sum^{n}_{k=1}\xi_{k}}-e^{+\frac{i}{2}\sum^{n}_{k=1}\xi_{k}}$$ $$\displaystyle=e^{i\pi(\gamma-n)}-e^{-i\pi(\gamma-n)}$$ (107) $$\displaystyle=2i(-1)^{n}\sin\pi\gamma,$$ (108) and
$$\frac{\left(\frac{\xi_{1}}{2\pi}-1\right)}{\prod^{n}_{k=1}\sin\frac{\xi_{k}}{2}}=\frac{-i^{n+1}}{2\pi}\frac{(s_{1}+2\pi i\gamma)}{\sinh\left(\frac{s_{1}+2\pi i\gamma}{2}\right)\prod^{n-1}_{k=2}\sinh\left(\frac{s_{k}-s_{k-1}}{2}\right)\sinh\left(\frac{s_{n-1}}{2}\right)}.$$ (109) From this we finally arrive at the expression of the kernel, $$\mathcal{K}^{(n)}_{\gamma}(s_{1},\cdots s_{n-1})=\frac{i}{8\pi^{2}}\left(\frac{-i}{4\pi}\right)^{n-2}\frac{(s_{1}+2\pi i\gamma)\sin\pi\gamma}{\sinh\left(\frac{s_{1}+2\pi i\gamma}{2}\right)\prod^{n-1}_{k=2}\sinh\left(\frac{s_{k}-s_{k-1}}{2}\right)\sinh\left(\frac{s_{n-1}}{2}\right)}.$$ (110) Appendix B Fixing the contour of the $n=2$ term In this appendix, we fix the correct contour $C_{s}$ of the $n=2$ real-time integral, $$\int_{C_{s}}ds\;\mathcal{K}^{(2)}_{\gamma}(s)\;e^{ias}=\frac{i\sin\pi\gamma}{8\pi^{2}}\int_{C_{s}}ds\;\frac{s+2\pi i\gamma}{\sinh\frac{s}{2}\sinh\frac{s+2\pi i\gamma}{2}}\;e^{ias},$$ (111) which reproduces the kernel in the frequency representation (16), $$\displaystyle\mathcal{K}^{(2)}_{\gamma}(\omega_{1},\omega_{2})$$ $$\displaystyle=e^{2\pi\gamma\omega_{1}}e^{-2\pi\omega_{1}-2\pi\omega_{2}}K(\omega_{1},\omega_{2})$$ $$\displaystyle=\frac{e^{2\pi\gamma\omega_{1}}e^{-2\pi\omega_{1}-2\pi\omega_{2}}}{(e^{-2\pi\omega_{1}}-e^{-2\pi\omega_{2}})^{2}}\left[(\gamma-1)e^{-2\pi\gamma\omega_{1}}+e^{-2\pi\gamma\omega_{2}}-\gamma e^{-2\pi(\gamma-1)\omega_{1}}e^{-2\pi\omega_{2}}\right].$$ (112) Using $a\equiv\omega_{1}-\omega_{2}$, we have $$\mathcal{K}^{(2)}_{\gamma}(a)=\frac{e^{2\pi a}}{(1-e^{2\pi a})^{2}}\left[(\gamma-1)+e^{2\pi a\gamma}-\gamma e^{2\pi a}\right].$$ (113) Let us do the integral (111). There are two types of poles, $$s^{n}_{1}=2\pi in,\quad s^{k}_{2}=2\pi i(k-\gamma).$$ (114) We choose a contour which contains $s^{n}_{1},\ n\geq 0$, and $s^{k}_{2},\ k\geq 1$.
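The residue sum carried out below can be previewed numerically: for $a>0$ both residue series converge geometrically, and their sum reproduces the closed form (113). A small sketch, at sample values of $\gamma$ and $a$, using the residues as computed in (116):

```python
import numpy as np

gamma, a = 0.4, 0.3
x = np.exp(-2 * np.pi * a)          # common ratio of the series, |x| < 1 for a > 0

N = 200                             # truncation; terms decay geometrically
n = np.arange(0, N)                 # first family of poles, n >= 0
k = np.arange(1, N)                 # second family of poles, k >= 1

# 2*pi*i times the sum of the residues of (116)
res_sum = 2j * np.pi * (
    np.sum(1j * (n + gamma) / (2 * np.pi) * x**n)
    + np.sum(-1j * k / (2 * np.pi) * np.exp(2 * np.pi * a * (gamma - k)))
)

# the closed-form frequency-space kernel (113)
y = np.exp(2 * np.pi * a)
K = y / (1 - y)**2 * ((gamma - 1) + np.exp(2 * np.pi * a * gamma) - gamma * y)

print(res_sum, K)
assert abs(res_sum - K) < 1e-10
```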
One way to manifest the contour prescription is to introduce an additional parameter $x>0$, $$\mathcal{K}^{(2)}_{\gamma}(s,x)=\left(\frac{i\sin\pi\gamma}{8\pi^{2}}\right)\frac{s+2\pi i\gamma}{\sinh\frac{s+x}{2}\sinh\frac{s+2\pi i\gamma}{2}},$$ (115) and finally send $x\rightarrow 0$ to get the desired result. We have $${\rm Res}[s^{1}_{n}]=\frac{i(n+\gamma)}{2\pi}e^{-2\pi an},\quad{\rm Res}[s^{2}_{k}]=-\frac{ik}{2\pi}e^{2\pi a(\gamma-k)}.$$ (116) By combining them, $$\displaystyle\int_{C_{s}}ds\;\mathcal{K}^{(2)}_{\gamma}(s)\;e^{ias}$$ $$\displaystyle=2\pi i\left(\sum_{n}{\rm Res}[s^{1}_{n}]+\sum_{k}{\rm Res}[s^{2}_{k}]\right)$$ $$\displaystyle=-\left[(1-e^{2\pi a\gamma})\sum_{k}ke^{-2\pi ak}+\gamma\sum_{n}e^{-2\pi an}\right]$$ $$\displaystyle=-\frac{e^{2\pi a}}{(e^{2\pi a}-1)^{2}}\left[(1-\gamma)+\gamma e^{2\pi a}-e^{2\pi a\gamma}\right]$$ $$\displaystyle=\mathcal{K}^{(2)}_{\gamma}(a).$$ (117) This is what we want. In the sum, we included the $n=0$ contribution. Appendix C Simplifying $T^{(2)}_{\gamma}(\delta\rho)$ In this section we simplify the $n=2$ term $T^{(2)}_{\gamma}(\delta\rho)$. In section 5.3 we saw that the contribution of a particular primary $\mathcal{O}$ to $T^{(2)}_{\gamma}(\delta\rho)$ can be written as $$T^{(2)}_{\gamma,\mathcal{O}}(\delta\rho)=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})I_{ab},$$ (118) where $$I_{ab}=\frac{i}{8\pi^{2}}\int^{\infty-i\epsilon}_{-\infty-i\epsilon}ds\frac{s+2\pi i\gamma}{\sinh\frac{s}{2}\sinh\frac{s+2\pi i\gamma}{2}}G_{ab}(s),\quad G_{ab}(s)=\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{\gamma}},$$ (119) and $C(\theta_{0},\partial_{a})$ is a differential operator summing up all descendants. This expression only holds when $\tau_{a}>\tau_{b}$.
This is because we started from the spectral representation, $$I_{ab}=\int d\omega_{1}d\omega_{2}\;\mathcal{K}^{\gamma}(a)\;e^{-2\pi\gamma\omega_{1}}\langle\omega_{1}|\mathcal{O}(\tau_{a})|\omega_{2}\rangle\langle\omega_{2}|\mathcal{O}(\tau_{b})|\omega_{1}\rangle,$$ (120) rewrote it in terms of the modular flow integral by $$\mathcal{K}^{\gamma}(a)=\int^{\infty-i\epsilon}_{-\infty-i\epsilon}ds\;\mathcal{K}^{\gamma}(s)e^{ias},\quad a=\omega_{1}-\omega_{2},$$ (121) and then undid the spectral decomposition of the two point function $G_{ab}(s)$, $$\int d\omega_{1}d\omega_{2}\;e^{-2\pi\gamma\omega_{1}+ias}\langle\omega_{1}|\mathcal{O}(\tau_{a})|\omega_{2}\rangle\langle\omega_{2}|\mathcal{O}(\tau_{b})|\omega_{1}\rangle=\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{\gamma}}.$$ (122) The spectral integral only converges when $\tau_{a}>\tau_{b}$. When $\tau_{b}>\tau_{a}$, we instead write $$I_{ab}=\int d\omega_{1}d\omega_{2}\;\left[\mathcal{K}^{\gamma}(a)e^{-2\pi\gamma a}\right]\;e^{-2\pi\gamma\omega_{2}}\langle\omega_{1}|\mathcal{O}(\tau_{a})|\omega_{2}\rangle\langle\omega_{2}|\mathcal{O}(\tau_{b})|\omega_{1}\rangle,$$ (123) $$\displaystyle\mathcal{K}^{\gamma}(a)e^{-2\pi\gamma a}$$ $$\displaystyle=\int^{\infty-i\epsilon}_{-\infty-i\epsilon}ds\;\mathcal{K}^{\gamma}(s)\;e^{ia(s+2\pi i\gamma)}$$ $$\displaystyle=\int^{\infty+2\pi i(\gamma-\epsilon)}_{-\infty+2\pi i(\gamma-\epsilon)}dt\;\mathcal{K}^{\gamma}(t-2\pi i\gamma)\;e^{iat}.$$ (124) Since $$\mathcal{K}^{\gamma}(t-2\pi i\gamma)=\frac{i\sin\pi\gamma}{8\pi^{2}}\frac{t}{\sinh\frac{t}{2}\sinh\frac{t-2\pi i\gamma}{2}}$$ (125) is regular on the strip $2\pi(\gamma-\epsilon)>{\rm Im}\,t>0$ when $0<\gamma<1$, we can deform the contour to ${\rm Im}\,t=\epsilon$, $$\mathcal{K}^{\gamma}(a)e^{-2\pi\gamma a}=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}\;dt\;\mathcal{K}^{\gamma}(t-2\pi i\gamma)\;e^{iat}.$$ (126) Therefore for $\tau_{b}>\tau_{a}$ we have
$$I_{ab}=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}ds\,\mathcal{K}^{\gamma}(s-2\pi i\gamma)\langle\mathcal{O}(\tau_{b})\mathcal{O}(\tau_{a}+is)\rangle_{\Sigma_{\gamma}},\quad\tau_{b}>\tau_{a}.$$ (127) We have a similar formula for $I_{ba}$, obtained just by flipping $\tau_{a}\leftrightarrow\tau_{b}$. Finally, we combine these expressions to get a simpler form of $T^{(2)}_{\gamma}(\delta\rho)$. The two point function in (119) is analytic in the strip region $-2\pi\gamma<{\rm Im}\,s<\tau_{ba}$. Since for $0<\gamma<1$ there is no pole coming from the kernel in the strip, we are allowed to deform the contour $s\rightarrow s-\pi i\gamma$. Then the integral for $\tau_{a}>\tau_{b}$ becomes $$I_{ab}=\frac{i\sin\pi\gamma}{8\pi^{2}}\int^{\infty-i\epsilon}_{-\infty-i\epsilon}ds\frac{s+\pi i\gamma}{\sinh\frac{s-\pi i\gamma}{2}\sinh\frac{s+\pi i\gamma}{2}}G_{ab}(s-\pi i\gamma),\quad\tau_{a}>\tau_{b}.$$ (128) Now we do a similar thing for $I_{ba}$, $$I_{ba}=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}ds\,\mathcal{K}^{\gamma}(s-2\pi i\gamma)\langle\mathcal{O}(\tau_{a})\mathcal{O}(\tau_{b}+is)\rangle_{\Sigma_{\gamma}},\quad\tau_{a}>\tau_{b}.$$ (129) By shifting the contour $s\rightarrow s+\pi i\gamma$, and then flipping the sign $s\rightarrow-s$, we get $$I_{ba}=\frac{i\sin\pi\gamma}{8\pi^{2}}\int^{\infty+i\epsilon}_{-\infty+i\epsilon}ds\frac{-s+\pi i\gamma}{\sinh\frac{s-\pi i\gamma}{2}\sinh\frac{s+\pi i\gamma}{2}}G_{ab}(s-\pi i\gamma).$$ (130) In the expressions (128), (130), we can take $\epsilon\rightarrow 0$.
Finally, we obtain $$I_{ab}+I_{ba}=\frac{\gamma\sin\pi\gamma}{4\pi}\int^{\infty}_{-\infty}\frac{ds}{\sinh\frac{s-\pi i\gamma}{2}\sinh\frac{s+\pi i\gamma}{2}}G_{ab}(s-\pi i\gamma).$$ (131) $T^{(2)}_{\gamma,\mathcal{O}}(\delta\rho)$ is obtained by applying the differential operators, $$T^{(2)}_{\gamma,\mathcal{O}}(\delta\rho)=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})(I_{ab}+I_{ba}).$$ (132) Notice that in the $\gamma\rightarrow 1$ limit, its derivative recovers the second order term $S^{(2)}(\delta\rho)$ of the entanglement entropy, $$S_{\mathcal{O}}^{(2)}(\delta\rho)=C(\theta_{0},\partial_{a})C(\theta_{0},\partial_{b})\int^{\infty}_{-\infty}ds\frac{-1}{4\sinh^{2}\left(\frac{s-i\epsilon}{2}\right)}\langle\mathcal{O}(is+\tau_{a})\mathcal{O}(\tau_{b})\rangle_{\Sigma_{1}}.$$ (133) Appendix D Direct Fourier transformation Here we would like to show directly that $$\displaystyle\mathcal{K}_{n}^{\gamma}(s)$$ $$\displaystyle=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}\frac{da}{2\pi}\;\mathcal{K}_{n}^{\gamma}(a)e^{-ias}$$ $$\displaystyle=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}\frac{da}{2\pi}\;\frac{e^{-ias}}{4\sinh^{2}\pi a}\left[(\gamma-1)-\gamma e^{2\pi a}+e^{2\pi\gamma a}\right],$$ (134) where we used $e^{2\pi a}/(1-e^{2\pi a})^{2}=1/(4\sinh^{2}\pi a)$. The first piece is $$I_{1}=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}\frac{da}{2\pi}\frac{e^{-ias}}{4\sinh^{2}\pi a}=-\frac{s}{4\pi^{2}}\left(\frac{1}{1-e^{-s}}\right).$$ (135) The second piece can be obtained by the shift $s\rightarrow s+2\pi i$, therefore $$I_{2}=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}\frac{da}{2\pi}\frac{e^{-ia(s+2\pi i)}}{4\sinh^{2}\pi a}=-\frac{(s+2\pi i)}{4\pi^{2}}\left(\frac{1}{1-e^{-s}}\right).$$ (136) Similarly, $$I_{3}=\int^{\infty+i\epsilon}_{-\infty+i\epsilon}\frac{da}{2\pi}\frac{e^{-ia(s+2\pi i\gamma)}}{4\sinh^{2}\pi a}=-\frac{(s+2\pi i\gamma)}{4\pi^{2}}\left(\frac{1}{1-e^{-(s+2\pi i\gamma)}}\right).$$ (137) Then the total integral is $$(\gamma-1)I_{1}-\gamma I_{2}+I_{3}=\frac{i}{8\pi^{2}}\left[\frac{(s+2\pi i
\gamma)\sin\pi\gamma}{\sinh\frac{s}{2}\sinh\frac{s+2\pi i\gamma}{2}}\right]$$ (138) therefore we recover the first nontrivial part. Appendix E Details of the holographic rewriting In section 6.2.2, we used the result $$\displaystyle Y_{\gamma}(\delta\rho)$$ $$\displaystyle=\int dX_{B}\;\omega_{\phi}\left(K_{E}(X_{B}|\tau_{ba}),\;\int^{\infty}_{-\infty}ds\,\mathcal{Y}(s-i\epsilon)K_{R}(X_{B}|s)\right)$$ $$\displaystyle=i\int dX_{B}\;\omega_{\phi}\left(K_{E}(X_{B}|\tau_{ba}),\;K_{E}(X_{B}|-2\pi\gamma)-K_{E}(X_{B}|0)\right)$$ (139) with $$\mathcal{Y}(s-i\epsilon)=\frac{-(\sin\pi\gamma)/4\pi}{\sinh\left(\frac{s-2i\epsilon}{2}\right)\sinh\left(\frac{s-2\pi i\gamma}{2}\right)}.$$ (140) In this appendix, we prove this. The derivation is very similar to the one in [10]. The retarded bulk-to-boundary propagator is given by $$K_{R}(X_{B}|s)=i\theta(s_{B}-s)\lim_{\epsilon\rightarrow 0}\left[K_{E}(X_{B}|is-\epsilon)-K_{E}(X_{B}|is+\epsilon)\right].$$ (141) In particular, as a function of $s$, the retarded propagator is nonvanishing only in the window $-\infty<s<s_{*}$. The value of $s_{*}$ is fixed by demanding that the boundary point is null separated from the bulk point $X_{B}$. Then $$\displaystyle\int^{\infty}_{-\infty}ds\,\mathcal{Y}(s-i\epsilon)K_{R}(X_{B}|s)$$ $$\displaystyle=i\int^{s_{*}}_{-\infty}ds\,\mathcal{Y}(s-i\epsilon)\left[K_{E}(X_{B}|is-\epsilon)-K_{E}(X_{B}|is+\epsilon)\right]$$ $$\displaystyle=\int_{C}ds\,\mathcal{Y}(s-i\epsilon)K_{E}(X_{B}|s),$$ (142) where $C$ is the closed contour starting from $-\infty+i\epsilon$ to $s_{*}+i\epsilon$, then to $s_{*}+2(\pi-\epsilon)i$, and ending at $-\infty+2(\pi-\epsilon)i$. We also used the KMS condition $K_{E}(X_{B}|is+2\pi)=K_{E}(X_{B}|is)$ and $\mathcal{Y}(s+2\pi i)=\mathcal{Y}(s)$ to fix the contour. By picking up the poles of $\mathcal{Y}(s-i\epsilon)$ at $s=i\epsilon$ and $s=2\pi i\gamma$, we obtain the result. References [1] S. Ryu and T. Takayanagi, “Holographic derivation of entanglement entropy from AdS/CFT,” Phys.
Rev. Lett. 96 (2006) 181602, arXiv:hep-th/0603001 [hep-th]. [2] S. Ryu and T. Takayanagi, “Aspects of Holographic Entanglement Entropy,” JHEP 08 (2006) 045, arXiv:hep-th/0605073 [hep-th]. [3] V. E. Hubeny, M. Rangamani, and T. Takayanagi, “A Covariant holographic entanglement entropy proposal,” JHEP 07 (2007) 062, arXiv:0705.0016 [hep-th]. [4] J. Bhattacharya, M. Nozaki, T. Takayanagi, and T. Ugajin, “Thermodynamical Property of Entanglement Entropy for Excited States,” Phys. Rev. Lett. 110 no. 9, (2013) 091602, arXiv:1212.1164 [hep-th]. [5] N. Lashkari, M. B. McDermott, and M. Van Raamsdonk, “Gravitational dynamics from entanglement ’thermodynamics’,” JHEP 04 (2014) 195, arXiv:1308.3716 [hep-th]. [6] T. Faulkner, M. Guica, T. Hartman, R. C. Myers, and M. Van Raamsdonk, “Gravitation from Entanglement in Holographic CFTs,” JHEP 03 (2014) 051, arXiv:1312.7856 [hep-th]. [7] H. Casini, M. Huerta, and R. C. Myers, “Towards a derivation of holographic entanglement entropy,” JHEP 05 (2011) 036, arXiv:1102.0440 [hep-th]. [8] T. Faulkner, “Bulk Emergence and the RG Flow of Entanglement Entropy,” JHEP 05 (2015) 033, arXiv:1412.5648 [hep-th]. [9] T. Faulkner, R. G. Leigh, and O. Parrikar, “Shape Dependence of Entanglement Entropy in Conformal Field Theories,” JHEP 04 (2016) 088, arXiv:1511.05179 [hep-th]. [10] T. Faulkner, F. M. Haehl, E. Hijano, O. Parrikar, C. Rabideau, and M. Van Raamsdonk, “Nonlinear Gravity from Entanglement in Conformal Field Theories,” JHEP 08 (2017) 057, arXiv:1705.03026 [hep-th]. [11] G. Sárosi and T. Ugajin, “Modular Hamiltonians of excited states, OPE blocks and emergent bulk fields,” JHEP 01 (2018) 012, arXiv:1705.01486 [hep-th]. [12] S. Hollands and R. M. Wald, “Stability of Black Holes and Black Branes,” Commun. Math. Phys. 321 (2013) 629–680, arXiv:1201.0463 [gr-qc]. [13] N. Lashkari and M. Van Raamsdonk, “Canonical Energy is Quantum Fisher Information,” JHEP 04 (2016) 153, arXiv:1508.00897 [hep-th]. [14] H. Casini and M. 
Huerta, “Entanglement entropy in free quantum field theory,” J. Phys. A42 (2009) 504007, arXiv:0905.2562 [hep-th]. [15] E. Witten, “ Notes on Some Entanglement Properties of Quantum Field Theory,” Rev. Mod. Phys. 90 no. 4, (2018) 045003, arXiv:1803.04993 [hep-th]. [16] D. Petz, “Quasi-entropies for states of a von neumann algebra,” Publications of the Research Institute for Mathematical Sciences 21 no. 4, (1985) 787–800. [17] N. Lashkari, “Relative Entropies in Conformal Field Theory,” Phys. Rev. Lett. 113 (2014) 051602, arXiv:1404.3216 [hep-th]. [18] K. P. Seshadreesan, L. Lami, and M. M. Wilde, “Rényi relative entropies of quantum Gaussian states,” J. Math. Phys. 59 no. 7, (2018) 072204, arXiv:1706.09885 [quant-ph]. [19] H. Casini, R. Medina, I. Salazar Landea, and G. Torroba, “Renyi relative entropies and renormalization group flows,” JHEP 09 (2018) 166, arXiv:1807.03305 [hep-th]. [20] N. Lashkari, “Constraining Quantum Fields using Modular Theory,” arXiv:1810.09306 [hep-th]. [21] A. Bernamonti, F. Galli, R. C. Myers, and J. Oppenheim, “Holographic second laws of black hole thermodynamics,” JHEP 07 (2018) 111, arXiv:1803.03633 [hep-th]. [22] A. May and E. Hijano, “The holographic entropy zoo,” JHEP 10 (2018) 036, arXiv:1806.06077 [hep-th]. [23] D. D. Blanco, H. Casini, L.-Y. Hung, and R. C. Myers, “Relative Entropy and Holography,” JHEP 08 (2013) 060, arXiv:1305.3182 [hep-th]. [24] G. Sárosi and T. Ugajin, “Relative entropy of excited states in two dimensional conformal field theories,” JHEP 07 (2016) 114, arXiv:1603.03057 [hep-th]. [25] G. Sárosi and T. Ugajin, “Relative entropy of excited states in conformal field theories of arbitrary dimensions,” JHEP 02 (2017) 060, arXiv:1611.02959 [hep-th]. [26] T. Ugajin, “Mutual information of excited states and relative entropy of two disjoint subsystems in CFT,” JHEP 10 (2017) 184, arXiv:1611.03163 [hep-th]. [27] Y. O. Nakagawa and T. 
Ugajin, “Numerical calculations on the relative entanglement entropy in critical spin chains,” J. Stat. Mech. 1709 no. 9, (2017) 093104, arXiv:1705.07899 [cond-mat.stat-mech]. [28] T. Takayanagi, T. Ugajin, and K. Umemoto, “Towards an Entanglement Measure for Mixed States in CFTs Based on Relative Entropy,” arXiv:1807.09448 [hep-th]. [29] N. Lashkari, H. Liu, and S. Rajagopal, “Modular Flow of Excited States,” arXiv:1811.05052 [hep-th]. [30] A. Belin, A. Lewkowycz, and G. Sárosi, “The boundary dual of the bulk symplectic form,” arXiv:1806.10144 [hep-th]. [31] A. Belin, A. Lewkowycz, and G. Sárosi, “Complexity and the bulk volume, a new York time story,” arXiv:1811.03097 [hep-th]. [32] A. Lewkowycz and J. Maldacena, “Generalized gravitational entropy,” JHEP 08 (2013) 090, arXiv:1304.4926 [hep-th]. [33] T. Faulkner, M. Li, and H. Wang, “A modular toolkit for bulk reconstruction,” arXiv:1806.10560 [hep-th]. [34] T. Faulkner and A. Lewkowycz, “Bulk locality from modular flow,” JHEP 07 (2017) 151, arXiv:1704.05464 [hep-th]. [35] Y. Chen, X. Dong, A. Lewkowycz, and X.-L. Qi, “Modular Flow as a Disentangler,” arXiv:1806.09622 [hep-th].
Unified treatment of fractional integral inequalities via linear functionals M. Bombardelli Department of Mathematics, University of Zagreb Zagreb, Croatia [email protected] ,  L. Nikolova Department of Mathematics and Informatics, Sofia University Sofia, Bulgaria [email protected]  and  S. Varošanec Department of Mathematics, University of Zagreb Zagreb, Croatia [email protected] Abstract. In the paper we prove several inequalities involving two isotonic linear functionals. We consider inequalities for functions with variable bounds, for Lipschitz and Hölder type functions, etc. These results give us an elegant method for obtaining a number of inequalities for various kinds of fractional integral operators, such as the Riemann-Liouville fractional integral operator, the Hadamard fractional integral operator, the fractional hypergeometric integral, and the corresponding q-integrals. Key words and phrases: the Chebyshev inequality, the Chebyshev difference, fractional integral operator, isotonic linear functional, Lipschitz function 1991 Mathematics Subject Classification: 26D10; 26A33 1. Introduction Recently several papers involving inequalities for fractional integral operators have been published; see [1, 2, 3, 4, 5, 6, 7] and references therein. A certain similarity of those inequalities shows that those results have a common origin. In this paper we give a unified treatment of several known inequalities for fractional integral operators via the theory of isotonic linear functionals. In fact, we prove general inequalities involving isotonic linear functionals from which some interesting results follow. The paper is organized in the following way. The rest of this section contains definitions and some examples of isotonic linear functionals connected with fractional integration and integration on time scales. The Chebyshev inequalities for one and two isotonic functionals are given. Inequalities for Lipschitz functions are given in the second section.
The third section is devoted to new inequalities involving two isotonic functionals and functions with variable upper and lower bounds. The fourth section is devoted to results involving more than two functions. Applications, or references where applications in the theory of fractional operators and calculus on time scales can be found, are also given. Isotonic linear functionals Definition 1. (Isotonic linear functional) Let $E$ be a non-empty set and $L$ be a class of real-valued functions on $E$ having the properties: L1. If $f,g\in L$, then $(af+bg)\in L$ for all $a,b\in{\bf R}$; L2. The function ${\bf 1}$ belongs to $L$. (${\bf 1}(t)=1$ for $t\in E$). A functional $A:L\rightarrow{\bf R}$ is called an isotonic linear functional if A1. $A(af+bg)=aA(f)+bA(g)$ for $f,g\in L$, $a,b\in{\bf R}$; A2. $f\in L$, $f(t)\geq 0$ on $E$ implies $A(f)\geq 0$. There exist many interesting examples of linear functionals which play a role in different parts of mathematics. In the following example we describe the most frequently used functionals, the discrete and the integral ones, and we give several functionals which appear in the theory of fractional calculus and calculus on time scales. Example 1. (i) Discrete functional. If $E=\{1,2,\ldots,n\}$ and $f:E\rightarrow{\bf R}$, then $A(f)=\sum_{i=1}^{n}f(i)$ is an isotonic linear functional. (ii) Integral functional. If $E=[a,b]\subset{\bf R}$ and $L=L(a,b)$, then $A(f)=\int_{a}^{b}f(t)dt$ is an isotonic linear functional. If $\displaystyle A_{1}(f)=\frac{1}{b-a}A(f)$, then $A_{1}$ is a normalized isotonic linear functional. (iii) Fractional hypergeometric operator.
If $t>0,\ \alpha>\max\{0,-\beta-\mu\},\mu>-1,\beta-1<\eta<0$, then $$A(f)=I^{\alpha,\beta,\eta,\mu}_{t}\{f(t)\}$$ is an isotonic linear functional ([1]), where $I^{\alpha,\beta,\eta,\mu}_{t}\{f(t)\}$ is the fractional hypergeometric operator defined as $$I^{\alpha,\beta,\eta,\mu}_{t}\{f(t)\}=\frac{t^{-\alpha-\beta-2\mu}}{\Gamma(\alpha)}\int_{0}^{t}\sigma^{\mu}(t-\sigma)^{\alpha-1}\,{}_{2}F_{1}\left(\alpha+\beta+\mu,-\eta,\alpha;1-\frac{\sigma}{t}\right)f(\sigma)\,d\sigma,$$ where ${}_{2}F_{1}(a,b,c,t)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}}\frac{t^{n}}{n!}$ is the Gaussian hypergeometric function and $(a)_{n}$ is the Pochhammer symbol: $(a)_{n}=a(a+1)\ldots(a+n-1),\ \ (a)_{0}=1$. • Putting $\mu=0$, the fractional hypergeometric operator reduces to the Saigo fractional integral operator $I^{\alpha,\beta,\eta}\{f(t)\}$. • The Erdélyi-Kober fractional integral operator $I^{\alpha,\eta}\{f(t)\}$ is a particular case of $I^{\alpha,\beta,\eta,\mu}_{t}\{f(t)\}$ when $\beta=\mu=0$. • One of the earliest defined and most investigated fractional integral operators is the so-called Riemann-Liouville operator defined as $$J^{\alpha}f(t)=I^{\alpha,-\alpha,0,0}_{t}\{f(t)\}=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\sigma)^{\alpha-1}f(\sigma)\,d\sigma,\ \ \alpha>0,$$ (1.1) and it is a particular case of the fractional hypergeometric operator for $\beta=-\alpha$, $\eta=\mu=0$. (iv) q-analogues The above-mentioned operators have so-called $q$-analogues. We describe a $q$-analogue of Saigo's fractional integral, [12]. Let $\Re(\alpha)>0$, $\beta,\eta\in{\bf C}$, $0<q<1$.
A $q$-analogue of Saigo's fractional integral $I_{q}^{\alpha,\beta,\eta}$ is given for $|\tau/t|<1$ by $$I_{q}^{\alpha,\beta,\eta}\{f(t)\}=\frac{t^{-\beta-1}}{\Gamma_{q}(\alpha)}\int_{0}^{t}\left(q\frac{\tau}{t};q\right)_{\alpha-1}\times$$ $$\times\sum_{m=0}^{\infty}\frac{(q^{\alpha+\beta};q)_{m}(q^{-\eta};q)_{m}}{(q^{\alpha};q)_{m}(q;q)_{m}}\cdot q^{(\eta-\beta)m}(-1)^{m}q^{-m(m-1)/2}\left(\frac{\tau}{t}-1\right)^{m}_{q}f(\tau)\,d_{q}\tau,$$ where $$(a;q)_{\alpha}=\frac{\prod_{k=0}^{\infty}(1-aq^{k})}{\prod_{k=0}^{\infty}(1-aq^{\alpha+k})}\ \ \textrm{and}\ \ (t-a)_{q}^{n}=t^{n}\left(\frac{a}{t};q\right)_{n}.$$ If $\alpha>0$, $\beta,\eta\in{\bf R}$ with $\alpha+\beta>0$ and $\eta<0$, then $I_{q}^{\alpha,\beta,\eta}$ is isotonic, [13]. (v) The Hadamard fractional integral The Hadamard fractional integral of order $\alpha>0$ of a function $f$ is defined as $${}_{H}J^{\alpha}f(x)=\frac{1}{\Gamma(\alpha)}\int_{1}^{x}\left(\log\frac{x}{y}\right)^{\alpha-1}\frac{f(y)\,dy}{y},\ 1<x.$$ For further reading about fractional calculus we recommend, for example, [15]. (vi) In 1988 S. Hilger introduced the calculus on time scales, a powerful tool for the unified treatment of differential and difference equations. Among the different kinds of integrals the most investigated is the $\Delta$-integral; see, for example, [17]. The $\Delta$-integral was followed by the $\nabla$-integral, the $\diamondsuit_{\alpha}$-integral, the $\alpha,\beta$-symmetric integral, etc. All of them are isotonic linear functionals. Chebyshev-type inequalities for isotonic linear functionals After that short overview of various kinds of isotonic linear functionals, let us say a few words about some inequalities of Chebyshev type involving isotonic linear functionals. We say that functions $f$ and $g$ on $E$ are similarly ordered (or synchronous) if for each $x,y\in E$ $$(f(x)-f(y))(g(x)-g(y))\geq 0.$$ If the reversed inequality holds, then we say that $f$ and $g$ are oppositely ordered or asynchronous.
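Before moving on to the inequalities, a quick numerical sanity check of the operators introduced above may be useful. The following Python snippet is our own illustrative sketch, not part of the original text: it approximates the Riemann-Liouville operator (1.1) by midpoint quadrature (which avoids the integrable singularity at $\sigma=t$) and compares the result with the classical closed form $J^{\alpha}\sigma^{p}=\frac{\Gamma(p+1)}{\Gamma(p+\alpha+1)}t^{p+\alpha}$; isotonicity (property A2) is visible directly from the non-negative kernel.

```python
import math

def rl_integral(f, t, alpha, n=200_000):
    """Midpoint-rule approximation of the Riemann-Liouville operator
    J^alpha f(t) = (1/Gamma(alpha)) * int_0^t (t-s)^(alpha-1) f(s) ds.
    Midpoints never touch the integrable singularity at s = t."""
    h = t / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += (t - s) ** (alpha - 1) * f(s)
    return total * h / math.gamma(alpha)

# Closed form: J^alpha s^p = Gamma(p+1)/Gamma(p+alpha+1) * t^(p+alpha)
alpha, t, p = 0.5, 1.0, 1.0
exact = math.gamma(p + 1) / math.gamma(p + alpha + 1) * t ** (p + alpha)
approx = rl_integral(lambda s: s ** p, t, alpha)
assert abs(approx - exact) < 1e-2

# Isotonicity (A2): a non-negative integrand gives a non-negative value,
# since the kernel (t-s)^(alpha-1)/Gamma(alpha) is non-negative on (0, t).
assert rl_integral(lambda s: s ** 2, t, alpha) >= 0.0
```

The same pattern (a positive kernel integrated against $f$) is what makes each of the operators in Example 1 an isotonic linear functional.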
The most famous inequality which involves similarly or oppositely ordered functions is the Chebyshev inequality for integrals. It states that if $p,f$ and $g$ are integrable real functions on $[a,b]\subset{\bf R}$ and if $f$ and $g$ are similarly ordered, then $$\int_{a}^{b}p(x)dx\int_{a}^{b}p(x)f(x)g(x)dx\geq\int_{a}^{b}p(x)f(x)dx\int_{a}^{b}p(x)g(x)dx.$$ (1.2) If $f$ and $g$ are oppositely ordered, then the reverse of the inequality in (1.2) is valid. During the last century a great number of results about the Chebyshev inequality appeared. Here, we only give the most recent result involving two isotonic linear functionals, [10]. Theorem 1 (The Chebyshev inequality for two functionals). Let $A$ and $B$ be two isotonic linear functionals on $L$ and let $p,q\in L$ be non-negative functions. Let $f,g$ be two functions on $E$ such that $pf$, $pg$, $qf$, $qg$, $pfg$, $qfg\in L$. If $f$ and $g$ are similarly ordered functions, then $$A(pfg)B(q)+A(p)B(qfg)\geq A(pf)B(qg)+A(pg)B(qf).$$ (1.3) If $f$ and $g$ are oppositely ordered functions, then the reverse inequality in (1.3) holds. Putting $A=B$, $p=q$ in (1.3) and dividing by $2$, we get that for similarly ordered functions $f$ and $g$ such that $pf,pg,pfg\in L$ the following holds: $$A(p)A(pfg)\geq A(pf)A(pg).$$ If $f$ and $g$ are oppositely ordered functions, then the reverse inequality holds. This is, in fact, the Chebyshev inequality for one isotonic linear functional. One of the most investigated questions related to the Chebyshev integral inequality is that of finding bounds for the so-called Chebyshev difference, defined as the difference between the two sides of inequality (1.2). Results related to that question are called Grüss-type inequalities. In [10], Grüss-type inequalities are given for the Chebyshev difference $T(A,B,p,q,f,g)$ which arises from inequality (1.3), where $$T(A,B,p,q,f,g)=B(q)A(pfg)+A(p)B(qfg)-A(pf)B(qg)-A(pg)B(qf).$$ 2.
Inequalities for $M-g-$Lipschitz and Hölder-type functions In this section $M-g-$Lipschitz functions are considered. We say that $f$ is an $M-g-$Lipschitz function if $$|f(x)-f(y)|\leq M|g(x)-g(y)|$$ for all $x,y\in E$. If $g=id$, then $f$ is simply called an $M-$Lipschitz function. In the following theorem we consider two functions $f$ and $g$ which are $h_{1}-$ and $h_{2}-$Lipschitz functions with constants $M_{1}$ and $M_{2}$, respectively. Theorem 2. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $M_{1}$, $M_{2}$ be real numbers and let $f$, $g$, $h_{1}$, $h_{2}$ be functions such that $f$ is $M_{1}-h_{1}-$Lipschitz and $g$ is $M_{2}-h_{2}-$Lipschitz, i.e. for all $x,y\in E$ $$|f(x)-f(y)|\leq M_{1}|h_{1}(x)-h_{1}(y)|,$$ (2.1) $$|g(x)-g(y)|\leq M_{2}|h_{2}(x)-h_{2}(y)|.$$ (2.2) If all the terms in the below inequality exist and $h_{1}$ and $h_{2}$ are either similarly ordered or oppositely ordered, then $$|T(A,B,p,q,f,g)|\leq M_{1}M_{2}T(A,B,p,q,h_{1},h_{2}).$$ (2.3) Proof. Let $h_{1}$ and $h_{2}$ be similarly ordered.
Multiplying the inequalities (2.1) and (2.2) we get $$|(f(x)-f(y))(g(x)-g(y))|\leq M_{1}M_{2}(h_{1}(x)-h_{1}(y))(h_{2}(x)-h_{2}(y)).$$ It means that $$(f(x)-f(y))(g(x)-g(y))\leq M_{1}M_{2}(h_{1}(x)-h_{1}(y))(h_{2}(x)-h_{2}(y))$$ and $$(f(x)-f(y))(g(x)-g(y))\geq-M_{1}M_{2}(h_{1}(x)-h_{1}(y))(h_{2}(x)-h_{2}(y)).$$ Since $$(f(x)-f(y))(g(x)-g(y))=f(x)g(x)+f(y)g(y)-f(x)g(y)-f(y)g(x),$$ multiplying with $p(x)q(y)$ and acting on the first inequality by the functional $A$ with respect to $x$ and then by the functional $B$ with respect to $y$, we get $$A(pfg)B(q)+A(p)B(qfg)-A(pf)B(qg)-A(pg)B(qf)$$ $$\leq M_{1}M_{2}\left(A(ph_{1}h_{2})B(q)+A(p)B(qh_{1}h_{2})-A(ph_{1})B(qh_{2})-A(ph_{2})B(qh_{1})\right),$$ i.e. $$T(A,B,p,q,f,g)\leq M_{1}M_{2}T(A,B,p,q,h_{1},h_{2}).$$ Similarly, from the second inequality we obtain $$T(A,B,p,q,f,g)\geq-M_{1}M_{2}T(A,B,p,q,h_{1},h_{2})$$ and we get the claimed result. The case when $h_{1}$ and $h_{2}$ are oppositely ordered is proved similarly. ∎ Theorem 3. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $M$ be a real number and let $f$, $g$ be functions such that $$|f(x)-f(y)|\leq M|g(x)-g(y)|,\qquad\forall x,y.$$ If all the terms in the below inequality exist, then $$\big|A(pfg)B(q)+A(p)B(qfg)-A(pf)B(qg)-A(pg)B(qf)\big|\leq M\left(A(pg^{2})B(q)-2A(pg)B(qg)+A(p)B(qg^{2})\right).$$ Proof. Since $f$ is an $M-g-$Lipschitz function and $g$ is $1-g-$Lipschitz, the desired inequality is a simple consequence of Theorem 2. ∎ In the following table we give a list of papers where particular cases of the theorems from this section can be found. Let us say a few words about how to read the table. In the first column we write a list of isotonic linear functionals.
In the corresponding row we give a reference where applications of our Theorem 2 and Theorem 3 occur. For example, Theorem 2 for two Saigo operators, but with functions $h_{1}$ and $h_{2}$ equal to the identity $id$, can be found in [21] as Theorem 2.20, etc. As we can see, our results enable us to give analogous results for other cases of linear functionals, such as for fractional hypergeometric operators, for the Hadamard operators, for other kinds of integrals on time scales, etc. Also, we improve existing results by using two different, more general functions $h_{1}$ and $h_{2}$ instead of the identity function $id$. For example, here we give a result for the Hadamard integral operators. Let $p=q=1$, $h_{1}=h_{2}=id$, $\alpha,\beta>0$. If the functions $f,g$ are $M_{1}-$, $M_{2}-$Lipschitz respectively, then the following inequality holds: $$\frac{\log^{\beta}t}{\Gamma(\beta+1)}\,{}_{H}J^{\alpha}(fg(t))+\frac{\log^{\alpha}t}{\Gamma(\alpha+1)}\,{}_{H}J^{\beta}(fg(t))-{}_{H}J^{\alpha}(f(t))\,{}_{H}J^{\beta}(g(t))-{}_{H}J^{\alpha}(g(t))\,{}_{H}J^{\beta}(f(t))$$ $$\leq M_{1}M_{2}\frac{t^{2}}{\Gamma(\alpha)\Gamma(\beta)}\left[\frac{\log^{\alpha}t\cdot\gamma(\beta,2\log t)}{2^{\beta}\alpha}+\frac{\log^{\beta}t\cdot\gamma(\alpha,2\log t)}{2^{\alpha}\beta}-2\gamma(\alpha,\log t)\gamma(\beta,\log t)\right],$$ where $\gamma(s,x)$ is the incomplete Gamma function, i.e. $\gamma(s,x)=\int_{0}^{x}t^{s-1}e^{-t}dt$. The procedure in which we first apply the functional $A$ to some function $F(x,y)$ with respect to the variable $x$ and then apply the functional $B$ with respect to the variable $y$ occurs very often in this paper.
So, we use the following notation: if $F(x,y)$ is a function, then the number which appears after the above-described procedure is written as $$B_{y}A_{x}(F(x,y))\ \ \mbox{or}\ \ B_{y}A_{x}(F).$$ It is almost needless to say that if $A$ and $B$ are isotonic linear functionals, then the functional $$F\mapsto B_{y}A_{x}(F)$$ is also an isotonic linear functional. Theorem 4. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. If $f$ is of $r$-Hölder-type and $g$ is of $s$-Hölder-type, i.e. $$|f(x)-f(y)|\leq H_{1}|x-y|^{r},\ \ \ |g(x)-g(y)|\leq H_{2}|x-y|^{s}$$ for all $x,y\in E$, where $H_{1},H_{2}>0$, $r,s\in(0,1]$ are fixed, then $$|T(A,B,p,q,f,g)|\leq H_{1}H_{2}\cdot B_{y}A_{x}(p(x)q(y)|x-y|^{r+s}).$$ Proof. It is proved in a similar manner as Theorem 2. ∎ In the particular case when $p=q={\bf 1}$ and $A(f)=B(f)=\int_{a}^{b}f(x)dx$, the factor $B_{y}A_{x}(p(x)q(y)|x-y|^{r+s})$ was calculated in [23]; it equals $$\frac{2(b-a)^{r+s+2}}{(r+s+1)(r+s+2)}.$$ 3. Inequalities for functions with variable bounds In this section we collect different results for functions with variable and constant bounds. For example, Theorem 4 of [5] can be seen as a particular case of the following theorem. Theorem 5. Let $A$ and $B$ be isotonic linear functionals on $L$ and $p$, $q$ be non-negative functions from $L$. Let $f$, $\phi_{1}$, $\phi_{2}$ be functions such that $$\phi_{1}(t)\leq f(t)\leq\phi_{2}(t)$$ and all terms in the below inequality exist. Then $$A(p\phi_{2})B(qf)+A(pf)B(q\phi_{1})\geq A(p\phi_{2})B(q\phi_{1})+A(pf)B(qf).$$ Proof. We consider the inequality $$(\phi_{2}(x)-f(x))(f(y)-\phi_{1}(y))\geq 0.$$ It is equivalent to the following: $$\phi_{2}(x)f(y)+f(x)\phi_{1}(y)\geq\phi_{1}(y)\phi_{2}(x)+f(x)f(y).$$ After multiplying with $p(x)q(y)$ and acting on this inequality first by the functional $A$ with respect to $x$ and then by $B$ with respect to $y$, we get the desired inequality.
∎ If the functions $\phi_{1},\phi_{2}$ become constant functions, then we get the following corollary. Corollary 1. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $m$, $M$ be real numbers and let $f$ be a function such that $$m\leq f(t)\leq M$$ and all terms in the below inequality exist. Then $$MA(p)B(qf)+mA(pf)B(q)\geq MmA(p)B(q)+A(pf)B(qf).$$ Proof. Follows from Theorem 5 for $\phi_{1}(t)=m$, $\phi_{2}(t)=M$. ∎ Corollary 2. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $M>0$ and let $\varphi$, $f$ be functions such that $$|f(t)-\varphi(t)|<M$$ and all terms in the below inequality exist. Then $$A(p\varphi)B(qf)+A(pf)B(q\varphi)+MA(p)B(qf)+MA(p\varphi)B(q)+M^{2}A(p)B(q)$$ $$\geq A(p\varphi)B(q\varphi)+MA(p)B(q\varphi)+MA(pf)B(q)+A(pf)B(qf).$$ Proof. Follows from Theorem 5 for $\phi_{1}(t)=\varphi(t)-M$, $\phi_{2}(t)=\varphi(t)+M$. ∎ Theorem 6. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $\varphi_{1}$, $\varphi_{2}$, $\psi_{1}$, $\psi_{2}$, $f$ and $g$ be functions such that all terms in the below inequality exist and the conditions $$\varphi_{1}(t)\leq f(t)\leq\varphi_{2}(t)\quad{\rm and}\quad\psi_{1}(t)\leq g(t)\leq\psi_{2}(t)$$ hold. Then $$A(p\varphi_{1})B(q\psi_{1})+A(pf)B(qg)\geq A(p\varphi_{1})B(qg)+A(pf)B(q\psi_{1}),$$ $$A(p\varphi_{1})B(q\psi_{2})+A(pf)B(qg)\leq A(p\varphi_{1})B(qg)+A(pf)B(q\psi_{2}),$$ $$A(p\varphi_{2})B(q\psi_{1})+A(pf)B(qg)\leq A(p\varphi_{2})B(qg)+A(pf)B(q\psi_{1}),$$ $$A(p\varphi_{2})B(q\psi_{2})+A(pf)B(qg)\geq A(p\varphi_{2})B(qg)+A(pf)B(q\psi_{2}).$$ Proof. The inequality $\left(f(x)-\varphi_{1}(x)\right)\left(g(y)-\psi_{1}(y)\right)\geq 0$, which can be written as $$\varphi_{1}(x)\psi_{1}(y)+f(x)g(y)\geq\varphi_{1}(x)g(y)+f(x)\psi_{1}(y),$$ is obviously true.
After multiplying it with $p(x)q(y)$ and acting on this inequality first by the functional $A$ with respect to $x$ and then by $B$ with respect to $y$, we get $$B_{y}A_{x}\big(p(x)q(y)(\varphi_{1}(x)\psi_{1}(y)+f(x)g(y))\big)\geq B_{y}A_{x}\big(p(x)q(y)(\varphi_{1}(x)g(y)+f(x)\psi_{1}(y))\big),$$ and applying the properties of the isotonic linear functionals $A$ and $B$ we obtain the first inequality. The other three inequalities are obtained in a similar way, starting from $$\left(f(x)-\varphi_{1}(x)\right)\left(\psi_{2}(y)-g(y)\right)\geq 0,$$ $$\left(\varphi_{2}(x)-f(x)\right)\left(g(y)-\psi_{1}(y)\right)\geq 0,$$ $$\left(\varphi_{2}(x)-f(x)\right)\left(\psi_{2}(y)-g(y)\right)\geq 0,$$ respectively. ∎ Corollary 3. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $m,M,n,N$ be real numbers and let $f$ and $g$ be functions such that $$m\leq f(t)\leq M\quad{\rm and}\quad n\leq g(t)\leq N$$ and all terms in the below inequalities exist. Then $$mnA(p)B(q)+A(pf)B(qg)\geq mA(p)B(qg)+nA(pf)B(q),$$ $$mNA(p)B(q)+A(pf)B(qg)\leq mA(p)B(qg)+NA(pf)B(q),$$ $$MnA(p)B(q)+A(pf)B(qg)\leq MA(p)B(qg)+nA(pf)B(q),$$ $$MNA(p)B(q)+A(pf)B(qg)\geq MA(p)B(qg)+NA(pf)B(q).$$ Proof. Follows from Theorem 6 for $\varphi_{1}=m$, $\varphi_{2}=M$, $\psi_{1}=n$, $\psi_{2}=N$. ∎ Theorem 7. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $\theta_{1}$ and $\theta_{2}$ be positive real numbers satisfying $\displaystyle\frac{1}{\theta_{1}}+\frac{1}{\theta_{2}}=1$. Let $f$, $\phi_{1}$, $\phi_{2}$ be functions such that $$\phi_{1}(t)\leq f(t)\leq\phi_{2}(t)$$ and all terms in the below inequality exist.
Then $$\frac{1}{\theta_{1}}B(q)A(p(\phi_{2}-f)^{\theta_{1}})+\frac{1}{\theta_{2}}A(p)B(q(f-\phi_{1})^{\theta_{2}})+A(p\phi_{2})B(q\phi_{1})+A(pf)B(qf)\geq A(p\phi_{2})B(qf)+A(pf)B(q\phi_{1}).$$ Proof. Let us recall the Young inequality, which holds for non-negative $a,b$ and for positive $\theta_{1}$ and $\theta_{2}$ with the property $\displaystyle\frac{1}{\theta_{1}}+\frac{1}{\theta_{2}}=1$: $$\frac{1}{\theta_{1}}a^{\theta_{1}}+\frac{1}{\theta_{2}}b^{\theta_{2}}\geq ab.$$ Setting in the previous inequality $$a=\phi_{2}(x)-f(x),\ \ \ b=f(y)-\phi_{1}(y),$$ we have $$\frac{1}{\theta_{1}}(\phi_{2}(x)-f(x))^{\theta_{1}}+\frac{1}{\theta_{2}}(f(y)-\phi_{1}(y))^{\theta_{2}}\geq(\phi_{2}(x)-f(x))(f(y)-\phi_{1}(y)).$$ Applying the usual procedure we get $$B_{y}A_{x}\left(p(x)q(y)\left(\frac{1}{\theta_{1}}(\phi_{2}(x)-f(x))^{\theta_{1}}+\frac{1}{\theta_{2}}(f(y)-\phi_{1}(y))^{\theta_{2}}\right)\right)\geq B_{y}A_{x}\Big(p(x)q(y)(\phi_{2}(x)-f(x))(f(y)-\phi_{1}(y))\Big)$$ and, after applying the properties of $A$ and $B$, we obtain $$\frac{1}{\theta_{1}}B(q)A(p(\phi_{2}-f)^{\theta_{1}})+\frac{1}{\theta_{2}}A(p)B(q(f-\phi_{1})^{\theta_{2}})\geq A(p(\phi_{2}-f))B(q(f-\phi_{1})).$$ Expanding the right-hand side by linearity, as in the proof of Theorem 5, we get the inequality of the theorem. ∎ Corollary 4. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $m,M\in{\bf R}$ and let $f$ be a function such that $m\leq f(t)\leq M$ and all terms in the below inequality exist. Then $$(M+m)^{2}A(p)B(q)+A(pf^{2})B(q)+2A(pf)B(qf)+A(p)B(qf^{2})\geq 2(M+m)[A(p)B(qf)+A(pf)B(q)].$$ Proof. Follows from Theorem 7 for $\phi_{1}=m$, $\phi_{2}=M$ and $\theta_{1}=\theta_{2}=2$. ∎ Theorem 8. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$.
Let $\theta_{1}$ and $\theta_{2}$ be positive real numbers satisfying $\displaystyle\frac{1}{\theta_{1}}+\frac{1}{\theta_{2}}=1$. Let $\varphi_{1}$, $\varphi_{2}$, $\psi_{1}$, $\psi_{2}$, $f$ and $g$ be functions such that all terms in the below inequality exist and the conditions $$\varphi_{1}(t)\leq f(t)\leq\varphi_{2}(t)\quad{\rm and}\quad\psi_{1}(t)\leq g(t)\leq\psi_{2}(t)$$ hold. Then $$\frac{1}{\theta_{1}}A\left(p(\varphi_{2}-f)^{\theta_{1}}\right)B(q)+\frac{1}{\theta_{2}}A(p)B\left(q(\psi_{2}-g)^{\theta_{2}}\right)\geq A\left(p(\varphi_{2}-f)\right)B\left(q(\psi_{2}-g)\right),$$ $$\frac{1}{\theta_{1}}A\left(p(\varphi_{2}-f)^{\theta_{1}}\right)B(q)+\frac{1}{\theta_{2}}A(p)B\left(q(g-\psi_{1})^{\theta_{2}}\right)\geq A\left(p(\varphi_{2}-f)\right)B\left(q(g-\psi_{1})\right),$$ $$\frac{1}{\theta_{1}}A\left(p(f-\varphi_{1})^{\theta_{1}}\right)B(q)+\frac{1}{\theta_{2}}A(p)B\left(q(\psi_{2}-g)^{\theta_{2}}\right)\geq A\left(p(f-\varphi_{1})\right)B\left(q(\psi_{2}-g)\right),$$ $$\frac{1}{\theta_{1}}A\left(p(f-\varphi_{1})^{\theta_{1}}\right)B(q)+\frac{1}{\theta_{2}}A(p)B\left(q(g-\psi_{1})^{\theta_{2}}\right)\geq A\left(p(f-\varphi_{1})\right)B\left(q(g-\psi_{1})\right).$$ Proof. Using the Young inequality for $a=\varphi_{2}(x)-f(x)$, $b=\psi_{2}(y)-g(y)$ we get $$\frac{1}{\theta_{1}}\left(\varphi_{2}(x)-f(x)\right)^{\theta_{1}}+\frac{1}{\theta_{2}}\left(\psi_{2}(y)-g(y)\right)^{\theta_{2}}\geq\left(\varphi_{2}(x)-f(x)\right)\left(\psi_{2}(y)-g(y)\right).$$ Multiplying both sides with $p(x)q(y)$ and acting on the inequality by the functional $A$ with respect to $x$ and then by $B$ with respect to $y$, we get $$\frac{1}{\theta_{1}}A\left(p(\varphi_{2}-f)^{\theta_{1}}\right)B(q)+\frac{1}{\theta_{2}}A(p)B\left(q(\psi_{2}-g)^{\theta_{2}}\right)\geq A\left(p(\varphi_{2}-f)\right)B\left(q(\psi_{2}-g)\right).$$ The other three inequalities are proved in a similar way.
∎ In the following table we give a list of papers where particular cases of some theorems from this section can be found. Let us mention that the non-weighted versions of Theorems 7, 8 and Corollary 4 for two Hadamard operators $A={}_{H}J^{\alpha}$ and $B={}_{H}J^{\beta}$ can be found in [5]. We did not find similar results for hypergeometric operators in the literature, but it is obvious that our results can be applied to two fractional hypergeometric operators or to the $q$-analogues of those integral operators. 4. Inequalities for three functions This section is devoted to results involving three or more functions and in some sense it generalizes the previous section. Theorem 9. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $f$, $g$ be similarly ordered functions and let $h$ be a function with positive values. If all terms in the below inequality exist, then $$A(pfgh)B(q)+A(pfg)B(qh)+A(ph)B(qfg)+A(p)B(qfgh)$$ $$\geq A(pfh)B(qg)+A(pf)B(qgh)+A(pgh)B(qf)+A(pg)B(qfh).$$ (4.1) If $f$ and $g$ are oppositely ordered, then the reversed inequality holds. Proof. For similarly ordered functions $f$, $g$ we have $(f(x)-f(y))(g(x)-g(y))\geq 0$, but then also $$(f(x)-f(y))(g(x)-g(y))(h(x)+h(y))\geq 0.$$ This can be written as $$f(x)g(x)h(x)+f(x)g(x)h(y)+f(y)g(y)h(x)+f(y)g(y)h(y)$$ $$\geq f(x)g(y)h(x)+f(x)g(y)h(y)+f(y)g(x)h(x)+f(y)g(x)h(y).$$ Multiplying both sides by $p(x)q(y)$ and acting on this inequality first by the functional $A$ with respect to $x$ and then by $B$ with respect to $y$, we get the desired inequality. ∎ Remark 1.
For $p=q$, from the previous theorem we get $$A(pfgh)B(p)+A(pfg)B(ph)+A(ph)B(pfg)+A(p)B(pfgh)$$ $$\geq A(pfh)B(pg)+A(pf)B(pgh)+A(pgh)B(pf)+A(pg)B(pfh).$$ If also $A=B$, then $$A(pfgh)A(p)+A(pfg)A(ph)\geq A(pfh)A(pg)+A(pf)A(pgh),$$ and for $h=const$ it reduces to the Chebyshev inequality. Remark 2. Particular cases of inequality (4.1) have appeared in several papers for different kinds of linear functionals. For example, if $A$ and $B$ are Riemann-Liouville operators, then a non-weighted inequality is given in Theorem 2.1 of [25]. If $A$ and $B$ are different fractional $q$-integrals of Riemann-Liouville type, then (4.1) is given in [26, Thm 2.1] for $p=q$. Inequalities involving two $q$-analogues of Saigo's fractional integral operators which are particular cases of (4.1) are given in [12] as Theorems 5 and 6, while similar results for generalized $q$-Erdélyi-Kober fractional integral operators are given in [28, Thm 1 and 2]. Lemma 1. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $$H_{f,g,h}(x,y)=\left(f(x)-f(y)\right)\left(g(x)-g(y)\right)\left(h(x)-h(y)\right)$$ $$=f(x)g(x)h(x)+f(x)g(y)h(y)+f(y)g(x)h(y)+f(y)g(y)h(x)$$ $$-f(y)g(x)h(x)-f(x)g(y)h(x)-f(x)g(x)h(y)-f(y)g(y)h(y).$$ Then $$A_{x}B_{y}(p(x)q(y)H_{f,g,h}(x,y))=A(pfgh)B(q)+A(pf)B(qgh)+A(pg)B(qfh)+A(ph)B(qfg)$$ $$-A(pgh)B(qf)-A(pfh)B(qg)-A(pfg)B(qh)-A(p)B(qfgh).$$ Theorem 10. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$.
Let $m,M$, $n,N$, $k,K$ be real numbers and let $f$, $g$, $h$ be functions such that $$m\leq f(x)\leq M,\quad n\leq g(x)\leq N,\quad k\leq h(x)\leq K,\quad\forall x.$$ If all the terms in the below inequality exist, then $$\big|A(pfgh)B(q)+A(pf)B(qgh)+A(pg)B(qfh)+A(ph)B(qfg)$$ $$-A(pfg)B(qh)-A(pfh)B(qg)-A(pgh)B(qf)-A(p)B(qfgh)\big|$$ $$\leq(M-m)(N-n)(K-k)A(p)B(q).$$ Proof. From $m\leq f(x)\leq M$ it follows that $\left|f(x)-f(y)\right|\leq M-m$. Therefore $$\left|\left(f(x)-f(y)\right)\left(g(x)-g(y)\right)\left(h(x)-h(y)\right)\right|\leq(M-m)(N-n)(K-k),$$ i.e. $$|H_{f,g,h}(x,y)|\leq(M-m)(N-n)(K-k).$$ Multiplying both sides by $p(x)q(y)$ and using Lemma 1 we get $$\Big|A(pfgh)B(q)-A(pfg)B(qh)-A(pfh)B(qg)+A(pf)B(qgh)$$ $$-A(pgh)B(qf)+A(pg)B(qfh)+A(ph)B(qfg)-A(p)B(qfgh)\Big|$$ $$\leq(M-m)(N-n)(K-k)A(p)B(q),$$ which proves the theorem. ∎ Remark 3. Particular cases of Theorem 10 appear in [12, Thm 8 and 9] for two $q$-analogues of Saigo's fractional integral operators and in [28, Thm 3 and 4] for generalized $q$-Erdélyi-Kober fractional integral operators. Theorem 11. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p$, $q$ be non-negative functions from $L$. Let $M_{1},M_{2},M_{3}$ be real numbers and let $f_{i}$, $(i=1,2,3)$ be $M_{i}-g-$Lipschitz functions. If all the terms in the below inequality exist, then $$\big|A(pf_{1}f_{2}f_{3})B(q)+A(pf_{1})B(qf_{2}f_{3})+A(pf_{2})B(qf_{1}f_{3})+A(pf_{3})B(qf_{1}f_{2})$$ $$-A(pf_{1}f_{2})B(qf_{3})-A(pf_{1}f_{3})B(qf_{2})-A(pf_{2}f_{3})B(qf_{1})-A(p)B(qf_{1}f_{2}f_{3})\big|$$ $$\leq M_{1}M_{2}M_{3}\cdot B_{y}A_{x}(p(x)q(y)|g(x)-g(y)|^{3}).$$ Proof. If $f_{i}$, $(i=1,2,3)$ are $M_{i}-g-$Lipschitz functions, then $$|f_{1}(x)-f_{1}(y)|\leq M_{1}|g(x)-g(y)|,\quad|f_{2}(x)-f_{2}(y)|\leq M_{2}|g(x)-g(y)|,$$ $$|f_{3}(x)-f_{3}(y)|\leq M_{3}|g(x)-g(y)|$$ for all $x,y$.
Multiplying those inequalities we get $$|H_{f_{1},f_{2},f_{3}}(x,y)|\leq M_{1}M_{2}M_{3}|g(x)-g(y)|^{3}.$$ This is equivalent to $$H_{f_{1},f_{2},f_{3}}(x,y)\leq M_{1}M_{2}M_{3}|g(x)-g(y)|^{3}\quad{\rm and}$$ $$-H_{f_{1},f_{2},f_{3}}(x,y)\leq M_{1}M_{2}M_{3}|g(x)-g(y)|^{3}.$$ Multiplying both inequalities with $p(x)q(y)$ and acting on the resulting inequalities by $A$ with respect to $x$ and then by $B$ with respect to $y$, we get the desired result. ∎ Remark 4. In [12, Thm 11 and 12] and [28, Thm 5 and 6] the authors attempted to give corresponding results for $q$-analogues of Saigo's fractional integral operators and for generalized $q$-Erdélyi-Kober fractional integral operators, respectively. But they used the assumptions $|f_{i}(x)-f_{i}(y)|\leq M_{i}(x-y)$, $i=1,2,3$, $x,y>0$, which lead to the conclusion that $f_{i}\equiv 0$. Remark 5. Considering the results from this and from the previous section, it is clear how Theorems 2 and 11 can be generalized for $n$ $M_{i}-g-$Lipschitz functions $f_{i}$, $i=1,\ldots,n$. We leave this to the reader. Results with two functions and three weights The following result is based on the successive use of the Chebyshev inequality for pairs of weights. Theorem 12. Let $A$ and $B$ be isotonic linear functionals on $L$ and let $p,q,r$ be non-negative functions from $L$. If $f$ and $g$ are similarly ordered functions, then $$A(p)[2A(q)B(rfg)+A(r)B(qfg)+B(r)A(qfg)]+A(pfg)[A(q)B(r)+A(r)B(q)]$$ $$\geq A(p)[A(qf)B(rg)+A(qg)B(rf)]+A(q)[A(pf)B(rg)+A(pg)B(rf)]+A(r)[A(pf)B(qg)+A(pg)B(qf)],$$ under the assumption that all terms are well-defined. If $f$ and $g$ are oppositely ordered functions, then the reversed inequality holds. Proof.
Replacing in (1.3) $p$ by $q$ and $q$ by $r$ and multiplying by $A(p)$ we get $$A(p)[A(q)B(rfg)+B(r)A(qfg)]\geq A(p)[A(qf)B(rg)+A(qg)B(rf)].$$ Replacing in (1.3) $q$ by $r$ and multiplying by $A(q)$ we get $$A(q)[A(p)B(rfg)+B(r)A(pfg)]\geq A(q)[A(pf)B(rg)+A(pg)B(rf)].$$ Multiplying (1.3) by $A(r)$ we get $$A(r)[A(p)B(qfg)+B(q)A(pfg)]\geq A(r)[A(pf)B(qg)+A(pg)B(qf)].$$ Adding the above inequalities we get the statement of the theorem. ∎ Remark 6. Theorem 12 has been proved in several papers for particular kinds of linear operators. For example, if $A$ and $B$ are the Riemann-Liouville operators, then it is given in [29]. A result involving Hadamard operators is given in [3], while analogous results for the Saigo operators and the $q$-analogues of Saigo's operators are given in [13] and [21]. Acknowledgements The research of the second author was partially supported by the Sofia University SRF under contract No 146/2015. References [1] Baleanu, D, Purohit, SD, Agarwal, P: On fractional integral inequalities involving hypergeometric operators. Chinese Journal of Mathematics. 2014, Article ID 609476 (2014). [2] Belarbi, S, Dahmani, Z: On some new fractional integral inequalities. Journal JIPAM. 19(3) Art. 86 (2009). [3] Chinchane, VL, Pachpatte, DB: On some integral inequalities using Hadamard fractional integral. Malaya Journal of Matematik. 1(1) 62–66 (2012). [4] Chinchane, VL, Pachpatte, DB: On some Grüss-type fractional inequalities using Saigo fractional integral operator. Journal of Mathematics. 2014, Article ID 527910 (2014). [5] Sudsutad, W, Ntouyas, SK, Tariboon, J: Fractional integral inequalities via Hadamard's fractional integral. Abstract and Applied Analysis. 2014, Article ID 563096 (2014). [6] Tariboon, J, Ntouyas, SK, Sudsutad, W: Some new Riemann-Liouville fractional integral inequalities. International Journal of Mathematics and Mathematical Sciences. 2014, Article ID 869434 (2014).
[7] Wang, G, Harsh, H, Purohit, SD, Gupta, T: A note on Saigo’s fractional integral inequalities. Turkish Journal of Analysis and Number Theory. 2(3), 65–69 (2014). [8] Pečarić, JE, Proschan, F, Tong, YL: Convex functions, partial orderings, and statistical applications. Academic Press Inc. (1992). [9] Nikolova, L, Varošanec, S: Properties of mappings generated with inequalities for isotonic linear functionals. Proceedings of ’The International Conference Constructive Theory of Functions, Sozopol 2013’. Sofia (Bulgaria): Prof. Marin Drinov Academic Publishing House; p. 199–215 (2014). [10] Nikolova, L, Varošanec, S: Chebyshev and Grüss type inequalities involving two linear functionals and applications. Mathematical Inequalities and Applications, to appear. [11] Pečarić, J, Tepeš, B: On a Grüss type inequality for isotonic linear functionals I. Nonlinear Studies. 12(2), 119–125 (2005). [12] Baleanu, D, Agarwal, P: Certain inequalities involving the fractional $q$-integral operators. Abstract and Applied Analysis. 2014, Article ID 371274 (2014). [13] Choi, J, Agarwal, P: Some new Saigo type fractional integral inequalities and their $q$-analogues. Abstract and Applied Analysis. 2014, Article ID 579260 (2014). [14] Anastassiou, GA: Fractional Differentiation Inequalities. Springer, Dordrecht-Heidelberg-London-New York (2009). [15] Kiryakova, V: Generalized Fractional Calculus and Applications. Pitman Research Notes in Math. Series, 301. New York (USA): Longman and J. Wiley; (1994). [16] Samko SG, Kilbas AA, Marichev OI: Fractional Integrals and Derivatives: Theory and Applications. Gordon and Breach, Yverdon(Switzerland) (1993). [17] Agarwal, R, Bohner, M, Peterson, A: Inequalities on time scales: a survey. Mathematical Inequalities and Applications. 4, 535–557 (2001). [18] Bohner, M, Peterson, A: Dynamic Equations on Time Scales. Birkhäuser (2001). [19] Dahmani, Z, Tabharit, L, Taf, S: Some fractional integral inequalities. J. Nonlinear Science. 
Lett A, 1(2), 155–166 (2010). [20] Dahmani, Z: Some results associated with fractional integrals involving the extended Chebyshev functional. Acta Universitatis Apulensis. 27, 217–224 (2011). [21] Yang, W: Some new Chebyshev and Grüss-type integral inequalities for Saigo fractional integral operators and their $q$-analogues. Filomat. 29(6), 1269–1289 (2015). [22] Brahim, K, Taf, S: On some fractional q-integral inequalities. Malaya Journal of Matematik. 3(1), 21–26 (2013). [23] Dragomir, SS: Some integral inequalities of Grüss type. Indian J. Pure Appl. Math. 31(4), 397–415 (2000). [24] Bohner, M, Mathews, T, Tuna, A: Diamond-alpha Grüss type inequalities on time scales. Int. J. Dynamical Systems and Differential Equation. 3(1/2), 234–247 (2011). [25] Sulaiman, WT: Some new fractional integral inequalities. Journal of Mathematical Analysis. 2(2), 23–28 (2011). [26] Sroysang, B: A study on a new fractional integral inequality in quantum calculus. Adv. Studies Theor. Phys. 7(14), 689–692 (2014). [27] Sarikaya, MZ, Karaca, A: On the k-Riemann-Liouville fractional integral and applications. International Journal of Statistics and Mathematics. 1(3), 33–43 (2014). [28] Ritelli D, Agarwal, P: On some new inequalities involving generalized Erdélyi-Kober fractional q-integral operator. arXiv:1405.6829v1. [29] Dahmani, Z: New inequalities in fractional integrals. International Journal of Nonlinear Science. 9(4), 493–497 (2010).
Noise, sign problems, and statistics Michael G. Endres${}^{1}$ [email protected]    David B. Kaplan${}^{2}$ [email protected]    Jong-Wan Lee${}^{2}$ [email protected]    Amy N. Nicholson${}^{2}$ [email protected] ${}^{1}$Theoretical Research Division, RIKEN Nishina Center, Wako, Saitama 351-0198, Japan ${}^{2}$Institute for Nuclear Theory, University of Washington, Seattle, WA 98195-1550, USA (December 5, 2020) Abstract We show how sign problems in simulations of many-body systems can manifest themselves in the form of heavy-tailed correlator distributions, similar to what is seen in electron propagation through disordered media. We propose an alternative statistical approach for extracting ground state energies in such systems, illustrating the method with a toy model and with lattice data for unitary fermions. Preprints: INT-PUB-11-25, RIKEN-QHP-2. I Introduction One of the most challenging and interesting problems in physics is to understand the properties of a system of many strongly interacting fermions. Numerical simulation is an important tool for understanding the ground state, and the common approach is to compute the $N$-body correlator $C_{N}(\tau;\phi)=\langle 0|\Psi_{N}(\tau)\Psi_{N}^{\dagger}(0)|0\rangle_{\phi}$, where $\Psi_{N}^{\dagger}(0)$, $\Psi_{N}(\tau)$ are interpolating fields which create an $N$-body state at Euclidean time zero and annihilate it at time $\tau$, and $\phi$ is a stochastic field responsible for fermion interactions. The field $\phi$ could be the dynamical gluon field in the case of QCD, for example, or an auxiliary field to induce short-range interactions. For large $\tau$ the averaged correlator asymptotically approaches $$\displaystyle\langle C_{N}(\tau,\phi)\rangle\sim Ze^{-\tau E_{0}(N)}$$ (1) where $E_{0}(N)$ is the ground state energy of the system and $\sqrt{Z}$ is the amplitude for $\Psi$ to create the ground state.
Therefore if one computes $-\frac{1}{\tau}\ln\overline{C}_{N}(\tau)$, where $\overline{C}_{N}(\tau)=\frac{1}{{\cal N}}\sum_{i}C_{N}(\tau,\phi_{i})$ is a sample mean computed on an ensemble of ${\cal N}$ statistically independent $\phi$ fields, one expects to see a “plateau” at large $\tau$ whose height yields the ground state energy $E_{0}(N)$. Excited state energies and the response of the ground state to probes can also be computed by variations of this technique. The computation of $-\frac{1}{\tau}\ln\overline{C}_{N}(\tau)$ can be problematic, however: it might be excessively noisy, or it may drift with $\tau$ and never find a plateau. We wish to address these problems here, defining the former as a “noise” problem and the latter as an “overlap” problem, both of which can be related to the sign problem encountered in lattice simulations at nonzero chemical potential. In particular, referring to recent lattice simulations by the present authors of large numbers of unitary fermions, we show that the problems encountered can be manifestations of heavy-tailed distributions for $C_{N}(\tau,\phi)$ which make computing $\ln\langle C_{N}\rangle$ very difficult, and that the ideal estimator for this quantity might not simply be $\ln\overline{C}_{N}$, as is commonly used. We find here that a cumulant expansion in the log of the correlator is a more efficient estimator, for example. More generally, we suggest that a study of the statistics of systems exhibiting a noise or overlap problem might be exploited to greatly facilitate the extraction of useful physics from numerical simulations.
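The behavior of this estimator on an idealized noiseless correlator can be sketched in a few lines of Python (the amplitude and energy values below are arbitrary illustrative choices, not taken from any simulation): the $\ln Z/\tau$ contamination decays away, so the estimator approaches $E_{0}$ at large $\tau$.

```python
import math

# Hypothetical illustration: extract E0 from a noiseless synthetic correlator
# <C_N(tau)> = Z * exp(-tau * E0) via the estimator -(1/tau) * ln C(tau).
Z, E0 = 2.5, 0.8          # assumed amplitude and ground state energy

def correlator(tau):
    return Z * math.exp(-tau * E0)

def effective_energy(tau):
    # -(1/tau) ln C(tau) = E0 - ln(Z)/tau -> E0 as tau grows
    return -math.log(correlator(tau)) / tau

for tau in (1, 10, 100):
    print(tau, effective_energy(tau))
```

In real data the interesting complications are precisely the statistical fluctuations of $\overline{C}_{N}$ about this ideal curve, which the remainder of the paper addresses.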
II Noise, and the physical spectrum The sign problem encountered in $N$-particle simulations does not arise simply because of Fermi statistics; if that were the only obstacle one could construct $C_{N}$ as an $N\times N$ Slater determinant of one-body propagators, with the cost of computing the determinant scaling only as $N^{3}$. In contrast, the sign problems commonly encountered, such as with QCD at nonzero chemical potential, entail computational difficulty which grows exponentially with particle number; furthermore, serious sign problems can occur in bosonic systems as well. Instead, sign problems appear when there are multiparticle states for which the energy per constituent is lower than for the states one wants to study. For example, if $\langle C_{A}\rangle\sim e^{-M_{A}\tau}$ is the expectation of a $3A$ quark correlator in QCD for a nucleus of atomic number $A$ and mass $M_{A}$, the variance in the sample mean $\overline{C}_{A}$ can be estimated as $$\displaystyle\sigma^{2}\sim\langle C_{A}^{\dagger}C_{A}\rangle-\langle C_{A}^{\dagger}\rangle\langle C_{A}\rangle\sim\frac{1}{{\cal N}}e^{-3Am_{\pi}\tau}$$ (2) for sample size ${\cal N}$. Since $C_{A}$ corresponds to $3A$ quark propagators and $C_{A}^{\dagger}$ to $3A$ anti-quark propagators, the variance is dominated by the state with $3A$ pions, and $\sigma$ falls off with $\tau$ much more slowly than the signal one is looking for, $\langle C_{A}\rangle$, since $\frac{3}{2}m_{\pi}A\ll M_{A}$. This “Lepage analysis” Lepage (1989) suggests there is a noise problem and that it arises because in a background gluon field each quark propagator is uncorrelated with any other and doesn’t “know” whether it is to be contained in a light pion or a heavy nucleon.
This suggests a picture where each correlator $C_{A}(\tau,{\cal A})$ in a particular background gauge field ${\cal A}$ roughly equals $e^{-\frac{3}{2}Am_{\pi}\tau}$, and the exponentially smaller value expected for $\langle C_{A}\rangle$ only arises from subsequent cancellations while averaging over gauge fields. A very similar analysis applies to QCD with nonzero chemical potential Gibbs (1985); Splittorff and Verbaarschot (2007). This would be a reasonable picture if the distribution of $C_{A}(\tau,{\cal A})$ over the ensemble of gauge fields were normal, with mean $e^{-M_{A}\tau}$ and variance $e^{-3Am_{\pi}\tau}$, with large fluctuations concealing an exponentially small signal. There are general arguments that suggest this is incorrect, however, and that the distribution of many-fermion correlation functions will be heavy-tailed and extremely non-Gaussian, a result we also find from explicit simulations of unitary fermions. In the latter case we show that a better understanding of the nature of the noise can help devise an efficient strategy for extracting a signal; it is plausible that similar techniques could be more widely applicable to noisy systems. III A Mean Field Description Nonrelativistic fermions with strong short-range interactions tuned to a conformal fixed point, where the phase shift satisfies $\delta(k)=\pi/2$ for all $k$, are called “unitary fermions”. This nonrelativistic conformal field theory is interesting to study both for its simplicity and universality, its challenges for many-body theory, and because it can be realized and studied experimentally using trapped atoms tuned to a Feshbach resonance. It is also an ideal theory for studying fermion sign problems on the lattice, being much simpler and faster to simulate than QCD.
At its most basic, the lattice action is the obvious discretization of the Euclidean Lagrangian Chen and Kaplan (2004) $$\displaystyle\psi^{\dagger}(\partial_{\tau}-\nabla^{2}/2M)\psi-{\textstyle{\frac{1}{2}}}m^{2}\phi^{2}+\phi\psi^{\dagger}\psi$$ (3) where $\phi$ is a nonpropagating auxiliary field with $m^{2}$ tuned to a critical value $m^{2}_{c}$, and $\psi$ is a spin ${\textstyle{\frac{1}{2}}}$ fermion with mass $M$; a more sophisticated action tuned to reduce discretization errors was recently presented in Endres et al. (2010). A simulation of this theory reveals a distribution for $N$-body correlators $C_{N}(\tau,\phi)$ which is increasingly non-Gaussian at late $\tau$; in fact, it is $\ln C_{N}$ which appears to be roughly normally distributed, as shown in Fig. 1, so that $C_{N}(\tau,\phi)$ is roughly log-normally distributed with an increasingly large $\sigma$ and long tail at late time. The appearance of a heavy-tailed distribution should not be surprising, since the system is similar to the problem of electron propagation in disordered media, where heavy-tailed distributions are ubiquitous in the vicinity of the Anderson localization transition. For example, it is found that for physical quantities such as the current relaxation time or the normalized local density of states, the distribution function $P(z)$ scales as $\exp(-C_{d}\ln^{d}z)$. A particularly simple way to derive these results is to use the optimal fluctuation method of Ref. Smolyarenko and Altshuler (1997), which is a mean field approach. We can adapt these methods to the current problem, defining the variable $Y=\ln C_{N}(\tau,\phi)$ and computing its probability distribution $P(y)$ as $$\displaystyle P(y)={\cal N}\int D\phi\,e^{-S_{\phi}}\,\delta(Y(\tau,\phi)-y)$$ (4) $$\displaystyle={\cal N}\int D\phi\,\frac{dt}{2\pi}e^{-S}$$ (5) where $S_{\phi}=\int d^{4}x\,\textstyle{\frac{m^{2}}{2}}\phi^{2}$ and $S=S_{\phi}-it(\ln C_{N}(\tau,\phi)-y)$.
Using the PDS subtraction scheme Kaplan et al. (1998) we have $m^{2}=M\lambda/4\pi$, where the renormalization scale $\lambda$ is taken to be the physical momentum scale in the problem — in this case $\lambda=k_{F}\equiv(3\pi^{2}N/V)^{1/3}$, $N/2$ being the number of fermions with a single spin orientation. We proceed now to evaluate this integral using a mean field expansion; it is not evident that there is a small parameter to justify this expansion, but the leading order result is illuminating and fits the numerical data well. We expand about $\phi(x)=\phi_{0}$, $t=t_{0}$, and use the fact that for large $\tau$ the $n^{th}$ functional derivative of $\ln C_{N}(\tau,\phi)$ with respect to $\phi(x)$ equals the 1-loop Feynman diagram with $n$ insertions of $\psi^{\dagger}\psi$ in the presence of a chemical potential $\mu=k_{F}^{2}/(2M)$. The equations for $\phi_{0}$ and $t_{0}$ are given by $$\displaystyle t_{0}=-i\frac{m^{2}\phi_{0}}{\langle n(x)\rangle_{c}}=-i\frac{Vm^{2}\phi_{0}}{N}$$ (6) $$\displaystyle\phi_{0}=-\frac{y-\ln Z+\tau E_{0}(N)}{N\tau}$$ (7) where $E_{0}(N)=3NE_{F}/5$ is the total energy of $N$ free degenerate fermions ($N/2$ of each spin), and $Z$ is the overlap of the source and sink with the free fermion state. The leading term in the mean field expansion for $P(y)$ can therefore be expressed as $P(y)\propto\exp\left[-\frac{(y-\overline{y})^{2}}{2\sigma^{2}}\right]$ with $$\displaystyle\overline{y}=\ln Z-\tau E_{0}(N)\ ,\quad\sigma^{2}=\frac{40}{9\pi}E_{0}(N)\,\tau\ .$$ (8) This describes a log-normal distribution for the $N$-fermion propagator $C_{N}(\tau,\phi)$, with both mean and variance growing with time in units of the energy of $N$ free degenerate fermions. In Fig.
2 we plot the quantities $-\frac{1}{E_{0}}\frac{\partial\overline{y}}{\partial\tau}$ and $\frac{1}{E_{0}}\frac{\partial\sigma^{2}}{\partial\tau}$ as a function of $N$ obtained from correlator distribution data for unitary fermions at late $\tau$, and find that the gross features of the results are compatible with the mean field estimates of unity and $40/9\pi$ obtained from eq. (8). IV A toy model It would be useful to devise an algorithm to reliably estimate energies without having to exhaustively sample the long tail of the correlator distribution, yet without making incorrect assumptions about the exact functional form of that tail. An approach we suggest here is to exploit the general relationship between stochastic variables $X$ and $Y=\ln X$: $$\displaystyle\ln\langle X\rangle=\sum_{n=1}^{\infty}\frac{\kappa_{n}}{n!}$$ (9) where $\kappa_{n}$ is the $n^{th}$ cumulant of $Y$. This relation can be proved by noting that the generating function for the $\kappa_{n}$ is $\ln\phi_{Y}(t)$, where $\phi_{Y}(t)=\langle e^{Yt}\rangle=\langle X^{t}\rangle$ is the moment generating function for $Y$, and evaluating it at $t=1$, assumed to be within the radius of convergence. The motivation for investigating eq. (9) is that if the distribution $P(X)$ were exactly log-normal, the above sum would end after the second term, as the $\kappa_{n>2}$ would all vanish; therefore by replacing the $\kappa_{n}$ by sampled cumulants and truncating the sum at finite order, one might hope to have a reliable estimator for $\ln\langle X\rangle$ provided $P(X)$ was nearly log-normal, in the sense that the $\kappa_{n}$ fall off rapidly for $n>2$. Distributions with log-normal-like tails arise naturally in products of stochastic variables.
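The termination of eq. (9) at $n=2$ for a log-normal variable is easy to verify numerically. The following sketch (with arbitrary illustrative parameters, not data from the paper) samples $X=e^{Y}$ with $Y$ normal, and compares the naive estimator $\ln\overline{X}$ against the two-cumulant estimator $\kappa_{1}+\kappa_{2}/2$, whose exact value is $\mu+\sigma^{2}/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: for log-normal X = exp(Y), Y ~ Normal(mu, sigma^2),
# the exact result is ln<X> = mu + sigma^2/2 = kappa_1 + kappa_2/2, i.e.
# the cumulant series of eq. (9) terminates at n = 2.
mu, sigma = -2.0, 2.0
y = rng.normal(mu, sigma, size=50_000)
x = np.exp(y)

naive = np.log(x.mean())             # ln of the sample mean; typically biased low
cumulant = y.mean() + y.var() / 2.0  # sample kappa_1 + kappa_2/2
exact = mu + sigma**2 / 2.0          # = 0 for these parameters

print(naive, cumulant, exact)
```

The sample cumulants of $Y$ are well behaved even when the heavy tail of $X$ makes the sample mean an erratic estimator of $\langle X\rangle$.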
The propagator $C_{N}(\tau,\phi)$ for unitary fermions can be expressed in a transfer matrix formalism as the product of $\tau$ matrices — one per time hop — each of which is the direct product of $N$ $V\times V$ matrices of the form $e^{-K/2}(1+g\varphi)e^{-K/2}$, where $K$ is a constant matrix (the spatial kinetic operator), $\varphi$ is a random diagonal matrix with $O(1)$ entries corresponding to stochastic $\phi$ fields living on the time links, and $g$ is a coupling constant (identified with $1/m^{2}$ in Eq. 3) that has been tuned to a particular critical value that is $O(1)$. Unfortunately, little seems to be known about products of random matrices, beyond the study in Jackson et al. (2002) which deals with large products of weakly random matrices. Therefore we analyze instead a toy model where we define a “correlator” $C_{\tau}$ as a product of random numbers, and an “energy” ${\cal E}=\lim_{\tau\to\infty}{\cal E}_{\tau}$, where $$\displaystyle C_{\tau}=\prod_{i=1}^{\tau}(1+g\varphi_{i})\ ,\quad{\cal E}_{\tau}=-\frac{1}{\tau}\ln\langle C_{\tau}\rangle$$ (10) with $0\leq g\leq 1$ and the $\varphi_{i}$ independent and identically distributed random numbers with a uniform distribution on the interval $[-1,1]$. The exact value for the energy is obviously ${\cal E}_{\tau}=0$ for any $\tau$, since the statistical average of the correlator is $\langle C_{\tau}\rangle=1$. The cumulants of the variable $Y=\ln(C_{\tau})$ are given by $$\displaystyle\kappa_{1}=\tau\left[\textstyle{\frac{1}{2}}\log\left(1-g^{2}\right)+\textstyle{\frac{\tanh^{-1}(g)}{g}}-1\right]\ ,$$ (11) $$\displaystyle\frac{\kappa_{n}}{n!}=\tau\left(\textstyle{\frac{(-1)^{n}}{n}}-\text{Li}_{1-n}\left(\textstyle{\frac{1+g}{1-g}}\right)\textstyle{\frac{\left(2\tanh^{-1}(g)\right)^{n}}{n!}}\right)$$ for $n\geq 2$; one finds that the $\kappa_{n}$ decrease rapidly with increasing $n$ for $g<1$.
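This toy model is cheap to simulate; the Monte Carlo sketch below (with illustrative sample size and $\tau$, not the values used for the paper's figures) compares the conventional estimator of eq. (10) against the cumulant expansion of eq. (9) truncated at $n_{max}=3$, using sample cumulants of $Y=\ln C_{\tau}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of eq. (10): C_tau = prod_i (1 + g*phi_i); the exact energy is 0.
g, tau, nsamp = 0.5, 100, 20_000
phi = rng.uniform(-1.0, 1.0, size=(nsamp, tau))
log_c = np.log1p(g * phi).sum(axis=1)        # Y = ln C_tau for each sample

# Conventional estimator: -(1/tau) * ln of the sample mean of C_tau.
e_conv = -np.log(np.exp(log_c).mean()) / tau

# Cumulant estimator of eq. (9) truncated at n_max = 3.
k1 = log_c.mean()
k2 = log_c.var()
k3 = ((log_c - k1) ** 3).mean()
e_cum3 = -(k1 + k2 / 2.0 + k3 / 6.0) / tau

print(e_conv, e_cum3)   # e_cum3 sits close to the exact value 0
```

Truncating at $n_{max}=1$ alone would give ${\cal E}\approx-\kappa_{1}/\tau\approx 0.045$ for $g=1/2$, in line with eq. (11); adding the $\kappa_{2}$ and $\kappa_{3}$ terms removes nearly all of this systematic error at the cost of somewhat larger statistical error.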
Table 1 shows how eq. (9), truncated at $n=n_{max}$, converges to the exact answer ${\cal E}_{\tau}=0$ as a function of $n_{max}$ for $g=1/2$, and shows that even though the distribution is not log-normal ($\kappa_{n>2}\neq 0$) the convergence is rapid. In Fig. 3 we show the results of a simulation where we compute ${\cal E}_{\tau}$ for $g={\textstyle{\frac{1}{2}}}$ and $\tau=1,\ldots,1000$. At each value of $\tau$ we independently generated an ensemble of values for $C_{\tau}$ of size $N=50,000$. From that ensemble we computed ${\cal E}_{\tau}$ by (i) using the conventional estimator ${\cal E}_{\tau}=-\frac{1}{\tau}\ln\overline{C}_{\tau}$ (blue), which shows a striking systematic error for $\tau\gtrsim 50$, and statistical noise increasing up to $\tau\simeq 500$ but decreasing beyond that; (ii) using eq. (9) truncated at $n=2$ with conventional estimators for the $\kappa_{n}$ (green), showing a $\tau$-independent systematic error with smaller but slowly growing statistical error; (iii) eq. (9) truncated at $n=3$ (red) with a negligible constant systematic error but a larger statistical error, growing with $\tau$. Evidently, one trades systematic error for statistical error by truncating eq. (9) at increasingly large $n_{max}$. Table 1 displays results of a simulation of $1.25\times 10^{7}$ $\phi$ configurations blocked into 250 blocks of 50,000 each, for the model of eq. (10) at $\tau=1000$ and $g=1/2$. For each case we give the mean and the square root of the variance; for the truncated cumulant expansion we also give the theoretical systematic error from truncating eq. (9), using our analytic expressions for $\kappa_{n}$. These numbers show how the conventional method gives a wrong answer with deceptively small statistical error. One sees again the trade of systematic error for statistical error as one increases the order $n_{max}$ at which one truncates the sum in eq. (9).
Table 1 suggests the place to stop for the smallest combined error is at $n_{max}=3$, justified by noting that the $n_{max}=4$ result with statistical errors encompasses the $n_{max}=3$ result; we suggest this as a practical algorithm for determining where to truncate the cumulant expansion in general. Fig. 4 shows how this works in a real simulation for 50 trapped unitary fermions Endres et al. (2011). V Discussion Heavy-tailed distributions are likely to be ubiquitous in $N$-body simulations, and perhaps even in other types of noisy calculations. With such distributions, theoretical statistical means can deviate wildly from sample means for any realizable sample size and render conventional estimates of expected fluctuations irrelevant. We have shown that there are more efficient estimators for ground state energies using the cumulants of the log of the correlator instead of the conventional effective mass, at least for positive correlators. This method is presumably only effective for nonpositive data when the heavy tail is asymmetric. It may be useful to think of this procedure in a renormalization group language, where the higher cumulants behave like irrelevant operators affecting the flow toward a log-normal distribution. Acknowledgements. This work was supported in part by U.S. DOE grant No. DE-FG02-00ER41132. M.G.E. is supported by the Foreign Postdoctoral Researcher program at RIKEN. References Endres et al. (2011) M. G. Endres, D. B. Kaplan, J.-W. Lee, and A. N. Nicholson (2011), to appear. Lepage (1989) G. P. Lepage (1989), invited lectures given at TASI’89 Summer School, Boulder, CO, Jun 4-30, 1989. Gibbs (1985) P. E. Gibbs, PRINT-86-0389 (GLASGOW) (1985). Splittorff and Verbaarschot (2007) K. Splittorff and J. Verbaarschot, Phys.Rev.Lett. 98, 031601 (2007), eprint hep-lat/0609076. Chen and Kaplan (2004) J.-W. Chen and D. B. Kaplan, Phys.Rev.Lett. 92, 257002 (2004), eprint hep-lat/0308016. Endres et al. (2010) M. G. Endres, D. B. Kaplan, J.-W.
Lee, and A. N. Nicholson, PoS LATTICE2010, 182 (2010), eprint 1011.3089. Smolyarenko and Altshuler (1997) I. Smolyarenko and B. Altshuler, Physical Review B 55, 10451 (1997). Kaplan et al. (1998) D. B. Kaplan, M. J. Savage, and M. B. Wise, Phys.Lett. B424, 390 (1998), eprint nucl-th/9801034. Jackson et al. (2002) A. Jackson, B. Lautrup, P. Johansen, and M. Nielsen (2002), eprint physics/0202037.
Static scaling behavior of polymer melts of a semiflexible bead-spring model Sara Jabbari-Farouji Institute of Physics, Johannes Gutenberg-University, Staudingerweg 7-9, 55128 Mainz, Germany Correspondence to: [email protected] (December 8, 2020) Abstract We present results from molecular-dynamics simulations for semiflexible polymer melts of the coarse-grained polyvinyl alcohol (CG-PVA) model. We characterize in detail the structural features of equilibrated polymer melts with chain lengths $5\leq N\leq 1000$ and we examine the validity of Flory’s ideality hypothesis for them. We find that for sufficiently long polymers, $N>50$, the chain length dependence of the end-to-end distance and the gyration radius follows the scaling predictions for ideal chains. Furthermore, the results for the mean square internal distance, the probability distributions of the end-to-end distance, and the form factors are in good agreement with the theoretical predictions for ideal chains. We also provide a detailed characterization of primitive paths of long equilibrated polymer melts in the entangled regime and we compare them to the original polymer conformations. Keywords: molecular dynamics, coarse-grained PVA model, entanglement, polymer melt. I Introduction Polymer melts are dense polymeric liquids that consist solely of macromolecular chains. The main characteristic of polymer melts is their high packing density, which leads to overlap of the pervaded volumes of their chains Rubinstein and Colby (2003). In a melt, density fluctuations are small and every monomer is isotropically surrounded by other monomers that can be part of the same chain or belong to other chains. According to Flory’s argument Flory (1969), the excluded volume interactions between monomers separated by more than a few bonds are screened. Such a screening implies that any local conformational information decays exponentially along the chain backbone and thus has no influence on its long-range conformation.
Flory’s hypothesis states that polymer conformations in a melt behave statistically as ideal random walks on length scales much larger than the monomer’s diameter Flory (1969); Doi and Edwards (1986). The validity of Flory’s hypothesis has been extensively tested by computer simulations Wittmer et al. (2004, 2007a, 2007b); Beckrich et al. (2007); Meyer et al. (2008); Hsu and Kremer (2016) and neutron scattering (NS) experiments Mortensen (2011); Higgins and Benoit (1997). These investigations confirm the Gaussian coil shape of polymers. However, computational studies of fully flexible long polymers for both lattice (bond fluctuation) and continuum (bead-spring) models have revealed noticeable deviations from the ideal chain behavior Wittmer et al. (2004, 2007a, 2007b); Beckrich et al. (2007); Hsu (2014). These deviations result from the interplay between the chain connectivity and the melt incompressibility, which fosters an incomplete screening of excluded volume interactions Wittmer et al. (2007a); Beckrich et al. (2007). Interestingly, recent studies of conformational properties of long semiflexible polymer melts demonstrate that the deviations decrease as the bending stiffness of the chains increases Hsu and Kremer (2016). Notably, the results for the mean square internal distance, the probability distributions of the end-to-end distance, and the chain structure factor are well described by theoretical predictions for ideal chains Hsu and Kremer (2016). Inspired by these recent findings Hsu and Kremer (2016), here we examine the credibility of Flory’s ideality hypothesis for a semiflexible bead-spring polymer model that generates crystallizable polymer melts. This model is obtained by a systematic coarse-graining of atomistic simulations of polyvinyl alcohol and is known as the CG-PVA model Meyer and Muller-Plathe (2001).
The main distinctive feature of the model is its anharmonic intrachain bending rigidity, which leads to crystallization from the melt upon cooling. The prior studies of structural properties of CG-PVA polymer melts have been limited to short chains, $N\leq 100$ Vettorel et al. (2007). Our aim is to provide a comprehensive study of the structural features of long equilibrated polymer melts, as a full characterization of the static properties provides a basis for a better understanding of their crystallization behavior Meyer and Muller-Plathe (2002). We present the results for equilibrated CG-PVA polymer melts with $5\leq N\leq 1000$, which includes chain lengths in the entangled regime. We investigate the static scaling behavior of CG-PVA polymer melts and compare our numerical results to the existing theories for ideal chains. We find that the structural features of sufficiently long polymers are in good agreement with the theoretical predictions for ideal chains and deviations from ideality are small. The remainder of the paper is organized as follows. In Sec. II, we briefly review the CG-PVA model and provide the simulation details. We present a detailed analysis of conformational and structural features of polymer melts in Sec. III and we compare simulation results to the theoretical predictions for ideal chains. We investigate conformational properties of the primitive paths of long chains in section IV, where we determine the entanglement length of fully equilibrated CG-PVA chains. Finally, we summarize our main findings and discuss our future directions in section V. II Model and simulation details We equilibrate polymer melt configurations of the coarse-grained polyvinyl alcohol (CG-PVA) model using molecular dynamics simulations. In the following, we first briefly review the CG-PVA model and then provide the details of the simulations.
II.1 Recap of the CG-PVA model The CG-PVA model is obtained by a systematic coarse-graining of atomistic simulations of polyvinyl alcohol (PVA) Meyer and Muller-Plathe (2001). It is a bead-spring model in which each bead of the coarse-grained chain, with diameter $\sigma=0.52$ nm, corresponds to a monomer of the PVA polymer. The fluctuations of the bond length about its average value $b_{0}=0.5\sigma$ are restricted by a harmonic potential $$U_{bond}=\frac{1}{2}k_{bond}(b-b_{0})^{2}.$$ (1) The bond stiffness constant $k_{bond}=2700k_{B}T/\sigma^{2}$ is large and leads to bond-length fluctuations much smaller than the monomer diameter. Monomers of distinct chains, and monomers of the same chain that are three or more bonds apart, interact by a soft 6-9 Lennard-Jones potential, $$U_{LJ}(r)=\epsilon_{LJ}\left[\left(\frac{\sigma_{LJ}}{r}\right)^{9}-\left(\frac{\sigma_{LJ}}{r}\right)^{6}\right]$$ (2) in which $\sigma_{LJ}=0.89\sigma$ and $\epsilon_{LJ}=1.511$ $k_{B}T_{0}$. Here, $T_{0}=550$ K is the reference temperature of the PVA melt Meyer and Muller-Plathe (2001). We truncate and shift the Lennard-Jones potential at $r^{C}_{LJ}=1.6\sigma$ in our simulations. Note that our choice of $r^{C}_{LJ}$ differs from the initial studies, where the non-bonded interactions were truncated at the minimum of the LJ potential, $r_{min}\approx 1.02\sigma$, and thus were purely repulsive. However, as we will see, the structural properties of CG-PVA polymer melts remain unaffected. The distinguishing characteristic of the CG-PVA model is its anharmonic angle-bending potential Meyer and Muller-Plathe (2001), as presented in Fig. 1. This bond angle potential is determined directly from atomistic simulations by Boltzmann inversion of the probability distribution of the bond angle $\theta$. The minima of $U_{bend}(\theta)$ reflect the specific states of two successive torsions at the atomistic level and correspond to three energetically favorable states: trans-trans, trans-gauche and gauche-gauche.
Therefore, the bending potential retains the semiflexibility of the chains originating from the torsional states of the atomistic backbone Meyer and Muller-Plathe (2001). II.2 Units and simulation aspects We carry out molecular dynamics simulations of CG-PVA polymers with chain lengths $5\leq N\leq 1000$ using LAMMPS Plimpton (1995). We report distances in units of the length $\sigma=0.52$ nm Meyer and Muller-Plathe (2001). The time unit following from the conversion relation of units is $\tau=\sqrt{m\sigma^{2}/k_{B}T}$ with the monomer mass $m=1$. The starting melt configurations are prepared by generating an ensemble of $n_{c}$ (number of chains) self-avoiding random walks composed of $N$ monomers with an initial density of $\rho\sigma^{3}=2.11$. Following a fast push-off to remove the interchain monomer overlaps, we equilibrate disordered melt structures in the NPT ensemble using a Berendsen barostat and a Nose-Hoover thermostat. The temperatures and pressures are reported in reduced units $T=1.0$ and $P=8$, equivalent to $T_{0}=550$ K and $P_{0}=1$ bar in atomistic simulations. The time step used throughout all the simulations is $0.005\tau$. Table 1 provides a summary of the polymer configurations and their equilibration times in units of $\tau$. In order to analyze the static properties of CG-PVA polymers, we extract from the polymer configurations the normalized probability distribution functions of the end-to-end distance, the gyration radius and the bond angle of the polymers, as well as the bond length and the bond angle of their primitive paths. We acquire the numerical probability distribution of a desired observable $x$ by accumulating a histogram $H_{N}(x)$ of fixed bin width $\Delta x$. Then, we obtain the normalized probability distribution function as $P_{N}(x)=\frac{H_{N}(x)}{\sum_{x^{\prime}}H_{N}(x^{\prime})\Delta x}$.
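The normalization just described can be sketched in a few lines of Python (the sample below is synthetic, purely to illustrate the procedure): divide the histogram counts by $\sum H_{N}\,\Delta x$ so that the resulting $P_{N}(x)$ integrates to one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sample standing in for a measured observable x.
samples = rng.normal(loc=1.0, scale=0.3, size=100_000)

# Accumulate a histogram H_N(x) with fixed bin width, then normalize:
# P_N(x) = H_N(x) / (sum_x' H_N(x') * dx), so that integral P_N dx = 1.
hist, edges = np.histogram(samples, bins=100)
dx = edges[1] - edges[0]
pdf = hist / (hist.sum() * dx)

print(np.sum(pdf * dx))   # -> 1.0 up to floating point
```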
III Structural features of CG-PVA polymer melts We first investigate the chain-length dependence of the mean square end-to-end distance and the gyration radius for chain sizes in the range $5\leq N\leq 1000$. The mean square end-to-end distance is defined as $$\langle R_{e}^{2}\rangle=\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}\langle(\mathbf{r}_{i,N}-\mathbf{r}_{i,1})^{2}\rangle$$ (3) and the mean square gyration radius is given by $$\langle R_{g}^{2}\rangle=\frac{1}{n_{c}N}\sum_{i=1}^{n_{c}}\langle\sum_{n=1}^{N}(\mathbf{r}_{i,n}-\mathbf{r}_{i,cm})^{2}\rangle$$ (4) where $\mathbf{r}_{i,n}$ is the position of the $n$th monomer of chain number $i$ and $\mathbf{r}_{i,cm}$ is the center of mass position of the $i$th polymer chain in a sample. Here, $\langle\cdots\rangle$ denotes an average over $20-50$ equilibrated configurations that are $5-10\times 10^{3}\tau$ apart for the shorter chains and $2.5\times 10^{6}\tau$ apart for the longer chains with $N>200$. Fig. 2 shows $\langle R_{g}^{2}\rangle/\langle\mathbf{b}^{2}\rangle$ and $\langle R_{e}^{2}\rangle/(6\langle\mathbf{b}^{2}\rangle)$ as a function of chain length $N$. Here, $\langle\mathbf{b}^{2}\rangle:=\ell_{b}^{2}=0.2477\pm 0.0002$ denotes the mean-square bond length, which is independent of the chain length. We notice that the longer chains with $N>50$ follow the relation $\langle R_{e}^{2}\rangle/\langle R_{g}^{2}\rangle=6$ valid for ideal chains de Gennes (1979). For shorter chains the ratio $\langle R_{e}^{2}\rangle/\langle R_{g}^{2}\rangle$ is in the range $6.5-7.5$. Additionally, chains with $N>50$ follow the scaling behavior of ideal chains, $\langle R_{e}^{2}\rangle\propto\langle R_{g}^{2}\rangle\propto N$. Fitting $\langle R_{e}^{2}\rangle\propto N^{2\nu}$ versus $N$ with a power law gives the scaling exponent $\nu=0.50$, identical to the value for ideal chains.
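The ideal-chain relation $\langle R_{e}^{2}\rangle/\langle R_{g}^{2}\rangle=6$ can be reproduced with the observables of eqs. (3) and (4) on freely jointed random walks standing in for ideal melt conformations (chain length, chain count and bond length below are arbitrary illustrative choices, not the CG-PVA parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

# Freely jointed chains: n_c chains of N beads with unit-length bonds.
n_c, N, b = 2000, 200, 1.0
steps = rng.normal(size=(n_c, N - 1, 3))
steps *= b / np.linalg.norm(steps, axis=2, keepdims=True)   # unit bond vectors
pos = np.concatenate([np.zeros((n_c, 1, 3)), np.cumsum(steps, axis=1)], axis=1)

# Eq. (3): mean square end-to-end distance averaged over chains.
re2 = ((pos[:, -1] - pos[:, 0]) ** 2).sum(axis=1).mean()
# Eq. (4): mean square gyration radius about each chain's center of mass.
com = pos.mean(axis=1, keepdims=True)
rg2 = ((pos - com) ** 2).sum(axis=2).mean(axis=1).mean()

print(re2 / rg2)   # close to 6 for ideal chains
```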
III.1 Probability distribution functions The observed scaling behavior of the mean square end-to-end distance and the gyration radius of CG-PVA polymers with $N>50$ suggests that long semiflexible polymers behave like ideal chains. To test the ideality hypothesis, we investigate the conformational statistics of individual chains and compare them to the theoretical predictions for ideal chains de Gennes (1979); Rubinstein and Colby (2003). We obtain the probability distributions of the scaled end-to-end distance $r_{e}=(R_{e}^{2}/\langle R_{e}^{2}\rangle)^{1/2}$ and the scaled gyration radius $r_{g}=(R_{g}^{2}/\langle R_{g}^{2}\rangle)^{1/2}$ from the simulated polymer configurations. For ideal chains, the probability distribution function of the end-to-end vector $\vec{R}_{e}$ for chains of size $N$ has a Gaussian distribution Rubinstein and Colby (2003); Doi and Edwards (1986). As a result, the corresponding probability distribution function for the reduced end-to-end distance $P_{N}(r_{e})$ follows as $$P_{N}(r_{e})=4\pi r_{e}^{2}\left(\frac{3}{2\pi}\right)^{3/2}\exp\left(-\frac{3r_{e}^{2}}{2\langle r_{e}^{2}\rangle}\right)$$ (5) where $\int_{0}^{\infty}P_{N}(r_{e})dr_{e}=1$. The exact expression for the probability distribution of the gyration radius is more complicated and does not have a compact form Fujita and Norisuye (1970). However, it is found that the formula suggested by Lhuillier Lhuillier (1988) for polymer chains under good solvent conditions provides a good approximation for ideal chains Vettorel et al. (2010); Hsu (2014); Hsu and Kremer (2016) too. The Lhuillier formula for the scaled gyration radius $r_{g}$ in $d$ dimensions reduces to $$P_{N}(r_{g})=A_{g}\exp\left(-a_{1}r_{g}^{-\alpha d}-a_{2}r_{g}^{\delta}\right)$$ (6) in which the exponents $\alpha$ and $\delta$ are related to the space dimension $d$ and the Flory exponent $\nu$ by $\alpha=(\nu d-1)^{-1}$ and $\delta=(1-\nu)^{-1}$.
$a_{1}$ and $a_{2}$ are system-dependent non-universal constants and $A_{g}$ is a normalization constant such that $\int_{0}^{\infty}P_{N}(r_{g})dr_{g}=1$. Fig. 3 depicts the normalized probability distributions of the scaled end-to-end distance $r_{e}$ and gyration radius $r_{g}$ for different chain lengths. For chain lengths $N\geq 50$, all the data collapse on a single master curve for both $P_{N}(r_{e})$ and $P_{N}(r_{g})$. Even the data for $N=30$ lie close to the master curves. We note that the $N=1000$ data show a larger scatter about the master curves in the central regions of the distribution functions. These deviations are most likely due to a poor equilibration of the $N=1000$ chains. We have also plotted the corresponding theoretical predictions for the $N$-independent normalized distribution functions given by Eq. (5) and Eq. (6) in Fig. 3a and Fig. 3b, respectively. We find a very good agreement between the master curves and the theoretical predictions for the ideal chains. Next, we examine the probability distribution of bond angles $P_{N}(\theta)$ and compare it with the form expected from the Boltzmann distribution, $P_{N}(\theta)=A_{\theta}\sin\theta\exp[-\beta U_{bend}(\theta)]$, where $A_{\theta}$ is a normalization constant such that $\int_{0}^{\pi}d\theta P_{N}(\theta)=1$. Fig. 5 presents the $P_{N}(\theta)$ obtained from accumulating the histograms of bond angles as well as the Boltzmann-distribution prediction. Overall, we find a good agreement between the two probability distribution functions for all chain lengths, including the poorly equilibrated polymers of size $N=1000$. This observation demonstrates that the short-range conformations of the polymers are well equilibrated.
III.2 Intrachain correlations To understand the intrachain correlations, we calculate the mean-square internal distances (MSID) for various chain lengths, defined as $$\langle R^{2}(n)\rangle=\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}\frac{1}{N-n}\sum_{j=1}^{N-n}\langle(\mathbf{r}_{i,j}-\mathbf{r}_{i,j+n})^{2}\rangle$$ (7) where $n$ is the internal (chemical) distance between the $j$th monomer and the $(j+n)$th monomer along the same chain. The MSID is a measure of internal chain conformation that can be used to evaluate the degree of equilibration of long polymers. In Fig. 3a, we present the results for the rescaled mean square internal distance, $\langle R^{2}(n)\rangle/n\ell_{b}^{2}$, obtained by averaging over $20-50$ polymer melt configurations that are $10^{4}$ $\tau$ ($2.5\times 10^{6}$ $\tau$) apart for short (long) chains. For longer chains with $100<N<1000$, we find a good collapse of all the MSIDs. The $\langle R^{2}(n)\rangle/n\ell_{b}^{2}$ of the longest chain length $N=1000$ deviates from that of the other long chains for internal distances $n>50$. These results show that the $N=1000$ chains are equilibrated at shorter length scales, but their large-scale conformation still requires a much longer equilibration time $\tau_{eq}\approx 3(N^{3}/N_{e})\tau\approx 10^{8}\tau$, where $N_{e}\approx 36$ is the entanglement length, as will be discussed in the next section. From the asymptotic behavior of the mean square end-to-end distance of long CG-PVA chains, we can extract their characteristic ratio $C_{\infty}$ and Kuhn length $\ell_{K}$. The characteristic ratio is defined by the relation $\langle R_{e}^{2}(N)\rangle=C_{\infty}(N-1)\ell_{b}^{2}$ where $\ell_{b}$ is the average bond length. From the MSID of the longer polymers, $N=300$ and $N=500$, we obtain $C_{\infty}=5.89\pm 0.04$.
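The averages in Eq. (7) can be accumulated directly from the monomer coordinates. A minimal sketch (again assuming configurations stored as an array of shape $(n_{c}, N, 3)$, not the authors' actual analysis code), checked on synthetic random walks where $\langle R^{2}(n)\rangle/n$ is constant:

```python
import numpy as np

def msid(conf):
    """Mean square internal distance <R^2(n)>, Eq. (7); conf has shape (n_c, N, 3)."""
    n_c, N, _ = conf.shape
    out = np.empty(N - 1)
    for n in range(1, N):
        d = conf[:, n:, :] - conf[:, :-n, :]        # all monomer pairs (j, j+n)
        out[n - 1] = np.mean(np.sum(d**2, axis=2))  # average over j and over chains
    return out

# For a random walk with unit-variance Gaussian steps, <R^2(n)>/n = 3 for all n.
rng = np.random.default_rng(1)
conf = np.cumsum(rng.normal(size=(100, 200, 3)), axis=1)
r2 = msid(conf)
ratio = r2 / np.arange(1, 200)    # roughly flat near 3
```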
The Kuhn length gives us the effective bond length of an equivalent freely jointed chain which has the same mean square end-to-end distance $R_{e}^{2}$ and the same maximum end-to-end distance $R_{\text{max}}$ Rubinstein and Colby (2003). For a freely jointed chain with $N_{k}$ Kuhn segments of bond length $\ell_{K}$, we have $R_{\text{max}}=(N_{k}-1)\ell_{K}$ and $\langle R_{e}^{2}(N_{k})\rangle=(N_{k}-1)\ell_{K}^{2}$. For CG-PVA polymers, we find $R_{\text{max}}=(N-1)\ell_{b}$ and $\langle R_{e}^{2}(N\gg 1)\rangle=5.89(N-1)\ell_{b}^{2}$. Equating $\langle R_{e}^{2}(N)\rangle$ and $R_{\text{max}}$ of the CG-PVA chains with those of the equivalent freely jointed chain, we obtain $\ell_{K}=5.89\ell_{b}=2.93$ $\sigma$. If the excluded volume interactions in the melt are screened, one expects the mean square internal distance of semiflexible polymers to be well described by a generalized freely rotating chain (FRC) model Flory (1969); Honnell et al. (1990). $\langle R^{2}(n)\rangle$ of the FRC model depends only on the value of $\langle\cos\theta\rangle$, where $\theta$ is the angle between any two successive bonds in a chain. It is given by $$\langle R^{2}(n)\rangle=n\ell_{b}^{2}\left(\frac{1+\langle\cos\theta\rangle}{1-\langle\cos\theta\rangle}-\frac{2}{n}\frac{\langle\cos\theta\rangle(1-\langle\cos\theta\rangle^{n})}{(1-\langle\cos\theta\rangle)^{2}}\right).$$ (8) The value of $\langle\cos\theta\rangle$ for the CG-PVA model can be obtained from $P_{N}(\theta)\propto\exp[-\beta U_{bend}(\theta)]$ as $$\langle\cos\theta\rangle=\frac{\int_{0}^{\pi}\sin\theta\cos\theta\exp[-\beta U_{bend}(\theta)]\,d\theta}{\int_{0}^{\pi}\sin\theta\exp[-\beta U_{bend}(\theta)]\,d\theta}$$ (9) where $U_{bend}(\theta)$ is presented in Fig. 1. Performing the integration in Eq. (9) at $T=1.0$ numerically, we obtain $\langle\cos\theta\rangle=0.6985$.
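Eq. (8) is easy to evaluate numerically. A minimal sketch (using the values $\ell_{b}=0.497$ and $\langle\cos\theta\rangle=0.6985$ quoted in the text) showing that the FRC prediction for $\langle R^{2}(n)\rangle/n\ell_{b}^{2}$ rises from 1 at $n=1$ to the plateau $C_{\infty}=(1+\langle\cos\theta\rangle)/(1-\langle\cos\theta\rangle)$ at large $n$:

```python
import numpy as np

def frc_msid(n, cos_t, lb=0.497):
    """Generalized freely rotating chain prediction for <R^2(n)>, Eq. (8)."""
    c = cos_t
    return n * lb**2 * ((1 + c) / (1 - c)
                        - (2.0 / n) * c * (1 - c**n) / (1 - c)**2)

n = np.arange(1, 1001)
pred = frc_msid(n, 0.6985) / (n * 0.497**2)   # rescaled MSID
# pred[0] = 1 exactly; the large-n plateau is C_inf = (1+c)/(1-c) ~ 5.64
```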
We can also directly infer $\langle\cos\theta\rangle$ from the MD simulation results as $\langle\cos\theta\rangle\equiv\langle\widehat{\mathbf{b}}_{i,j}\cdot\widehat{\mathbf{b}}_{i,j+1}\rangle$, where $\widehat{\mathbf{b}}_{i,j}$ is the $j$th unit bond vector of the $i$th chain and the averaging is carried out over all the chains and several polymer melt configurations. From the MD simulations, we deduce $\langle\cos\theta\rangle=0.699\pm 0.005$, independent of the chain length. This value agrees well with the Boltzmann-averaged mean value. In Fig. 3a, we have also shown the MSID of an equivalent freely rotating chain with $\langle\cos\theta\rangle=0.6985$. We find that the MSIDs of short chains $N\leq 100$ fully agree with that of the freely rotating chain model, and the MSIDs of longer chains are slightly larger than this prediction. These small deviations are most likely due to the correlation hole effect that stems from incomplete screening of interchain excluded volume interactions in the pervaded volume of polymers de Gennes (1979). Next, we compare the characteristic ratio of the FRC model, given by $C_{\infty}=\frac{1+\langle\cos\theta\rangle}{1-\langle\cos\theta\rangle}$, with that of CG-PVA chains. The $C_{\infty}$ value for the equivalent FRC of the CG-PVA model with $\langle\cos\theta\rangle=0.6985$ is $C_{\infty}=5.64$, which is slightly lower than the $C_{\infty}=5.89$ obtained from the simulation results. We inspect the intrachain orientational bond-bond correlations $\langle\cos\theta(n)\rangle\equiv\langle\widehat{\mathbf{b}}_{i,j}\cdot\widehat{\mathbf{b}}_{i,j+n}\rangle$ as a function of the internal distance $1\leq n\leq N-1$. Fig. 3b presents the orientational bond-bond correlations for different chain lengths. We find that the bond-bond correlation functions of all the chain lengths decay exponentially and are well described by $\exp(-0.35n)$.
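The measured decay rate and the measured $\langle\cos\theta\rangle$ provide two independent persistence-length estimates, which can be cross-checked in a few lines (a sketch; the numbers $\ell_{b}=0.497\,\sigma$, decay rate $0.35$, and $\langle\cos\theta\rangle=0.699$ are taken from the surrounding text):

```python
import math

lb = 0.497          # mean bond length in units of sigma (quoted in the text)
decay = 0.35        # fitted decay rate of <cos(theta(n))> ~ exp(-0.35 n)
cos_theta = 0.699   # measured average bond-angle cosine

lp_fit = lb / decay                   # from the exponential fit: ~2.86 l_b
lp_cos = -lb / math.log(cos_theta)    # from a single bond angle:  ~2.79 l_b
```

The two estimates agree to within a few percent, consistent with the exponential decay observed over the whole fitted range.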
We extract the so-called persistence length $\ell_{p}$ from the bond-bond orientational correlation functions. $\ell_{p}$ is defined as their decay length, more precisely $$\langle\widehat{\mathbf{b}}_{i,j}\cdot\widehat{\mathbf{b}}_{i,j+n}\rangle=\exp(-n\ell_{b}/\ell_{p}).$$ (10) Using $\ell_{b}=0.497$ and $\ell_{b}/\ell_{p}=0.35$, we obtain $\ell_{p}=2.85\ell_{b}=1.42\sigma$. Alternatively, we can estimate the persistence length from $\langle\cos\theta\rangle=0.699$ as $\ell_{p}=-\ell_{b}/\ln(\langle\cos\theta\rangle)$, which leads to $\ell_{p}=2.79\ell_{b}$, comparable to the value estimated by fitting the bond-bond orientational correlation functions with an exponential decay. Notably, the relation $\ell_{K}=2\ell_{p}$ valid for worm-like chains roughly holds for semiflexible CG-PVA polymers. We also note that here the range over which the exponential decay holds is extended compared to the semiflexible finitely extensible nonlinear elastic (FENE) polymer model Kremer and Grest (1990) with bending potential $U_{bend}=\kappa_{bend}(1-\cos\theta)$ and $\kappa_{bend}=1.5$ Hsu and Kremer (2016). The larger extent of exponential decay is probably due to a larger average bending stiffness of CG-PVA polymers. Having investigated the conformational properties of CG-PVA polymers, we focus on their structural properties in Fourier space in the next subsection. III.3 Form factor and structure factor A common way to characterize the structural properties of polymer melts is to explore their structure factor, which can be measured directly in scattering experiments. The structure factor encompasses the information about spatial correlations between the monomers via the Fourier transform of density-density correlation functions. For spatially homogeneous and isotropic systems such as polymer melts at equilibrium, the static structure factor depends only on the modulus $q$ of the wave vector.
The static structure factor $S(q)$ measured in scattering experiments on amorphous melts is often spherically averaged over all the wave vectors $\mathbf{q}$ with the same modulus $q$. This quantity can be computed as $$S(q)=\frac{1}{Nn_{c}}\sum_{i,j=1}^{n_{c}}\sum_{n,m=1}^{N}\langle\exp\left[-i\mathbf{q}\cdot(\mathbf{r}_{i,n}-\mathbf{r}_{j,m})\right]\rangle$$ (11) where the angular brackets represent averaging over all the wave vectors with the same modulus and all the melt configurations. $S(q)$ given in Eq. (11) encompasses scattering from all the monomer pairs. It can be split into intrachain and interchain contributions, $$S(q)=S_{c}(q)+\rho_{m}h(q)$$ (12) where $\rho_{m}=Nn_{c}/V$ ($V$ is the volume of the simulation box) is the monomer density and $$S_{c}(q)=\frac{1}{Nn_{c}}\sum_{i=1}^{n_{c}}\sum_{n,m=1}^{N}\langle\exp\left[-i\mathbf{q}\cdot(\mathbf{r}_{i,n}-\mathbf{r}_{i,m})\right]\rangle$$ (13) includes the contributions from intrachain pair correlations; it is called the intrachain or single-chain structure factor. Equivalently, $F(q)=S_{c}(q)/N$, known as the form factor Rubinstein and Colby (2003), is used to quantify the intrachain correlations in Fourier space. The interchain contribution is given by $h(q)$, defined as the Fourier transform of the intermolecular pair correlation function Hansen and McDonald (1986), $$h(q)=\frac{V}{(Nn_{c})^{2}}\sum_{i\neq j}^{n_{c}}\sum_{n,m=1}^{N}\langle\exp\left[-i\mathbf{q}\cdot(\mathbf{r}_{i,n}-\mathbf{r}_{j,m})\right]\rangle.$$ (14) We present the behavior of the form and structure factors for different chain lengths. We first focus on the form factor as depicted in Fig. 6. In Fig. 6a, we have plotted $S_{c}(q)=NF(q)$ versus $q$ for different chain lengths. We find that for small $q$ values ($q\ll\frac{2\pi}{R_{g}}$), in the Guinier regime, the behavior of $S_{c}(q)$ is well described by $S_{c}(q)=N(1-q^{2}\langle R_{g}^{2}\rangle/3)$.
On the other hand, at larger $q$ values, the $S_{c}(q)$ of all the chain lengths coincide. In particular, we observe a scaling behavior of the form $S_{c}(q)\propto q^{-1}$ in the range $\frac{2\pi}{\ell_{K}}<q\ll\frac{2\pi}{\ell_{b}}$, which is the fingerprint of rod-like conformations for chain portions shorter than their Kuhn length. For long chains with $N\geq 300$, we observe a scaling behavior $q^{-2}$ at intermediate $q$ values ($\frac{2\pi}{R_{g}}<q\ll\frac{2\pi}{\ell_{K}}$), which is the characteristic scaling of Gaussian coils (ideal chains), $S_{c}(q)\propto q^{-1/\nu}$ with $\nu=1/2$. The form factor of Gaussian chains, known as the Debye function, is given by Rubinstein and Colby (2003) $$F_{Debye}(q)=\frac{2}{Q^{2}}\left[\exp(-Q)+Q-1\right]\quad\text{with}\quad Q=q^{2}\langle R_{g}^{2}\rangle.$$ (15) In order to compare the behavior of CG-PVA polymers in the melt state with that of ideal Gaussian chains, in Fig. 6b we have plotted the form factor of CG-PVA chains and the Debye function versus $qR_{g}$. For all the chain lengths, we observe deviations from the ideal polymer behavior at high $q$ values. The onset of deviations shifts progressively to larger wave vectors for longer chains. For the longest chains $N\geq 300$, we have also presented the form factors $F(q)$ in a Kratky plot in Fig. 7. This plot confirms the existence of a Kratky plateau in the scale-free regime that extends up to $qR_{g}\approx 20$ for $N=1000$. The deviations at larger $q$ values reflect the underlying form of the bond angle potential $U_{bend}(\theta)$, which dominates the behavior of the form factor at length scales smaller than or comparable to the Kuhn length. Next, we present the structure factor of different chain lengths in Fig. 8. As we notice, the $S(q)$ of all the polymer melts displays the characteristic features of the liquid state. We find a very weak dependence on the chain length; for $N>50$, the $S(q)$ of the various chain lengths are identical.
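The Debye function of Eq. (15) and its two limits (the Guinier expansion at small $q$ and the $q^{-2}$ regime that produces the Kratky plateau at the value 2) can be sketched as follows, with $\langle R_{g}^{2}\rangle$ set to 1 for illustration:

```python
import numpy as np

def debye(q, rg2):
    """Debye form factor of an ideal Gaussian chain, Eq. (15)."""
    Q = q**2 * rg2
    return 2.0 / Q**2 * (np.exp(-Q) + Q - 1.0)

q = np.array([0.01, 1.0, 10.0])
f = debye(q, rg2=1.0)
kratky = q**2 * 1.0 * f   # Kratky representation q^2 <R_g^2> F(q) -> 2 at large q
# Guinier limit: F -> 1 - q^2 <R_g^2>/3 as q -> 0
```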
We notice several important features in the structure factors. First, the structure factor at low $q$ is very small. By virtue of the compressibility equation that relates the isothermal compressibility $\kappa_{T}$ to the structure of the liquid, i.e. $\lim_{q\rightarrow 0}S(q)=\rho_{m}k_{B}T\kappa_{T}$, we conclude that the polymer melts are almost incompressible. Second, the first peak of $S(q)$ at $q^{*}$ characterizes the packing of monomers in the first nearest-neighbor shell. The value of $q^{*}$ nearly agrees with $2\pi/\sigma_{0}$, reflecting that the first peak of $S(q)$ is dominated by interchain contributions. To gain more insight into the interchain correlations of CG-PVA polymers, we compare $\rho_{m}h(q)$ with that of simple liquids with no internal structure. For such a simple liquid, we have $S_{c}(q)=1$, hence $\rho_{m}h(q)=S(q)-1$ Hansen and McDonald (1986). Fig. 9a shows both $\rho_{m}h(q)$ and $S(q)-1$ for two chain lengths, $N=50$ and $N=500$. We find that in the region near the first peak $\rho_{m}h(q)$ and $S(q)-1$ coincide, confirming that the peak at $q=q^{*}$ is totally determined by the interchain correlations. In Fig. 9a, we have also included $-S_{c}(q)$. We note that for very low wave vectors beyond the peak region, $\rho_{m}h(q)$ closely agrees with $-S_{c}(q)$. This behavior shows that the correlation between monomers of different chains decreases with increasing distance. This decrease is accompanied by the increase of $S_{c}(q)$ at low $q$ values, such that the sum of the intrachain and interchain contributions yields a small finite value for $S(q)$ as $q\rightarrow 0$. In the other extreme of $q\gg q^{*}$, $\rho_{m}h(q)$ deviates from $S(q)-1$, as the large-$q$ behavior of the structure factor is fully determined by the intrachain correlations due to the correlation hole effect de Gennes (1979); Vettorel et al. (2007).
The correlation hole effect leads to a decreased probability of finding a monomer of another chain in the pervaded volume of a particular chain. To illustrate this point, in Fig. 9b we have shown $S_{c}(q)$ and $S(q)$ for $N=50$ and $N=500$ in the same plot. We see that the large-$q$ behavior is entirely dominated by intrachain contributions. These observations are in agreement with prior investigations for short chain lengths $10\leq N\leq 100$ Vettorel et al. (2007). IV Primitive path analysis and entanglement statistics Having investigated the conformational and structural features of CG-PVA polymer melts, we focus on their topological characteristics, i.e. interchain entanglements. Entanglements stem from topological constraints due to chain connectivity and uncrossability, which restrict the movements of chains at intermediate time and length scales. As first noted by Edwards Edwards (1967), the presence of neighboring strands in a dense polymer melt effectively confines a single polymer strand to a tube-like region. The centerline of such a tube is known as the primitive path (PP). A practical and powerful method for characterizing the entanglements is primitive path analysis (PPA). Such an analysis provides an operational definition of the primitive path and allows us to investigate the statistics of chain entanglements. Several variants of PPA exist in the literature Kroger (2005); R. Hoy (2009); Everaers et al. (2004), all similar in spirit. Here, we implement the PPA method proposed by Everaers et al. Everaers et al. (2004), which identifies the primitive path of each polymer chain in a melt based on the concept of the Edwards tube model Edwards (1967). The primitive path is defined as the shortest path between the chain ends that can be reached from the initial conformations of the polymers without crossing other chains.
In this analysis, the topologies of the chains are conserved, and chains are assumed to follow random walks along their primitive paths. Therefore, the primitive path is a random walk with the same mean square end-to-end distance $\langle R_{e}^{2}\rangle=\langle R_{e}^{2}\rangle^{(pp)}$ but a shorter bond length $\ell_{b}^{(pp)}$ and contour length $L^{(pp)}=(N-1)\ell_{b}^{(pp)}$. In practice, by extracting the average bond length of the primitive paths, $\langle\ell_{b}^{(pp)}\rangle=1/(N-1)\langle\sum_{i=1}^{N-1}|\mathbf{r}_{i+1}-\mathbf{r}_{i}|\rangle$, we can determine all the other desired quantities. In particular, the Kuhn length of the primitive path $\ell_{K}^{(pp)}$ is obtained as $$\ell_{K}^{(pp)}=\frac{\langle R_{e}^{2}\rangle}{\langle L^{(pp)}\rangle}=\frac{\langle R_{e}^{2}\rangle}{(N-1)\langle\ell_{b}^{(pp)}\rangle}.$$ (16) The so-called entanglement length $N_{e}$, defined as the average number of monomers in a Kuhn segment of the primitive path, follows from $$N_{e}=\frac{\ell_{K}^{(pp)}}{\langle\ell_{b}^{(pp)}\rangle}=\frac{\langle R_{e}^{2}\rangle}{(N-1)\langle\ell_{b}^{(pp)}\rangle^{2}}.$$ (17) Operationally, we obtain the primitive paths of polymers in a melt by slowly cooling the system toward $T=0$ while the two chain ends are kept fixed. During this procedure, the intrachain excluded volume interactions and the bond angle potential are switched off. The system is then equilibrated using a conjugate gradient algorithm in order to minimize its potential energy and reach a local minimum. We perform primitive path analysis for the two longest chain lengths that are fully equilibrated, i.e. $N=300$ and 500, since it is known that poor equilibration affects the entanglement length R. Hoy (2009). We first examine the probability distributions of the bond lengths of the primitive paths, i.e. $P_{N}(\ell_{b}^{(pp)})$, as presented in Fig. 10.
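Eqs. (16) and (17) reduce the entanglement analysis to one measured quantity, $\langle\ell_{b}^{(pp)}\rangle$. A minimal numerical sketch, using the representative values quoted elsewhere in the text ($\langle\ell_{b}^{(pp)}\rangle\approx 0.20\,\sigma$, $C_{\infty}=5.89$, $\ell_{b}=0.497\,\sigma$), reproduces the order of magnitude of the reported $\langle N_{e}\rangle\approx 36.5$:

```python
def entanglement_length(re2, N, lb_pp):
    """Entanglement length from primitive path data, Eqs. (16)-(17)."""
    L_pp = (N - 1) * lb_pp     # primitive path contour length
    lK_pp = re2 / L_pp         # Kuhn length of the primitive path, Eq. (16)
    return lK_pp / lb_pp       # N_e, Eq. (17)

N = 500
re2 = 5.89 * (N - 1) * 0.497**2            # <R_e^2> of the original chains
Ne = entanglement_length(re2, N, 0.20)     # close to the reported ~36.5
```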
The distributions of the bond lengths of the primitive paths are chain-length dependent, but both are centered at $\ell_{b}^{(pp)}=0.20$. Furthermore, the primitive path bond length fluctuations are considerably larger than those of the original paths. The normalized distributions of the bond length $\ell_{b}^{(pp)}$ can be well described by a Gaussian distribution of the form $$P_{N}(\ell_{b}^{(pp)})=\frac{1}{\sqrt{2\pi\sigma_{N}^{2}}}\exp\left(-\frac{(\ell_{b}^{(pp)}-\langle\ell_{b}^{(pp)}\rangle)^{2}}{2\sigma_{N}^{2}}\right)$$ (18) where $\sigma_{N}^{2}=\langle{\ell_{b}^{(pp)}}^{2}\rangle-\langle\ell_{b}^{(pp)}\rangle^{2}$ is the $N$-dependent variance of $\ell_{b}^{(pp)}$. Next, we investigate the statistical features of the bond angles of the primitive paths. Fig. 11a presents the bond-bond orientational correlation function $\langle\cos\theta(n)\rangle$ as a function of internal distance $n$ for the primitive paths of chain lengths $N=300$ and 500. For comparison, we have also shown the $\langle\cos\theta(n)\rangle$ of the original polymer conformations. Similar to the original polymer conformations, the initial decay of $\langle\cos\theta(n)\rangle$ for $10<n<80$ can be well described by an exponential. However, at short scales $n<10$, bonds are slightly stretched out because of the constraints of the fixed chain ends during the minimization of the primitive path length. Assuming an exponential function of the form $\exp(-n\langle\ell_{b}^{(pp)}\rangle/\ell_{p}^{(pp)})$, we can extract the persistence length of the primitive path $\ell_{p}^{(pp)}$. From the fit values, we find $\ell_{p}^{(pp)}=19\langle\ell_{b}^{(pp)}\rangle=3.80\sigma$, which is considerably larger than the persistence length of the original conformations, $\ell_{p}=1.42\sigma$. We also examine the normalized probability distributions of the bond angles $\theta$ of the primitive paths, as displayed in Fig. 11b.
Unlike the bond angle distributions of the original chain conformations, the bond-angle distribution of the primitive paths is unimodal, with its peak centered around $\theta=4.5^{\circ}$. Furthermore, the range of angles shrinks from $[0^{\circ},100^{\circ}]$ for the original paths to $[0^{\circ},20^{\circ}]$ for the primitive paths, reflecting that the primitive paths are mainly in stretched conformations. To explore the intrachain correlations of the primitive paths, we have plotted the mean square internal distances $\langle R^{2}(n)\rangle/n$ of the original and primitive paths in Fig. 12. As expected, the values of $\langle R^{2}(n)\rangle/n$ for both paths approach the same value with increasing $n$, since the chain endpoints are held fixed during the primitive path analysis. We find that the results of $\langle R^{2}(n)\rangle$ for the primitive path can still be relatively well described by the generalized FRC model, provided that we use $\langle\cos\theta\rangle^{(pp)}=\exp(-\langle\ell_{b}^{(pp)}\rangle/\ell_{p}^{(pp)})=0.947$ extracted from the bond-bond orientational correlations. Having confirmed that the mean square end-to-end distances of the primitive paths remain identical to those of the original chains, we obtain $\ell_{K}^{(pp)}=7.1\pm 0.05\sigma$ for the Kuhn length of the primitive path. We note that $\ell_{K}^{(pp)}$ is larger than the Kuhn length of the polymers, $\ell_{K}=2.93$ $\sigma$. Subsequently, we obtain the distribution of the entanglement length $P(N_{e})$, as presented in Fig. 13a. We notice that $P(N_{e})$ is narrow and presents a weak dependence on $N$, possibly resulting from the finite size of the chains. Our estimated value of the average entanglement length is $\langle N_{e}\rangle=36.4$ for $N=300$ and $\langle N_{e}\rangle=36.6$ for $N=500$. These results suggest that we are rather close to the asymptotic value of the entanglement length $N_{e}^{\infty}$. We have also plotted $N_{e}P(N_{e})$ in Fig.
13b, and we find that the position of the peak of $N_{e}P(N_{e})$ coincides with our estimated value of $\langle N_{e}\rangle\approx 36.5$. This observation is in agreement with the PPA results for the Kremer-Grest (FENE) model Hsu and Kremer (2016). V Conclusions We have investigated the static properties of polymer melts of a semiflexible bead-spring model known as the CG-PVA model Meyer and Muller-Plathe (2001, 2002). CG-PVA polymer melts are crystallizable, and characterization of their structural properties is important for understanding their crystallization behavior. We have equilibrated polymer melts with chain lengths $5\leq N\leq 1000$. The results for the long chains allow us to determine the Kuhn length $\ell_{K}$, the persistence length $\ell_{p}$ and the entanglement length $N_{e}$ of CG-PVA polymers accurately, as summarized in Table II. We note that the relation $\ell_{K}\approx 2\ell_{p}$ holds for semiflexible CG-PVA polymers. Overall, our results show that the deviations from ideality for CG-PVA polymers are small. We find that sufficiently long polymer melts with $N>50$ follow the relations $\langle R_{e}^{2}\rangle/\langle R_{g}^{2}\rangle=6$ and $\langle R_{e}^{2}\rangle\propto N$ valid for ideal chains. The probability distribution functions of the reduced end-to-end distance $r_{e}=(R_{e}^{2}/\langle R_{e}^{2}\rangle)^{1/2}$ and the reduced gyration radius $r_{g}=(R_{g}^{2}/\langle R_{g}^{2}\rangle)^{1/2}$ for chain lengths $N\geq 50$ also collapse on universal master curves that are well described by the theoretical predictions for ideal chains. The mean square internal distance of short polymers up to $N=100$ shows an excellent agreement with the generalized freely rotating chain model Flory (1969), while for longer chains we observe slight deviations that are most likely associated with the correlation hole effect de Gennes (1979).
We have investigated in detail the intrachain and interchain structure factors of different chain lengths. The interchain structure factor is almost independent of the chain length, whereas the intrachain structure factor $S_{c}(q)$ depends on $N$, as expected. We find that the $S_{c}(q)$ of sufficiently long semiflexible CG-PVA polymers is well described by the Debye function for length scales larger than the Kuhn length. The agreement with the Debye function improves as $N$ increases. Notably, we observe a plateau in the Kratky plot for the range $2<qR_{g}<20$. Our results are in contrast with the findings for fully flexible chains, which exhibit significant deviations from the Debye function at intermediate wave vectors Wittmer et al. (2007a); Beckrich et al. (2007); Hsu (2014). However, they support the recent findings that increasing the bending stiffness of the chains in a melt, irrespective of details, improves the agreement with the ideal-chain limit Hsu and Kremer (2016). These findings suggest that if the excluded volume interactions are screened on distances shorter than the persistence length, they will not affect the long-range chain conformation. Using the primitive path analysis, we have determined the average entanglement length $N_{e}$ of long, equilibrated chains, and we have compared the original polymer paths with their primitive paths. Probing the bond-bond orientational correlation function and the mean square internal distance of the primitive paths, we confirm the assumption that polymers behave nearly as Gaussian chains along their primitive paths. Notably, the Kuhn length of the primitive path $\ell_{K}^{(pp)}$ is more than twice the $\ell_{K}$ of the original path. Similar to the FENE model, the average bond length of the primitive paths follows a Gaussian distribution, and the peak of the first moment of the entanglement length probability distribution agrees with the average entanglement length.
Entanglements are known to strongly affect the dynamics and rheological properties of polymer melts Doi and Edwards (1986). To quantify the influence of entanglements on the viscoelastic properties of CG-PVA polymer melts, the dynamics of fully equilibrated chains is under investigation and will be presented in a future work. Acknowledgements. S. J.-F. is grateful to Jean-Louis Barrat for his support and insightful discussions. She also thanks Kurt Kremer and Hsiao-Ping Hsu for inspiring discussions. She also acknowledges financial support from the German Science Foundation (http://www.dfg.de) within SFB TRR 146 (http://trr146.de). The main part of the computations was performed using the Froggy platform of the CIMENT infrastructure supported by the Rhone-Alpes region (Grant No. CPER07-13 CIRA) and the Equip@Meso Project (Reference No. ANR-10-EQPX-29-01). Additionally, the computing time granted on the supercomputer Mogon at Johannes Gutenberg University Mainz (hpc.uni-mainz.de) is gratefully acknowledged. References Rubinstein and Colby (2003) M. Rubinstein and R. H. Colby, Polymer Physics (Oxford University Press, Oxford, 2003). Flory (1969) P. J. Flory, Statistical Mechanics of Chain Molecules (Wiley, New York, 1969). Doi and Edwards (1986) M. Doi and S. F. Edwards, The Theory of Polymer Dynamics (Clarendon Press, Oxford, 1986). Wittmer et al. (2004) J. P. Wittmer, H. Meyer, J. Baschnagel, A. Johner, S. Obukhov, L. Mattioni, M. Muller,  and A. N. Semenov, Physical Review Letters 93, 147801 (2004). Wittmer et al. (2007a) J. P. Wittmer, P. Beckrich, A. Johner, S. Obukhov, A. N. Semenov, H. Meyer,  and J. Baschnagel, Europhysics Letters 77, 56003 (2007a). Wittmer et al. (2007b) J. P. Wittmer, P. Beckrich, H. Meyer, A. Cavallo, A. Johner,  and J. Baschnagel, Physical Review E 76, 011803 (2007b). Beckrich et al. (2007) P. Beckrich, A. Johner, A. N. Semenov, S. P. Obukhov, H. C. Benoit,  and J. P. Wittmer, Macromolecules 40, 3805 (2007). Meyer et al. (2008) H. Meyer, J. P.
Wittmer, T. Kreer, P. Beckrich, A. Johner, J. Farago,  and J. Baschnagel, European Physical Journal E 26, 25 (2008). Hsu and Kremer (2016) H.-P. Hsu and K. Kremer, J. Chem. Phys. 144, 154907 (2016). Mortensen (2011) K. Mortensen, Advanced Functional Molecules and Polymers 2, 223 (2011). Higgins and Benoit (1997) J. S. Higgins and H. C. Benoit, Polymers and Neutron Scattering (Oxford University Press Inc., New York, 1997). Hsu (2014) H.-P. Hsu, J. Chem. Phys. 141, 164903 (2014). Meyer and Muller-Plathe (2001) H. Meyer and F. Muller-Plathe, The Journal of Chemical Physics 115, 7807 (2001). Vettorel et al. (2007) T. Vettorel, H. Meyer, J. Baschnagel,  and M. Fuchs, Phys. Rev. E 75, 041801 (2007). Meyer and Muller-Plathe (2002) H. Meyer and F. Muller-Plathe, Macromolecules 35, 1241 (2002). Plimpton (1995) S. Plimpton, Journal of Computational Physics 117, 1 (1995). de Gennes (1979) P. G. de Gennes, Scaling Concepts in Polymer Physics (Cornell University Press, Ithaca, New York, 1979). Fujita and Norisuye (1970) H. Fujita and T. Norisuye, J. Chem. Phys. 52, 1115 (1970). Lhuillier (1988) D. Lhuillier, J. Phys. France 49, 705 (1988). Vettorel et al. (2010) T. Vettorel, G. Besold,  and K. Kremer, Soft Matter 6, 2282 (2010). Honnell et al. (1990) K. G. Honnell, J. G. Curro,  and K. S. Schweizer, Macromolecules 23, 3496 (1990). Kremer and Grest (1990) K. Kremer and G. S. Grest, J. Chem. Phys. 92, 5057 (1990). Hansen and McDonald (1986) J. P. Hansen and I. R. McDonald, Theory of Simple Liquids (Academic Press, London, 1986). Edwards (1967) S. F. Edwards, Proc. Phys. Soc. 91, 513 (1967). Kroger (2005) M. Kroger, Comput. Phys. Commun. 168, 209 (2005). R. Hoy (2009) R. S. Hoy, K. Foteinopoulou,  and M. Kroger, Phys. Rev. E 80, 031803 (2009). Everaers et al. (2004) R. Everaers, S. K. Sukumaran, G. S. Grest, C. Svaneborg, A. Sivasubramanian,  and K. Kremer, Science 303, 823 (2004).
Efficient Programmable Random Variate Generation Accelerator from Sensor Noise James Timothy Meech, Phillip Stanley-Marbell,  James Timothy Meech and Phillip Stanley-Marbell are with the Department of Electrical Engineering, University of Cambridge, Cambridge, CB3 0FA UK e-mail: [email protected]. Abstract We introduce a method for non-uniform random number generation based on sampling a physical process in a controlled environment. We demonstrate one proof-of-concept implementation of the method that reduces the error of Monte Carlo integration of a univariate Gaussian by $1068\times$ while doubling the speed of the Monte Carlo simulation. We show that the supply voltage and temperature of the physical process must be controlled to prevent the mean and standard deviation of the random number generator from drifting. Index Terms: Sensor, Noise, Bayesian, Inference, Non-uniform, Random. I Introduction Current software-based methods of non-uniform random variate generation are slow and inefficient [1][2][3][4]. We present a programmable system capable of generating Gaussian random variates by extracting the noise properties of a MEMS sensor, and demonstrate its principle and application. Sampling a random physical process with a given distribution provides a continuous random variable with a theoretically unlimited sample rate. In a hardware implementation, the sample rate of the analog-to-digital converter limits the random number generation rate. I-A Generating Uniform Random Variates Is Easy Uniform random numbers are generated for cryptography [5]. Gaussian random variate generation is typically an order of magnitude slower and less efficient than uniform random variate generation [1]. Table I compares several state-of-the-art methods of generating non-uniform random variates with a Gaussian distribution.
We propose a method superior to all of the state-of-the-art methods in terms of sample rate and efficiency, consisting of a physical noise source and an analog-to-digital converter. I-B Generating Non-Uniform Random Variates Is Harder The inversion method and the accept-reject method are used in software for generating samples from non-uniform random variates [6]. Let $U$ and $X$ be uniform and non-uniform random variates respectively and $F^{-1}$ the analytical closed-form solution for the inverse cumulative distribution function [6]. Algorithm 1 shows the inversion method. The inversion method requires that the inverse cumulative distribution function have an analytical closed-form solution. The Gaussian distribution has no analytical closed-form solution for its inverse cumulative distribution function, so it cannot be sampled with the inversion method [4]. The accept-reject method must be used instead. Let $U$ and $X$ be independent and $g$ the density on $\mathbb{R}^{d}$ of $X$, where $\mathbb{R}^{d}$ is $d$-dimensional Euclidean space. Algorithm 2 shows the accept-reject method. The accept-reject method requires more mathematical operations than the inversion method to transform a sample [6]. This causes it to take more clock cycles to compute and therefore more time to transform each sample. When using the accept-reject method, samples deviating from the desired distribution are rejected [6]. I-C Uses Of Non-Uniform Random Variates Non-uniform random variate generators are fundamental to applications employing Monte Carlo methods [7][8][9], population balance modelling of the crystallisation process [10], generating phylogenetic trees [11], drug discovery [12], ray tracing [13], communication channel emulation [14], financial computing [15], local area networks [16], modelling of manufacturing systems [17] and measuring healthcare strategy effectiveness [18].
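The two methods of Section I-B can be sketched in a few lines of Python (a minimal illustration of the textbook algorithms, not the authors' implementation; the target distributions here are our own example choices): the inversion method applied to the exponential distribution, whose inverse CDF has a closed form, and the accept-reject method drawing half-Gaussian variates from an exponential proposal.

```python
import math
import random

def inversion_exponential(lam, u):
    # Inversion method: X = F^{-1}(U) with F^{-1}(u) = -ln(1 - u) / lam for Exp(lam)
    return -math.log(1.0 - u) / lam

def accept_reject_half_gaussian(rng):
    # Accept-reject: sample |N(0,1)| using an Exp(1) proposal g(x) = e^{-x}.
    # The envelope constant is c = sqrt(2e/pi), and the acceptance probability
    # f(x) / (c * g(x)) simplifies to exp(-(x - 1)^2 / 2).
    while True:
        x = inversion_exponential(1.0, rng.random())   # proposal sample
        if rng.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
            return x

rng = random.Random(0)
samples = [accept_reject_half_gaussian(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # E|N(0,1)| = sqrt(2/pi) ≈ 0.798
```

Note how the accept-reject sampler loops until a sample is accepted: this is the variable, data-dependent cost per variate that the text contrasts with the fixed cost of inversion.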
When conducting Bayesian inference in probabilistic machine learning, we must evaluate Bayes’ theorem to calculate the probability that a belief $B$ is true given new data $D$; we denote this as $P(B|D)$. To do this we need the probability that the belief is true regardless of our data, $P(B)$; the probability that the data is true given the belief, $P(D|B)$; and the probability that the data is true regardless of the belief, $P(D)$. We will refer to $P(B)$ as the prior, $P(D|B)$ as the likelihood, $P(D)$ as the marginal likelihood and $P(B|D)$ as the posterior [19]. $$P(B|D)=\frac{P(D|B)P(B)}{P(D)}.$$ (1) We calculate the marginal likelihood by integrating the joint density $P(B,D)$ [19]. In practice, the analytical calculation of the marginal likelihood is impossible for all but the simplest joint distributions [19]. We instead sample from the joint distribution $P(B,D)$ to obtain summary statistics that we can use to describe it [19]. These random samples from bespoke probability distributions must be produced using a non-uniform random variate generator. I-D Contributions 1. The idea that physical noise sources such as MEMS sensors can be used as non-uniform random variate generators (Section I). 2. Estimation of performance increase and error reduction achieved by using such a generator for Monte Carlo integration (Section II). 3. Investigation of the impact of temperature and supply voltage on the noise distribution obtained from a commercial MEMS sensor (Section III-A). II Motivating Example We performed Monte Carlo integration of a Gaussian with a mean of 0 and standard deviation of 1 using samples from a Gaussian with the same parameters. We ran the experiment with a Gaussian generated by the C++ random library and repeated it with the random number generation rate adjusted to match that of our proposed non-uniform random variate generator based on physical noise sources.
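The Monte Carlo estimation of a marginal likelihood sketched in Section I-C can be illustrated with a toy conjugate model (our example, not from the paper): with prior $B\sim N(0,1)$ and likelihood $D|B\sim N(B,1)$, the marginal $P(D)$ is a Gaussian with variance 2, so an average of the likelihood over prior draws can be checked against the closed form.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

rng = random.Random(1)
d = 0.5            # a single observed datum (hypothetical)
n = 200_000
# P(D) = E_{B ~ prior}[P(D|B)]: average the likelihood over samples from the prior
estimate = sum(normal_pdf(d, rng.gauss(0.0, 1.0), 1.0) for _ in range(n)) / n
exact = normal_pdf(d, 0.0, math.sqrt(2.0))   # closed form for this conjugate model
```

Each term of the average requires one draw from a non-uniform (here Gaussian) distribution, which is exactly where a fast non-uniform variate generator pays off.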
We performed the same integration using samples from a uniform distribution with a minimum of -3 and a maximum of 3. Let $e$ be the error of the integration, $t$ be the time taken by the integration, $N$ be the number of random samples and $D$ be the distribution (either uniform or Gaussian). Let samps be the array of random numbers, $A$ be the area, $b$ be the rectangle base length, $h$ be the rectangle height and $f$ the probability density function of the Gaussian for integration. Algorithm 3 shows the integration scheme that we used. We repeated each integration process 1000 times and calculated the average. Figure 1 shows that the proposed hardware random number generator outperforms the C++ uniform random number generator after 1000 samples and reduces the error by $1068\times$ after 1 million samples. The proposed hardware random number generator always performs the task at least twice as fast as the C++ Gaussian random number generator. III Methodology A non-uniform random variate generator based on physical noise sources must have negligible drift of the mean and standard deviation over time. Drift would cause errors in calculations using the output of the random number generator. Any environmental parameter that causes non-negligible drift must be physically controlled, for example using a voltage regulator to control voltage and a Peltier device to control temperature. We investigated the dependence of 100,000 sample distributions on voltage and temperature. We sampled the z-axis of a MEMS accelerometer (we used the accelerometer in the Bosch BMX055) to obtain the distributions. III-A Temperature-Controlled Experiments Figure 2 shows the experimental setup. We placed the microcontroller, accelerometer, tilt and rotate stage and vibration isolation platform inside a Binder MK56 thermal chamber. We connected a microcontroller to the sensor via I2C for a 1154 Hz sample rate. 
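The rectangle scheme of Algorithm 3 (not reproduced above) can be sketched for the uniform-sampling case; a minimal version with our own function names, integrating the standard Gaussian density over $[-3,3]$:

```python
import math
import random

def gaussian_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def mc_integrate_uniform(n, a=-3.0, b=3.0, seed=0):
    # Sum of rectangle areas: base (b - a)/n, height f(U_i) with U_i ~ Uniform(a, b)
    rng = random.Random(seed)
    base = (b - a) / n
    return sum(base * gaussian_pdf(rng.uniform(a, b)) for _ in range(n))

area = mc_integrate_uniform(100_000)   # true value: P(-3 < Z < 3) ≈ 0.9973
```

Sampling the abscissae from the integrand's own distribution instead of uniformly concentrates rectangles where the density is large, which is why the Gaussian-sampled variant in the experiment converges with fewer samples.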
We used a Keithley 2450 source measurement unit to power the sensor and measure the current drawn. We set the chamber temperature to 25 ${}^{\circ}$C and allowed 30 minutes for the temperature of the sensor to equilibrate whilst constantly sampling z-axis acceleration values from it. We then sampled 100,000 values from the BMX055 sensor at a 3.6 V supply voltage. We repeated this for all the voltages in the range of 3.6 to 1.4 V with a 0.2 V decrement. We then repeated this process for temperatures from 25 down to -5 ${}^{\circ}$C with a decrement of 5 ${}^{\circ}$C. III-B Quantisation Investigation We investigated the effect of quantisation on the Kullback–Leibler (KL) divergence between a discrete distribution and its ideal fitted curve. We used the MATLAB normrnd function to generate 100,000 values from a Gaussian distribution with the same mean and standard deviation as the BMX055 z-axis at 2.6 V and 10 ${}^{\circ}$C. We then discretized the values into various numbers of bins, fitted a Gaussian distribution to them and calculated the KL divergence between the fitted distribution and the actual distribution. IV Results and Discussion We calculated the KL divergence between two discrete distributions using the following equation, where $P$ and $Q$ are discrete probability distributions, $x$ is a given sample value and $\chi$ is the sample space [20]: $$D_{KL}(P||Q)=-\sum_{x\in\chi}P(x)\log\bigg{(}\frac{Q(x)}{P(x)}\bigg{)}.$$ (2) Figure 3 shows the BMX055 z-axis acceleration distribution at 2.6 V and 10 ${}^{\circ}$C. We found that the KL divergence between the distribution from the BMX055 z-axis and its fitted Gaussian (0.00263) was more than an order of magnitude smaller than the equivalent result for a MATLAB generated distribution (0.0392). We rounded the MATLAB generated floats for comparison with the BMX055 generated integers. The KL divergence between a MATLAB generated uniform distribution and its fitted Gaussian is 0.116 for reference.
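Equation (2) applies directly to two normalized histograms; a minimal sketch (bins with $P(x)=0$ contribute nothing, while $Q(x)=0$ with $P(x)>0$ makes the divergence infinite):

```python
import math

def kl_divergence(p, q):
    # D_KL(P || Q) = -sum_x P(x) * log(Q(x) / P(x))   (Equation 2)
    total = 0.0
    for px, qx in zip(p, q):
        if px > 0.0:
            if qx == 0.0:
                return float("inf")   # Q must dominate P for a finite divergence
            total -= px * math.log(qx / px)
    return total

p = [0.1, 0.4, 0.5]   # example histograms, already normalized
q = [0.2, 0.3, 0.5]
```

In practice the measured histogram and the fitted Gaussian would be evaluated over the same bin edges before being passed to such a function.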
We averaged the KL divergence calculations over 100 distributions to account for the random variations in the measurement. The numbers generated by the sensor are closer to an ideal Gaussian distribution than those generated by MATLAB. Figures 4 and 5 show how voltage and temperature affect the mean and standard deviation of the BMX055 z-axis acceleration measurement. Temperature has a greater effect on the mean and standard deviation than voltage. Both voltage and temperature have a greater effect upon the standard deviation than the mean. A random number generator based on this phenomenon should control both the temperature and the supply voltage of the sensor. Figure 6 shows the effect of increasing the bin size on the divergence between a distribution of 100,000 values and its ideal fitted distribution. This shows that increased quantization increases the difference between a distribution and its fitted Gaussian. V Gaussian To Gaussian Transform A univariate Gaussian can be transformed to any other univariate Gaussian with one multiplication and one addition. This is significantly less computation than the accept-reject method, which requires at least 10 operations per accept-reject test: an exponential, a square, a square root, a subtraction, a comparison and five divisions or multiplications. The accept-reject method may furthermore need to repeat this set of operations numerous times for each random variate. Two uniform random numbers must be generated for each accept-reject sample. Rapid addition and multiplication can be achieved using fast adders and multipliers implemented on an FPGA; Figure 7 shows how this could be achieved. The CPU requests a distribution by specifying parameters to the transform circuitry, which proceeds to transform the input Gaussian to fit the requested output distribution. The transform circuitry then stores the values in a small high-speed cache that the CPU can read from.
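With the scale and offset precomputed, the Gaussian-to-Gaussian transform of Section V really is one multiply and one add per variate; a sketch (function and parameter names are ours):

```python
import random

def make_gaussian_transform(mu_in, sigma_in, mu_out, sigma_out):
    # Precompute a, b so that y = a*x + b maps N(mu_in, sigma_in^2) onto N(mu_out, sigma_out^2)
    a = sigma_out / sigma_in
    b = mu_out - a * mu_in
    return lambda x: a * x + b   # one multiplication and one addition per sample

transform = make_gaussian_transform(0.0, 1.0, 5.0, 2.0)
rng = random.Random(2)
ys = [transform(rng.gauss(0.0, 1.0)) for _ in range(100_000)]
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
```

Because the transform is a fixed affine map, it pipelines trivially in hardware, which is what makes the FPGA offload in Figure 7 attractive.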
The asynchronous transform circuitry on the FPGA will be free to execute transformations at a rate not bound by the global clock frequency. Offloading the transformation to dedicated hardware leaves the processor free to execute other instructions, which will improve performance. VI Conclusion Sensors are a feasible source of non-uniform random variates at a higher sample rate and with greater efficiency than all of the state-of-the-art methods used to generate them. The mean and standard deviation of the noise produced by the z-axis of a commercial accelerometer depend upon the temperature of the environment and the supply voltage. The parameters of the distribution drift with temperature and supply voltage, so both must be kept constant. Quantizing a Gaussian distribution increases the KL divergence between it and a fitted Gaussian. This type of non-uniform random number generator can reduce the error of Monte Carlo integration of a univariate Gaussian by $1068\times$ whilst doubling the speed. References [1] D. B. Thomas, L. Howes, and W. Luk, “A comparison of CPUs, GPUs, FPGAs, and massively parallel processor arrays for random number generation,” in Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, ser. FPGA ’09.   New York, NY, USA: ACM, 2009, pp. 63–72. [2] S. Wang, A. R. Lebeck, and C. Dwyer, “Nanoscale resonance energy transfer-based devices for probabilistic computing,” IEEE Micro, vol. 35, no. 5, pp. 72–84, Sep. 2015. [3] D. B. Thomas and W. Luk, “Non-uniform random number generation through piece-wise linear approximations,” IET Computers Digital Techniques, vol. 1, no. 4, pp. 312–321, July 2007. [4] D. Thomas and W. Luk, “Efficient hardware generation of random variates with arbitrary distributions,” in 14th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, April 2006, pp. 57–66. [5] “Randomness requirements for security,” Network Working Group. [6] L.
Devroye, Non-Uniform Random Variate Generation.   McGill University Montreal H3A 2K6 Canada: Springer-Verlag, 1986. [7] D. B. Thomas and W. Luk, “Resource efficient generators for the floating-point uniform and exponential distributions,” in 2008 International Conference on Application-Specific Systems, Architectures and Processors, July 2008, pp. 102–107. [8] D. P. Kroese, T. Brereton, T. Taimre, and Z. I. Botev, “Why the Monte Carlo method is so important today,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 6, no. 6, pp. 386–392, 2014. [9] M. Mitzenmacher and E. Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis.   New York, NY, USA: Cambridge University Press, 2005. [10] E. Aamir, Z. K. Nagy, C. D. Rielly, T. Kleinert, and B. Judat, “Combined quadrature method of moments and method of characteristics approach for efficient solution of population balance models for dynamic modeling and crystal size distribution control of crystallization processes,” Industrial & Engineering Chemistry Research, vol. 48, no. 18, pp. 8575–8584, Sep 2009. [11] J. R. Oaks, K. A. Cobb, V. N. Minin, and A. D. Leaché, “Marginal Likelihoods in Phylogenetics: A Review of Methods and Applications,” Systematic Biology, vol. 68, no. 5, pp. 681–697, 01 2019. [12] M. Chang, Monte Carlo Simulation for the Pharmaceutical Industry: Concepts, Algorithms, and Case Studies.   CRC Press, 2010. [13] R. L. Cook, “Stochastic sampling in computer graphics,” ACM Trans. Graph., vol. 5, no. 1, pp. 51–72, Jan. 1986. [14] J. Danger, A. Ghazel, E. Boutillon, and H. Laamari, “Efficient FPGA implementation of Gaussian noise generator for communication channel emulation,” in ICECS 2000. 7th IEEE International Conference on Electronics, Circuits and Systems (Cat. No.00EX445), vol. 1, Dec 2000, pp. 366–369 vol.1. [15] G. L. Zhang, P. H. W. Leong, C. H. Ho, K. H. Tsoi, C. C. C. Cheung, D. Lee, R. C. C. Cheung, and W.
Luk, “Reconfigurable acceleration for monte carlo based financial simulation,” in Proceedings. 2005 IEEE International Conference on Field-Programmable Technology, 2005., Dec 2005, pp. 215–222. [16] M. N. O. Sadiku and M. Ilyas, Simulation of Local Area Networks.   Boca Raton, FL, USA: CRC Press, Inc., 1995. [17] V. V. Marcel, I. J. Adan, and S. A. Resing-Sassen, Stochastic Modeling of Manufacturing Systems.   Springer, 2006. [18] Y. Dai, W. Jiang, and G. Wang, “Building bayesian inference graphs for healthcare statistic evidence,” in 2016 45th International Conference on Parallel Processing Workshops (ICPPW), Aug 2016, pp. 415–420. [19] B. Lambert, A Student’s Guide to Bayesian Statistics.   SAGE. [20] “Kldiv,” Nima Razavi, [Online]. Available: https://uk.mathworks.com/matlabcentral/fileexchange/13089-kldiv?s_tid=FX_rc1_behav, Accessed: 27/03/2019.
Towards a Geometric Understanding of the 4-Dimensional Point Groups Laith Rastanawi and Günter Rote    Laith Rastanawi Institut für Mathematik Freie Universität Berlin Arnimallee 2 14195 Berlin, Germany [email protected] Supported by the DFG Research Training Group GRK 2434 “Facets of Complexity”    Günter Rote Institut für Informatik Freie Universität Berlin Takustraße 9 14195 Berlin, Germany [email protected] Abstract We classify the finite groups of orthogonal transformations in 4-space, and we study these groups from the viewpoint of their geometric action, using polar orbit polytopes. For one type of groups (the toroidal groups), we develop a new classification based on their action on an invariant torus, while we rely on classic results for the remaining groups. As a tool, we develop a convenient parameterization of the oriented great circles on the 3-sphere, which leads to (oriented) Hopf fibrations in a natural way. Contents 1 Introduction and Results 2 Orbit Polytopes 2.1 Geometric understanding through orbit polytopes: the pyritohedral group 2.1.1 The pyritohedral group for flatlanders 2.1.2 Polar orbit polytopes and Voronoi diagrams 2.2 Fundamental domains and orbifolds 2.3 Left or right orientation of projected images: view from outside 3 Point groups 3.1 The 4-dimensional orthogonal transformations 3.1.1 Orientation-preserving transformations 3.1.2 Absolutely orthogonal planes and circles 3.1.3 Left and right rotations 3.1.4 Orientation-reversing transformations 3.1.5 Quaternion representation 3.2 The classic approach to the classification 3.3 Previous classifications 3.3.1 Related work 3.4 Conjugacy, geometrically equal groups 3.5 Obtaining the achiral groups 3.6 Point groups in 3-space and their quaternion representation 3.7 Finite groups of quaternions 3.8 Notations for the 4-dimensional point groups, diploid and haploid groups 4 Hopf fibrations 4.1 Parameterizing the great circles in $S^{3}$ 4.1.1 Keeping a circle invariant 4.1.2
Oriented great circles 4.2 Hopf bundles 4.2.1 Left and right screws 4.2.2 Clifford-parallel circles 5 Classification of the point groups 5.1 The Clifford torus 6 The tubical groups 6.1 Orbit circles 6.2 Tubes 6.2.1 Mapping between adjacent cells 6.3 The geometry of the tubes 6.3.1 The spherical tubes 6.3.2 The spherical tube boundaries 6.3.3 The tangential slices 6.3.4 The tangential tube boundaries 6.4 Generic starting points 6.5 Starting point close to a mirror 6.6 Starting point on a mirror 6.7 Starting point close to a rotation center 6.8 Starting point on a rotation center 6.8.1 Supergroups of cyclic type 6.8.2 Supergroups of dihedral type, and flip symmetries 6.9 Two examples of special starting points 6.9.1 $\pm[I\times C_{n}]$, 5-fold rotation center 6.9.2 $\pm\frac{1}{2}[O\times C_{2n}]$, 4-fold rotation center 6.10 Consequences for starting points near rotation centers 6.11 Mappings between different tubes 6.12 Small values of $n$ 6.13 Online gallery of polar orbit polytopes 6.14 $\pm[T\times C_{n}]$ versus $\pm\frac{1}{3}[T\times C_{3n}]$ 7 The toroidal groups 7.1 The invariant Clifford torus 7.2 Torus coordinates and the torus foliation 7.3 Symmetries of the torus 7.3.1 Torus translations 7.3.2 The directional group: symmetries with a fixed point 7.3.3 Choice of coordinate system 7.3.4 The directional group and the translational subgroup 7.4 Overview of the toroidal groups 7.5 The torus translation groups, type              7.5.1 Dependence on the starting point 7.6 The torus flip groups, type           $\cdot$    7.7 Groups that contain only one type of reflection 7.7.1 The torus reflection groups, type                7.7.2 The torus swap groups 7.8 The torus swapturn groups, type           $\scriptstyle\circlearrowleft$    7.9 Groups that contain two orthogonal reflections, type           $+$      and           $\times$    7.10 The full torus groups, type           $+$$\times$    7.11 Duplications 7.11.1 List of Duplications 7.11.2 A duplication 
example 7.12 Comparison with the classification of Conway and Smith 8 The polyhedral groups 8.1 The Coxeter notation for groups 8.2 Strongly inscribed polytopes 8.3 Symmetries of the simplex 8.4 Symmetries of the hypercube (and its polar, the cross-polytope) 8.5 Symmetries of the 600-cell (and its polar, the 120-cell) 8.6 Symmetries of the 24-cell 8.6.1 A pair of enantiomorphic groups 9 The axial groups 10 Computer calculations 10.1 Representation of transformations and groups 10.2 Fingerprinting 10.3 Computer checks 10.4 Checking the achiral polyhedral and axial groups 10.5 Checking the toroidal groups 11 Higher dimensions A Generators for the polyhedral and axial groups B Orbit polytopes for tubical groups with special starting points B.1 $\pm[I\times C_{n}]$ B.1.1 $\pm[I\times C_{n}]$, 3-fold rotation center B.1.2 $\pm[I\times C_{n}]$, 2-fold rotation center B.2 $\pm[O\times C_{n}]$ B.2.1 $\pm[O\times C_{n}]$, 4-fold rotation center B.2.2 $\pm[O\times C_{n}]$, 3-fold rotation center B.2.3 $\pm[O\times C_{n}]$, 2-fold rotation center B.3 $\pm\frac{1}{2}[O\times C_{2n}]$ B.3.1 $\pm\frac{1}{2}[O\times C_{2n}]$, 3-fold rotation center B.3.2 $\pm\frac{1}{2}[O\times C_{2n}]$, 2-fold rotation center B.4 $\pm[T\times C_{n}]$ B.4.1 $\pm[T\times C_{n}]$, 3-fold rotation center B.4.2 $\pm[T\times C_{n}]$, 2-fold rotation center B.5 $\pm\frac{1}{3}[T\times C_{3n}]$ B.5.1 $\pm\frac{1}{3}[T\times C_{3n}]$, 3-fold (type I) rotation center B.5.2 $\pm\frac{1}{3}[T\times C_{3n}]$, 3-fold (type II) rotation center B.5.3 $\pm\frac{1}{3}[T\times C_{3n}]$, 2-fold rotation center C The number of groups of given order D The crystallographic point groups E Geometric interpretation of oriented great circles F Subgroup relations between tubical groups G Conway and Smith’s classification of the toroidal groups G.1 Index-4 subgroups of $D_{4m}$ List of Tables 1 Point groups in 3 dimensions 2 The 11 classes of left tubical groups 3 Relations among tubical groups 4 The group 
$D_{8}^{\mathbb{T}}$, the directional parts of the torus symmetries 5 The 10 subgroups of $D_{8}^{\mathbb{T}}$ 6 Overview of the 25 classes of toroidal groups 7 Generators for torus reflection groups and torus swap groups 8 Generators for full torus reflection/swap groups and full torus groups 9 The duplications among toroidal groups 10 The 25 polyhedral groups 11 Analogy between symmetries of the four-dimensional and three-dimensional cube 12 Analogies between symmetries of self-dual polytopes 13 The 14 pyramidal and prismatic axial groups 14 The 7 hybrid axial groups 15 Summary of the 21 axial groups 16 The 46 polyhedral and axial groups with generators 17 The 227 crystallographic point groups in four dimensions, part 1 18 The 227 crystallographic point groups, part 2, and three pseudo-crystal groups 1 Introduction and Results A $d$-dimensional point group is a finite group of orthogonal transformations in $\mathbb{R}^{d}$, or in other words, a finite subgroup of $\mathrm{O}(d)$. We propose the following classification for the $4$-dimensional point groups. Theorem 1.1. The 4-dimensional point groups can be classified into • 25 polyhedral groups (Table 10), • 21 axial groups (7 pyramidal groups, 7 prismatic groups, and 7 hybrid groups, Table 15), • 22 one-parameter families of tubical groups (11 left tubical groups and 11 right tubical groups, Table 2), and • 25 infinite families of toroidal groups (Table 6), among them – 2 three-parameter families, – 19 two-parameter families, and – 4 one-parameter families. In contrast to earlier classifications of these groups (notably by Du Val in 1962 [15] and by Conway and Smith in 2003 [8]; see Section 3.3), we emphasize a geometric viewpoint, trying to visualize and understand the actions of these groups. Besides, we correct some omissions, duplications, and mistakes in these classifications. Overview of the groups. The 25 polyhedral groups are related to the regular polytopes.
The symmetries of the regular polytopes are well understood, because they are generated by reflections, and the classification of such groups as Coxeter groups is classic. We will deal with these groups only briefly, dwelling a little on just a few groups that come in enantiomorphic pairs (i.e., groups that are not equal to their own mirror image). The 21 axial groups are those that keep one axis fixed. Thus, they essentially operate in the three dimensions perpendicular to this axis (possibly combined with a flip of the axis), and they are easy to handle, based on the well-known classification of the three-dimensional point groups. The tubical groups are characterized as those that have (exactly) one Hopf bundle invariant. They come in left and right versions (which are mirrors of each other) depending on the Hopf bundle they keep invariant. They are so named because they arise with a decomposition of the 3-sphere into tube-like structures (discrete Hopf fibrations). The toroidal groups are characterized as having an invariant torus. This class of groups is where our main contribution in terms of the completeness of the classification lies. We propose a new, geometric, classification of these groups. Essentially, it boils down to classifying the isometry groups of the two-dimensional square flat torus. We emphasize that, regarding the completeness of the classification, in particular concerning the polyhedral and tubical groups, we rely on the classic approach (see Section 3.2). Only for the toroidal and axial groups do we supplant the classic approach with our geometric approach. Hopf fibrations. We give a self-contained presentation of Hopf fibrations (Section 4). In many places in the literature, one particular Hopf map is introduced as “the Hopf map”, either in terms of four real coordinates or two complex coordinates, leading to “the Hopf fibration”. In some sense, this is justified, as all Hopf bundles are (mirror-)congruent.
However, for our characterization, we require the full generality of Hopf bundles. As a tool for working with Hopf fibrations, we introduce a parameterization for great circles in $S^{3}$, which might be useful elsewhere. Orbit polytope. Our main tool for understanding tubical groups is the polar orbit polytope (Section 2). In particular, we study the symmetries of a cell of the polar orbit polytope for different starting points. 2 Orbit Polytopes 2.1 Geometric understanding through orbit polytopes: the pyritohedral group One can try to visualize a point group $G\leqslant\mathrm{O}(d)$ by looking at the orbit of some point $0\neq v\in\mathbb{R}^{d}$ and taking the convex hull. This is called the $G$-orbit polytope of $v$. For an in-depth study of orbit polytopes and their symmetries, refer to [17, 18]. The orbit polytope will usually depend on the choice of $v$, and it may have other symmetries in addition to those of $G$. For example, the $C_{n}$-orbit polytope in the plane is always a regular $n$-gon, and this orbit polytope has the larger dihedral group $D_{2n}$ as its symmetry group. We will illustrate the usefulness of orbit polytopes with a three-dimensional example. The pyritohedral group is perhaps the most interesting among the point groups in 3 dimensions. It is generated by a cyclic rotation of the coordinates $(x_{1},x_{2},x_{3})\mapsto(x_{2},x_{3},x_{1})$ and by the coordinate reflection $(x_{1},x_{2},x_{3})\mapsto(-x_{1},x_{2},x_{3})$. It has order 24. Figure 1 shows a few examples of orbit polytopes for this group, and their polars. The elements of the pyritohedral group are simultaneously symmetries of the octahedron (where it is an index-2 subgroup of the full symmetry group) and the icosahedron (an index-5 subgroup), and of course of their polars, the cube and the dodecahedron. The group contains reflections, but it is not generated by its reflections.
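The order of 24 can be verified mechanically by closing the two generators under matrix multiplication; a small sketch of ours, representing matrices as tuples of rows:

```python
def matmul(A, B):
    # 3x3 integer matrix product, matrices stored as tuples of rows
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

# cyclic coordinate rotation (x1, x2, x3) -> (x2, x3, x1)
C = ((0, 1, 0), (0, 0, 1), (1, 0, 0))
# coordinate reflection (x1, x2, x3) -> (-x1, x2, x3)
R = ((-1, 0, 0), (0, 1, 0), (0, 0, 1))

group = {C, R}
frontier = [C, R]
while frontier:                      # breadth-first closure under products
    new = []
    for g in frontier:
        for h in (C, R):
            p = matmul(g, h)
            if p not in group:
                group.add(p)
                new.append(p)
    frontier = new
# the closure is the pyritohedral group, of order 24
```

The closure consists of the signed permutation matrices whose permutation part is cyclic; in particular it contains the central inversion $-I$, an orientation-reversing element that is not a reflection.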
The orbits of the points $(1,0,0)$ and $(1,1,1)$ generate the regular octahedron and the cube, respectively. These are each other’s polars, but they don’t give any specific information about the pyritohedral group. Figure 1(a) shows the orbit polytope (in yellow) of a generic point $(\frac{2}{3},\frac{1}{2},1)$, and its polar (in orange). The symmetries of these polytopes are exactly the pyritohedral group. That orbit polytope has 6 rectangular faces (lying in planes of the faces of a cube), 8 equilateral triangles (lying in the faces of an octahedron), and 12 trapezoids (going through the edges of some cube, but not of some regular octahedron). The polar has 24 quadrilateral faces, corresponding to the 24 group elements. For any pair of faces, there is a unique symmetry of the polytope that maps one face to the other. (In mineralogy, this shape is sometimes called a diploid, and diploidal symmetry is an alternative name for pyritohedral symmetry. In our context, the term diploid will show up in a different sense.) If we choose one coordinate of the starting point to be 0, the rectangles shrink to line segments, and the trapezoids become isosceles triangles. See Figure 1(b). The orbit polytope is an icosahedron with 20 triangular faces: 8 equilateral triangles and 12 isosceles triangles. The polar polytope is a pyritohedron, that is, a dodecahedron with 12 equal but not necessarily regular pentagons. For this choice, the orbit contains only 12 points, but the polytope gains no additional symmetries beyond the pyritohedral symmetries. However, for $(0,\frac{\sqrt{5}-1}{2},1)$, we get the regular icosahedron and the regular dodecahedron. For the specific choice $(0,\frac{1}{2},1)$, the polar orbit polytope is one of the crystal forms of the mineral pyrite, which gave the polytope and group its name, see Figure 1(b). This polytope is also an alternahedron on $4$ symbols [13].
An alternahedron can be constructed as the orbit of a generic point $(x_{1},x_{2},x_{3},x_{4})\in\mathbb{R}^{4}$ under all even permutations. Since the points lie in a hyperplane $x_{1}+x_{2}+x_{3}+x_{4}=\mathrm{const}$, this is a three-dimensional polytope. For the starting point $(0,1,2)$, we obtain the alternahedron that results from the canonical choice $(x_{1},x_{2},x_{3},x_{4})=(1,2,3,4)$, a scaled copy of Figure 1(b). (The illustration of this polytope in [13, Fig. 1] may give the wrong impression of consisting of equilateral triangles only. However, its isosceles faces have base length $2$ and two equal legs of length $\sqrt{6}\approx 2.45$.) The pyritohedral group differs from the symmetries of the cube (or the octahedron) by allowing only even permutations of the coordinates $x_{1},x_{2},x_{3}$. When two coordinates are equal, this distinction plays no role, and the resulting polyhedron will have all symmetries of the cube, see Figure 1(f). (We mention that some special starting points of this form lead to Archimedean polytopes: The starting point $(1,1,\sqrt{2}+1)$ generates a rhombicuboctahedron with 8 regular triangles and 18 squares; $(0,1,1)$ generates the cuboctahedron with 8 regular triangles and 6 squares; with $(\frac{1}{\sqrt{2}+1},1,1)$, we get the truncated cube with 8 regular triangles and 8 regular octagons, similar to the yellow polytope in Figure 1(f).) For the purpose of visualizing the pyritohedral group, we will try to keep the three coordinates distinct. By choosing the point close to $(1,1,1)$ or $(0,0,1)$, we can emphasize the cube-like or the octahedron-like appearance of the orbit polytope or its polar. For example, the polar orbit polytope for $(0,\frac{1}{10},1)$ resembles a cube whose squares are subdivided into rectangles, like the orange polytope in Figure 1(c).
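The alternahedron construction can be checked directly; a sketch of ours enumerating the orbit of the canonical point $(1,2,3,4)$ under the 12 even permutations of $S_{4}$:

```python
from itertools import permutations

def is_even(perm):
    # a permutation is even iff its number of inversions is even
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return inversions % 2 == 0

point = (1, 2, 3, 4)
orbit = {tuple(point[i] for i in perm)
         for perm in permutations(range(4)) if is_even(perm)}
# 12 vertices, all lying in the hyperplane x1 + x2 + x3 + x4 = 10
```

Because the starting coordinates are distinct, the 12 even permutations give 12 distinct vertices, and the constant coordinate sum confirms that the polytope is three-dimensional.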
(Actually, the mineral pyrite sometimes has a cubic crystal form in which the faces carry parallel thin grooves, so-called striations; see http://www.mineralogische-sammlungen.de/Pyrit-gestreift-engl.html.) See also Figure 1(d) for $(\frac{2}{10},\frac{1}{10},1)$. The orbit polytope in Figure 1(c) appears like an octahedron whose edges have been shaved off, but in an asymmetric way that provides a direction for the edges (see Figure 32a on p. 32 in Section 8.6). On the other hand, the polar orbit polytope for $(\frac{8}{10},\frac{9}{10},1)$ resembles an octahedron, carrying a pinwheel-like structure on every face. See Figure 1(e). 2.1.1 The pyritohedral group for flatlanders We will be in the situation that we try to visualize 4-dimensional point groups through orbit polytopes or their polars. So let us go one dimension lower and imagine that we, as ordinary three-dimensional people, would like to explain the pyritohedral group to flatlanders. We will see that different options have different merits, and there may be no unique best way of visualizing a group. Assuming that flatlanders accept the notions of a cube or an octahedron, we could tell them that we build a cube whose squares are striped in such a way that the patterns on adjacent squares never abut, similar to the orange polytope in Figure 1(c). It is allowed to map any square to any other square (6 possibilities) in such a way that the stripes match (the dihedral group $D_{4}$ with 4 possibilities, for a total of 24 transformations). Alternatively, we could tell them that the edges of an octahedron are oriented such that each triangle forms a directed cycle (Figure 32a on p. 32). It is allowed to map any triangle to any other triangle (8 possibilities) in such a way that edge directions are preserved (the cyclic group $C_{3}$ with 3 possibilities, for a total of 24 transformations). Another option is the polar of $(c,1,1)$, where $c\not\in\{0,1\}$, see the orange polytope in Figure 1(f).
It has 24 isosceles triangles, one per group element. As $c$ approaches 1 or 0, the polar orbit polytope converges to an octahedron or to a rhombic dodecahedron. As a shape, the triangle does not reveal much about the group, so we have to add the information that the base edge acts as a mirror, and the opposite vertex is a 3-fold gyration point, i.e., there are three rotated copies that fit together. (This is essentially what is expressed in the orbifold notation $3{*}2$.) We are not allowed to use the reflection that maps the triangle to itself, and we might indicate this by placing an arrow along the base edge. In most cases, it was advantageous to describe the group in terms of the polar orbit polytope: We have many copies of one shape, and any shape can be mapped to any other. It is not necessarily the best option to insist that all points of the orbit are distinct. Sometimes it is preferable to allow also symmetries within each face. In this case, the information about which of these symmetries are in the group must be conveyed as side information, for example by decorations or patterns that should be left invariant, such as the stripes in Figure 1(c). Figure 2 summarizes the relation between a polar orbit polytope and its group $G$. All cells are equal, and the cells correspond to the points of the orbit. We know that between any two cells, there is at least one transformation in $G$ that carries one cell to the other. However, it is not directly apparent which transformations carry one cell to another cell, or to itself. If all symmetries of a cell belong to the group, the answer is clear; otherwise we have to discuss this question and describe the answer separately. The bottom row of Figure 2 splits this question into two subproblems that are relevant only for tubical groups (Section 6), namely the relation between adjacent cells in a tube, and between cells of different tubes.
2.1.2 Polar orbit polytopes and Voronoi diagrams There is a well-known connection between polar orbit polytopes and spherical Voronoi diagrams, or more generally, between polytopes whose facets are tangent to a sphere and spherical Voronoi diagrams: The central projection of the polytope to the sphere gives the spherical Voronoi diagram of the tangency points (the orbit points). Figure 3 shows spherical Voronoi diagrams for two orbits of Figure 1. Thus, when we look at polar orbit polytopes, we may think about partitioning the sphere according to the closest point from the orbit. The orbit polytope and the spherical Voronoi diagram have the same combinatorial structure, but the faces of the orbit polytope are true Euclidean polytopes, whereas the faces of the Voronoi diagram are spherical polytopes. The closer the orbit points are together, the smaller the distortion will be, and the more the orbit polytope will represent the true metric situation of the Voronoi diagram. In our illustrations of 4-dimensional groups, we will prefer to show orbit polytopes, because these are easier to compute. 2.2 Fundamental domains and orbifolds For comparison, we mention another way to characterize geometric groups, namely by showing a fundamental domain of the group, possibly extended by additional information that characterizes the type of rotations that fix an edge, such as in an orbifold. This is particularly appropriate for Coxeter groups, which are generated by reflections and for which the choice of fundamental domain is canonical. Dunbar [16] studied orientation-preserving 4-dimensional point groups. He constructed fundamental domains for 10 out of the 14 orientation-preserving polyhedral groups (omitting $\pm[I\times T]$ and $\pm[I\times O]$ and their mirrors). 
For each of the 21 orientation-preserving polyhedral and axial groups, he showed the structure of the singular set (fixpoints of some group elements) of the corresponding orbifold, which is a 3-valent graph where each edge is labeled with the order of the rotational symmetry around the edge.444In the list of orientation-reversing polyhedral groups that are Coxeter groups [16, Figure 17], the 6th and 8th entries, which are the Coxeter-Dynkin diagrams for the orientation-reversing extensions of ${T}\times_{{C}_{3}}T$ and $J\times_{J}^{*}J^{1}$, must be exchanged. The fundamental domain, possibly enriched by additional information, is a concise way of representing some groups, but it does not have the immediate visual appeal of polar orbit polytopes. For example, the fundamental domain of every Coxeter group is a simplex, and the distinction between different groups lies only in the dihedral angles at the edges. 2.3 Left or right orientation of projected images: view from outside We will illustrate many situations in 4-space by three-dimensional graphics that are derived through projection. Just as a plane in space has no preferred orientation, a 3-dimensional hyperplane in 4-space has no intrinsic orientation. It depends on the side from which we look at it. Hence, it is important to establish a convention about the orientation, in order to distinguish a situation from its mirror image. Let us look at plane images of the familiar three-dimensional space “for orientation” in this matter. For a polytope or a sphere, we follow the convention that we want to look at it from outside, as for a map of some part of the Earth. Accordingly, when we interpret a plane picture with an $x_{1},x_{2}$-coordinate system (with $x_{2}$ counterclockwise from $x_{1}$), the usual convention is to think of the third coordinate $x_{3}$ as the “vertical upward” direction that is facing us, leading to a right-handed coordinate system $x_{1},x_{2},x_{3}$.
Similarly, when we deal with a 4-polytope and want to show a picture of one of its facets, which is a three-dimensional polytope $F$, we use a right-handed orthonormal $x_{1},x_{2},x_{3}$-coordinate system in the space of $F$ that can be extended to a positively oriented coordinate system $x_{1},x_{2},x_{3},x_{4}$ of 4-space such that $x_{4}$ points outward from the 4-polytope. We use the same convention when drawing a cluster of adjacent facets, or when illustrating situations in the 3-sphere, either through central projection or through parallel projection. For example, a small region in the 3-sphere can be visualized as 3-space, with some distortion, and we will be careful to ensure that this corresponds to a view on the sphere “from outside”. There are other contexts that favor the opposite convention. For example, stereographic projection is often done from the North Pole $(x_{1},x_{2},x_{3},x_{4})=(0,0,0,1)$ of $S^{3}$, and this yields a view “from inside” in the $(x_{1},x_{2},x_{3})$-hyperplane. See for example [35, §7], or also [16, p. 123] for a different ordering of the coordinates with the same effect. 3 Point groups The 2-dimensional point groups are the cyclic groups $C_{n}$ and the dihedral groups $D_{2n}$, for $n\geq 1$. For $n\geq 3$, they can be visualized, respectively, as the $n$ rotations of the regular $n$-gon, and the $2n$ symmetries (rotations and reflections) of the regular $n$-gon. See Figure 4. The 3-dimensional point groups are well-studied (see Section 3.6 below). In one sentence, they can be characterized as the symmetry groups of the five Platonic solids and of the regular $n$-sided prisms, and their subgroups. This gives a framework for classifying these groups, but it does not give the full information.
It remains to work out what the subgroups are, and moreover, there are duplications, for example: certain Platonic solids are polar to each other; the vertices of the cube are contained in the vertices of a dodecahedron; and in turn, they contain the vertices of a tetrahedron; a cube is a special quadrilateral prism. 3.1 The 4-dimensional orthogonal transformations 3.1.1 Orientation-preserving transformations We call a 4-dimensional orientation-preserving transformation a rotation. In some appropriate basis with coordinates $x_{1},x_{2},x_{3},x_{4}$, every rotation has the form $$R_{\alpha_{1},\alpha_{2}}=\begin{pmatrix}\cos\alpha_{1}&-\sin\alpha_{1}&0&0\\ \sin\alpha_{1}&\cos\alpha_{1}&0&0\\ 0&0&\cos\alpha_{2}&-\sin\alpha_{2}\\ 0&0&\sin\alpha_{2}&\cos\alpha_{2}\\ \end{pmatrix},\text{ or }R_{\alpha_{1},\alpha_{2}}=\begin{pmatrix}R_{\alpha_{1}}&0\\ 0&R_{\alpha_{2}}\\ \end{pmatrix}=\operatorname{diag}(R_{\alpha_{1}},R_{\alpha_{2}})$$ (1) in block form, using the rotation matrices $R_{\alpha}=\left(\begin{smallmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{smallmatrix}\right)$ as building blocks [10, §12.1]. If $\alpha_{2}=0$, we have a simple rotation: a rotation in the $x_{1}x_{2}$-plane by the angle $\alpha_{1}$, leaving the complementary $x_{3}x_{4}$-plane fixed. Thus, the general rotation is the product of two simple rotations in two orthogonal planes, and we call it more specifically a double rotation. If $\alpha_{2}\neq\pm\alpha_{1}$ then the two planes are uniquely determined. Each plane is an invariant plane: as a set, it is fixed by the operation. If $\alpha_{1}=\alpha_{2}=\pi$, the matrix is the negative identity matrix, and we have the central inversion or antipodal map, which we denote by $-\mathrm{id}$. In $\mathbb{R}^{4}$, this is an orientation-preserving transformation.
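To make the block form (1) concrete, here is a short Python sketch (our own code, with hypothetical helper names) checking that a double rotation is the product of two commuting simple rotations, and that $\alpha_{1}=\alpha_{2}=\pi$ gives the central inversion $-\mathrm{id}$:

```python
import math

def rot2(a):
    # the 2x2 rotation matrix R_alpha
    return ((math.cos(a), -math.sin(a)), (math.sin(a), math.cos(a)))

def R(a1, a2):
    # the block-diagonal rotation diag(R_{a1}, R_{a2}) of equation (1)
    (c1, s1), (t1, d1) = rot2(a1)
    (c2, s2), (t2, d2) = rot2(a2)
    return ((c1, s1, 0, 0), (t1, d1, 0, 0), (0, 0, c2, s2), (0, 0, t2, d2))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4))
                 for i in range(4))

def close(A, B, eps=1e-12):
    return all(abs(x - y) < eps for r1, r2 in zip(A, B) for x, y in zip(r1, r2))

a1, a2 = 0.7, 0.3
# a double rotation is the product of two commuting simple rotations
assert close(matmul(R(a1, 0), R(0, a2)), R(a1, a2))
assert close(matmul(R(0, a2), R(a1, 0)), R(a1, a2))
# alpha_1 = alpha_2 = pi gives the central inversion -id
minus_id = tuple(tuple(-(i == j) for j in range(4)) for i in range(4))
assert close(R(math.pi, math.pi), minus_id)
```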
3.1.2 Absolutely orthogonal planes and circles When we speak of orthogonal planes in 4-space, we always mean “absolutely” orthogonal, in the sense that every vector in one plane is orthogonal to every vector in the other plane. We will mostly study the situation on the sphere. Here, an invariant plane becomes an invariant great circle, and there are absolutely orthogonal great circles. 3.1.3 Left and right rotations The rotations with $\alpha_{2}=\pm\alpha_{1}$ play a special role: Every point is moved by the same angle $|\alpha_{1}|$, and there is no unique pair of invariant planes. The rotations with $\alpha_{2}=\alpha_{1}$ are left rotations, and the rotations with $\alpha_{2}=-\alpha_{1}$ are right rotations. It is easy to see that every rotation $R_{\alpha_{1},\alpha_{2}}$ is the product of a left and a right rotation (with angles $(\alpha_{1}\pm\alpha_{2})/2$). This representation is unique, up to a multiplication of both factors with $-\mathrm{id}$. Left rotations commute with right rotations. These facts are not straightforward, but they follow easily from the quaternion representation that is discussed below. The product of a left rotation by $\beta_{L}$ and a right rotation by $\beta_{R}$ is a rotation $R_{\beta_{L}+\beta_{R},\beta_{L}-\beta_{R}}$. 3.1.4 Orientation-reversing transformations An orientation-reversing transformation has the following form, in some appropriate basis with coordinates $x_{1},x_{2},x_{3},x_{4}$: $$\bar{R}_{\alpha}=\begin{pmatrix}\cos\alpha&-\sin\alpha&0&0\\ \sin\alpha&\cos\alpha&0&0\\ 0&0&-1&0\\ 0&0&0&1\end{pmatrix}=\operatorname{diag}(R_{\alpha},-1,1)$$ (2) It operates in some three-dimensional subspace $x_{1},x_{2},x_{3}$ and leaves one axis $x_{4}$ fixed. The $x_{3}$-axis is inverted. For $\alpha=0$, we have a mirror reflection in a hyperplane, $\bar{R}_{0}=\operatorname{diag}(1,1,-1,1)$. For $\alpha=\pi$, we have $\bar{R}_{\pi}=\operatorname{diag}(-1,-1,-1,1)$, which could be interpreted as a reflection in the $x_{4}$-axis. 
In general, we have a rotary-reflection, which has two unique invariant planes: In one plane, it acts as a rotation by $\alpha$; in the other plane, it has two opposite fixpoints in $S^{3}$, and two other opposite points that are swapped. The square of an orientation-reversing transformation $\bar{R}_{\alpha}$ is always a simple rotation. 3.1.5 Quaternion representation The quaternions $x_{1}+x_{2}i+x_{3}j+x_{4}k$ are naturally identified with the vectors $x=(x_{1},x_{2},x_{3},x_{4})\in\mathbb{R}^{4}$. We identify the set of unit quaternions with $S^{3}$, the 3-sphere, and the set of pure unit quaternions $v_{1}i+v_{2}j+v_{3}k$ with the points $(v_{1},v_{2},v_{3})$ on $S^{2}$, the 2-sphere. Every 4-dimensional rotation can be represented by a pair $[l,r]$ of unit quaternions $l,r\in S^{3}$. See [8, §4.1]. The pair $[l,r]$ operates on the vectors $x\in\mathbb{R}^{4}$, treated as quaternions, by the rule $$[l,r]\colon x\mapsto\bar{l}xr.$$ The representation of rotations by quaternion pairs is unique except that $[l,r]=[-l,-r]$. The rotations $[l,1]$ are the left rotations, and the rotations $[1,r]$ are the right rotations: They correspond to quaternion multiplication from the left and from the right. A left or right rotation moves every point by the same angular distance $\alpha$. In fact, as we shall see (Proposition 4.14(ii)), a left or right rotation by an angle $\alpha$ other than 0 or $\pi$ defines a Hopf bundle, a decomposition of the 3-sphere $S^{3}$ into circles, each of which is rotated in itself by $\alpha$. As transformations on $S^{3}$, they operate as left screws and right screws, respectively. See Section 4.2.1. We compose transformations by writing them from left to right, i.e. $[l_{1},r_{1}][l_{2},r_{2}]$ denotes the effect of first applying $[l_{1},r_{1}]$ and then $[l_{2},r_{2}]$.555Du Val [15] used the opposite convention, and accordingly his notation $[l,r]$ denotes the map $x\mapsto lx\bar{r}$.
Accordingly, composition can be carried out as componentwise quaternion multiplication: $[l_{1},r_{1}][l_{2},r_{2}]=[l_{1}l_{2},r_{1}r_{2}]$. Every orientation-reversing transformation can be represented as $${*}[l,r]\colon x\mapsto\bar{l}\bar{x}r.$$ See [8, §4.1]. The stand-alone symbol $*$ is alternate notation for quaternion conjugation $*[1,1]\colon x\mapsto\bar{x}$. Then $*[a,b]$ can be interpreted as a composition of the operations $*$ and $[a,b]$. Geometrically, the transformation ${*}$ maps $(x_{1},x_{2},x_{3},x_{4})$ to $(x_{1},-x_{2},-x_{3},-x_{4})$, and it is a reflection in the $x_{1}$-axis. The transformation $-{*}$ maps $(x_{1},x_{2},x_{3},x_{4})$ to $(-x_{1},x_{2},x_{3},x_{4})$, and it is a reflection in the hyperplane $x_{1}=0$. The inverse transformations are given by these formulas: $$\displaystyle[l,r]^{-1}$$ $$\displaystyle=[\bar{l},\bar{r}]$$ $$\displaystyle({*}[l,r])^{-1}$$ $$\displaystyle={*}[\bar{r},\bar{l}]=[\bar{l},\bar{r}]{*}$$ (3) The last equation in (3) is also interesting: We may put the ${*}$ operation on the other side of a transformation $[l,r]$ after swapping the components $l$ and $r$. For $l=r$, it is easy to see that $[l,l]$ maps the point $1$ to itself, and thus operates only on the pure quaternion part. Thus, the pairs $[l,l]$ act as 3-dimensional rotations. For $l=\cos\alpha+\sin\alpha(ui+vj+wk)$, $[l,l]$ performs a rotation by $2\alpha$ around the axis with unit vector $(u,v,w)\in\mathbb{R}^{3}$. We will denote $[l,l]$ by $[l]\colon x\mapsto\bar{l}xl$. When viewed as an operation on the unit sphere $S^{2}$, $[l]$ is a clockwise rotation by $2\alpha$ around the point $(u,v,w)$.666Measuring the rotation angle clockwise is opposite to the usual convention of regarding the counterclockwise direction as the mathematically positive direction. 
This is a consequence of writing the operation $[l]$ as $x\mapsto\bar{l}xl$ (as opposed to the alternative $x\mapsto lx\bar{l}$, which was chosen, for example, by Du Val [15]) and regarding the quaternion axes $i,j,k$ as a right-handed coordinate frame of 3-space, see [12, Exercise 6.4 on p. 67, answer on pp. 189–190]. Note that, when the quaternion $l$ is used as a left rotation $[l,1]$ or a right rotation $[1,l]$ in 4-space, every point is rotated only by $\alpha$, not by $2\alpha$. 3.2 The classic approach to the classification For a finite subgroup $G\leqslant\mathrm{SO}(4)$, we can consider the group $$A=\{\,(l,r)\in S^{3}\times S^{3}\mid[l,r]\in G\,\},$$ which is a two-fold cover of $G$, as each rotation $[l,r]\in G$ is represented by two quaternion pairs $(l,r)$ and $(-l,-r)$ in $A$. The elements $l$ and $r$ of these pairs form the left and the right group of $G$: $$L:=\{\,l\mid(l,r)\in A\,\},\quad R:=\{\,r\mid(l,r)\in A\,\}$$ These are finite groups of quaternions. Proposition 3.1. There is a one-to-one correspondence between 1. The finite subgroups $G$ of $\mathrm{SO}(4)$ 2. The subgroups $A$ of $L\times R$ that contain the element $(-1,-1)$, where $L$ and $R$ are finite groups of unit quaternions. Since there are only five possibilities for finite groups of unit quaternions (including two infinite families, see Section 3.7), this makes it easy, in principle, to determine the finite subgroups of $\mathrm{SO}(4)$. One task of this program, the enumeration of the subgroups $A$ of a direct product $L\times R$, is guided by Goursat’s Lemma, which was established by Goursat [20] in this very context: The groups $$L_{0}:=\{\,l\mid(l,1)\in A\,\},\quad R_{0}:=\{\,r\mid(1,r)\in A\,\}$$ form normal subgroups of $L$ and $R$, which we call the left and right kernel of $G$.
The group $A$, and hence $G$, is determined by $L,R,L_{0},R_{0}$ and an isomorphism $\Phi:L/L_{0}\to R/R_{0}$ between the factor groups: $$G=\{\,[l,r]\in\mathrm{SO}(4)\mid l\in L,\ r\in R,\ \Phi(lL_{0})=rR_{0}\,\}$$ The task reduces to the enumeration of all possibilities for the components $L,R,L_{0},R_{0},\Phi$, and to the less trivial task of determining which parameters lead to geometrically equal groups. This approach underlies all classifications so far, and we call it the classic classification. 3.3 Previous classifications • Goursat [20], in 1889, classified the finite groups of motions of elliptic 3-space. Elliptic 3-space can be interpreted as the 3-sphere $S^{3}$ in which antipodal points are identified. Hence, these groups can be equivalently described as those groups in $\mathrm{SO}$(4) that contain the central inversion $-\mathrm{id}$ (the so-called diploid groups, see Section 3.8). • Threlfall and Seifert [35, 36], in a series of two papers in 1931 and 1933, extended this to the groups of $\mathrm{SO}$(4), but they concentrated only on the chiral groups. Their goal was to study the quotient spaces of the 3-sphere under fixpoint-free group actions, because these lead to space forms, spaces of constant curvature without singularities.777The term “Diskontinuitätsbereich” in the title of [35, 36] is used like a well-established concept that does not require a definition. In the contemporary literature, it means what we today call a fundamental domain. Seifert and Threlfall were in particular interested in its topological properties, referring by “Diskontinuitätsbereich” to the quotient space under a group action, with a specification how the boundary faces of the fundamental domain are to be pairwise identified. Du Val [15, § 30] also takes this interpretation and calls it a group-set space, where group-set is his term for orbit.
In modern usage, “region of discontinuity” has other meanings, closer to the literal meaning of the words, where discontinuity plays a role. • Hurley [23], in 1951, independently of Threlfall and Seifert, built on Goursat’s classification and extended it to $\mathrm{O}(4)$. However, he considered only the crystallographic groups, see Appendix D. • Du Val [15], independently of Hurley, in a small monograph from 1964, took up Goursat’s classification and extended it to all groups. From a geometric viewpoint, he extensively discussed the symmetries of the 4-dimensional regular polytopes. • Conway and Smith [8] in a monograph from 2003, took up the classification task again, correcting some omissions and duplications of the previous classifications. They gave geometric descriptions for the polyhedral and axial groups in terms of Coxeter’s notation. 3.3.1 Related work • De Medeiros and Figueroa-O’Farrill [14], in 2012, classified the groups of order pairs $(l,r)\in S^{3}\times S^{3}$ of unit quaternions under componentwise multiplication (using Goursat’s Lemma again). These form the 4-dimensional spin group Spin(4). Since this is a double cover of $\mathrm{SO}(4)$, the results should confirm the classification of the chiral point groups. Indeed, Tables 16–18 in [14, Appendix B] give references to $\mathrm{SO}$(4) and the classification of [8].888However, besides noticing a few typographical errors, we found some discrepancies in these tables: (i) The 6th entry in Table 18 lists a group $\pm[C_{2k+1}\times\bar{D}_{4m}]$. We cannot match this with anything in the Conway–Smith classification, even allowing for one typo. (ii) The last entry in Table 4.2 of [8] is $+\frac{1}{f}[C_{mf}\times C_{nf}]$. This group does not appear in the tables of [14]. We don’t know whether these discrepancies arose in the translation from the classification in [14] to the notions of $\mathrm{SO}$(4) or they indicate problems in the classification itself. 
• Marina Maerchik, in 1976 [29], investigated the groups that are generated by reflections and simple rotations (also in higher dimensions), as reported in Lange and Mikhaîlova [27]. (The term “pseudoreflections” in the title of [29] refers to simple rotations.) • We mention that the approach of understanding the 4-dimensional groups through their orbits was pioneered by Robinson [32], who, in 1931, studied the orbits of the polyhedral groups. He focused on the orbits themselves and their convex hulls (and not on the polar orbit polytopes as we do). 3.4 Conjugacy, geometrically equal groups Conjugation with a rotation $[a,b]$ transforms a group into a different group, which is geometrically the same, but expressed in a different coordinate system. Conjugation transforms an orientation-preserving transformation $[l,r]$ as follows: $$[a,b]^{-1}[l,r][a,b]=[a^{-1}la,b^{-1}rb]$$ Its effect is thus a conjugation of the left group by $a$ and an independent conjugation of the right group by $b$. Consequently, we can represent the left group $L$ and the right group $R$ in any convenient coordinate system of our choice, and it is no loss of generality to choose a particular representative for each finite group of quaternions. (Section 3.7 specifies the representatives that we use.) 3.5 Obtaining the achiral groups The classic approach by Goursat’s Lemma leads only to the chiral groups. Since the chiral part of an achiral group is an index-2 subgroup, every achiral group $G$ is obtained by extending a chiral group $H$ with some orientation-reversing element $$e={*}[a,b].$$ We will now derive some conditions on $e$ and, possibly after replacing $G$ by a geometrically conjugate group, constrain $e$ to a finite number of possibilities. Let $H$ be a chiral group with left group $L$ and right group $R$.
For each $[l,r]\in H$, we must have $e^{-1}[l,r]e\in H$, i.e., $H$ is normalized by $e$: $$e^{-1}[l,r]e={*}[\bar{b},\bar{a}][l,r]{*}[a,b]=[\bar{a}ra,\bar{b}lb]\in H$$ This means that $\bar{a}ra\in L$ and $\bar{b}lb\in R$ for every $[l,r]\in H$, which implies $\bar{a}Ra=L$ and $\bar{b}Lb=R$, i.e., $L$ and $R$ are conjugate. We conjugate $G$ with $[1,a]$, transforming $G$ to some geometrically equivalent group $G^{\prime}$ with left group $L^{\prime}$ and right group $R^{\prime}$. Let us see what happens to an arbitrary element $[l,r]$: $$[1,\bar{a}][l,r][1,a]=[l,\bar{a}ra]$$ (4) The set of values $\bar{a}ra$ forms the new right group $R^{\prime}=\bar{a}Ra=L$, while the left group remains unchanged: $L^{\prime}=L$. Thus, we have achieved $L^{\prime}=R^{\prime}$, i.e., the left and right groups are not just conjugate, but equal. The extending element $e={*}[a,b]$ is transformed as follows: $$e^{\prime}:=[1,\bar{a}]{*}[a,b][1,a]={*}[1,ba]={*}[1,c]$$ (5) Thus we have simultaneously achieved $e^{\prime}={*}[1,c]$. Moreover, $$e^{\prime}e^{\prime}={*}[1,c]{*}[1,c]=[c,c]\in H,$$ and thus, $c$ must be an element of $L=R$. Proposition 3.2. W.l.o.g., we can assume $L=R$, and the extending element is of the form $e={*}[1,c]$, with $c\in L$. This reduces the extending element to a finite number of possibilities. Conway and Smith [8, p. 51] have sketched some additional considerations, which make it possible to further restrict the extending element, sometimes at the cost of giving up the condition $L=R$, see Figure 54 on p. 54. Conjugation by $[a,a]$ changes the transformations as follows: $$\displaystyle[a,a]^{-1}[l,r][a,a]$$ $$\displaystyle=[a^{-1}la,a^{-1}ra]$$ $$\displaystyle[a,a]^{-1}{*}[l,r][a,a]$$ $$\displaystyle=*[a^{-1}la,a^{-1}ra]$$ Its effect is thus a conjugation of the left and right group $L=R$ by $a$. As for the chiral groups, we can therefore choose any convenient representation of the left and right group $L$ in Proposition 3.2.
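The quaternion identities used in this derivation can be verified numerically. The following Python sketch (our own minimal quaternion helpers, not from the text) checks the left-to-right composition and inverse rules of Section 3.1.5 together with equation (5) and the relation $({*}[1,c])^{2}=[c,c]$ on random unit quaternions:

```python
import math
import random

def qmul(a, b):
    # Hamilton product of quaternions stored as (real, i, j, k) tuples
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def conj(a):
    return (a[0], -a[1], -a[2], -a[3])

def act(l, r, x):       # the rotation [l, r]: x -> l-bar x r
    return qmul(qmul(conj(l), x), r)

def ract(l, r, x):      # the orientation-reversing *[l, r]: x -> l-bar x-bar r
    return qmul(qmul(conj(l), conj(x)), r)

def rand_unit():
    v = [random.gauss(0, 1) for _ in range(4)]
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def close(a, b, eps=1e-9):
    return all(abs(s - t) < eps for s, t in zip(a, b))

random.seed(2)
one = (1.0, 0.0, 0.0, 0.0)
a, b, c, x, l1, r1, l2, r2 = (rand_unit() for _ in range(8))

# left-to-right composition: applying [l1, r1] and then [l2, r2] is [l1 l2, r1 r2]
assert close(act(l2, r2, act(l1, r1, x)), act(qmul(l1, l2), qmul(r1, r2), x))
# inverse: [l, r]^{-1} = [l-bar, r-bar]
assert close(act(conj(l1), conj(r1), act(l1, r1, x)), x)

# equation (5): conjugating e = *[a, b] by [1, a] gives *[1, b a]
assert close(act(one, a, ract(a, b, act(one, conj(a), x))),
             ract(one, qmul(b, a), x))
# (*[1, c])^2 = [c, c], so the square of the extending element is a rotation
assert close(ract(one, c, ract(one, c, x)), act(c, c, x))
```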
3.6 Point groups in 3-space and their quaternion representation Table 1 lists the three-dimensional point groups that we will use. We will refer to them by the notation of Conway and Smith [8], given in the first column. As alternate notations, we give the orbifold notation, the Hermann-Mauguin notation or international symbol [21], and the Coxeter notation, which we will revisit in Section 8. The table contains all polyhedral groups (3 chiral and 4 achiral ones): groups consisting of symmetries of regular polytopes. The groups that are not polyhedral (subgroups of the symmetry groups of regular prisms, related to the frieze groups) include, besides $+C_{n}$ and $+D_{2n}$, five additional classes of achiral groups, which are not listed here. In total, there are 14 types of three-dimensional point groups. Note that the subscript $2n$ in $D_{2n}$ is always even; we follow the convention of using the order of the group, not the number of sides of the polygon or prism of which it is the symmetry group. The notations $+I,\pm I$, etc. for the polyhedral groups are easy to remember. The one that requires some attention is the full symmetry group of the tetrahedron, which is denoted by $TO$, as opposed to the pyritohedral group $\pm T$, which is obtained by extending $+T$ by the central reflection, and which we have discussed extensively in Section 2.1. 
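The quaternion representation behind Table 1 can be spot-checked numerically: $[l]=[l,l]$ with $l=\cos\alpha+k\sin\alpha$ fixes the axis $k$ and turns $S^{2}$ by $2\alpha$, while the left rotation $[l,1]$ moves every point of $S^{3}$ by $\alpha$ only. A Python sketch with our own helper names (`qmul`, `act`, `dot`), not taken from the text:

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (real, i, j, k) tuples
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def conj(a):
    return (a[0], -a[1], -a[2], -a[3])

def act(l, r, x):   # [l, r]: x -> l-bar x r
    return qmul(qmul(conj(l), x), r)

def dot(a, b):      # Euclidean inner product of R^4; equals Re(a b-bar)
    return sum(s * t for s, t in zip(a, b))

alpha = 0.4
l = (math.cos(alpha), 0.0, 0.0, math.sin(alpha))   # l = cos(alpha) + k sin(alpha)
one = (1.0, 0.0, 0.0, 0.0)
i = (0.0, 1.0, 0.0, 0.0)
k = (0.0, 0.0, 0.0, 1.0)

# [l] = [l, l] fixes the axis k and turns i by 2*alpha within S^2
assert all(abs(s - t) < 1e-12 for s, t in zip(act(l, l, k), k))
assert abs(dot(i, act(l, l, i)) - math.cos(2 * alpha)) < 1e-12
# the left rotation [l, 1], by contrast, moves every point of S^3 by alpha only
for x in (one, i, k, (0.5, 0.5, 0.5, 0.5)):
    assert abs(dot(x, act(l, one, x)) - math.cos(alpha)) < 1e-12
```

The last loop uses the fact that the angular distance between unit quaternions $x$ and $\bar{l}x$ has cosine $\operatorname{Re}(\bar{l}x\bar{x})=\operatorname{Re}(\bar{l})=\cos\alpha$, independent of $x$.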
3.7 Finite groups of quaternions The finite groups of quaternions are [8, Theorem 12]: $$\displaystyle 2I$$ $$\displaystyle=\langle i_{I},\omega\rangle$$ $$\displaystyle 2D_{2n}$$ $$\displaystyle=\langle e_{n},j\rangle$$ $$\displaystyle 2O$$ $$\displaystyle=\langle i_{O},\omega\rangle$$ $$\displaystyle 2C_{n}$$ $$\displaystyle=\langle e_{n}\rangle$$ $$\displaystyle 2T$$ $$\displaystyle=\langle i,\omega\rangle$$ $$\displaystyle 1C_{n}$$ $$\displaystyle=\langle e_{n/2}\rangle\ \ (n\text{ odd})$$ The generators are defined in terms of the following quaternions, which we will use throughout: $$\displaystyle\omega$$ $$\displaystyle=\tfrac{1}{2}(-1+i+j+k)$$ (order 3) (6) $$\displaystyle i_{O}$$ $$\displaystyle=\tfrac{1}{\sqrt{2}}(j+k)$$ (order 4) $$\displaystyle i_{I}$$ $$\displaystyle=\tfrac{1}{2}\bigl{(}i+\tfrac{\sqrt{5}-1}{2}j+\tfrac{\sqrt{5}+1}{2}k\bigr{)}$$ (order 4) $$\displaystyle e_{n}$$ $$\displaystyle=\cos\tfrac{\pi}{n}+i\sin\tfrac{\pi}{n}$$ (order $$2n$$) We follow Conway and Smith’s notation for these groups. For each group $+G<\mathrm{SO}(3)$ (see the upper part of Table 1), there is a quaternion group $2G$ of twice the size, containing the quaternions $\pm l$ for which $[l]$ represents a rotation in $+G$. All these groups contain the quaternion $-1$. In addition, there are the odd cyclic groups $1C_{n}$, of order $n$. They cannot arise as left or right groups, because $(-1,-1)$ is always contained in $A$ and hence the left and right groups contain the quaternion $-1$. 3.8 Notations for the 4-dimensional point groups, diploid and haploid groups We use the notation by Conway and Smith [8] for 4-dimensional point groups $G$, except for the toroidal groups, where we will replace it with our own notation. If $L$ and $R$ are 3-dimensional orientation-preserving point groups, $\pm[L\times R]$ denotes the full product group $\{\,[l,r]\mid(l,r)\in 2L\times 2R\,\}$, of order $2|L|\cdot|R|$.
Note that the groups $2L$ and $2R$ that appear in the definition are quaternion groups, while the notation shows only the corresponding rotation groups $L,R\in\mathrm{SO}(3)$. A group that contains the negation $-\mathrm{id}=[1,-1]$ is called a diploid group. A diploid index-$f$ subgroup of $\pm[L\times R]$ is denoted by $\pm\frac{1}{f}[L\times R]$. It is defined by two normal subgroups of $2L$ and of $2R$ of index $f$. Different possibilities for the normal subgroups and for the isomorphism $\Phi$ are distinguished by various ornamentations of the notation, see Appendix G for some of these cases. A haploid group, which does not contain the negation $-\mathrm{id}$, is denoted by $+\frac{1}{f}[L\times R]$, and it is an index-2 subgroup of the corresponding diploid group $\pm\frac{1}{f}[L\times R]$. Achiral groups are index-2 extensions of chiral groups, and they are also denoted by various decorations. Du Val [15] writes the groups as $(\mathbf{L}/\mathbf{L_{0}};\mathbf{R}/\mathbf{R_{0}})$, where the boldface letters distinguish quaternion groups from the corresponding 3-dimensional rotation groups. Again, various ornamentations denote different cases of normal subgroups and the isomorphism $\Phi$. Achiral extensions are denoted by a star. We will not work with this notation except for reference in our tables, and then we will omit the boldface font. In some cases, we had to adapt Du Val’s names, see Table 15 and footnote 19. 4 Hopf fibrations We give a self-contained presentation of Hopf fibrations. In many places in the literature, one particular Hopf map is introduced as “the Hopf map”, either in terms of four real coordinates or two complex coordinates, leading to “the Hopf fibration”. In some sense, this is justified, as all Hopf bundles are (mirror-)congruent. However, for our characterization, we need the full generality of Hopf bundles. Our treatment was inspired by Lyons [28], but we did not see it anywhere in this generality. 
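The generator orders stated in (6), and the order 24 of the binary tetrahedral group $2T=\langle i,\omega\rangle$, can be checked by brute-force closure. A Python sketch (our own code; the rounding-based dictionary keys are just a workaround to make floating-point quaternions hashable):

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (real, i, j, k) tuples
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qpow(q, n):
    p = (1.0, 0.0, 0.0, 0.0)
    for _ in range(n):
        p = qmul(p, q)
    return p

def close(a, b, eps=1e-9):
    return all(abs(s - t) < eps for s, t in zip(a, b))

one = (1.0, 0.0, 0.0, 0.0)
i = (0.0, 1.0, 0.0, 0.0)
omega = (-0.5, 0.5, 0.5, 0.5)                       # (1/2)(-1 + i + j + k)
i_O = (0.0, 0.0, math.sqrt(0.5), math.sqrt(0.5))    # (1/sqrt(2))(j + k)
phi = (math.sqrt(5) - 1) / 2
i_I = (0.0, 0.5, phi / 2, (phi + 1) / 2)            # per the definition in (6)

# orders stated in (6): omega^3 = 1, i_O^4 = i_I^4 = 1 (and not order 2)
assert close(qpow(omega, 3), one)
assert close(qpow(i_O, 4), one) and not close(qpow(i_O, 2), one)
assert close(qpow(i_I, 4), one) and not close(qpow(i_I, 2), one)

# generate 2T = <i, omega> by closure under multiplication
def key(q):
    return tuple(round(c, 9) for c in q)

elems = {key(i): i, key(omega): omega}
changed = True
while changed:
    changed = False
    for a in list(elems.values()):
        for b in list(elems.values()):
            c = qmul(a, b)
            if key(c) not in elems:
                elems[key(c)] = c
                changed = True

assert len(elems) == 24                        # |2T| = 24
assert key((-1.0, 0.0, 0.0, 0.0)) in elems     # 2T contains the quaternion -1
```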
As a tool, we introduce a parameterization of the great circles in $S^{3}$, which might be useful elsewhere. We also define oriented Hopf bundles: families of consistently oriented great circles. We summarize the main statements: • The great circles in $S^{3}$ can be parameterized by pairs $p,q$ of pure unit quaternions, or equivalently, by pairs of points $p,q\in S^{2}$ (Section 4.1). The choice of parameters is unique except that $K_{p}^{q}=K_{-p}^{-q}$. The twofold ambiguity of the parameters can be used to specify an orientation of the circles (Section 4.1.2). • The great circles $K_{p}^{q}$ with fixed $q$ form a partition of $S^{3}$, which we call the left Hopf bundle $\mathcal{H}^{q}$. It naturally comes with a left Hopf map $h^{q}\colon S^{3}\to S^{2}$, which maps all points of $K_{p}^{q}$ to the point $p\in S^{2}$. This map provides a bijection between the circles of the left Hopf bundle $\mathcal{H}^{q}$ and the points on $S^{2}$. Similarly, the great circles $K_{p}^{q}$ with fixed $p$ form a right Hopf bundle $\mathcal{H}_{p}$, with a right Hopf map $h_{p}$, etc. In the following, we will mention only the left Hopf bundles, but all statements hold also with left and right reversed. • Every great circle of $S^{3}$ belongs to a unique left Hopf bundle. In other words, the left Hopf bundles form a partition of the set of great circles of $S^{3}$. • For every left Hopf bundle $\mathcal{H}^{q}$, there is a one-parameter family of right rotations that maps every circle in $\mathcal{H}^{q}$ to itself, rotating each circle by the same angle $\alpha$. Conversely, a right rotation by an angle $\alpha\notin\{0,\pi\}$ rotates every point of $S^{3}$ by the same angle $\alpha$, and the set of circles along which these rotations happen form a left Hopf bundle (Proposition 4.14). 
• The following statements discuss the behavior of Hopf bundles under orthogonal transformations (Proposition 4.12): – Any left rotation leaves the left Hopf bundle $\mathcal{H}^{q}$ fixed, as a partition. It permutes the great circles of the bundle. – Any rotation maps the left Hopf bundle $\mathcal{H}^{q}$ to another left Hopf bundle. Any two left Hopf bundles are congruent (by some right rotation). – Left Hopf bundles and right Hopf bundles are mirrors of each other. • The intersection of a left Hopf bundle and a right Hopf bundle consists of two absolutely orthogonal circles (Corollary 4.10). • Any two great circles in the same Hopf bundle are Clifford-parallel (Proposition 4.15). This means that a point moving on one circle maintains a constant distance to the other circle. 4.1 Parameterizing the great circles in $S^{3}$ Definition 4.1. For any two pure unit quaternions $p,q\in S^{2}$, we define the following subset of unit quaternions: $$K_{p}^{q}:=\{\,x\in S^{3}\mid[x]p=q\,\}$$ (7) This can be interpreted as the set of rotations on $S^{2}$ that map $p$ to $q$. Proposition 4.2. $K_{p}^{q}$ has an alternative representation $$K_{p}^{q}=\{\,x\in S^{3}\mid[p,q]x=x\,\},$$ (8) and it forms a great circle in $S^{3}$. Moreover, every great circle in $S^{3}$ can be represented in this way, and the choice of parameters $p,q\in S^{2}$ is unique except that $K_{p}^{q}=K_{-p}^{-q}.$ This gives a convenient parameterization of the great circles in $S^{3}$ (or equivalently, the planes in $\mathbb{R}^{4}$) by pairs of points on $S^{2}$, which might be useful in other contexts. For example, they might be used to define a notion of distance between great circles (or planes in $\mathbb{R}^{4}$). (Other distance measures are discussed in [26, 25] and [7]. The connection to these different distance notions remains to be explored.) Before giving the proof, let us make a general remark about quaternions. 
Multiple meanings can be associated with a unit quaternion $x$: Besides treating it (i) as a point on $S^{3}$, we can regard it (ii) as a rotation $[x]$ of $S^{2}$, or (iii) as a left rotation $[x,1]$ of $S^{3}$, or (iv) as a right rotation $[1,x]$ of $S^{3}$. Rather than fixing an opinion on what a quaternion really is (cf. [1, p. 298]), we capitalize on this ambiguity and freely switch between the definitions (7) and (8). Proof of Proposition 4.2. The two expressions (7) and (8) are equivalent by a simple rearrangement of terms: $$[x]p=q\iff\bar{x}px=q\iff px=xq\iff x=\bar{p}xq\iff x=[p,q]x$$ The expression (8) shows that $K_{p}^{q}$ is the set of fixpoints of the rotation $[p,q]$. Since $p$ and $q$ are pure unit quaternions, the rotation $[p,q]$ is a simple rotation by $180^{\circ}$ (a half-turn). Its set of fixpoints is a two-dimensional plane, or when restricted to unit quaternions, a great circle. Conversely, if a great circle $K$ is given and we want to determine $p$ and $q$, we know that we are looking for a simple rotation by $180^{\circ}$ whose set of fixpoints is $K$. This rotation is uniquely determined, and its quaternion representation $[p,q]$ is unique up to flipping both signs simultaneously. ∎ The effect of orthogonal transformations on great circles is expressed easily in our parameterization: Proposition 4.3. Let $p,q\in S^{2}$. Then for any $l,r\in S^{3}$, (i) $[l,r]K_{p}^{q}=K_{[l]p}^{[r]q}$. (ii) $(*[l,r])K_{p}^{q}=K_{[l]q}^{[r]p}$, and in particular, $*K_{p}^{q}=K_{q}^{p}$. Proof. The following calculation proves part (i). $$[l,r]K_{p}^{q}=\{\,\bar{l}xr\mid\bar{x}px=q\,\}\\ =\{\,y\mid r\bar{y}\bar{l}ply\bar{r}=q\,\}=\{\,y\mid\bar{y}\bar{l}ply=\bar{r}qr\,\}=\{\,y\mid[y][l]p=[r]q\,\}=K_{[l]p}^{[r]q},$$ where we have substituted $x$ by $y:=\bar{l}xr$. Part (ii) follows from part (i) and $*K_{p}^{q}=K_{q}^{p}$.
This last statement expresses the fact that the inverse rotations $[\bar{x}]$ of the rotations $[x]$ that map $p$ to $q$ are the rotations mapping $q$ to $p$. More formally, $$*K_{p}^{q}=\{\,\bar{x}\mid\bar{x}px=q\,\}=\{\,y\mid yp\bar{y}=q\,\}=\{\,y\mid p=\bar{y}q{y}\,\}=K_{q}^{p},$$ with $y:=\bar{x}$. ∎ The elements of $K_{p}^{p}$ form a subgroup of the quaternions [16]: According to (7), $K_{p}^{p}$ is the stabilizer of $p$. Its cosets can be characterized by Proposition 4.3(i): Corollary 4.4. The left cosets of $K_{p}^{p}$ are the circles $K_{p^{\prime}}^{p}$, and the right cosets of $K_{p}^{p}$ are the circles $K_{p}^{p^{\prime}}$, for arbitrary $p^{\prime}\in S^{2}$. ∎ We emphasize that the two parameters $p$ and $q$ in $K_{p}^{q}$ “live on different spheres $S^{2}$”: Any relation between them has no intrinsic geometric meaning, and will be changed by coordinate transformations according to Proposition 4.3. This is despite the fact that $p=q$ has an algebraic significance, since the circle $K_{p}^{p}$ goes through the special quaternion 1, which is one of the coordinate axes, and hence $K_{p}^{p}$ forms a subgroup of quaternions. 4.1.1 Keeping a circle invariant The following proposition characterizes the transformations that map a given great circle to itself. Moreover, it describes the action of these transformations when restricted to that circle. For a pure unit quaternion $p\in S^{2}$ and an angle $\theta\in\mathbb{R}$ we use the notation $$\exp p\theta:=\cos\theta+p\sin\theta,$$ so that $[\exp p\theta]$ is a clockwise rotation around $p$ by $2\theta$ on $S^{2}.$ Proposition 4.5. Consider the circle $K_{p}^{q}$, for $p,q\in S^{2}$. The rotations $[l,r]$ that leave $K_{p}^{q}$ invariant fall into two categories, each of which is a two-parameter family. (a) The orientation-preserving case: $[l]p=p$ and $[r]q=q$. Every transformation in this family can be written as $[\exp p\varphi,\exp q\theta]$ for $\varphi,\theta\in\mathbb{R}$. 
This transformation acts on the circle $K_{p}^{q}$ as rotation by $|\theta-\varphi|$. (b) The orientation-reversing case: $[l]p=-p$ and $[r]q=-q$. After choosing two fixed quaternions $p^{\prime},q^{\prime}\in S^{2}$ orthogonal to $p$ and $q$, respectively, they can be written as the transformations $[p^{\prime}\exp p\varphi,q^{\prime}\exp q\theta]$ for $\varphi,\theta\in\mathbb{R}$, and they act on $K_{p}^{q}$ as reflections. Note that the transformations that we consider are always orientation-preserving when considered in 4-space; they can be orientation-reversing when considered as (2-dimensional) operations on the circle $K_{p}^{q}$. Proof. Let $[l,r]\in\mathrm{SO}(4)$ be a rotation. Then we have the following equivalences. $$[l,r]K_{p}^{q}=K_{p}^{q}\iff K_{[l]p}^{[r]q}=K_{p}^{q}\iff([l]p=p\wedge[r]q=q)\lor([l]p=-p\wedge[r]q=-q)$$ For the first case, the transformations $[l]$ on $S^{2}$ that leave the point $p$ fixed are the rotations around $p$, and they are given by the quaternions $l=\exp p\varphi$, and similarly for $r$. For the second case, the transformations $[l]$ on $S^{2}$ that map $p$ to $-p$ can be written as a composition of $[p^{\prime}]$, which maps $p$ to $-p$, and an arbitrary rotation around the axis through $p$ and $-p$, which is expressed as $[\exp p\varphi]$. This establishes that $[l,r]$ can be written in the claimed form. We now investigate the action of these rotations on $K_{p}^{q}$. (a) Let $x\in K_{p}^{q}$. Since $xq=px$, we have $x\exp q\theta=(\exp p\theta)x$. In particular, $$\displaystyle[\exp p\varphi,\exp q\theta]x=\exp(-p\varphi)x\exp q\theta=\exp(-p\varphi)(\exp p\theta)x=(\exp p(-\varphi+\theta))x.$$ Thus, $[\exp p\varphi,\exp q\theta]$ acts on $K_{p}^{q}$ like the left multiplication with $\exp p(\theta-\varphi)$, which (being a left rotation) moves every point by the angle $|\theta-\varphi|$. (b) It is enough to show that $[p^{\prime},q^{\prime}]$ acts as a reflection on $K_{p}^{q}$.
We will show that $K_{p}^{q}\cap K_{p^{\prime}}^{q^{\prime}}\not=\emptyset$ and $K_{p}^{q}\cap K_{p^{\prime}}^{-q^{\prime}}\not=\emptyset$. Thus, there is a point $x\in K_{p}^{q}$ with $[p^{\prime},q^{\prime}]x=x$ and another point $y\in K_{p}^{q}$ with $[p^{\prime},q^{\prime}]y=-y$, and this means that $[p^{\prime},q^{\prime}]$ fixes some, but not all, points on $K_{p}^{q}$, and thus its action cannot be a rotation. Let $[x_{0}]$ be a rotation that maps $p$ to $q$. Then it maps $p^{\prime}$ to some point $p^{\prime\prime}$ that is orthogonal to $q$. Let $[y_{0}]$ be the rotation that fixes $q$ and maps $p^{\prime\prime}$ to $q^{\prime}$. The rotation $[x_{0}y_{0}]$ maps $p$ to $q$ and $p^{\prime}$ to $q^{\prime}$. Thus, $x_{0}y_{0}\in K_{p}^{q}\cap K_{p^{\prime}}^{q^{\prime}}$. Similarly, if $[z_{0}]$ is the rotation that fixes $q$ and maps $p^{\prime\prime}$ to $-q^{\prime}$, then $x_{0}z_{0}\in K_{p}^{q}\cap K_{p^{\prime}}^{-q^{\prime}}$. ∎ Proposition 4.6. The great circles $K_{p}^{q}$ and $K_{p}^{-q}=K_{-p}^{q}$ are absolutely orthogonal. Proof. The simple rotation $[p,-q]=[-p,q]$ maps $x\in K_{p}^{q}$ to $-x\in K_{p}^{q}$. That is, $[p,-q]$ preserves (not pointwise) $K_{p}^{q}$. Since $K_{p}^{-q}$ is the fixed circle of $[p,-q]$ and the invariant circles of a simple rotation are absolutely orthogonal, we are done. ∎ 4.1.2 Oriented great circles By Proposition 4.5, the left rotation $[\exp(-p\theta),1]$ has the same effect on the circle $K_{p}^{q}$ as the right rotation $[1,\exp q\theta]$. This allows us to specify an orientation for $K_{p}^{q}$. For some starting point $x\in K_{p}^{q}$, we write $$K_{p}^{q}=\{\,(\exp p\theta)x\mid\theta\in\mathbb{R}\,\}=\{\,x\exp q\theta\mid\theta\in\mathbb{R}\,\},$$ (9) and both parameterizations traverse the circle in the same sense, for increasing $\theta$. We may thus introduce the notation $\vec{K}_{p}^{q}$ to denote an oriented great circle on $S^{3}$.
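Both equalities in (9), and the description of the action in Proposition 4.5a, can be confirmed numerically; a small Python sketch (our own illustration, with quaternions as $(w,x,y,z)$ tuples):

```python
import math

def qmul(a, b):
    # Hamilton product; quaternions are (w, x, y, z) tuples
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(a): return (a[0], -a[1], -a[2], -a[3])

def qexp(p, t):
    # exp(p t) = cos t + p sin t, for a pure unit quaternion p
    return (math.cos(t), math.sin(t)*p[1], math.sin(t)*p[2], math.sin(t)*p[3])

def close(a, b): return max(abs(u - v) for u, v in zip(a, b)) < 1e-12

s = 1/math.sqrt(2)
p, q = (0, 1, 0, 0), (0, 0, 1, 0)   # p = i, q = j
x = (0, s, s, 0)                     # x lies on K_p^q, since px = xq

# (9): both parameterizations traverse the circle through the same points
for t in (0.3, 1.1, 2.5):
    assert close(qmul(qexp(p, t), x), qmul(x, qexp(q, t)))

# Proposition 4.5a: [exp pφ, exp qθ] acts on K_p^q as left mult. by exp p(θ−φ)
phi, theta = 0.4, 1.0
lhs = qmul(qmul(conj(qexp(p, phi)), x), qexp(q, theta))
assert close(lhs, qmul(qexp(p, theta - phi), x))
```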
If we use $\vec{K}_{-p}^{-q}$ in (9), the same circle will be traversed in the opposite sense. Thus, we obtain a notation for oriented great circles on $S^{3}$, and for this notation, the choice of parameters $p,q\in S^{2}$ is unique. Only for an oriented circle does the phrase “rotation by $\pi/4$” or “rotation by $-\pi/3$” have a well-defined meaning, and we can give a more specific version of Proposition 4.5a: The operation $[\exp p\varphi,\exp q\theta]$ rotates $\vec{K}_{p}^{q}$ by $\theta-\varphi$. In Appendix E, we give a direct geometric view of this orientation, based on the original interpretation of $K_{p}^{q}$ as the set of rotations on $S^{2}$ that map $p$ to $q$ (Definition 4.1). Proposition 4.3 extends to oriented circles as follows: Proposition 4.7. $[l,r]\vec{K}_{p}^{q}=\vec{K}_{[l]p}^{[r]q}$ and $*\vec{K}_{p}^{q}=\vec{K}_{-q}^{-p}$. Proof. For $x\in K_{p}^{q}$, $$[l,r](x\exp q\theta)=\bar{l}x(\exp q\theta)r=\bar{l}xr\bar{r}(\exp q\theta)r=(\bar{l}xr)\exp(\bar{r}qr\theta)=y\exp(([r]q)\theta)$$ with $y=\bar{l}xr\in[l,r]K_{p}^{q}=K_{[l]p}^{[r]q}$. Thus, the orientation that we get on $[l,r]\vec{K}_{p}^{q}$ coincides with the orientation prescribed in (9) for $\vec{K}_{[l]p}^{[r]q}$. Similarly, $${*}(x\exp q\theta)=(\exp\bar{q}\theta)\bar{x}=\exp(-q\theta)\,y$$ with $y=\bar{x}\in{*}K_{p}^{q}=K_{q}^{p}=K_{-q}^{-p}$, and this is the correct orientation for $\vec{K}_{-q}^{-p}$ in accordance with (9). ∎
The left Hopf bundle $\mathcal{H}^{q_{0}}$ is $$\mathcal{H}^{q_{0}}:=\{\,K_{q}^{q_{0}}\mid q\in S^{2}\,\},$$ and the right Hopf bundle $\mathcal{H}_{q_{0}}$ is $$\mathcal{H}_{q_{0}}:=\{\,K_{q_{0}}^{q}\mid q\in S^{2}\,\}.$$ The oriented left and right Hopf bundles are defined analogously: $$\displaystyle\vec{}\mathcal{H}^{q_{0}}$$ $$\displaystyle:=\{\,\vec{K}_{q}^{q_{0}}\mid q\in S^{2}\,\}$$ $$\displaystyle\vec{}\mathcal{H}_{q_{0}}$$ $$\displaystyle:=\{\,\vec{K}_{q_{0}}^{q}\mid q\in S^{2}\,\}$$ The convention for left and right was adopted from Dunbar [16]: According to Corollary 4.4, the circles $K_{q}^{q_{0}}$ of the left Hopf bundle $\mathcal{H}^{q_{0}}$ are the left cosets of the circle $K_{q_{0}}^{q_{0}}$. We can naturally assign a Hopf map to each bundle, such that the circles of a bundle become the fibers of the associated Hopf map: Definition 4.9. Let $q_{0}\in S^{2}$ be a pure unit quaternion. The left Hopf map associated with $q_{0}$ is $$\displaystyle h^{q_{0}}\colon S^{3}$$ $$\displaystyle\to S^{2}$$ $$\displaystyle x$$ $$\displaystyle\mapsto[\bar{x}]q_{0}=xq_{0}\bar{x},$$ and the right Hopf map associated with $q_{0}$ is $$\displaystyle h_{q_{0}}\colon S^{3}$$ $$\displaystyle\to S^{2}$$ $$\displaystyle x$$ $$\displaystyle\mapsto[x]q_{0}=\bar{x}q_{0}x.$$ Corollary 4.10. The following statements are direct consequences of the definitions: • The choice of the parameter $q_{0}$ in the left Hopf bundle $\mathcal{H}^{q_{0}}$ is unique except that $\mathcal{H}^{q_{0}}=\mathcal{H}^{-q_{0}}$. As oriented Hopf bundles, $\vec{}\mathcal{H}^{q_{0}}$ and $\vec{}\mathcal{H}^{-q_{0}}$ contain the same circles in opposite orientation. The same statement holds for right Hopf bundles. • No two different left Hopf bundles share a circle. That is, $$\mathcal{H}^{p_{0}}\cap\mathcal{H}^{p_{1}}=\emptyset\ \text{if}\ p_{0}\neq\pm p_{1}.$$ A similar statement holds for right Hopf bundles. 
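The matching between circles and fibers can be illustrated numerically (our own sketch; quaternions as $(w,x,y,z)$ tuples): a circle $K_{p}^{q_{0}}$ should be a fiber of the left Hopf map $h^{q_{0}}$ with constant value $p$, and simultaneously a fiber of the right Hopf map $h_{p}$ with constant value $q_{0}$.

```python
import math

def qmul(a, b):
    # Hamilton product; quaternions are (w, x, y, z) tuples
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(a): return (a[0], -a[1], -a[2], -a[3])
def qexp(p, t):
    return (math.cos(t), math.sin(t)*p[1], math.sin(t)*p[2], math.sin(t)*p[3])

def h_right(q0, x):  # h_{q0}(x) = [x]q0 = conj(x) q0 x
    return qmul(qmul(conj(x), q0), x)
def h_left(q0, x):   # h^{q0}(x) = [conj(x)]q0 = x q0 conj(x)
    return qmul(qmul(x, q0), conj(x))

def close(a, b): return max(abs(u - v) for u, v in zip(a, b)) < 1e-12

s = 1/math.sqrt(2)
p, q0 = (0, 1, 0, 0), (0, 0, 1, 0)   # p = i, q0 = j
x0 = (0, s, s, 0)                     # x0 lies on K_p^{q0} = K_i^j

for t in (0.0, 0.7, 2.0, 4.4):
    x = qmul(qexp(p, t), x0)          # runs through the circle K_p^{q0}, by (9)
    # K_p^{q0} is a fiber of the left Hopf map h^{q0} (value p) ...
    assert close(h_left(q0, x), p)
    # ... and a fiber of the right Hopf map h_p (value q0)
    assert close(h_right(p, x), q0)
```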
• A left Hopf bundle intersects a right Hopf bundle in exactly two circles, which are absolutely orthogonal: $$\mathcal{H}_{q_{0}}\cap\mathcal{H}^{p_{0}}=\{K_{q_{0}}^{p_{0}},K_{q_{0}}^{-p_{0}}=K_{-q_{0}}^{p_{0}}\}.$$ • Every great circle $K_{q_{0}}^{p_{0}}$ in $S^{3}$ belongs to a unique left Hopf bundle $\mathcal{H}^{p_{0}}$ and to a unique right Hopf bundle $\mathcal{H}_{q_{0}}$. From Proposition 4.7, we can directly work out the effect of a transformation on an (oriented) Hopf bundle: Proposition 4.11. (a) $[l,r]\vec{}\mathcal{H}^{q}=\vec{}\mathcal{H}^{[r]q}$ and $[l,r]\vec{}\mathcal{H}_{p}=\vec{}\mathcal{H}_{[l]p}$;  (b) $*\vec{}\mathcal{H}^{q}=\vec{}\mathcal{H}_{-q}$ and $*\vec{}\mathcal{H}_{p}=\vec{}\mathcal{H}^{-p}$. We get consequences about the operations that leave a Hopf bundle invariant and about mappings between Hopf bundles. Proposition 4.12. The following statements about the operations that leave a left Hopf bundle invariant hold, and similar statements hold for right Hopf bundles. (i) Any left rotation leaves an oriented left Hopf bundle $\vec{}\mathcal{H}^{q}$ invariant. It permutes the great circles of the bundle. (ii) A right rotation $[1,r]$ leaves the oriented left Hopf bundle $\vec{}\mathcal{H}^{q}$ invariant iff $[r]q=q$. (iii) A right rotation $[1,r]$ maps the oriented left Hopf bundle $\vec{}\mathcal{H}^{q}$ to the opposite bundle $\vec{}\mathcal{H}^{-q}$ iff $[r]q=-q$. (iv) Any two oriented left Hopf bundles are congruent, and can be mapped to each other by a right rotation. (v) Any oriented right Hopf bundle and any oriented left Hopf bundle are mirrors of each other. ∎ We can summarize properties (i)–(iii) in the following statement, which characterizes the transformations that leave a given left Hopf bundle invariant, in analogy to Proposition 4.5. Proposition 4.13. (i) A rotation $[l,r]$ preserves $\mathcal{H}^{q_{0}}$ if and only if $[r]q_{0}=\pm q_{0}$. (ii) More precisely, these rotations come in two families. 
(a) The rotations with $[r]q_{0}=q_{0}$ can be written as $[l,\exp q_{0}\theta]$ for $\theta\in\mathbb{R}$, and they map $\vec{}\mathcal{H}^{q_{0}}$ to $\vec{}\mathcal{H}^{q_{0}}$, preserving the orientation of the circles. (b) The rotations with $[r]q_{0}=-q_{0}$ can be written as $[l,q^{\prime}\exp q_{0}\theta]$ for $\theta\in\mathbb{R}$, where $q^{\prime}\in S^{2}$ is some fixed quaternion orthogonal to $q_{0}$. They map $\vec{}\mathcal{H}^{q_{0}}$ to $\vec{}\mathcal{H}^{-q_{0}}$, reversing the orientation of the circles. Note that an orientation-reversing transformation sends a left Hopf bundle to a right one, and those two share exactly two circles. Thus, no orientation-reversing transformation can preserve a Hopf bundle. 4.2.1 Left and right screws A generic rotation has two circles that it leaves invariant. The left and right rotations are special: they have infinitely many invariant circles, and as we will see, these circles form a Hopf bundle. In contrast to Proposition 4.13, we now discuss rotations that leave every individual circle of a Hopf bundle invariant: Proposition 4.14. (i) For the oriented left Hopf bundle $\vec{}\mathcal{H}^{q_{0}}$, the one-parameter subgroup of right rotations $[1,\exp q_{0}\varphi]$ rotates every circle of $\vec{}\mathcal{H}^{q_{0}}$ in itself by the same angle $\varphi$. (ii) Conversely, for a right rotation $[1,r]$ with $r\neq 1,-1$, the set of circles that it leaves invariant forms a left Hopf bundle $\mathcal{H}^{q_{0}}$, and $[1,r]$ rotates every circle of $\vec{}\mathcal{H}^{q_{0}}$ in itself by the same angle $\varphi$, where $r=\exp q_{0}\varphi$. Proof. Part (i) is a direct consequence of the definition (9) of oriented circles. According to Proposition 4.5, the right rotation $[1,r]$ leaves a circle $K_{p}^{q}$ invariant iff $[r]q=q$. (Case (b) of Proposition 4.5, where $[l]p=-p$, does not apply since $l=1$.)
After writing $r=\exp q_{0}\varphi$ with $\varphi\neq 0,\pi$, the condition $[r]q=q$ translates to $q=\pm q_{0}$, and the circles $\{\,K_{p}^{\pm q_{0}}\mid p\in S^{2}\,\}$ form the Hopf bundle $\mathcal{H}^{q_{0}}$. The last part of the statement repeats (i). ∎ Geometrically, these rotations are screw motions. If we look at one circle $K_{p}^{q_{0}}$ from the bundle, the adjacent circles form helices that wind around this circle, see Figure 5. The right multiplication by $\exp q_{0}\varphi$ effects a forward motion of $\varphi$ along every circle, and a simultaneous clockwise rotation by the same angle $\varphi$ around the circle, when seen in the direction of the forward movement, and is thus a right screw. (While not everything that is associated to right rotations is “right”, it is a lucky coincidence that at least right rotations effect right screws, and left rotations effect left screws.) This view depends on the convention that we have chosen in Section 2.3 for viewing parts of the 3-sphere as three-dimensional space. Here is a check of this fact on an example: Figure 5 shows the situation around the point $(x_{1},x_{2},x_{3},x_{4})=(0,0,0,1)\equiv k\in K_{-i}^{i}$. According to our conventions from Section 2.3, we draw this in 3-space by projecting to the tangent space $x_{4}=1$, i.e., omitting the $x_{4}$-coordinate, and drawing $(x_{1},x_{2},x_{3})\equiv(1,i,j)$ as a right-handed coordinate system. The great circle $K_{-i}^{i}$ is invariant under the family of right rotations $[1,\exp i\varphi]$, which move the point $k$ along the circle: $\vec{K}_{-i}^{i}=\{\,k\exp i\varphi\,\}=\{\,k(\cos\varphi+i\sin\varphi)\,\}=\{\,k\cos\varphi+j\sin\varphi\,\}$ The tangent vector at $\varphi=0$ points in the direction $j\equiv(0,0,1,0)$.
Let us look at a small circle of radius $r$ around $K_{-i}^{i}$, centered at $k$: It lies in a plane parallel to the $1,i$-plane and can be written as $\tfrac{1}{\sqrt{1+r^{2}}}(k+r(\cos\alpha+i\sin\alpha))=\tfrac{1}{\sqrt{1+r^{2}}}(k+r\exp i\alpha).$ The right rotation $[1,\exp i\varphi]$ maps this to $\tfrac{1}{\sqrt{1+r^{2}}}(k+r\exp i\alpha)\exp i\varphi=\tfrac{1}{\sqrt{1+r^{2}}}(k\exp i\varphi+r\exp i(\alpha+\varphi))$ i.e., it increases $\alpha$ together with $\varphi$. As can be seen in Figure 5, this is a right screw. Du Val [15, § 14, p. 36], for example, considers right quaternion multiplications as left screws, without giving reasons for this choice, and he draws his illustrations accordingly. On the other hand, Coxeter [12, Chapter 6, p. 70] considers right quaternion multiplications as right screws. In contrast to the situation in Euclidean 3-space, these screws have no distinguished axis. The blue circle seems to wind around the red circle, but this is an artifact of the projection of this picture. All circles are in fact equivalent, and the situation looks the same for every circle of the bundle. 4.2.2 Clifford-parallel circles We measure the distance between two points $p,q\in S^{3}$ as the geodesic distance on the sphere, which equals the angular distance along the great circle through $p$ and $q$: $\mathrm{dist}(p,q):=\arccos\langle p,q\rangle$, where $\langle p,q\rangle$ denotes the scalar product. The distance between two sets $K,K^{\prime}\subseteq S^{3}$ is $\mathrm{dist}(K,K^{\prime})=\inf\{\,\mathrm{dist}(p,q)\mid p\in K,q\in K^{\prime}\,\}.$ Two great circles $K$ and $K^{\prime}$ in $S^{3}$ are called Clifford-parallel if $\mathrm{dist}(x,K^{\prime})$ does not depend on $x\in K$. See for example [3, Section 18.8] for more information on Clifford parallelism. Proposition 4.15 ([3, Exercise 18.11.18]). Great circles in the same Hopf bundle $\mathcal{H}^{q}$ are Clifford-parallel, and $\mathrm{dist}(K_{p}^{q},K_{r}^{q})=\mathrm{dist}(p,r)/2$. 
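Before turning to the proof, the distance formula of Proposition 4.15 can be checked by brute-force sampling (a Python sketch of our own; the tolerance accounts for the discretization of the circles):

```python
import math

def qmul(a, b):
    # Hamilton product; quaternions are (w, x, y, z) tuples
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qexp(p, t):
    return (math.cos(t), math.sin(t)*p[1], math.sin(t)*p[2], math.sin(t)*p[3])

s = 1/math.sqrt(2)
q = (0, 0, 0, 1)                       # q = k
p, r = (0, 1, 0, 0), (0, 0, 1, 0)      # p = i, r = j; dist(p, r) = pi/2
xp = (0, s, 0, s)                      # a point of K_p^q  (proportional to p + q)
xr = (0, 0, s, s)                      # a point of K_r^q  (proportional to r + q)

# sample both circles via (9): x · exp(q θ)
N = 360
cp = [qmul(xp, qexp(q, 2*math.pi*m/N)) for m in range(N)]
cr = [qmul(xr, qexp(q, 2*math.pi*m/N)) for m in range(N)]

# geodesic distance between sample points: arccos of the scalar product
dmin = min(math.acos(max(-1.0, min(1.0, sum(u*v for u, v in zip(a, b)))))
           for a in cp for b in cr)

# predicted: dist(K_p^q, K_r^q) = dist(p, r)/2 = pi/4
assert abs(dmin - math.pi/4) < 1e-2
```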
Proof. By Proposition 4.5a, the right rotations $[1,\exp q\theta]$ rotate $x\in K_{p}^{q}$ along the circle $K_{p}^{q}$ while keeping $K_{r}^{q}$ invariant as a set. Thus, $\mathrm{dist}(x,K_{r}^{q})$ is constant as $x$ moves on $K_{p}^{q}$, showing that $K_{p}^{q}$ and $K_{r}^{q}$ are Clifford-parallel. Since $K_{r}^{q}$ is a left coset of $K_{q}^{q}$, by applying some left rotation to $K_{p}^{q}$ and $K_{r}^{q}$, we may assume that $r=q$. That is, it is enough to show that $\mathrm{dist}(K_{p}^{q},K_{q}^{q})=\mathrm{dist}(p,q)/2$. Since $1\in K_{q}^{q}$ and the circles $K_{p}^{q}$ and $K_{q}^{q}$ are Clifford-parallel, it is enough to show that $\mathrm{dist}(K_{p}^{q},1)=\mathrm{dist}(p,q)/2$. The points $x=\cos{\alpha}+v\sin{\alpha}\in K_{p}^{q}$, for pure unit quaternions $v\in S^{2}$, represent the rotations $[x]$ on $S^{2}$ that map $p$ to $q$, and $\mathrm{dist}(x,1)=\arccos\cos\alpha=\alpha$, assuming $0\leq\alpha\leq\pi$. Thus, we are trying to minimize $\alpha$, which is half the rotation angle of $[x]$. The rotation that minimizes the rotation angle is the one that moves $p$ to $q$ along the great circle through $p$ and $q$, and its rotation angle $2\alpha$ is $\mathrm{dist}(p,q)$. ∎ We mention that Clifford parallelism arises in two kinds: left and right, accordingly as the circles belong to a common left or right Hopf bundle. Each kind of Clifford parallelism is transitive, but Clifford parallelism in itself is not. 5 Classification of the point groups We make a coarse classification of the groups by their invariant Hopf bundles. The following observation of Dunbar [16, p. 124] characterizes this in terms of the left and right groups. Proposition 5.1. A 4-dimensional point group leaves some left Hopf bundle invariant if and only if its right group is cyclic or dihedral. A similar statement holds for right Hopf bundles and the left group. Proof.
By Proposition 4.13(i), a transformation $[l,r]\in\mathrm{SO}(4)$ preserves $\mathcal{H}^{q_{0}}$ if and only if $[r]$ keeps the line through $q_{0}$ invariant. The set of such $r$’s forms an infinite group that is isomorphic to $\mathrm{O}(2)$. Its finite subgroups are either cyclic or dihedral. ∎ As we have seen, the left and right groups $L$ and $R$ are one of the five classes $2I,2O,2T,2D_{2n}$, and $2C_{n}$. Besides the infinite families of cyclic groups $2C_{n}$ and dihedral groups $2D_{2n}$, there are the three polyhedral groups $2I,2O,2T$. Accordingly, we get a rough classification into three classes of groups. 1. The left subgroup is cyclic or dihedral, and the right subgroup is polyhedral, or vice versa. These groups leave some left or right Hopf bundle invariant, and they are the tubical groups, to be discussed in Section 6. 2. Both the left and right subgroup are cyclic or dihedral. These groups leave both some left and some right Hopf bundle invariant. They form a large family, the toroidal groups, to be discussed in Section 7. 3. Both the left and right subgroup are polyhedral. These groups leave no Hopf bundle invariant. There are finitely many groups of this class: the polyhedral groups and the axial groups. For all classes except the tubical groups, there is the possibility that $L=R$, and hence we also consider the achiral extensions of these groups. 5.1 The Clifford torus The toroidal groups are characterized as leaving both some right Hopf bundle $\mathcal{H}_{p}$ and some left Hopf bundle $\mathcal{H}^{q}$ invariant. By Corollary 4.10, these two bundles intersect in two orthogonal circles $K_{p}^{q}\cup K_{p}^{-q}$, and hence these two circles must also be invariant. We conclude that the set $\mathbb{T}_{p}^{q}$ of points that are equidistant from these two circles is also invariant. We will see that this set is a Clifford torus. It has several alternative representations.
$$\displaystyle\mathbb{T}_{p}^{q}$$ $$\displaystyle=\{\,x\in S^{3}\mid\mathrm{dist}(x,K_{p}^{q})=\mathrm{dist}(x,K_{p}^{-q})\,\}$$ (10) $$\displaystyle=\{\,x\in S^{3}\mid\mathrm{dist}(x,K_{p}^{q})=\tfrac{\pi}{4}\,\}$$ $$\displaystyle=\{\,x\in S^{3}\mid\mathrm{dist}(x,K_{p}^{-q})=\tfrac{\pi}{4}\,\}$$ $$\displaystyle=\{\,x\in S^{3}\mid\mathrm{dist}(x,K_{p}^{q})=\mathrm{dist}(x,K_{-p}^{q})\,\}$$ Proposition 4.3 tells us how an orthogonal transformation acts on the circle $K_{p}^{q}$ that defines the torus $\mathbb{T}_{p}^{q}$. As an immediate corollary, we obtain: Proposition 5.2. Let $p,q\in S^{2}$. Then for any $l,r\in S^{3}$, (i) $[l,r]\mathbb{T}_{p}^{q}=\mathbb{T}_{[l]p}^{[r]q}$. (ii) $(*[l,r])\mathbb{T}_{p}^{q}=\mathbb{T}_{[l]q}^{[r]p}$, and as a special case, $*\mathbb{T}_{p}^{q}=\mathbb{T}_{q}^{p}$. From $\mathbb{T}_{p}^{q}$, we can recover the two defining circles $K_{p}^{q}\cup K_{p}^{-q}$ as those points whose distance from $\mathbb{T}_{p}^{q}$ takes the extreme values $\pi/4$: $$K_{p}^{q}\cup K_{p}^{-q}=\{x\in S^{3}\mid\mathrm{dist}(x,\mathbb{T}_{p}^{q})=\tfrac{\pi}{4}\}$$ Since the choice of parameters $p,q$ for circles $K_{p}^{q}$ is unique up to simultaneous sign changes, the choice of parameters $p,q\in S^{2}$ for the torus $\mathbb{T}_{p}^{q}$ is unique up to independent sign changes: $\mathbb{T}_{p}^{q}=\mathbb{T}_{-p}^{-q}=\mathbb{T}_{-p}^{q}=\mathbb{T}_{p}^{-q}$. By Proposition 5.2, any two Clifford tori are related by an appropriate orientation-preserving transformation. There are no “left” or “right” Clifford tori. Thus, it is sufficient to study one special torus. 
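For the concrete choice $p=q=i$ (treated in detail next), the two defining circles are the great circles through $1$ in the first coordinate plane and through $j$ in the second, and the equidistance in (10), with common value $\pi/4$, can be verified directly. A sketch of our own, using the elementary fact that the distance from a point $v\in S^{3}$ to a great circle in a coordinate plane is the arccosine of the norm of the projection of $v$ onto that plane:

```python
import math

# K_i^i is the great circle in the first coordinate plane (it contains 1),
# and K_i^{-i} is the great circle in the second (it contains j).
def dist_to_circle12(v):   # dist(v, K_i^i) = arccos of the maximal dot product
    return math.acos(math.hypot(v[0], v[1]))
def dist_to_circle34(v):   # dist(v, K_i^{-i})
    return math.acos(math.hypot(v[2], v[3]))

s = 1/math.sqrt(2)
for th in (0.0, 0.9, 2.2):
    for ph in (0.5, 1.7, 3.1):
        v = (s*math.cos(th), s*math.sin(th), s*math.cos(ph), s*math.sin(ph))
        # torus points are equidistant (= pi/4) from both circles
        assert abs(dist_to_circle12(v) - math.pi/4) < 1e-12
        assert abs(dist_to_circle34(v) - math.pi/4) < 1e-12
```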
In particular, $\mathbb{T}_{i}^{i}$ is the “standard” Clifford torus: $$\mathbb{T}_{i}^{i}=\{\,\tfrac{1}{\sqrt{2}}(\cos\theta,\sin\theta,\cos\varphi,\sin\varphi)\mid 0\leq\theta,\varphi<2\pi\,\}=\{\,x\in\mathbb{R}^{4}\mid x_{1}^{2}+y_{1}^{2}=x_{2}^{2}+y_{2}^{2}=\tfrac{1}{2}\,\}$$ (11) It is a square flat torus, and we name the coordinates $(x_{1},y_{1},x_{2},y_{2})$ to emphasize that it is the Cartesian product of a circle of radius $\sqrt{1/2}$ in the $x_{1},y_{1}$-plane and a circle of radius $\sqrt{1/2}$ in the $x_{2},y_{2}$-plane. For this torus, the two circles of extreme distance are $K_{i}^{i}$ and $K_{i}^{-i}$, the great circles in the $x_{1},y_{1}$-plane and in the $x_{2},y_{2}$-plane. In Section 7.11.2, we will see another torus, $\mathbb{T}^{i}_{k}$, with a different, but equally natural equation (25). 6 The tubical groups In this section we consider the point groups that preserve a left or a right Hopf bundle, but not both. By Proposition 5.1, these groups are characterized as the groups for which the left or the right group, but not both, is cyclic or dihedral. These groups will be called tubical groups. We have chosen this name because, as we will see (see for instance Figure 6), for large enough order, the polar orbit polytope consists of intertwined congruent tube-like structures. (There is a notion of tubular groups, which is something completely different; see for example [5].) Since any two left (resp. right) Hopf bundles are congruent, it is enough to consider the tubical groups that preserve a specific left (resp. right) Hopf bundle. We will call these the left tubical groups and the right tubical groups. Since left and right Hopf bundles are mirror-congruent, we can restrict our attention to the left tubical groups. The classical classification leads to 11 classes of left tubical groups. Table 2 lists them with the notation from Conway and Smith [8, Table 4.1] in the first column, together with their generators.
In Appendix F, we depict subgroup relations between these groups. According to the right group, there are 5 tubical group classes of cyclic type and 6 tubical group classes of dihedral type. The left Hopf bundle that they leave invariant is $\mathcal{H}^{i}$. This follows from Proposition 4.13(ii) and our choice for the generators of $2C_{n}$ and $2D_{2n}$. The cyclic-type groups are those tubical groups that moreover preserve the consistent orientation of the circles in $\mathcal{H}^{i}$. That is, they preserve $\vec{}\mathcal{H}^{i}$. Each of these classes is parameterized by a positive integer $n$, which is the largest integer such that $[1,e_{n}]$ is in the group. In some cases the parameter $n$ starts from 2 in order to exclude the group $D_{2}$, which is geometrically the same as $C_{2}$. We also exclude $\pm\tfrac{1}{2}[O\times\overline{D}_{4}]$ because the notation $\overline{D}_{4n}$ indicates that the normal subgroup $D_{2n}$ of $D_{4n}$ is used, and not $C_{2n}$. For $n=1$, this distinction disappears, and hence $\pm\tfrac{1}{2}[O\times\overline{D}_{4}]$ is geometrically the same as $\pm\tfrac{1}{2}[O\times D_{4}]$ (see also Appendix G.1). In this case and in all other cases where $C_{2}$ and $D_{2}$ are exchanged, the respective groups are conjugate under $[1,\tfrac{1}{\sqrt{2}}(i+j)]$, which exchanges $[1,i]$ with $[1,j]$. Convention. For ease of use, we drop the word “left” from “left tubical group” and call it simply “tubical group” in this section. We will denote $\mathcal{H}^{i}$ by $\mathcal{H}$ and call it the Hopf bundle. We will also denote $h^{i}(x)=xi\bar{x}$ by $h(x)$ and call it the Hopf map. 6.1 Orbit circles An element of a tubical group has one of the following two forms, and Proposition 4.5 describes its action on the circles of $\mathcal{H}$: • $[l,e_{m}^{s}]$, which maps $\vec{K}_{p}$ to $\vec{K}_{[l]p}$, and • $[l,je_{m}^{s}]$, which maps $K_{p}$ to $K_{-[l]p}$ with a reversal of orientation.
More precisely, this rotation maps $\vec{K}_{p}=\vec{K}_{p}^{i}$ to $\vec{K}^{-i}_{[l]p}$, which is the reverse of $\vec{K}^{i}_{-[l]p}=\vec{K}_{-[l]p}$. These elements occur only in the groups of dihedral type. Thus, the rotations permute the Hopf circles of $\mathcal{H}$. Via the one-to-one correspondence of the Hopf map, they induce mappings on the Hopf sphere $S^{2}$: Proposition 6.1. A tubical group $G$ induces a $3$-dimensional point group $G^{h}$ via the Hopf map $h$. This group $G^{h}$ is isomorphic to $G/\langle[1,e_{n}]\rangle$, where $n$ is the largest integer such that $[1,e_{n}]\in G$. Proof. The above considerations show that $[l,e_{m}^{s}]$ induces the orientation-preserving transformation $[l]$ on $S^{2}$, and $[l,je_{m}^{s}]$ induces the orientation-reversing transformation $-[l]$ on $S^{2}$. We are done since the image of $G$ in the homomorphism $$\displaystyle G$$ $$\displaystyle\to\mathrm{O}(3)$$ $$\displaystyle[l,e_{m}^{s}]$$ $$\displaystyle\mapsto[l]$$ $$\displaystyle[l,je_{m}^{s}]$$ $$\displaystyle\mapsto-[l]$$ is $G^{h}$, and the kernel is $\langle[1,e_{n}]\rangle$. ∎ The column “$G^{h}\leqslant\mathrm{O}(3)$” in Table 2 lists the induced group for each tubical group $G$. Tubical groups of cyclic type induce chiral groups $G^{h}$, and tubical groups of dihedral type induce achiral groups $G^{h}$. As a consequence, the orbit of some starting point $v\in S^{3}$ can be determined as follows: 1. The starting point lies on the circle $K_{h(v)}$. The subgroup $\langle[1,e_{n}]\rangle$ generates a regular $2n$-gon in this circle. 2. For each $t\in G^{h}$, there is a coset of elements that map $K_{h(v)}$ to the circle $K_{t(h(v))}$, and these elements generate a regular $2n$-gon in this circle. Proposition 6.2. Let $G$ be a tubical group. The orbit of a point $v\in S^{3}$ is the union of regular $2n$-gons on the circles $K_{t(h(v))}$ for $t\in G^{h}$. ∎ We call these circles the orbit circles of $G$. 
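The homomorphism used in the proof of Proposition 6.1 can be verified numerically: an element $[l,e]$ whose right part is of the form $e=\exp i\theta$ satisfies $h([l,e]x)=[l]h(x)$, because $e$ commutes with $i$. A Python sketch (our own illustration; quaternions as $(w,x,y,z)$ tuples):

```python
import math

def qmul(a, b):
    # Hamilton product; quaternions are (w, x, y, z) tuples
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(a): return (a[0], -a[1], -a[2], -a[3])
def unit(a):
    n = math.sqrt(sum(t*t for t in a)); return tuple(t/n for t in a)
def qexp(p, t):
    return (math.cos(t), math.sin(t)*p[1], math.sin(t)*p[2], math.sin(t)*p[3])

i = (0, 1, 0, 0)
def h(x):                        # the Hopf map h(x) = x i conj(x)
    return qmul(qmul(x, i), conj(x))

x = unit((1, 2, 3, 4))           # an arbitrary point of S^3
l = unit((3, -1, 2, 0.5))        # left part of a group element [l, e]
e = qexp(i, 0.8)                 # a right part of the form exp(i θ); commutes with i

# [l, e] sends x to conj(l) x e; under h this becomes [l]h(x):
lhs = h(qmul(qmul(conj(l), x), e))
rhs = qmul(qmul(conj(l), h(x)), l)    # [l]h(x) = conj(l) h(x) l
assert max(abs(u - v) for u, v in zip(lhs, rhs)) < 1e-12
```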
If the $G^{h}$-orbit of $h(v)$ is not free, several of these $2n$-gons will share the same circle, and they may overlap. The $2n$-gons may coincide, or they may form polygons with more vertices. It turns out that they can intersperse to form a regular $2fn$-gon or, in the case of dihedral-type groups, the union of two regular $2fn$-gons, for some $1\leq f\leq 5$. The $G^{h}$-orbit of $h(v)$ is always free when $h(v)$ does not lie on a rotation center or a mirror of $G^{h}$. The following corollary follows directly from the previous proposition. Corollary 6.3. Let $G$ be a tubical group and let $v\in S^{3}$ be a point. If the $G^{h}$-orbit of $h(v)$ is free, then the $G$-orbit of $v$ is also free. ∎ For tubical groups of cyclic type, the orbit has the following nice property. Proposition 6.4. Let $G$ be a cyclic-type tubical group. The $G$-orbit of a point $v\in S^{3}$, up to congruence, depends only on the circle of $\mathcal{H}$ on which $v$ lies. Proof. Rotation of $v$ along $K_{h(v)}$ can be performed by a right rotation of the form $[1,\exp i\theta]$. Since the right group of $G$ is cyclic, elements of $G$ have the form $[l,e_{m}^{s}]$. These elements commute with right rotations of the form $[1,\exp i\theta]$. In particular, $$\mathrm{orbit}([1,\exp i\theta]v,G)=[1,\exp i\theta]\mathrm{orbit}(v,G).\qed$$ 6.2 Tubes If $n$ is large, the orbit fills the orbit circles densely. Figure 5(a) shows the cells (i.e. facets) of the polar orbit polytope that correspond to orbit points on three orbit circles. Here orbit points form a regular 80-gon on each orbit circle. We clearly see twisted and intertwined tubes, which are characteristic of these groups, and which we have used to assign their names. Figures 5(c) and 5(e) show a single cell. It has two large flat faces, where successive cells are stacked on top of each other with a slight twist. On the boundary of the tubes in Figure 5(a) we can distinguish two different sets of “parallel” curves.
One set of curves comes from the boundaries between successive slices (cells) of the tubes, and the other set of curves is a trace of the slices of the adjacent tubes. At first sight, it is hard to know which of the two line patterns is which. In Figure 5(b), we have cut the tubes open to show where the boundaries between the slices are, revealing also the three orbit circles. If we let $n$ grow to infinity, the tubes become smooth; see Figure 5(d). We explore the limiting shape of these tubes in Section 6.3. We will see that the tubes are either 3-sided, 4-sided, or 5-sided, and their shape as well as their structure, how they share common boundaries and how they meet around edges, can be understood in terms of the spherical Voronoi diagram on the Hopf sphere $S^{2}$. Figure 5(f) shows this Voronoi diagram for our example. We will show some more examples of cells below (Figures 12 and 13) and in Appendix B. In general, the cell of a polar orbit polytope of a tubical group for large enough $n$ will always exhibit the following characteristic features. • It is a thin slice with a roughly polygonal shape. • The top and bottom faces are parallel. • Moreover, the top and bottom faces are congruent and slightly twisted with a right screw. (There are, however, exceptions for tubical groups of dihedral type: With some choices of starting points, there is an alternative way of stacking the slices: every other slice is upside down, as in Figure 9.) • The top and bottom faces approach the shape of a triangle, quadrilateral or pentagon with curved sides. • The sides are decorated with slanted patterns, which come from the boundaries of the adjacent tubes. • The tube twists around the orbit circle by one full $360^{\circ}$ turn as it closes up on itself. If $n$ is small, these properties break down: The circles are not filled densely enough to ensure that the cells are thin slices.
Sometimes they are regular or Archimedean polytopes, and the orbit polytopes coincide with those of polyhedral groups, and the “tubes” may even be disconnected, see for example Figures 36 or 44 in Appendix B. See Section 6.12 for more examples. Figure 6 shows a case where the $2n$-gons lie on different circles. Then the orbit is free: for any two cells, there is a unique transformation in the group that moves one cell to the other. If the starting point is generic enough, the cells have no symmetries. (See Proposition 6.10 below for a precise statement.) Then the given group is the symmetry group of its orbit polytope: There is a unique transformation mapping one cell to the other even among all orthogonal transformations, not just the group elements. 6.2.1 Mapping between adjacent cells Definition 6.5. The cell axis of a cell of the polar orbit polytope is the orthogonal projection of the orbit circle into the 3-dimensional hyperplane of the cell. The cell axis thus gives the direction in which consecutive cells are stacked upon each other along the orbit circle. It is a line going through the orbit point. Figure 5(c) shows a cell together with its axis. The cell axis is not necessarily a symmetry axis. The cell axis intersects the boundary of the cell in two poles. This is where consecutive cells are attached to each other (unless $n$ is too small and the tubes are disconnected.) More precisely: For the orbit polytope of a generic starting point, the next cell is attached as follows. We translate the cell $C$ from the bottom pole to the top pole. Call the new cell $C^{\prime}$. We rotate $C^{\prime}$ slightly until its bottom face matches the top face of $C$, and we attach it there (with a bend into the fourth dimension, as for every polytope). 6.3 The geometry of the tubes We investigate the structure of the tubes in the limiting case as $n\to\infty$, where they become smooth objects. 
As $n$ gets larger, the orbit circle is filled more and more densely, and the slices get thinner. In the limit, every slice becomes a flat plane convex region, which we call a tangential slice. The tangential slices around an orbit circle sweep out the tangential tube as $v$ moves around the circle. The limit of the polar orbit polytope consists of tangential tubes, and this is what is shown in Figure 5(d). The central projections of these tubes and slices to the sphere are the spherical tubes and the spherical slices of these tubes. The spherical tubes are the Voronoi diagram on $S^{3}$ of the orbit circles. This gives us a way to generalize these notations to any finite set of circles from a common Hopf bundle. For that we first need the definition of the spherical Voronoi diagram. Let $\mathcal{X}$ be a finite collection of nonempty subsets of $S^{d}$, and let $X\in\mathcal{X}$ be one of these subsets. The spherical Voronoi cell of $X$ with respect to $\mathcal{X}$ is $$\operatorname{Vor}_{\mathcal{X}}(X):=\{x\in S^{d}\mid\mathrm{dist}(x,X)\leq\mathrm{dist}(x,Y)\text{ for all }Y\in\mathcal{X}\}.$$ The spherical Voronoi cells of the subsets in $\mathcal{X}$ give a decomposition of $S^{d}$, denoted by $\operatorname{Vor}_{\mathcal{X}}$ and called the spherical Voronoi diagram. If the subsets in $\mathcal{X}$ are singletons, we get the usual spherical Voronoi diagram. Let $\mathcal{C}$ be a finite set of at least two circles from a common Hopf bundle, and let $K\in\mathcal{C}$ be one of them. We can assume that the common Hopf bundle is $\mathcal{H}$. The Voronoi cell of $K$ with respect to $\mathcal{C}$ is called a spherical tube. The intersection of $\operatorname{Vor}_{\mathcal{C}}(K)$ with the hyperplane perpendicular to $K$ at a point $v\in K$ gives two (2-dimensional) patches. One contains $v$ and one contains $-v$. These are spherical slices. The tangential slices and tangential tubes are defined as above in the special case of orbit circles. 
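The membership rule in this definition is easy to exercise numerically. The following Python sketch (with hypothetical sample sites and function names of our own choosing) tests $\mathrm{dist}(x,X)\leq\mathrm{dist}(x,Y)$ for finite sets on $S^{2}$, using the arccosine of the inner product as geodesic distance.

```python
import math

def dist(x, Y):
    # Geodesic distance on the unit sphere from the point x to the finite
    # set Y: the smallest angle arccos<x, y> over all y in Y.
    return min(math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(x, y)))))
               for y in Y)

def in_cell(x, X, sites):
    # Membership in the spherical Voronoi cell of X with respect to `sites`:
    # dist(x, X) <= dist(x, Y) for every Y in the collection.
    return all(dist(x, X) <= dist(x, Y) + 1e-12 for Y in sites)

# Two hypothetical sites on S^2: a singleton and a two-point set.
X = [(1.0, 0.0, 0.0)]
Y = [(0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
sites = [X, Y]

s6, s5 = math.sqrt(6), math.sqrt(5)
q1 = (2 / s6, 1 / s6, 1 / s6)   # unit point closest to X
q2 = (1 / s5, 2 / s5, 0.0)      # unit point closest to Y
print(in_cell(q1, X, sites), in_cell(q2, Y, sites))  # True True
```

When the sites are singletons, this reduces to the usual spherical Voronoi membership test.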
We will show that the spherical tubes are bounded by patches of Clifford tori (Theorem 6.7), and the tangential slices are polygons of circular arcs (Theorem 6.8). 6.3.1 The spherical tubes Given that the circles belong to a common Hopf bundle and the Hopf map transforms distances appropriately (Proposition 4.15), it is no surprise that the Voronoi diagram of the set of circles on $S^{3}$ is closely related to the Voronoi diagram of the corresponding points on $S^{2}$ (see Figure 5(f)). Proposition 6.6. Let $\mathcal{C}\subset\mathcal{H}$ be a finite set of circles from $\mathcal{H}$, and let $K\in\mathcal{C}$ be one of them. The spherical tube $\operatorname{Vor}_{\mathcal{C}}(K)$ is the union of circles from $\mathcal{H}$ that are the preimages under $h$ of the points in $\operatorname{Vor}_{h(\mathcal{C})}(h(K))$, where $h(\mathcal{C}):=\{\,h(C)\mid C\in\mathcal{C}\,\}$. Proof. First we will show that for any point $x^{\prime}\in\operatorname{Vor}_{\mathcal{C}}(K)$, the great circle $K^{\prime}$ from $\mathcal{H}$ on which $x^{\prime}$ lies is also in $\operatorname{Vor}_{\mathcal{C}}(K)$. Since all the circles in $\mathcal{H}$ are Clifford-parallel (Proposition 4.15), $\mathrm{dist}(K^{\prime},C)=\mathrm{dist}(x^{\prime},C)$ for all $C\in\mathcal{C}$. Thus, we get the following equivalence. $$\mathrm{dist}(x^{\prime},K)\leq\mathrm{dist}(x^{\prime},C)\iff\mathrm{dist}(K^{\prime},K)\leq\mathrm{dist}(K^{\prime},C),$$ for all $C\in\mathcal{C}$. That is, $K^{\prime}\subset\operatorname{Vor}_{\mathcal{C}}(K)$. By Proposition 4.15 we know that $$\mathrm{dist}(K^{\prime},K)\leq\mathrm{dist}(K^{\prime},C)\iff\mathrm{dist}(h(K^{\prime}),h(K))\leq\mathrm{dist}(h(K^{\prime}),h(C)),$$ for all $C\in\mathcal{C}$. That is, $K^{\prime}\subset\operatorname{Vor}_{\mathcal{C}}(K)$ if and only if $h(K^{\prime})\in\operatorname{Vor}_{h(\mathcal{C})}\bigl{(}h(K)\bigr{)}$. ∎ 6.3.2 The spherical tube boundaries Theorem 6.7.
Let $\mathcal{C}\subset\mathcal{H}$ be a finite set of circles from $\mathcal{H}$. The boundaries of the corresponding spherical tubes consist of patches of Clifford tori. The edges of these tubes are great circles from $\mathcal{H}$. Proof. As in Proposition 6.6, the boundary between two tubes is the preimage, under the Hopf map $h$, of the boundary between the two corresponding Voronoi regions in $\operatorname{Vor}_{h(\mathcal{C})}$. Such a boundary edge on the Hopf sphere $S^{2}$ is contained in a great circle. A great circle can be described as the points that are equidistant from two antipodal points $\pm p$ on $S^{2}$, and under the inverse Hopf map, these become the points on $S^{3}$ that are equidistant from two absolutely orthogonal circles $K_{p}$ and $K_{-p}$, and this is, by definition, a Clifford torus. The tube edges, where three or more tubes meet, are the preimages of the Voronoi vertices of $\operatorname{Vor}_{h(\mathcal{C})}$. Thus, they are circles from $\mathcal{H}$. ∎ 6.3.3 The tangential slices Theorem 6.8. Let $\mathcal{C}\subset\mathcal{H}$ be a finite set of circles from $\mathcal{H}$. The corresponding tangential slices are (flat) convex regions bounded by circular arcs. Proof. Let $K\in\mathcal{C}$ be one of the circles. We want to consider the tangential slice of $K$ at a point $v\in K$. Without loss of generality, we may assume that $v=i$, because the left rotation $[-vi,1]$ preserves $\mathcal{H}$ (see Proposition 4.12(i)) and maps $v$ to $i$. Then $K$ is actually $K_{i}$, the great circle through the points $1$ and $i$. The tangent direction of $K$ at $v$ is the quaternion 1. The hyperplane $Q$ perpendicular to $K$ at $v$ is spanned by $i$, $j$ and $k$, which we represent in a 3-dimensional coordinate system $\hat{x},\hat{y},\hat{z}$, see Figure 6(b). $Q$ intersects $S^{3}$ in a great 2-sphere $S_{0}$. The spherical tube $\operatorname{Vor}_{\mathcal{C}}(K)$ cuts out two opposite patches from $S_{0}$: the spherical slices. 
Denote by $A$ the slice that contains $v$. The slice $A$ intersects each circle of $\operatorname{Vor}_{\mathcal{C}}(K)$. Thus, by Proposition 6.6, $h(A)$ equals $\operatorname{Vor}_{h(\mathcal{C})}(h(v))$, which we will denote by $B$. Using spherical coordinates, a point in $S_{0}$ has the form $i\cos\theta+p\sin\theta$, where the direction vector $p$ is a unit vector in the $\hat{y},\hat{z}$-plane that plays the role of the longitude, and $\theta\in\mathbb{R}$ is the angular distance on $S_{0}$ between that point and $i$. See Figure 7. Since $p$ and $i$ are orthogonal pure unit quaternions, they anticommute, and in particular, $pip=-ipp=i$. We will now apply the Hopf map $h$ to a point in $S_{0}$: $$\displaystyle h(i\cos\theta+p\sin\theta)$$ $$\displaystyle=(i\cos\theta+p\sin\theta)\,i\,(-i\cos\theta-p\sin\theta)$$ $$\displaystyle=i\cos^{2}{\theta}-pip\sin^{2}{\theta}+p\cos\theta\sin\theta+p\cos\theta\sin\theta$$ $$\displaystyle=i(\cos^{2}{\theta}-\sin^{2}{\theta})+2p\cos\theta\sin\theta$$ $$\displaystyle=i\cos{2\theta}+p\sin{2\theta}.$$ That is, $h$ maps a point whose angular distance from $i$ is $\theta$ to the point in the same direction but with angular distance $2\theta$. Thus, if we identify $S_{0}$ with $S^{2}$ using the natural identification (on $S^{2}$, we denote the $i$, $j$ and $k$ directions by $x$, $y$ and $z$, respectively), we see that $A$ is obtained from $B$ by a radial contraction. That is, we look from $i$ in all directions and multiply the angular distance between $i$ and each point in $B$ by $1/2$. The intersection of $Q$ with the (3-dimensional) tangent space of $S^{3}$ at $v$ is the 2-dimensional tangent plane $T$ of $S_{0}$ at $v$. For our choice $v=i$, $T$ is the plane in $Q$ defined by $\hat{x}=1$. The tangential slice lies in this plane. So to get the tangential slice $A_{T}$ at $v$, we radially contract $B$ to get $A$, and then centrally project $A$ to $T$. We will describe this procedure algebraically.
The radial contraction towards $i$ is the map $$i\cos\theta+p\sin\theta\mapsto i\cos\tfrac{\theta}{2}+p\sin\tfrac{\theta}{2}.$$ This map is not uniquely determined at the South Pole ($\theta=\pi$), and we will tacitly exclude this point from further consideration. Writing $p$ as $j\cos\varphi+k\sin\varphi$, the map can be described as follows: $$\displaystyle\begin{pmatrix}x\\ y\\ z\end{pmatrix}=\begin{pmatrix}\cos\theta\\ \cos\varphi\sin\theta\\ \sin\varphi\sin\theta\\ \end{pmatrix}\mapsto\begin{pmatrix}\hat{x}\\ \hat{y}\\ \hat{z}\end{pmatrix}=\begin{pmatrix}\cos\frac{\theta}{2}\\ \cos\varphi\sin\frac{\theta}{2}\\ \sin\varphi\sin\frac{\theta}{2}\\ \end{pmatrix}$$ Using the identities $\cos\frac{\theta}{2}=\frac{\sqrt{1+\cos\theta}}{\sqrt{2}}$ and $\sin\theta=2\sin\frac{\theta}{2}\cos\frac{\theta}{2}$, the map is written as follows. $$(x,y,z)\mapsto(\hat{x},\hat{y},\hat{z})=\frac{1}{\sqrt{2}}\Bigl{(}\sqrt{1+x},\frac{y}{\sqrt{1+x}},\frac{z}{\sqrt{1+x}}\Bigr{)}$$ Combining this with the central projection from the origin onto $T$ gives the following map $f$. $$f\colon(x,y,z)\mapsto(\hat{x},\hat{y},\hat{z})=\Bigl{(}1,\frac{y}{1+x},\frac{z}{1+x}\Bigr{)}=\Bigl{(}1,\frac{y/x}{1+1/x},\frac{z/x}{1+1/x}\Bigr{)}$$ If we apply $f$ to a boundary edge of $B$, it will turn out the resulting curve is part of a circle. The boundary edges of $B$ are arcs of great circles on $S^{2}$. We obtain such an arc by centrally projecting to $S^{2}$ a straight segment in the tangent plane of $S^{2}$ at $h(v)=i$. Without loss of generality suppose that one of these segments lies on the line $(x,y,z)=(1,c_{0},t)$, $t\in\mathbb{R}$, for some constant $c_{0}\neq 0$, see the blue line in Figure 6(a). The central projection of this line to $S^{2}$ lies on the great circle $$\Bigl{\{}\,\frac{\pm 1}{\sqrt{c_{0}^{2}+t^{2}+1}}(1,c_{0},t)\Bigm{|}t\in\mathbb{R}\,\Bigr{\}}.$$ See the blue curve in Figure 6(a). 
The map $f$ transforms this great circle into the set $$\Bigl{\{}\,\Bigl{(}1,\frac{c_{0}}{1\pm\sqrt{c_{0}^{2}+t^{2}+1}},\frac{t}{{1\pm\sqrt{c_{0}^{2}+t^{2}+1}}}\Bigr{)}\Bigm{|}t\in\mathbb{R}\,\Bigr{\}}.$$ (12) See the blue curve in the tangent plane in Figure 6(b). Straightforward manipulations show that this set is a circle: $$\hat{y}=\frac{c_{0}}{1\pm\sqrt{c_{0}^{2}+t^{2}+1}}\iff\pm\hat{y}\sqrt{c_{0}^{2}+t^{2}+1}=c_{0}-\hat{y}\\ \iff\hat{y}^{2}c_{0}^{2}+\hat{y}^{2}t^{2}+\hat{y}^{2}=\hat{y}^{2}-2c_{0}\hat{y}+c_{0}^{2}\iff\hat{y}^{2}c_{0}^{2}+\hat{y}^{2}t^{2}+2c_{0}\hat{y}=c_{0}^{2}$$ Dividing both sides by $c_{0}^{2}$ and then substituting the relation $\frac{\hat{z}}{\hat{y}}=\frac{t}{c_{0}}$, which follows from (12), gives $$\hat{y}^{2}+\hat{z}^{2}+\frac{2}{c_{0}}\hat{y}=1\iff\Bigl{(}\hat{y}+\frac{1}{c_{0}}\Bigr{)}^{2}+\hat{z}^{2}=\frac{c_{0}^{2}+1}{c_{0}^{2}},$$ (13) which is the equation of a circle. ∎ The circle defined in (13) belongs to the pencil of circles through the points $(\hat{x},\hat{y},\hat{z})=(1,0,\pm 1)$, because these points fulfill equation (13). The center $(\hat{x},\hat{y},\hat{z})=(1,-\frac{1}{c_{0}},0)$ lies on the axis $(\hat{x},\hat{y},\hat{z})=\lambda(c_{0},-1,0)$ perpendicular to the plane $c_{0}x=y$ containing the great circle and the line that started the construction. If the set of great circles $\mathcal{C}$ in the previous theorem is the set of orbit circles of a tubical group $G$, then the spherical Voronoi cell $B$ on $S^{2}$ can have 3, 4, or 5 sides: the cells form a tiling of the sphere by congruent cells, and by Euler's formula such a tiling cannot have cells with six or more sides. Thus, the spherical slice is also 3, 4 or 5 sided. In particular, we get the following corollary. Corollary 6.9. The tangential slice of an orbit of a tubical group is a convex plane region whose boundary consists of 3, 4, or 5 circular arcs.
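Both computational claims of this proof, the angle-doubling identity $h(i\cos\theta+p\sin\theta)=i\cos 2\theta+p\sin 2\theta$ and the fact that $f$ carries the projected great circle into the circle (13), can be spot-checked numerically. The following Python sketch uses a hand-rolled Hamilton product and the convention $h(q)=qi\bar q$ used above; it is a verification aid, not part of the construction.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def hopf(q):
    # h(q) = q i conj(q), with i = (0, 1, 0, 0).
    w, x, y, z = q
    return qmul(qmul(q, (0.0, 1.0, 0.0, 0.0)), (w, -x, -y, -z))

# Angle doubling: h(i cos t + p sin t) = i cos 2t + p sin 2t
# for a unit vector p in the j,k-plane.
theta, phi = 0.7, 1.1
i = (0.0, 1.0, 0.0, 0.0)
p = (0.0, 0.0, math.cos(phi), math.sin(phi))
q = tuple(math.cos(theta)*u + math.sin(theta)*w for u, w in zip(i, p))
expected = tuple(math.cos(2*theta)*u + math.sin(2*theta)*w for u, w in zip(i, p))
assert all(abs(u - w) < 1e-12 for u, w in zip(hopf(q), expected))

# f maps points of the great circle through (1, c0, t)/|.| into the circle (13).
c0 = 1.5
def f(x, y, z):
    return (1.0, y / (1 + x), z / (1 + x))
for t in (-2.0, -0.3, 0.0, 0.8, 3.7):
    norm = math.sqrt(1 + c0*c0 + t*t)
    _, yh, zh = f(1/norm, c0/norm, t/norm)
    assert abs((yh + 1/c0)**2 + zh**2 - (c0*c0 + 1)/(c0*c0)) < 1e-12
print("checks passed")
```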
6.3.4 The tangential tube boundaries The boundary surfaces of the tangential tubes (shown in Figure 5(d)) carry some interesting structures, but we don’t know what these surfaces are. The points on such a surface are equidistant from two circles $K$ and $K^{\prime}$, and we denote the surface by $B(K,K^{\prime})$. We know from Theorem 6.7 that its central projection to the sphere is a Clifford torus $\mathbb{T}$, whose image $h(\mathbb{T})$ is the bisector between $h(K)$ and $h(K^{\prime})$ on $S^{2}$. According to the relation between Voronoi diagrams and polar orbit polytopes (as briefly discussed in Section 2.1.2), a circle $C\in\mathcal{H}$ that belongs to $\mathbb{T}$ is expanded by some factor, depending on its distance to $K$ and $K^{\prime}$, to become a circle on $B(K,K^{\prime})$. Thus, the surface $B(K,K^{\prime})$ is fibered by circles (of different radii) around the origin. Another fibration by circles, this time of equal radii, can be obtained by taking the circular arc that forms the boundary of the tangential slice towards $K^{\prime}$, and sweeping it along the circle $K$. In Figure 6(b), the circle $K$ proceeds from the point $i$ into the fourth dimension, and the circular boundary arc must simultaneously wind around $K$ as it moves along $K$. A third fibration, by circles of the same radius, is obtained in an analogous way from $K^{\prime}$. Each of these fibrations leads to a straightforward parametric description of $B(K,K^{\prime})$. Alternatively, an implicit description of $B(K,K^{\prime})$ by two equations can be obtained as the intersection of two “tangential hypercylinders” in which the two tangential tubes of $K$ and $K^{\prime}$ lie. (If the circle $K$ is described by the system $x_{1}^{2}+x_{2}^{2}=1$, $x_{3}=x_{4}=0$ in an appropriate coordinate system, its tangential hypercylinder is obtained by omitting the equations $x_{3}=x_{4}=0$.)
6.4 Generic starting points We return to the analysis of the polar orbit polytope, and start with the easy generic case. Proposition 6.10. Let $G$ be a tubical group whose right group is $C_{n}$ or $D_{n}$ for $n\geq 6$. Let $v\in S^{3}$ be a point. If the $G^{h}$-orbit of $h(v)$ has no symmetries other than $G^{h}$, then the same holds for the $G$-orbit of $v$: the symmetry group of this orbit is $G$. Proof. Since no $C_{n}$ or $D_{n}$ for $n\geq 6$ is contained in a polyhedral group, the only groups containing $G$ are tubical. In particular, the symmetry group $H$ of the $G$-orbit of $v$ is tubical. Since the symmetry group of the $G^{h}$-orbit of $h(v)$ is $G^{h}$ by assumption, the point $h(v)$ does not lie on any rotation center or a mirror of a supergroup of $G^{h}$. In particular, the $H^{h}$-orbit of $h(v)$ is free. Thus, by Corollary 6.3, the $H$-orbit of $v$ is free. So $G$ and $H$ have the same order. Since $G\leqslant H$, we get $G=H$. ∎ According to our goal of obtaining a geometric understanding through the orbit polytope, as described in Figure 2 in Section 2, we are done, in principle. Since the cell has no nontrivial symmetries, all symmetries of a cell are in $G$. We are in the branch of Figure 2 that requires no further action. Every cell can be mapped to every other cell in a unique way. In particular, for two consecutive cells on a tube it is obvious what the transformation between them is: a small translation along the orbit circle combined with a slight twist around the orbit circle, or in other words, a right screw, effected by the right rotation $[1,e_{n}]$. Between cells on different tubes, the transformation is not so obvious. For example, in Figure 5(c), we see a vertical zigzag of three short edges between the front corner of the upper (roughly pentagonal) face and the corresponding corner of the lower face. These edges are part of a longer sequence of edges, where 3 tubes meet, and which closes in a circular way. 
How are the cells arranged around this “axis”, and how does the group map between them? To investigate this question, it is helpful to move the starting point closer to the axis to look what happens there. In particular, this will help us to distinguish different classes of groups $G$ with the same group $G^{h}$. We will see an example in Section 6.14. Eventually, we will also consider starting points on the axis. 6.5 Starting point close to a mirror Let $G$ be a dihedral-type tubical group, and $p\in S^{2}$ be a point close to the mirror of a reflection of $G^{h}$. Moreover, assume that $p$ does not lie on any rotation center of $G^{h}$. The point $p$ has a neighboring partner $p^{\prime}$, which is obtained from $p$ by reflecting it across that mirror. We call the corresponding circles $K_{p}$ and $K_{p^{\prime}}$ neighboring circles. The red point and the blue point in Figure 7(a) form a neighboring pair for the group $\pm T$. We will now discuss the $G$-orbit under different choices for the starting point $v$ on $K_{p}$. Case 1. Choose $v\in K_{p}$ such that for each orbit point, the closest point on the neighboring circle is also in the orbit. See Figure 7(c). Thus, in the polar $G$-orbit polytope, each cell has a “big” face that directly faces the closest point on the neighboring circle. Case 2. If we move $v$ in one direction, the orbit points on the neighboring circle move in the opposite direction. We choose $v$ such that the orbit points on neighboring circles are in “alternating positions”. That is, the distance between orbit points on neighboring circles is maximized. See Figure 7(d). Thus, in every cell of the polar $G$-orbit polytope, the side that is close to the neighboring circle is divided into two faces, on each a cell of the neighboring tube is stacked. Case 3. Figure 7(e) shows an intermediate situation. 6.6 Starting point on a mirror It is also interesting to see what happens if we move $p$ to lie on that mirror of $G^{h}$. 
We still assume that $p$ does not lie on any rotation center of $G^{h}$. In this case, the neighboring pairs on $S^{2}$ coincide, and thus the corresponding neighboring circles also coincide. We describe next what happens in each of the previous cases. Case 1. The orbit points coincide in pairs, and thus they form a regular $2n$-gon on $K_{p}$. Each orbit point can be mapped to any other orbit point by two different elements of $G$, one of which rotates $K_{p}$ and one of which reverses the orientation of $K_{p}$. Thus, in the polar orbit polytope, each cell has a half-turn symmetry that flips the direction of the cell axis, and exchanges the top and bottom faces. We call it a flip symmetry. (For small $n$, top and bottom faces might not be defined.) It is interesting to notice that for this choice of the starting point, the $G$-orbit of $v$ coincides with the orbit of $v$ under the cyclic-type index-2 subgroup $G_{C}$ of $G$. Since the $G_{C}$-orbit is the same up to congruence for any starting point on $K_{p}$ (Proposition 6.4), the $G_{C}$-orbit of any starting point on $K_{p}$ has the extra symmetries coming from a dihedral-type group that is geometrically equal to $G$. (This geometrically equal group has the generators of $G$ with $j$ replaced by a different unit quaternion $q^{\prime}$ orthogonal to $i$, which is the quaternion $q^{\prime}$ from Proposition 4.5(b).) We put this in a proposition since we will need it later. Proposition 6.11. Let $G_{C}$ be a cyclic-type tubical group, and let $G_{D}$ be a dihedral-type tubical group containing $G_{C}$ as an index-2 subgroup. If $p$ lies on a mirror of $G_{D}^{h}$, then the $G_{C}$-orbit of any point on $K_{p}$ has the symmetries from (a geometrically equal copy of) $G_{D}$. Case 2. Orbit points on $K_{p}$ form a regular $4n$-gon. Each orbit point can be mapped to any other orbit point by a unique element of $G$. 
However, this orbit has extra symmetries, which come from the supergroup of $G$ that we obtain by extending $G$ by the new symmetry $[1,e_{2n}]$. The orbit of this supergroup follows the behavior described in Case 1. Accordingly, each cell of the polar $G$-orbit polytope has a flip symmetry. For almost all choices of $G$, the supergroup is of the same class as $G$ but with twice the parameter $n$. The only exceptional case is $G=\pm\frac{1}{2}[O\times\overline{D}_{4n}]$. In this case, the supergroup is $\pm[O\times D_{4n}]$. Case 3. Orbit points on $K_{p}$ form two regular $2n$-gons whose union is a $4n$-gon with equal angles, and side lengths alternating between two values. The orbit points come in close pairs. Accordingly, the cells of the polar orbit polytope come in a sequence of alternating “up-and-down pancakes” stacked upon each other. See the two cells in Figure 9. 6.7 Starting point close to a rotation center Let $G$ be a cyclic-type tubical group, and let $p$ be an $f$-fold rotation center of $G^{h}$. (We call $p$ an $f$-fold rotation center of some 3-dimensional point group if $f$ is the largest order of a rotation around $p$ in that group. Hence, a 4-fold rotation center of a group is not a 2-fold rotation center of that group.) Let $[g]\in G^{h}$ be the clockwise rotation of $G^{h}$ around $p$ by $\frac{2\pi}{f}$. That is, $g=\cos\frac{\pi}{f}+p\sin\frac{\pi}{f}$. Choose a point $p_{1}\in S^{2}$ close to $p$ that avoids the rotation centers of $G^{h}$. Its images under $[g]$ are then all distinct: $$p_{1},\ p_{2}:=[g]p_{1},\ \ldots,\ p_{f}:=[g]^{f-1}p_{1}$$ Figure 9(a) and Figure 10(a) show these points around a 4-fold rotation center and a 5-fold rotation center, respectively. We want to describe the $G$-orbit for a starting point on $K_{p_{1}}$. By Proposition 6.4, any point on $K_{p_{1}}$ will give the same $G$-orbit, up to congruence. Thus, let $v\in K_{p_{1}}$ be any point on $K_{p_{1}}$ and consider its $G$-orbit.
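The half-angle convention can be sanity-checked numerically: conjugation by $g=\cos\frac{\pi}{f}+p\sin\frac{\pi}{f}$ fixes $p$ and turns directions orthogonal to $p$ by $\frac{2\pi}{f}$. The Python sketch below uses the action $x\mapsto gx\bar g$; whether this counts as clockwise or counterclockwise depends on orientation conventions that we do not fix here.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(g, u):
    # Conjugation u -> g u conj(g), one of the two standard conventions.
    w, x, y, z = g
    return qmul(qmul(g, u), (w, -x, -y, -z))

f = 5
p = (0.0, 0.0, 0.0, 1.0)   # rotation center, as a pure unit quaternion
g = (math.cos(math.pi / f), 0.0, 0.0, math.sin(math.pi / f))

# The axis p is fixed.
assert all(abs(a - b) < 1e-12 for a, b in zip(rotate(g, p), p))

# A unit vector orthogonal to p is turned by the angle 2*pi/f.
u = (0.0, 1.0, 0.0, 0.0)
v = rotate(g, u)
cos_angle = sum(a * b for a, b in zip(u, v))
assert abs(cos_angle - math.cos(2 * math.pi / f)) < 1e-12
print("rotation by 2*pi/f confirmed")
```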
We will now discuss the $G$-orbit of $v$ under different assumptions on the subgroup $H$ of elements of $G$ that preserve $K_{p}$. Case 1. $H$ contains a simple rotation fixing $K_{p}$ of order $f$: Orbit points around $K_{p}$ can be grouped into regular $f$-gons (if $f\geq 3$) or pairs (if $f=2$). See Figure 9(c) and Figure 10(c). Case 2. $H$ contains no simple rotation fixing $K_{p}$: Orbit points around $K_{p}$ form different types of staircases. See Figures 9(d) and 9(f), and Figures 10(d)–10(g). Case 3. $H$ contains a simple rotation fixing $K_{p}$ of order not equal to $f$: This case can only occur when $f=4$ and the order of that simple rotation is 2. Orbit points around $K_{p}$ can be grouped into pairs. See Figure 9(e). 6.8 Starting point on a rotation center It is also interesting to see what happens if we move $p_{1}$ to $p$. In this case, the points $p_{1},\ldots,p_{f}$ coincide with $p$, and thus the corresponding circles $K_{p_{1}},\ldots,K_{p_{f}}$ coincide with $K_{p}$. We describe next what happens in each of the previous cases. Case 1. The orbit points coincide in groups of size $f$, and thus they form a regular $2n$-gon on $K_{p}$. Each orbit point can be mapped to itself by $f$ different elements of $G$. Thus, in the polar orbit polytope, each cell has an $f$-fold rotational symmetry whose axis is the cell axis. Case 2. Orbit points on $K_{p}$ form a regular $2fn$-gon. Each orbit point can be mapped to itself by a unique element of $G$. However, the orbit has extra symmetries, which come from the supergroup of $G$ that we obtain by extending $G$ by the new symmetry $[1,e_{fn}]$. Thus, in total, each orbit point can be mapped to itself by $f$ symmetries. Accordingly, in the polar orbit polytope, each cell has an $f$-fold rotational symmetry whose axis is the cell axis. Case 3. Orbit points on $K_{p}$ form a regular $4n$-gon. Each orbit point can be mapped to itself by 2 different elements of $G$. 
However, the orbit has extra symmetries, which come from the supergroup of $G$ that we obtain by extending $G$ by the new symmetry $[1,e_{2n}]$. Thus, each orbit point can be mapped to itself by 2 extra symmetries. Accordingly, in the polar orbit polytope, each cell has a 4-fold rotational symmetry whose axis is the cell axis. See Section 6.9 for particular examples and Appendix B for coverage of all groups. 6.8.1 Supergroups of cyclic type The cyclic-type supergroups described in Case 2 and Case 3 are listed in Table 3 for each group class and each type of rotation center. For large enough $n$, this supergroup is the largest cyclic-type symmetry group of the orbit. In most cases, this is the same class of group with a larger parameter $n$. The only exceptions are the groups $G=\pm[T\times C_{n}]$ when $p$ is a 2-fold rotation center of $G^{h}=+T$. As can be seen in Table 3, the symmetry groups of cyclic type of the orbit are then of the form $\pm[O\times C_{n^{\prime}}]$ or $\pm\frac{1}{2}[O\times C_{n^{\prime}}]$. The reason for this exceptional behavior can already be seen at the level of the groups $G^{h}$ in three dimensions: On $S^{2}$, the group ${+T}$ is an index-2 subgroup of ${+O}$. The 2-fold rotation centers $p$ of ${+T}$ coincide with the 4-fold rotation centers of ${+O}$, and the orbit has size 6 in both cases. The group $G_{1}:=\pm[T\times C_{n}]$ is an index-2 subgroup of $G_{2}:=\pm[O\times C_{n}]$. One can show that when $n\equiv 0\bmod 4$, the orbits of both groups have a simple rotation fixing $K_{p}$ of order 2 (for $G_{1}$) and of order 4 (for $G_{2}$). In particular, both orbits follow Case 1 above and they form a regular $2n$-gon on each orbit circle. Since they also have the same orbit circles, these two orbits coincide. The other cases ($n\equiv 2\bmod 4$, and $n$ odd) are similar.
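The coincidence of the two orbits can also be confirmed by brute force. The following Python sketch assumes the action $[l,r]\colon x\mapsto\bar lxr$ (the opposite convention produces a mirrored pair of orbits that coincide just as well). It takes the starting point $v=1$, whose Hopf image $i$ is a 2-fold rotation center of ${+T}$, generates its orbits under $\pm[T\times C_{4}]$ and $\pm[O\times C_{4}]$, and checks that they are equal as point sets.

```python
import itertools
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

# Binary tetrahedral group 2T: +-1, +-i, +-j, +-k and (+-1 +-i +-j +-k)/2.
T2 = [tuple(s * e for e in u)
      for u in [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
      for s in (1, -1)]
T2 += [tuple(c / 2 for c in signs)
       for signs in itertools.product((1, -1), repeat=4)]

# Binary octahedral group 2O: 2T plus the 24 quaternions with two
# coordinates +-1/sqrt(2) and two coordinates 0.
r = 1 / math.sqrt(2)
O2 = list(T2)
for a, b in itertools.combinations(range(4), 2):
    for sa, sb in itertools.product((1, -1), repeat=2):
        q = [0.0] * 4
        q[a], q[b] = sa * r, sb * r
        O2.append(tuple(q))

n = 4
e_n = (math.cos(math.pi / n), math.sin(math.pi / n), 0.0, 0.0)  # exp(i*pi/n)
powers, q = [], (1.0, 0.0, 0.0, 0.0)
for _ in range(2 * n):
    powers.append(q)
    q = qmul(q, e_n)

def orbit(left_group, v=(1.0, 0.0, 0.0, 0.0)):
    # Orbit of v under all [l, e_n^s]: x -> conj(l) x e_n^s (one convention).
    return {tuple(round(c, 9) for c in qmul(qmul(conj(l), v), p))
            for l in left_group for p in powers}

print(orbit(T2) == orbit(O2), len(orbit(T2)))  # True 48
```

Here both orbits turn out to be the 48 elements of $2O$ itself; the check works for $n=4$ because $e_{4}=(1+i)/\sqrt{2}$ already lies in $2O$, which is exactly the condition $n\equiv 0\bmod 4$ in disguise.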
Accordingly, all cells of the groups $\pm[T\times C_{n}]$ when $p$ is a 2-fold rotation center (Section B.4.2), appear also as cells of the groups $\pm\frac{1}{2}[O\times C_{n^{\prime}}]$ when $p$ is a 4-fold rotation center (Figure 13), and those when $n$ is a multiple of $4$ also appear for the groups $\pm[O\times C_{n^{\prime}}]$ (Section B.2.1). It is perhaps instructive to look at a particular example and compare the groups $\pm[T\times C_{24}]$ (Figure 44) and $\pm\frac{1}{2}[O\times C_{24}]$ (Figure 13 for $n=12$), which have equal, 4-sided cells. The allowed rotations between consecutive cells, apart from the necessary adjustment of $\pi/24$, are $0^{\circ}$ and $180^{\circ}$ in the first case and $\pm 90^{\circ}$ in the second case. The common supergroup that has all four rotations is $\pm[O\times C_{24}]$ (Figure 38). 6.8.2 Supergroups of dihedral type, and flip symmetries For each cyclic-type tubical group and for each rotation center $p$ of its induced group on $S^{2}$, there is a dihedral-type tubical group whose induced group on $S^{2}$ has a mirror through $p$, and the cyclic-type group is an index-2 subgroup of the dihedral-type group. Thus, by Proposition 6.11, the orbit of the cyclic-type group for a starting point on $K_{p}$ has extra symmetries coming from (a geometrically equal copy of) that dihedral-type tubical group. In particular, each cell of the polar orbit polytope will have a flip symmetry. See the figures in Section 6.9 and Appendix B. The dihedral-type supergroups are listed in Table 3. 6.9 Two examples of special starting points In this section we will discuss two cases of non-generic starting points. In particular, we want to consider orbits of cyclic-type tubical groups where the image of the starting point under $h$ is a rotation center of the induced group. In Table 3 and Appendix B, we summarize the results for the remaining groups and rotation centers. 
6.9.1 $\pm[I\times C_{n}]$, 5-fold rotation center Let $G=\pm[I\times C_{n}]$. We want to consider the $G$-orbit of a point whose image under $h$ is a 5-fold rotation center $p$ of ${+I}$. By Proposition 6.4, any starting point on $K_{p}$ will give the same orbit, up to congruence. Notice also that the other orbit circles correspond to the other 5-fold rotation centers of ${+I}$. Thus, choosing $p$ to be an arbitrary 5-fold rotation center will yield the same orbit, up to congruence. So let $p$ be the 5-fold rotation center $p=\frac{1}{\sqrt{\varphi^{2}+1}}(0,1,\varphi)$, where $\varphi=\frac{1+\sqrt{5}}{2}$. Then $g=-\omega i_{I}=\cos\frac{\pi}{5}+p\sin\frac{\pi}{5}\in 2I$ defines the $72^{\circ}$ clockwise rotation $[g]\in{+I}$ around $p$. By Proposition 4.5, we know the elements of $G$ that preserve $K_{p}$. These elements form a subgroup $H=\langle[g,1],[1,e_{n}]\rangle$ of order $10n$. Proposition 4.5 also tells us that $H$ acts on $K_{p}$ as a 2-dimensional cyclic group. The rotation $[g,1]$ rotates $\vec{K}_{p}$ by $-\frac{\pi}{5}$, while $[1,e_{n}]$ rotates it by $\frac{\pi}{n}$. Thus, the $G$-orbit of a point on $K_{p}$ forms a regular $\operatorname{lcm}(2n,10)$-gon on $K_{p}$. We will discuss the orbit of a point $v\in K_{p}$ depending on the value of $n$. Figure 12 shows cells of the polar orbit polytopes for different values of $n$. $\bullet$ If $n$ is a multiple of 5, then the orbit points form a regular $2n$-gon on each orbit circle. So, every orbit point can be mapped to itself by 5 different elements of $G$. This is reflected in the cells of the polar orbit polytope, where each cell has a 5-fold rotational symmetry whose axis is the cell axis. This case corresponds to Case 1 in Section 6.8, where $H$ contains a simple rotation of order $5$ fixing $K_{p}$. The element $[1,e_{n}]$ of $G$ maps an orbit point to an adjacent one on the same circle.
Correspondingly, on each tube, the cells of the polar orbit polytope are stacked upon each other with a right screw by $\frac{\pi}{n}$. $\bullet$ If $n$ is not a multiple of 5, then the orbit points form a regular $10n$-gon on each orbit circle. That is, the orbit is free. So, every orbit point can be mapped to itself by a unique element of $G$. However, this orbit has extra symmetries. In particular, the rotation $[1,e_{5n}]$ maps each orbit point to an adjacent one on the same circle. Adjoining $[1,e_{5n}]$ to $G$ gives the supergroup $\pm[I\times C_{5n}]$, whose orbit of $v$ follows the first case. Accordingly, each cell of the polar orbit polytope has a 5-fold symmetry whose axis is the cell axis. This case corresponds to Case 2 in Section 6.8, where $H$ does not contain any simple rotation fixing $K_{p}$. The symmetry $[1,e_{5n}]$ (which is not in $G$) maps an orbit point to an adjacent one on the same circle. Correspondingly, on each tube, the cells of the polar orbit polytope are stacked upon each other with a right screw by $\frac{\pi}{5n}$. In accordance with Section 6.8.2, every cell has a flip symmetry, which is not included in $G$. It comes from (a group geometrically equal to) the group $\pm[I\times D_{2n}]$, which contains $G$ as an index-2 subgroup. The top and bottom faces in each cell are congruent. They resemble the shape of a pentagon. This corresponds to the fact that the spherical Voronoi cell of the ${+I}$-orbit of $p$ on the 2-sphere is a spherical regular pentagon, as shown in the top right picture of Figure 12. (Refer to the discussion in Section 6.3.) Since the ${+I}$-orbit of $p$ has size 12, the $G$-orbit of $v$ lies on 12 orbit circles. Accordingly, the cells of the polar orbit polytope can be decomposed into 12 tubes, each with $\operatorname{lcm}(2n,10)$ cells. In the PDF-file of this article, the interested reader can click on the pictures in Figure 12 for an interactive visualization of these tubes.
We refer to Section 6.13 for more details.

In accordance with the program set out in Figure 2 in Section 2 to understand the group by its action on the orbit polytope, we will now work out how each cell is mapped to the adjacent cell in the same tube. This requires a small number-theoretic calculation. The mapping between adjacent cells is obtained by combining an element of the left group with an element of the right group. In particular, to get a rotation by $\tfrac{2\pi}{\operatorname{lcm}(2n,10)}$ along the orbit circle $\vec{K}_{p}$, we have to combine a left rotation by $-a\cdot\frac{\pi}{5}$ with a right rotation by $b\cdot\frac{\pi}{n}$, resulting in the angle $$\frac{b\pi}{n}-\frac{a\pi}{5}=\frac{2\pi}{\operatorname{lcm}(2n,10)}.$$ (14) For example, for $n=12$ we can solve this by $a=2,b=5$. The right screw angle between consecutive slices (or orbit points) is then $\frac{b\pi}{n}+\frac{a\pi}{5}$. Using (14), this can be rewritten as $$\frac{a\pi}{5}+\frac{b\pi}{n}=\frac{2a\pi}{5}+\frac{2\pi}{\operatorname{lcm}(2n,10)}=\left(\frac{a}{5}+\frac{1}{\operatorname{lcm}(2n,10)}\right)\cdot 2\pi,$$ (15) which is $(\frac{2}{5}+\frac{1}{120})\cdot 2\pi$ in our example. This angle is always of the form $(\frac{a}{5}+\frac{1}{\operatorname{lcm}(2n,10)})\cdot 2\pi$ for some integer $a$, in accordance with the requirement to match the pentagonal shape. The value $a$ can never be 0. The rotation angles for different values of $n$ are listed in Figure 12. When $n$ is not a multiple of 5, there is one element of the group that maps a cell to the upper adjacent one. Thus, $a$ has a unique value. When $n$ is a multiple of 5, each cell has a 5-fold symmetry included in the group. Thus, all values of $a$ are permissible.

6.9.2 $\pm\frac{1}{2}[O\times C_{2n}]$, 4-fold rotation center

Let $G=\pm\frac{1}{2}[O\times C_{2n}]$. We want to consider the $G$-orbit of a point whose image under $h$ is a 4-fold rotation center $p$ of ${+O}$.
The discussion will closely parallel that of the group from the previous section, but in connection with the 4-fold rotation, we will also meet Case 3. Any of the 4-fold rotation centers $p$ gives the same orbit. So let $p$ be the 4-fold rotation center $p=(0,1,0)$. Then $g=-\omega i_{O}=\cos\frac{\pi}{4}+p\sin\frac{\pi}{4}\in 2O$ defines the $90^{\circ}$ clockwise rotation $[g]\in{+O}$ around $p$. By Proposition 4.5, we determine the elements of $G$ that preserve $K_{p}$ as the subgroup $H=\langle[g,e_{2n}],[1,e_{n}]\rangle$ of order $8n$, which acts on $K_{p}$ as a 2-dimensional cyclic group. The rotation $[g,e_{2n}]$ rotates $\vec{K}_{p}$ by $-\frac{\pi}{4}+\frac{\pi}{2n}=-\frac{(n-2)\pi}{4n}$. Its order is $$\frac{2\pi}{\gcd(\frac{(n-2)\pi}{4n},2\pi)}=\frac{2\pi}{\frac{\pi}{4n}\gcd(n-2,8n)}=\frac{8n}{\gcd(n-2,8n-8(n-2))}=\frac{8n}{\gcd(n-2,16)}.$$ The other operation, $[1,e_{n}]$, rotates it by $\frac{\pi}{n}$. Thus, the $G$-orbit of a point on $K_{p}$ forms a regular polygon with $\operatorname{lcm}(2n,\frac{8n}{\gcd(n-2,16)})$ sides on $K_{p}$. The denominator $\gcd(n-2,16)$ can take the values $1,2,4,8,16$, but the values $4$, $8$, and $16$ all lead to the same result in the overall expression, and thus we can simplify the expression for the number of sides to $\frac{8n}{\gcd(n-2,4)}$. The structure of the orbit of a point $v\in K_{p}$ depends on $n$. Cells of the polar orbit polytopes for different values of $n$ are shown in Figure 13.

$\bullet$ If $n-2$ is a multiple of $4$, then $\gcd(n-2,4)=4$ and $\frac{8n}{\gcd(n-2,4)}=2n$. The orbit points form a regular $2n$-gon on each orbit circle, and every point can be mapped to itself by 4 different elements of $G$. This is reflected in the polar orbit polytope, where each cell has a 4-fold symmetry whose axis is the cell axis. This corresponds to Case 1 in Section 6.8, where $H$ contains a simple rotation of order $4$ fixing $K_{p}$. The element $[1,e_{n}]$ of $G$ maps an orbit point to an adjacent one on the same circle.
Correspondingly, on each tube, the cells of the polar orbit polytope are stacked upon each other with a right screw by $\frac{\pi}{n}$.

$\bullet$ If $n-2\equiv 2\bmod 4$, then $\gcd(n-2,4)=2$ and $\frac{8n}{\gcd(n-2,4)}=4n$. The orbit points form a regular $4n$-gon on each orbit circle, and every point can be mapped to itself by 2 different elements of $G$. However, this orbit has extra symmetries. In particular, the rotation $[1,e_{2n}]$ maps each orbit point to an adjacent one on the same circle. Adjoining $[1,e_{2n}]$ to $G$ gives the supergroup $\pm[O\times C_{2n}]$, which contains $G$ as an index-2 subgroup. Thus, each orbit point can be mapped to itself by 2 extra symmetries that are not in $G$. Accordingly, as in the first case, every cell of the polar orbit polytope has a 4-fold symmetry whose axis is the cell axis. This corresponds to Case 3 in Section 6.8, where $H$ contains a simple rotation of order $2$ fixing $K_{p}$. The symmetry $[1,e_{2n}]$ (which is not in $G$) maps an orbit point to an adjacent one on the same circle. Correspondingly, on each tube, the cells of the polar orbit polytope are stacked upon each other with a right screw by $\frac{\pi}{2n}$.

$\bullet$ If $n-2$ is odd, then $\gcd(n-2,4)=1$ and $\frac{8n}{\gcd(n-2,4)}=8n$. The orbit is free. The orbit forms a regular $8n$-gon on each orbit circle. Every point can be mapped to any other point by a unique element of $G$. Again, the orbit has extra symmetries. In particular, the rotation $[1,e_{4n}]$ maps each orbit point to an adjacent one on the same circle. Adjoining $[1,e_{4n}]$ to $G$ gives the supergroup $\pm[O\times C_{4n}]$, which contains $G$ as an index-4 subgroup. Thus, each orbit point can be mapped to itself by 4 symmetries. Accordingly, as in the other cases, every cell of the polar orbit polytope has a 4-fold symmetry whose axis is the cell axis. This corresponds to Case 2 in Section 6.8, where $H$ does not contain a simple rotation fixing $K_{p}$.
The symmetry $[1,e_{4n}]$ (which is not in $G$) maps an orbit point to the next one on the same circle. Correspondingly, on each tube, the cells of the polar orbit polytope are stacked upon each other with a right screw by $\frac{\pi}{4n}$.

In accordance with Section 6.8.2, every cell has a flip symmetry, which is not included in $G$. It comes from (a group geometrically equal to) the group $\pm\frac{1}{2}[O\times\overline{D}_{4n}]$, which contains $G$ as an index-2 subgroup. The top and bottom faces in each cell are congruent. They resemble the shape of a rounded square, in agreement with the quadrilateral Voronoi cell on the 2-sphere, as shown in the top right figure in Figure 13. Since the ${+O}$-orbit of $p$ has size 6, the $G$-orbit of $v$ lies on 6 orbit circles. Accordingly, the cells of the polar orbit polytope can be decomposed into 6 tubes, each with $\frac{8n}{\gcd(n-2,4)}$ cells. Similar to the previous section, one can work out the right screw angle (in $G$) between consecutive slices. To summarize: When $n-2$ is odd, there is a unique angle, of the form $(\frac{k_{0}}{4}+\frac{1}{8n})\cdot 2\pi$ (with a specific $k_{0}=1$, $2$, or $3$). When $n-2\equiv 2\bmod 4$, there are two angles: $(\frac{2k+1}{4}+\frac{1}{4n})\cdot 2\pi$ (with arbitrary $k$). When $n-2$ is a multiple of 4, there are four angles: $(\frac{k}{4}+\frac{1}{2n})\cdot 2\pi$ (with arbitrary $k$).

6.10 Consequences for starting points near rotation centers

In Sections 6.7 and 6.8 we have discussed the different cases that can arise for an orbit near a rotation axis and on a rotation axis. The two situations are closely related, and we can confirm this relation by comparing Figure 11 and Figure 12. By the analysis that led to Figure 11, an orbit of $\pm[I\times C_{n}]$ near a 5-fold rotation axis forms a $4/5$, $2/5$, $3/5$, or $1/5$ staircase if $n\equiv 1,2,3,4\bmod 5$, respectively, and it forms pentagons if $n$ is a multiple of $5$.
We can check in Figure 12 that these values are precisely the specified rotations (up to the twist by $\frac{\pi}{5n}$), except when $n$ is a multiple of $5$, and in that case all five rotations are allowed. Similarly, Figure 10 corresponds to Figure 38. Conversely, we can consult the appropriate entries in Appendix B for orbits on a rotation axis to conclude what type of pentagons, quadrilaterals, triangles, pairs, or staircases to expect for an orbit near this rotation axis.

6.11 Mappings between different tubes

Continuing the discussion of the tubes for the groups $G=\pm\frac{1}{2}[O\times C_{2n}]$ from Section 6.9.2, we now carry on with the program set out in Figure 2 in Section 2, by asking, for this example, how cells in different tubes are mapped to each other. The cells in Figure 13 have a roughly four-sided shape. At corners of these quadrilaterals, three tubes meet. To understand what is happening there, we imagine putting a starting point $v^{\prime}$ near a corner. Then $h(v^{\prime})$ is near a three-fold rotation center of $+O$. Near such a rotation center, the orbit forms either a set of triangles, or a left or right staircase. As just discussed, we can check this by consulting the pictures for the orbit on a three-fold rotation axis: Figure 41. We see that those cells of Figure 13 that have a straight line segment $A$ between the top and the bottom face at the corners ($n=6,3,18,12$) correspond to cases where the orbit of $v^{\prime}$ consists of triangles. Indeed, one can imagine three cells arranged around a common edge $A$. (The cells don’t lie perpendicular to the axis $A$, but are twisted.) For the remaining cases ($n=1,4,10,14,8,5,22$) the edge is broken into three parts between the top and the bottom face, and this is where the cells are arranged in a staircase-like fashion.
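The corner analysis above rests on the orbit counts of Section 6.9.2, in particular on the simplification $\operatorname{lcm}\bigl(2n,\frac{8n}{\gcd(n-2,16)}\bigr)=\frac{8n}{\gcd(n-2,4)}$ and the accompanying three-way case split. A quick numerical check (ours, not from the text) confirms both for a range of $n$:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Orbit size on K_p for a 4-fold rotation center under +-1/2[O x C_2n]:
# lcm(2n, 8n/gcd(n-2,16)), which should simplify to 8n/gcd(n-2,4).
for n in range(1, 500):
    size = lcm(2 * n, 8 * n // gcd(n - 2, 16))
    assert size == 8 * n // gcd(n - 2, 4)
    if (n - 2) % 4 == 0:        # Case 1: regular 2n-gon on each circle
        assert size == 2 * n
    elif (n - 2) % 4 == 2:      # Case 3: regular 4n-gon
        assert size == 4 * n
    else:                       # Case 2 (n-2 odd): free orbit, 8n-gon
        assert size == 8 * n
```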
6.12 Small values of $n$

For small values of $n$, some of the cyclic-type tubical groups recover well-known decompositions of regular and uniform polytopes into tubes (more commonly known as rings). These appear in various places in the literature. We list some of the references. Next to each group, we state the rotation center of the induced group that is the image of the starting point.

• $\pm[I\times C_{1}]$ and 5-fold rotation center (Figure 12): We get the decomposition of the 120-cell into 12 tubes, each with 10 regular dodecahedra.121212A remarkable paper model of a Schlegel diagram with two rings was produced by Robert Webb, https://youtu.be/2nTLI89vdzg. An interesting burr puzzle was made in [33] using pieces of these rings. Figure 30 shows a picture of three dodecahedra from one tube, see also [15, Figure 21], [9, p. 75] and Coxeter [12, p. 53].

• $\pm[O\times C_{1}]$ and 4-fold rotation center (Figure 38): We get the decomposition of the bitruncated 24-cell (the 48-cell) into 6 tubes, each with 8 truncated cubes, stacked upon the octagonal faces.

• $\pm[O\times C_{1}]$ and 3-fold rotation center (Figure 39): We get the decomposition of the bitruncated 24-cell (the 48-cell) into 8 tubes, each with 6 truncated cubes, stacked upon the triangular faces [9, p. 75-76].

• $\pm[T\times C_{1}]$ and 3-fold rotation center (Figure 43): We get the decomposition of the 24-cell into 4 tubes, each with 6 octahedra [9, p. 74], [2].

• $\pm[T\times C_{1}]$ and 2-fold rotation center (Figure 44): We get the decomposition of the 24-cell into 6 tubes, each with 4 octahedra, touching each other via vertices.

• $\pm\frac{1}{3}[T\times C_{3}]$ and 3-fold (type I) rotation center (Figure 45): This is a degenerate case. We get the decomposition of the hypercube into 4 “tubes”, but each “tube” is just a pair of opposite cube faces.

We remark that the orbit of $G=\pm[L\times C_{1}]$ is the same, up to congruence, for any starting point.
This follows since the $G$-orbit of a point $v\in\mathbb{R}^{4}$ can be obtained from the $G$-orbit of the quaternion $1$ by applying the rotation $[1,v]$: $$\mathrm{orbit}(v,G)=\{\,\bar{l}v\mid l\in L\}=[1,v]\{\,\bar{l}\mid l\in L\}=[1,v]\,\mathrm{orbit}(1,G).$$

6.13 Online gallery of polar orbit polytopes

The interested reader can explore polar orbit polytopes for the cyclic-type tubical groups with all special choices of starting points in an online gallery that provides interactive three-dimensional views.131313https://www.inf.fu-berlin.de/inst/ag-ti/software/DiscreteHopfFibration/. In the PDF-file of this article, the pictures of the cells in the figures in Section 6.9 and Appendix B are linked to the corresponding entries in the gallery. The polytopes are shown in a central projection to the three-dimensional tangent space at the starting point $v$ of the orbit. The projection center lies outside the polytope, close to the cell $F_{0}$ opposite to $v$. In the projection, $F_{0}$ becomes the outer cell that (almost) encloses all remaining projected cells. The orientation of the outer cell is reversed with respect to the other cells. We are mostly interested not in $F_{0}$ but in the cells near $v$, which are distorted the least in the projection; as a consequence, we go with the majority and ensure that these cells are oriented according to our convention (Section 2.3). For large values of $n$, we have refrained from constructing true Schlegel diagrams, because this would have resulted in tiny inner cells. As a result, cells near the boundary of the projection wrap around and overlap. The goal of the gallery is to show the decomposition of the polytopes into tubes, and how these tubes are structured and interact with each other. It is possible to remove cells one by one to see more structure. The order of the cells is based on the distances of their orbit points to the starting point $v$.
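The remark closing Section 6.12 — that the orbit of $\pm[L\times C_{1}]$ is, up to congruence, independent of the starting point — can also be illustrated numerically: right multiplication by the unit quaternion $v$ is an isometry of $\mathbb{R}^{4}$, so $\mathrm{orbit}(1,G)$ and $\mathrm{orbit}(v,G)$ have identical multisets of pairwise distances. A sketch for $L=2T$ (the helper names are ours):

```python
import itertools
import random

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(a):
    w, x, y, z = a
    return (w, -x, -y, -z)

# The binary tetrahedral group 2T: the 8 units +-1, +-i, +-j, +-k
# together with the 16 quaternions (+-1 +- i +- j +- k)/2.
twoT = {q for u in [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]
        for q in (u, tuple(-c for c in u))}
twoT |= set(itertools.product((0.5, -0.5), repeat=4))

def orbit(v, group):
    # G = +-[L x C_1] acts by x -> conj(l) x; the sign is already in 2T.
    return [qmul(conj(l), v) for l in group]

def dist_profile(points):
    """Sorted multiset of pairwise squared distances (rounded)."""
    return sorted(round(sum((a - b)**2 for a, b in zip(p, q)), 9)
                  for p in points for q in points)

random.seed(1)
v = [random.gauss(0, 1) for _ in range(4)]
norm = sum(c * c for c in v) ** 0.5
v = tuple(c / norm for c in v)       # a random unit quaternion

assert len(twoT) == 24
assert dist_profile(orbit((1, 0, 0, 0), twoT)) == dist_profile(orbit(v, twoT))
```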
6.14 $\pm[T\times C_{n}]$ versus $\pm\frac{1}{3}[T\times C_{3n}]$

Looking at the tubical groups in Table 2, we see that there are groups $G$ with the same induced symmetry group $G^{h}$ on $S^{2}$. Thus, for the same starting point, these groups have the same orbit circles. However, they differ in how the points on different circles are arranged relative to each other. In this section we will consider the case where the induced group is ${+T}$. For the same $n$, we will compare the actions of $\pm[T\times C_{n}]$ and $\pm\frac{1}{3}[T\times C_{3n}]$ on and around the circles of $\mathcal{H}$ that correspond to rotation centers of ${+T}$. We will see that these two groups have different sets of fixed circles of $\mathcal{H}$, which correspond to 3-fold rotation centers of ${+T}$. On such a fixed circle, the size of the orbit is reduced by a factor of 3 (from $24n$ to $8n$, see Table 3). In Figures 16 and 16, we visualize the effect of that difference on the orbit points and the cells of the polar orbit polytope around these circles. We will see that triangles and both types of staircases appear in $\pm[T\times C_{n}]$ and $\pm\frac{1}{3}[T\times C_{3n}]$, depending on $n$. In this sense, there is no sharp geometric distinction between the two families.

2-fold rotation center. Let $p\in S^{2}$ be a $2$-fold rotation center of ${+T}$ and let $[g]\in+T$ be the $180^{\circ}$ rotation around $p$. If $n$ is even, then $[g,e_{2}]$ is in both groups, and it is a simple rotation that fixes $K_{p}$. If $n$ is odd, then $K_{p}$ is not fixed. Thus, for the same $n$, $\pm[T\times C_{n}]$ and $\pm\frac{1}{3}[T\times C_{3n}]$ have the same set of fixed circles that correspond to $2$-fold rotation centers of ${+T}$.

3-fold rotation center. The eight 3-fold rotation centers of ${+T}$ belong to two conjugacy classes, depending on which ${+T}$-orbit they are in.
The rotation centers of type I are the ones in the orbit of $p_{0}=\frac{1}{\sqrt{3}}(-1,-1,-1)$, and the rotation centers of type II are the ones in the orbit of $-p_{0}=\frac{1}{\sqrt{3}}(1,1,1)$. We will see that the group $\pm[T\times C_{n}]$ does not distinguish between the circles $K_{p_{0}}$ and $K_{-p_{0}}$. In particular, the orbit of a starting point on $K_{p_{0}}$ is congruent to the orbit of a starting point on $K_{-p_{0}}$. However, this is not the case for $\pm\frac{1}{3}[T\times C_{3n}]$. The quaternion $-\omega\in 2T$ defines the $120^{\circ}$ clockwise rotation $[-\omega]$ around $p_{0}$. That is, $-\omega=\cos\frac{\pi}{3}+p_{0}\sin\frac{\pi}{3}$. The quaternion $-\omega^{2}\in 2T$ defines the $120^{\circ}$ clockwise rotation $[-\omega^{2}]$ around $-p_{0}$. That is, $-\omega^{2}=\cos\frac{\pi}{3}-p_{0}\sin\frac{\pi}{3}$. By Proposition 4.5, the set of rotations that preserve $K_{p_{0}}$ is the same as the set of rotations that preserve $K_{-p_{0}}$. Let’s look at these rotations inside each of the two groups.

• The elements of $\pm[T\times C_{n}]$ that preserve $K_{p_{0}}$ (and $K_{-p_{0}}$) form the subgroup $$\langle[-\omega,1],[1,e_{n}]\rangle=\langle[-\omega^{2},1],[1,e_{n}]\rangle$$ of order $6n$. The rotation $[-\omega,1]$ rotates $K_{p_{0}}$ by $\frac{\pi}{3}$ in one direction, while $[1,e_{n}]$ rotates it by $\frac{\pi}{n}$ in the other direction. Thus, the $\pm[T\times C_{n}]$-orbit of a starting point on $K_{p_{0}}$ forms a regular $\operatorname{lcm}(2n,3)$-gon on $K_{p_{0}}$. Similarly, the $\pm[T\times C_{n}]$-orbit of a starting point on $K_{-p_{0}}$ forms a regular $\operatorname{lcm}(2n,3)$-gon on $K_{-p_{0}}$. In particular, if $n$ is a multiple of $3$, $\pm[T\times C_{n}]$ has a simple rotation ($[-\omega,e_{3}]$) fixing $K_{p_{0}}$ and a simple rotation ($[-\omega^{2},e_{3}]$) fixing $K_{-p_{0}}$.
If $n$ is not a multiple of $3$, $\pm[T\times C_{n}]$ has no simple rotation fixing $K_{p_{0}}$ or $K_{-p_{0}}$, and the orbit points on the three circles form a left or right staircase.

• The elements of $\pm\frac{1}{3}[T\times C_{3n}]$ that preserve $K_{p_{0}}$ (and $K_{-p_{0}}$) form the subgroup $$\langle[-\omega,e_{3n}],[1,e_{n}]\rangle=\langle[-\omega^{2},e_{3n}^{2}],[1,e_{n}]\rangle$$ of order $6n$. We will now consider the action of this subgroup on the circles $K_{p_{0}}$ and $K_{-p_{0}}$. On $K_{p_{0}}$, the rotation $[-\omega,e_{3n}]$ rotates $K_{p_{0}}$ by $\frac{\pi}{3}-\frac{\pi}{3n}=\frac{(n-1)\pi}{3n}$. Its order is $$\frac{2\pi}{\gcd(\frac{(n-1)\pi}{3n},2\pi)}=\frac{2\pi}{\gcd(\frac{\pi}{3n}(n-1),6n\frac{\pi}{3n})}=\frac{2\pi}{\frac{\pi}{3n}\gcd(n-1,6n)}=\frac{6n}{\gcd(n-1,6)}.$$ Thus, the $\pm\frac{1}{3}[T\times C_{3n}]$-orbit of a starting point on $K_{p_{0}}$ forms a regular polygon with $\operatorname{lcm}(2n,\frac{6n}{\gcd(n-1,6)})=\frac{6n}{\gcd(n-1,3)}$ sides. In particular, if $n-1$ is a multiple of $3$, $\pm\frac{1}{3}[T\times C_{3n}]$ has a simple rotation fixing $K_{p_{0}}$. Otherwise, $G$ has no simple rotation fixing $K_{p_{0}}$. On $K_{-p_{0}}$, the rotation $[-\omega^{2},e_{3n}^{2}]$ rotates $K_{-p_{0}}$ by $\frac{\pi}{3}-\frac{2\pi}{3n}=\frac{(n-2)\pi}{3n}$. Its order is $$\frac{2\pi}{\gcd(\frac{(n-2)\pi}{3n},2\pi)}=\frac{2\pi}{\gcd(\frac{\pi}{3n}(n-2),6n\frac{\pi}{3n})}=\frac{2\pi}{\frac{\pi}{3n}\gcd(n-2,6n)}=\frac{6n}{\gcd(n-2,12)}.$$ Thus, the $\pm\frac{1}{3}[T\times C_{3n}]$-orbit of a starting point on $K_{-p_{0}}$ forms a regular polygon with $\operatorname{lcm}(2n,\frac{6n}{\gcd(n-2,12)})=\frac{6n}{\gcd(n-2,3)}$ sides. In particular, if $n-2$ is a multiple of $3$, $\pm\frac{1}{3}[T\times C_{3n}]$ has a simple rotation fixing $K_{-p_{0}}$. Otherwise, $G$ has no simple rotation fixing $K_{-p_{0}}$.

To summarize, $\pm[T\times C_{n}]$ fixes $K_{p_{0}}$ and $K_{-p_{0}}$ if and only if $n\equiv 0\bmod 3$.
In contrast, $\pm\frac{1}{3}[T\times C_{3n}]$ fixes $K_{p_{0}}$ if and only if $n\equiv 1\bmod 3$, and it fixes $K_{-p_{0}}$ if and only if $n\equiv 2\bmod 3$. Here, we have discussed the situation in terms of orbits near the axis. As discussed in Section 6.10, the results can be checked against Figures 43, 46, and 45.

7 The toroidal groups

7.1 The invariant Clifford torus

We will now study the large class of groups of type $[D\times D]$ or $[C\times C]$ or $[C\times D]$, where both the left and the right group are cyclic or dihedral. At the beginning of Section 5.1, we have seen that these groups have an invariant Clifford torus $\mathbb{T}_{p}^{q}$. All tori $\mathbb{T}^{q}_{p}$ are the same up to orthogonal transformations. We can thus, without loss of generality, restrict our attention to the standard torus $\mathbb{T}^{i}_{i}$. Indeed, this is the torus that is left invariant by the left and right multiplication with the groups $\pm[D_{2m}\times D_{2n}]$ and their subgroups, as follows from Proposition 4.13. When we speak of the torus in this section, we mean the torus $\mathbb{T}^{i}_{i}$, and we denote it by $\mathbb{T}$. Since there are also cases where the left and right subgroup are equal, we additionally have to deal with their achiral extensions. According to Proposition 3.2, the extending element can be taken as $e=*[1,c]$, which is a composition of ${*}\colon(x_{1},y_{1},x_{2},y_{2})\mapsto(x_{1},-y_{1},-x_{2},-y_{2})$, which leaves the torus fixed, with $[1,c]$, for an element $c$ of the right group, which also leaves the torus fixed. This means that the achiral extensions can also be found among the groups that leave the torus fixed. We call these groups, namely the subgroups of $\pm[D_{2m}\times D_{2n}]$ and their achiral extensions, the toroidal groups. We will study and classify these groups by focusing on their action on $\mathbb{T}$. In particular, it will be of secondary interest whether the groups are chiral or achiral, or which Hopf bundles they preserve.
These properties were important to derive the existence of the invariant torus, but we will not use them for the classification. Since $\mathbb{T}$ is a two-dimensional flat surface, the symmetry groups acting on $\mathbb{T}$ bear much resemblance to the discrete symmetry groups of the plane, i.e., the wallpaper groups. These groups are well-studied and intuitive. All wallpaper groups except those that contain 3-fold rotations will make their appearance (12 out of the 17 wallpaper groups). The reason for excluding 3-fold rotations is that a Clifford torus has two distinguished directions, which are perpendicular to each other, and these directions must be preserved. We don’t assume familiarity with the classification of the wallpaper groups. We will develop the classification as we go and adapt it to our needs.

7.2 Torus coordinates and the torus foliation

The Clifford torus belongs to a foliation of $S^{3}$ by a family of tori, which, in terms of Cartesian coordinates $(x_{1},y_{1},x_{2},y_{2})$, have the equations $$x_{1}^{2}+y_{1}^{2}=r_{1}^{2},\ x_{2}^{2}+y_{2}^{2}=r_{2}^{2}$$ (16) for fixed radii $r_{1},r_{2}$ with $0<r_{1},r_{2}<1$ and $r_{1}^{2}+r_{2}^{2}=1$. The standard Clifford torus has the parameters $r_{1}=r_{2}=\sqrt{1/2}$. As limiting cases, $r_{1}=1$ gives the great circle in the $x_{1},y_{1}$-plane, and $r_{1}=0$ gives the great circle in the $x_{2},y_{2}$-plane. Every torus in this family is the Cartesian product of two circles, and thus is a flat torus, with a locally Euclidean metric, forming a $2\pi r_{1}\times 2\pi r_{2}$ rectangle with opposite sides identified.
The best way to see the mapping to the rectangle is to use double polar coordinates: $$\begin{pmatrix}x_{1}\\ y_{1}\\ x_{2}\\ y_{2}\end{pmatrix}=\begin{pmatrix}r_{1}\cos\varphi_{1}\\ r_{1}\sin\varphi_{1}\\ r_{2}\cos\varphi_{2}\\ r_{2}\sin\varphi_{2}\end{pmatrix}$$ (17) Then $\varphi_{1}$ and $\varphi_{2}$ (appropriately scaled) can be used as rectangular two-dimensional coordinates, see Figure 17. The lines with $\varphi_{1}=\mathrm{const}$ and $\varphi_{2}=\mathrm{const}$ are what we would normally call meridian circles and parallel circles of the torus, except that there is no natural way to distinguish the two classes. These circles have radius $\sqrt{1/2}$. The $45^{\circ}$ lines with $\varphi_{2}=\mathrm{const}+\varphi_{1}$ and $\varphi_{2}=\mathrm{const}-\varphi_{1}$ are great circles. They are the circles from the Hopf bundles $\mathcal{H}_{i}$ and $\mathcal{H}^{i}$. Figure 18 gives a picture of corresponding patches around the origin $\varphi_{1}=\varphi_{2}=0$ for three tori. The middle one is the Clifford torus with $r_{1}=r_{2}=\sqrt{1/2}\approx 0.7$, the top one has $r_{1}=0.55<r_{2}\approx 0.835$, and the bottom one has the reversed values $r_{1}$ and $r_{2}$. Each torus is intrinsically flat, i.e., isometric to the Euclidean plane in every small patch, but, as the figure suggests, it is embedded as a “curved” surface inside $S^{3}$. The only “lines” in the torus that are geodesics of $S^{3}$ are those that are parallel to the diagonal lines $\varphi_{2}=\pm\varphi_{1}$. The dotted “vertical” lines connect points with the same $\varphi_{1},\varphi_{2}$-coordinates on different tori. They are great circles, and they intersect every torus of the family orthogonally. In Section 7.11.2, we will see the easy equation $x_{1}x_{3}=x_{2}x_{4}$ (24) for the same torus in a different coordinate system. 
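Two of the statements above lend themselves to a quick numerical check: every point of the parametrization (17) lies on $S^{3}$, and each diagonal line $\varphi_{2}=c+\varphi_{1}$ stays inside a fixed 2-plane through the origin, i.e., it is a great circle — on every torus of the foliation, not only the Clifford torus. A sketch (ours, with our own helper names):

```python
import math
import random

def torus_point(r1, phi1, phi2):
    """Point of the torus with radii (r1, r2), r1^2 + r2^2 = 1, as in (17)."""
    r2 = math.sqrt(1.0 - r1 * r1)
    return (r1 * math.cos(phi1), r1 * math.sin(phi1),
            r2 * math.cos(phi2), r2 * math.sin(phi2))

def close(p, q, eps=1e-12):
    return all(abs(a - b) < eps for a, b in zip(p, q))

random.seed(0)
for _ in range(100):
    r1 = random.uniform(0.1, 0.9)
    c = random.uniform(0, 2 * math.pi)
    t = random.uniform(0, 2 * math.pi)
    p = torus_point(r1, t, c + t)       # a point on the +45-degree line
    # (a) the point lies on the unit sphere S^3
    assert abs(sum(x * x for x in p) - 1.0) < 1e-12
    # (b) the whole line is cos(t)*u + sin(t)*w for the orthonormal pair
    # u, w below, hence it lies in a 2-plane through 0: a great circle
    r2 = math.sqrt(1.0 - r1 * r1)
    u = (r1, 0.0, r2 * math.cos(c), r2 * math.sin(c))
    w = (0.0, r1, -r2 * math.sin(c), r2 * math.cos(c))
    q = tuple(math.cos(t) * a + math.sin(t) * b for a, b in zip(u, w))
    assert close(p, q)
```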
7.3 Symmetries of the torus

Since the torus is locally like the Euclidean plane, and the plane is the universal covering space of the torus, we can investigate the isometric symmetries of the torus by studying the isometries of the plane. However, not every isometry of the plane can be used as a symmetry of the torus; it must be “compatible” with the torus structure. The following theorem makes this precise:

Theorem 7.1. There is a one-to-one correspondence between • groups $G$ of isometries of the torus $[0,2\pi)\times[0,2\pi)$, • groups $\hat{G}$ of isometries $x\mapsto Ax+t$ of the $(\varphi_{1},\varphi_{2})$-plane with the following properties: (i) The directional part $A$ of every isometry in $\hat{G}$ keeps the integer grid $\mathbb{Z}^{2}$ invariant. (ii) The group contains the two translations $\varphi_{1}\mapsto\varphi_{1}+2\pi$ and $\varphi_{2}\mapsto\varphi_{2}+2\pi$.

The proof uses the following lemma, which shows how to lift torus isometries to plane isometries:

Lemma 7.2. Let $\Lambda$ denote the scaled integer grid $\{\,(k_{1}2\pi,k_{2}2\pi)\mid k_{1},k_{2}\in\mathbb{Z}\,\}$, and let $p\colon\mathbb{R}^{2}\to\mathbb{R}^{2}/\Lambda$ be the quotient map from the plane to the torus $[0,2\pi)\times[0,2\pi)$: $$p(\varphi_{1},\varphi_{2})=(\varphi_{1}\bmod 2\pi,\varphi_{2}\bmod 2\pi)$$ For every isometry $T$ of the torus $[0,2\pi)\times[0,2\pi)$, there is an isometry $\hat{T}$ of the plane with the following properties. (a) $T(p(x))=p(\hat{T}(x))$ for all $x\in\mathbb{R}^{2}$. (b) $\hat{T}$ maps the grid $\Lambda$ to a translate of $\Lambda$. The isometry $\hat{T}$ is unique up to translation by a grid vector $t\in\Lambda$.

Proof. Pick some point $y_{0}$ of the torus and let $T(y_{0})=y_{0}^{\prime}$. Find points $x_{0},x_{0}^{\prime}\in\mathbb{R}^{2}$ with $y_{0}=p(x_{0})$ and $y_{0}^{\prime}=p(x_{0}^{\prime})$.
Since $p$ is locally injective, the mapping $T$ can be lifted to a mapping $\hat{T}(x)=p^{-1}(T(p(x)))$ in some neighborhood $N(x_{0})$ of $x_{0}\in\mathbb{R}^{2}$: $$\begin{array}{ccc}\mathbb{R}^{2}&\xrightarrow{\;\hat{T}\;}&\mathbb{R}^{2}\\ p\downarrow\ \ &&\ \ \downarrow p\\ \mathbb{R}^{2}/\Lambda&\xrightarrow{\;T\;}&\mathbb{R}^{2}/\Lambda\end{array}$$ (18) In other words, $\hat{T}(x_{0})=x_{0}^{\prime}$, and for all $x\in N(x_{0})$: $$p(\hat{T}(x))=T(p(x))$$ (19) Moreover, since both $p$ and $T$ are locally isometries, $\hat{T}$ is an isometry in $N(x_{0})$. This isometry can be extended to a unique isometry $\hat{T}$ of the plane. To extend the validity of (19) from $N(x_{0})$ to the whole plane, we look at a path $x_{0}+\lambda t$ from $x_{0}$ to an arbitrary point $x_{0}+t$ of the plane, where $0\leq\lambda\leq 1$. On the torus, it corresponds to a path $p(x_{0}+\lambda t)$, which is mapped to an image path $T(p(x_{0}+\lambda t))$, which in turn can be lifted to a path on $\mathbb{R}^{2}$. Since $p$ is locally invertible and a local isometry, (19) must hold along the whole path, and therefore for an arbitrary point $x_{0}+t$ of the plane. This is claim (a). To show claim (b), consider any $t\in\Lambda$. By (19), $$p(\hat{T}(t))=T(p(t))=T(p(0))$$ that is, all values $\hat{T}(t)$ for $t\in\Lambda$ project to the same point $T(p(0))$ on the torus. It follows that the image of $\Lambda$ under $\hat{T}$ is contained in a translate of $\Lambda$. But then it must be equal to this translate of $\Lambda$, since the isometry $\hat{T}$ maps $\Lambda$ to a congruent grid, and a congruent copy of $\Lambda$ contained in a translate of $\Lambda$ must fill the whole translate. Once $x_{0}$ and $x_{0}^{\prime}$ have been chosen, the construction gives a unique transformation $\hat{T}$. The result can be varied by adding an arbitrary translation $t\in\Lambda$ to $x_{0}$ (before applying $\hat{T}$) or $t^{\prime}\in\Lambda$ to $x_{0}^{\prime}$ (after applying $\hat{T}$). By property (b), it makes no difference whether we are allowed to translate by an element of $\Lambda$ before applying $\hat{T}$ or after (or both). This proves the uniqueness claim of the lemma.
∎

As a consequence, we can write a torus isometry like a plane isometry in the form $x\mapsto Ax+t$ with an orthogonal matrix $A$ and a translation vector $t$, bearing in mind that $t$ is unique only up to grid translations.

Proof of Theorem 7.1. Given a group $G$, we can construct the lifted group $\hat{G}$ as the set of lifted isometries $\hat{T}$ of the transformations $T\in G$ according to the lemma. The group property of $\hat{G}$ can be easily shown by extending the diagram (18): $$\begin{array}{ccccc}\mathbb{R}^{2}&\xrightarrow{\;\hat{T}_{1}\;}&\mathbb{R}^{2}&\xrightarrow{\;\hat{T}_{2}\;}&\mathbb{R}^{2}\\ p\downarrow\ \ &&\ \ \downarrow p&&\ \ \downarrow p\\ \mathbb{R}^{2}/\Lambda&\xrightarrow{\;T_{1}\;}&\mathbb{R}^{2}/\Lambda&\xrightarrow{\;T_{2}\;}&\mathbb{R}^{2}/\Lambda\end{array}$$ The translations $\varphi_{1}\mapsto\varphi_{1}+2\pi$ and $\varphi_{2}\mapsto\varphi_{2}+2\pi$ arise as lifts of the identity $\mathrm{id}\in G$. It is clear that a matrix $A$ keeps the scaled integer grid $\Lambda:=\{\,(k_{1}2\pi,k_{2}2\pi)\mid k_{1},k_{2}\in\mathbb{Z}\,\}$ invariant (Property (b)) if and only if it keeps the standard integer grid $\mathbb{Z}^{2}$ invariant (Property (i)). Conversely, given a transformation $\hat{T}$ in the group $\hat{G}$, we can define $T$ as follows: For a point $y_{0}$ of the torus, pick a point $x_{0}$ with $p(x_{0})=y_{0}$, and define $T(y_{0})$ through the relation (18): $T(y_{0}):=p(\hat{T}(x_{0}))$. The choice of $x_{0}$ is ambiguous. It is determined only up to a translation by $t\in\Lambda$, but we see that this has no effect on $T(y_{0})$: $$p(\hat{T}(x_{0}+t))=p(\hat{T}(x_{0})+t^{\prime})=p(\hat{T}(x_{0}))$$ By property (i), or property (b), $t^{\prime}\in\Lambda$, and therefore the ambiguity evaporates through the projection $p$. ∎

7.3.1 Torus translations

The simplest operations are the ones that appear as translations on the torus, modulo $2\pi$. We denote them by $$R_{\alpha_{1},\alpha_{2}}\colon(\varphi_{1},\varphi_{2})\mapsto(\varphi_{1}+\alpha_{1},\varphi_{2}+\alpha_{2})$$ in accordance with (1). In this notation, a left rotation $[\exp\alpha i,1]$ turns out to be a negative translation along the $45^{\circ}$ direction: $R_{-\alpha,-\alpha}$.
A right rotation $[1,\exp\alpha i]$ is a translation in the $-45^{\circ}$ direction: $R_{\alpha,-\alpha}$. Arbitrary torus translations can be composed from left and right rotations, and the general translation is written in quaternion notation as $$R_{\alpha_{1},\alpha_{2}}=\left[\exp(\tfrac{-\alpha_{1}-\alpha_{2}}{2}i),\,\exp(\tfrac{\alpha_{1}-\alpha_{2}}{2}i)\right].$$ The torus translations $R_{\alpha,0}$ and $R_{0,\alpha}$ along the $\varphi_{1}$- and $\varphi_{2}$-axis are simple rotations, leaving the $x_{2},y_{2}$-plane or the $x_{1},y_{1}$-plane fixed, respectively. One should bear in mind that all “translations”, as they appear on the torus, are actually rotations of $S^{3}$. (Only the left and right rotations among them may be called translations of $S^{3}$ with some justification, because they correspond to the translations in elliptic 3-space.)

7.3.2 The directional group: symmetries with a fixed point

We pick the point $O=(\sqrt{1/2},0,\sqrt{1/2},0)$ with torus coordinates $\varphi_{1}=\varphi_{2}=0$ as a reference point or origin on $\mathbb{T}$. Every isometry of $\mathbb{T}$ can be decomposed in a unique way into a symmetry that leaves $O$ fixed (the directional part), followed by a torus translation (the translational part). Let us therefore study the symmetries that leave $O$ fixed. In the plane, these would be all rotations and reflections. However, according to Theorem 7.1 we can only use symmetries that leave the standard square grid $\mathbb{Z}^{2}$ invariant, apart from a translation. This allows rotations by multiples of $90^{\circ}$, as well as reflections in the coordinate axes and in the $45^{\circ}$-lines. In the plane, these seven operations together with the identity form the dihedral group $D_{8}$, the symmetry group of the square. We denote the group by $D_{8}^{\mathbb{T}}$ to indicate that we think of its elements as transformations of $S^{3}$ that leave the torus $\mathbb{T}$ invariant. Table 4 summarizes these operations and their properties.
For each operation, we have chosen a symbol indicating the axis direction in case of a reflection, or otherwise some suggestive sign, and a name. We also give the quaternion representation, the effect in terms of the $\varphi_{1},\varphi_{2}$-coordinates, and the order of the group element. Some transformations may swap the two sides of $\mathbb{T}$, exchanging the tori with parameters $r_{1},r_{2}$ and $r_{2},r_{1}$. This is indicated by a “$-$” in the column “side”, and the names of these operations include the term “swap”. The nonswapping operations leave every torus of the foliation (16) invariant, not just the “central” Clifford torus. The column “det” indicates whether the operation is orientation-preserving ($+$) or orientation-reversing ($-$). One must keep in mind that the operation on the torus $\mathbb{T}$ induces a transformation of the whole $S^{3}$, and what appears as a reflection in the planar $\varphi_{1},\varphi_{2}$-picture of $\mathbb{T}$ may or may not be an orientation-reversing  transformation of $S^{3}$. Thus, it may at first sight come as a surprise that the torus swap        is orientation-preserving. The reason is that it goes together with a swap of the sides. As shown in Figure 18, it is actually a half-turn around the axis $S^{+}$. (The product of the signs in the “side” and “det” columns tells whether the operation is orientation-preserving when considered purely in the plane.) Figure 18 makes it clear why there is no “pure swap”, no “inversion” at the central torus that would keep the torus pointwise fixed and swap the two sides of the torus: such a mapping would flip the dashed perpendicular lines and thus map the long side of the rectangular patch on the top to the short side of the rectangular patch at the bottom. We see that a swap is only possible if it goes hand in hand with an exchange of the $\varphi_{1}$ and $\varphi_{2}$ axes. 
In particular, such an exchange comes with the rotations by $\pm 90^{\circ}$, the right and left swapturn operations, which are accordingly orientation-reversing. The column “conj.” indicates operations that are conjugate to each other, i.e., geometrically equivalent. Thus, for example, the operation        may, in a different coordinate system, appear as the operation      $-$. By contrast,        and         are distinguished: the axis of        belongs to the invariant left Hopf bundle $\mathcal{H}^{i}$, and the axis of         belongs to the invariant right Hopf bundle $\mathcal{H}_{i}$. The operations        and         are mirrors of each other, i.e., conjugate under an orientation-reversing transformation. This is indicated in the last column. When viewed in isolation, the half-turns $S^{+}=\raise 0.5pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$}$, $S^{-}=\raise 0.5pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$}$, and $F=\raise 0.5pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$}$ are conjugate to each other. However, they are distinct when considering only transformations that leave the torus invariant. 7.3.3 Choice of coordinate system The conjugacies discussed above introduce ambiguities in the representation of torus translations, which depend on the choice of the coordinate system for a given invariant torus. $R_{\alpha_{1},\alpha_{2}}$ may, in a different coordinate system, appear as $R_{-\alpha_{1},-\alpha_{2}}$ (conjugacy by      $\cdot$), or as $R_{\alpha_{2},\alpha_{1}}$ (conjugacy by        ), or as $R_{-\alpha_{2},-\alpha_{1}}$ (conjugacy by        ).
(The operation $R_{\alpha_{1},-\alpha_{2}}$ or $R_{-\alpha_{1},\alpha_{2}}$ is its mirror operation.) The choice of origin in the $\varphi_{1},\varphi_{2}$-plane, on the other hand, has no influence on the torus translations. It only affects the other operations. 7.3.4 The directional group and the translational subgroup We have mentioned that every symmetry of the torus can be decomposed in a unique way (after fixing an origin) into a directional part and a translational part. For a group $G$, the torus translations contained in it form a normal subgroup, the translational subgroup, which we denote by $G_{\Box}$. The directional parts of the group operations form the directional group of $G$. It is a subgroup of $D_{8}^{\mathbb{T}}$, and we will use it as a coarse classification of the toroidal groups. (The directional group is isomorphic to the factor group $G/G_{\Box}$.) The ten subgroups of $D_{8}^{\mathbb{T}}$ are listed in Table 5, together with a characteristic symbol and a name. Figure 19 shows their pictorial representation. The following lemma is useful in order to restrict the translational subgroup for a given directional group. Lemma 7.3. For a group $G$ of torus symmetries, the translational subgroup $G_{\Box}$ is closed under every symmetry in the directional group of $G$. Proof. Assume that $t\in G_{\Box}$, and we have an operation in $G/G_{\Box}$ that is represented by an orthogonal $2\times 2$ matrix $A$. This means that $G$ contains some transformation $x\mapsto Ax+b$. If we conjugate the translation $x\mapsto x+t$ with this transformation, we get $x\mapsto A(A^{-1}(x-b)+t)+b=x+At$, i.e., a translation by $At$. ∎ 7.4 Overview of the toroidal groups After fixing the directional group, we have to look at the translational subgroup, and the interaction between the two. The result is summarized as follows. Proposition 7.4. 
The 4-dimensional point groups that have an invariant torus can be classified into 25 infinite families of toroidal groups, among them • 2 three-parameter families • 19 two-parameter families • 4 one-parameter families as shown in Table 6. The last column of Table 6 shows the names of these groups in the classification of Conway and Smith.141414To get a closer correspondence with our parameterization for the groups of type                and           $\cdot$      in the first two rows, we swap the role of the left and right factors in the generators given in Conway and Smith. Effectively, we consider the mirror groups. Accordingly, we have adapted the Conway–Smith convention of writing $\frac{1}{f}[C_{m}\times C_{n}^{(s)}]$, by decorating the left factor with the parameter $s$. More details are given in Appendix G. We make a comparison in Section 7.12. There is one difficulty that we have not addressed: We look at the groups that leave one particular Clifford torus invariant. However, there are some groups, in particular small groups, that have several invariant Clifford tori. This leads to ambiguities. For example, a torus translation by $180^{\circ}$ on one torus may appear as a swapturn        on a different torus. We investigate these cases in detail in Section 7.11. The natural constraint on the parameters $m$ and $n$ is $m,n\geq 1$ in all cases of Table 6, in the sense that all these choices (in a few cases under the additional constraint that $m\equiv n\pmod{2}$) lead to valid groups. 
(But note that some extra evenness constraints are already built into the notation, for example, when we write $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{2m,2n}^{\textbf{pm}}$ instead of $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}^{\textbf{pm}}$.) For the swapturn groups $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\raise 0.55pt\hbox{$\scriptstyle\circlearrowleft$}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{a,b}$, the natural choices are $a,b\geq 0$ except for $(a,b)=(0,0)$. The stricter conditions on $m$ and $n$ in Table 6 are imposed in order to exclude duplications. We will now go through the categories one by one. This closely parallels the classification of the wallpaper groups. When appropriate, we use the established notations for wallpaper groups to distinguish the torus groups. We have to choose suitable parameters for the different dimensions of each wallpaper group, and in some cases, we have to refine the classification of wallpaper groups because different axis directions are distinguished. 7.5 The torus translation groups, type                These are the groups that contain only torus translations. The pure translation groups are the simplest class, but they are also the richest type of groups, requiring three parameters for their description. 
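Since everything in this section is built from torus translations, the quaternion formula for $R_{\alpha_{1},\alpha_{2}}$ from Section 7.3.1 can first be sanity-checked numerically. The sketch below is an illustration, not part of the classification: it assumes the Hamilton quaternion product and the action $[l,r]\colon x\mapsto\bar{l}xr$ on quaternions $x=z_{1}+z_{2}j$ (the convention that reproduces the stated behavior of left and right rotations), and writes a point with torus angles $(\varphi_{1},\varphi_{2})$ in the coordinates $(x_{1},y_{1},x_{2},y_{2})$ used above.

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions represented as (w, x, y, z) tuples
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def exp_i(t):
    # exp(t*i) as a quaternion
    return (math.cos(t), math.sin(t), 0.0, 0.0)

def torus_point(phi1, phi2, r1=math.sqrt(0.5), r2=math.sqrt(0.5)):
    # the quaternion r1*exp(phi1*i) + r2*exp(phi2*i)*j,
    # i.e. the point (x1, y1, x2, y2) on the torus with radii r1, r2
    return (r1*math.cos(phi1), r1*math.sin(phi1),
            r2*math.cos(phi2), r2*math.sin(phi2))

def act(l, r, x):
    # the isometry [l, r]: x -> conj(l) * x * r  (assumed action convention)
    return qmul(qmul(qconj(l), x), r)

def translation(a1, a2):
    # quaternion pair for R_{a1,a2} as in the displayed formula above
    return exp_i((-a1 - a2) / 2), exp_i((a1 - a2) / 2)

# R_{a1,a2} should shift the torus angles by exactly (a1, a2)
p = torus_point(0.3, 0.8)
l, r = translation(0.5, -0.2)
q = act(l, r, p)
expected = torus_point(0.3 + 0.5, 0.8 - 0.2)
assert all(abs(u - v) < 1e-12 for u, v in zip(q, expected))

# a left rotation [exp(a*i), 1] should act as R_{-a,-a}
q = act(exp_i(0.7), (1.0, 0.0, 0.0, 0.0), p)
assert all(abs(u - v) < 1e-12
           for u, v in zip(q, torus_point(0.3 - 0.7, 0.8 - 0.7)))
```

Both assertions pass, confirming that the quaternion pair in the displayed formula acts on the torus angles as the translation $(\alpha_{1},\alpha_{2})$.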
The translations $(\alpha_{1},\alpha_{2})$ with $R_{\alpha_{1},\alpha_{2}}\in G$ form an additive group modulo $(2\pi,2\pi)$, and hence a lattice modulo $(2\pi,2\pi)$. In accordance with Theorem 7.1 we can also view it as a lattice in the plane that contains all points whose coordinates are multiples of $2\pi$, see Figure 20. We parameterize these lattices with three parameters $m,n,s$: The lattice subdivides the principal diagonal from $(0,0)$ to $(2\pi,2\pi)$ into some number $m\geq 1$ of segments. Then we choose $t_{1}=({\frac{2\pi}{m},\frac{2\pi}{m}})$ as the first generator of the lattice. The second parameter $n\geq 1$ is the number of lattice lines parallel to the principal diagonal that run between $(0,0)$ and $(2\pi,0)$, including the last one through $(2\pi,0)$. In the figure, we have $m=2$ and $n=5$. On each such line, the points are equidistant with distance $\frac{2\pi}{m}\cdot\sqrt{2}$. The first parallel lattice line thus contains a unique point $t_{2}=(\frac{\pi}{n},-\frac{\pi}{n})+(x,x)$ with $0\leq x<\frac{2\pi}{m}$, and we choose $x$ as the third parameter. The range from which $t_{2}$ can be chosen is indicated by a double arrow in the figure. We still have to take into account the ambiguity from the choice of the coordinate system (Section 7.3.3). The choice of origin is no problem, since a translation does not depend on the origin. Also, the “flip” ambiguity from      $\cdot$ is no problem at all: Rotating the coordinate system by $180^{\circ}$ maps the lattice to itself. The “swap” ambiguity from        , however, is more serious, as it exchanges the coordinate axes: $\alpha_{1}\leftrightarrow\alpha_{2}$. 
(From        , we get no extra ambiguity, since $\raise 0.5pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$}=\raise 0.5pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$}\cdot\raise 0.5pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$}$.) To eliminate this ambiguity, we look at the vectors $t_{1}-t_{2}$ and $t_{2}$. They also form a lattice basis, and they span a parallelogram whose diagonal $t_{1}$ lies on the $\alpha_{1}=\alpha_{2}$ axis. The alternate choice of the basis will reflect the parallelogram at this diagonal. Thus, the choices $x$ and $\frac{2\pi}{m}-x$ will lead to the same group. We can achieve a unique representative by stipulating that $t_{2}$ is not longer than $t_{1}-t_{2}$. This means that we restrict $t_{2}$ to the lower half of the range, including the midpoint, which is marked in the figure: $0\leq x\leq\frac{\pi}{m}$.151515This easy way of dealing with the duplications caused by        is the reason for preferring the oblique axes of Figure 20 for measuring the parameters $m$ and $n$ over the more natural $\alpha_{1},\alpha_{2}$-axes. This oblique system is also aligned with the specification of the group by its left and right group (of left translations and right translations) that underlies the classic classification, see Appendix G. Curiously, these duplications caused by        were overlooked by Conway and Smith [8], although none of the previous classifications had missed them [20, p. 62, groupe I], [35, p. 20, item §1, formula (2)], [15, p. 55, first paragraph].
Finally, we look at the point $nt_{2}$, which lies on the $45^{\circ}$ line through $(2\pi,0)$. We have to ensure that it is one of the existing lattice points on this line because additional points would contradict the choice of $m$. Thus $$nt_{2}=(\pi,-\pi)+(nx,nx)=(2\pi,0)+s(\tfrac{2\pi}{m},\tfrac{2\pi}{m})$$ for some integer $s$, or in other words $$x=\frac{\pi}{n}+s\cdot\frac{2\pi}{mn}$$ Combining this with the constraint $0\leq x\leq\frac{\pi}{m}$, we get $$-\frac{m}{2}\leq s\leq-\frac{m}{2}+\frac{n}{2}$$ (20) This range contains $\lceil\frac{n}{2}\rceil$ integers if $m$ is odd and $\lceil\frac{n+1}{2}\rceil$ integers if $m$ is even. In particular, there is always at least one possible value $s$. Proposition 7.5. The point groups that contain only torus translations can be classified as follows: For any integers $m,n\geq 1$ and any integer $s$ in the range (20), there is one such group, the torus translation group $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}^{(s)}$, of order $mn$. It is generated by $R_{\frac{2\pi}{m},\frac{2\pi}{m}}$ and $R_{\frac{2\pi}{n}+\frac{2s\pi}{mn},\frac{2s\pi}{mn}}$. In terms of quaternions, these generators are $[\exp(-\tfrac{2\pi}{m}i),1]\text{ and }[\exp(-\tfrac{(m+2s)\pi}{mn}i),\exp\tfrac{\pi i}{n}].$ We emphasize that the two parameters $m$ and $n$ play different roles in this parameterization, and there is no straightforward way to read off the parameters of the mirror group from the original parameters $m,n,s$. (See for example the entries 11/01 and 11/02 in Table 17.) We have observed above that $x$ and $x^{\prime}=\frac{2\pi}{m}-x$ lead to the same group, and the same is true for $x^{\prime}=\frac{2\pi}{m}+x$. 
In terms of $s$ this means that the parameters $s^{\prime}=-m-s$ and $s^{\prime}=s+n$ lead to the same group as $s$. In Section 7.11, when we discuss duplications, it will be convenient to allow values $s$ outside the range (20). In particular, it is good to remember that $s=0$ corresponds to a generating point on the $\alpha_{1}$-axis. 7.5.1 Dependence on the starting point Proposition 7.6. Any two full-dimensional orbits of a toroidal translation group are linearly equivalent. Proof. Let $G$ be a toroidal translation group. We will show that any full-dimensional $G$-orbit can be obtained from the $G$-orbit of the point $(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}},0)$ by applying an invertible linear transformation. Let $v\in\mathbb{R}^{4}$ be a point whose $G$-orbit is full-dimensional. This is equivalent to requiring that the projections of $v$ to the $x_{1},y_{1}$-plane and to the $x_{2},y_{2}$-plane are not zero. We can map $v$ to a point $v^{\prime}$ of the form $(r_{1},0,r_{2},0)$, with $r_{1}\neq 0$ and $r_{2}\neq 0$, by applying a rotation of the form $$R_{\alpha_{1},\alpha_{2}}=\begin{pmatrix}R_{\alpha_{1}}&0\\ 0&R_{\alpha_{2}}\end{pmatrix}.$$ (21) The new point $v^{\prime}$ can be mapped to the point $(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}},0)$ by applying a matrix of the form $$\operatorname{diag}(\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2})=\begin{pmatrix}\lambda_{1}&0&0&0\\ 0&\lambda_{1}&0&0\\ 0&0&\lambda_{2}&0\\ 0&0&0&\lambda_{2}\\ \end{pmatrix}.$$ (22) Since torus translations commute with the linear transformations (21) and (22), we are done. ∎ Frieder and Ladisch [17, Proposition 6.3 and Corollary 8.4] proved that the same conclusion holds for any abelian group: All full-dimensional orbits are linearly equivalent to each other in this case. 7.6 The torus flip groups, type           $\cdot$      These groups are generated by torus translations together with a single torus flip. Adding the flip operation is completely harmless. 
Conjugation with a flip changes $R_{\alpha_{1},\alpha_{2}}$ to $R_{-\alpha_{1},-\alpha_{2}}$, and therefore does not change the translation lattice at all. The order of the group doubles. If we choose the origin at the center of a $2$-fold rotation induced by a torus flip, then $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}^{(s)}$ is generated by $$[\exp(-\tfrac{2\pi i}{m}),1],[\exp(-\tfrac{(m+2s)\pi i}{mn}),\exp\tfrac{\pi i}{n}],[j,j].$$ 7.7 Groups that contain only one type of reflection These are the torus reflection groups                  and           $-$     , as well as the torus swap groups                  and                 . The groups of type                  and           $-$      are geometrically the same, because        (or        ) exchanges vertical mirrors with horizontal mirrors. Thus, Table 6 contains no entries for           $-$     . The groups                  and                  are mirrors, and their treatment is similar. If the directional part of a transformation is a reflection (in the plane), the transformation itself can be either a reflection or a glide reflection. In both cases there is an invariant line. We will classify the groups by placing a letter F on the invariant line and looking at its orbit. We need a small lemma that is familiar from the classification of the wallpaper groups: Lemma 7.7. If a two-dimensional lattice has an axis of symmetry, then the lattice is either (1) a rectangular lattice that is aligned with the axis, or (2) a rhombic lattice, which contains in addition the midpoints of the rectangles. In case (1), the symmetry axis goes through a lattice line or half-way between two lattice lines. In case (2), the symmetry axis goes through a lattice line. 
For an example, see the upper half of Figure 22, where the mirror lines are drawn as solid lines. Proof. Assume without loss of generality that the symmetry axis is the $y$-axis. (We may have to translate the lattice so that it no longer contains the origin.) With every lattice point $(x,y)$, the lattice also contains the mirror point $(-x,y)$, and thus $(2x,0)$ is a horizontal lattice vector. It follows that there must be a lattice point $(x_{0},y_{0})$ with smallest positive $x$-coordinate, since otherwise there would be arbitrarily short lattice vectors. Consider the horizontal lattice line $L$ through $(x_{0},y_{0})$. There are two cases, see Figure 21. (a) $(0,y_{0})$ is also a lattice point, and $(H,0)=(x_{0},0)$ is a lattice basis vector. (b) $(0,y_{0})$ is not a lattice point, and $(H,0)=(2x_{0},0)$ is a lattice basis vector. Now look at the next-higher horizontal lattice line $L^{\prime}$ above $L$, and choose a lattice point $(x^{\prime},y^{\prime})$ on $L^{\prime}$. $L^{\prime}$ contains the points $(x^{\prime}+kH,y^{\prime})$ for $k\in\mathbb{Z}$, and therefore a point $(x,y^{\prime})$ in the interval $-H/2\leq x\leq H/2$. The value of $x$ cannot be in the range $-x_{0}<x<0$ or $0<x<x_{0}$ because this would contradict the choice of $(x_{0},y_{0})$. Thus, either (i) $x=0$ or (ii) both points $(\pm x_{0},y^{\prime})$ are in the lattice. In case (a), both possibilities (i) and (ii) hold simultaneously, and this leads to a rectangular lattice with the axis through lattice points. If (b) and (ii) hold, we have a rectangular lattice with the axis between lattice lines. If (b) and (i) hold, we have a rhombic lattice. ∎ 7.7.1 The torus reflection groups, type                  We distinguish two major cases. M) The group contains a mirror reflection. G) The group contains only glide reflections. In both cases, every orientation-reversing transformation has a vertical invariant line.
(Actually, since the translation $\varphi_{1}\mapsto\varphi_{1}+2\pi$ is always an element of the group, by Theorem 7.1, the invariant lines come in pairs $\varphi_{1}=\beta$ and $\varphi_{1}=\beta+\pi$.) As announced, we observe the orbit of the letter F. We put the bottom endpoint of the F on an invariant line $\ell$. First we look at the orbit under those transformations that leave $\ell$ invariant, see the left side of Figure 22. In case G, the images with and without reflection alternate along $\ell$. In case M, they are mirror images of each other. In case M, we have a mirror symmetry, and by Lemma 7.3, the translational subgroup must be closed under the mirror symmetry. Lemma 7.7 gives the two possibilities of a rectangular or a rhombic translational subgroup. Combining these translations with the mirror operations leads to the two cases in the top row of Figure 22. In case G, we cannot apply Lemma 7.7 right away. Let $H$ be the vertical distance between consecutive points on the axis. If we combine each glide reflection with a vertical translation by $-H$, we get mirror reflections, as in case M. To this modified group, we can apply Lemma 7.7, and we conclude that the translational group must either form a rectangular or a rhombic pattern. Adding back the translation by $H$ to the orientation-reversing transformations leads to the results in the lower row of Figure 22. In the rhombic case in the lower right picture we see that, when we try to combine glide reflections with a rhombic translational subgroup, we generate mirror symmetries, and thus, this case really belongs to case M. The picture looks different from the corresponding picture in the upper row because there are two alternating types of invariant lines: mirror lines, and lines with a glide reflection. Depending on where we put the F, we get different pictures. 
We are thus left with three cases, which we denote by superscripts that are chosen in accordance with the International Notation for these wallpaper groups: • mirror/rectangular: $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hrule height=6.3pt,width=0.4pt}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{pm}}$, • mirror/rhombic: $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hrule height=6.3pt,width=0.4pt}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{cm}}$, and • glide/rectangular: $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hrule height=6.3pt,width=0.4pt}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{pg}}$. The groups are parameterized by two parameters $m\geq 1$ and $n\geq 1$, the dimensions of the rectangular grid of translations in the $\varphi_{1}$ and $\varphi_{2}$ directions, see the left part of Figure 23. Since the invariant lines give a distinguished direction, we need not worry about duplications when exchanging $m$ and $n$. The order of each group $G$ is twice the order of the translational subgroup $G_{\Box}$. 7.7.2 The torus swap groups For the groups of type                 , we have to turn the picture by $45^{\circ}$. 
We have the same three cases, $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{pm}}$, $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{cm}}$, and $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{pg}}$, but we must adapt the definition of $m$ and $n$, see the right part of Figure 23. We divide the principal diagonal from $(0,0)$ to $(2\pi,2\pi)$ into $m$ parts and the secondary diagonal from $(0,0)$ to $(2\pi,-2\pi)$ into $n$ parts. We cannot choose $m$ and $n$ freely because the midpoint $(2\pi,0)$ of the square spanned by these two diagonal directions, which represents the identity mapping, is always part of the lattice. 
Therefore, for the rectangular lattice cases $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{pm}}$ and $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{pg}}$, $m$ and $n$ must be even, and the number of lattice points on the torus is $mn/2$. (We lose a factor of 2 compared to                 , because the tilted square in the figure covers the torus twice.) For the rhombic lattice case $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{cm}}$, $m$ and $n$ must have the same parity, and the number of lattice points on the torus is $mn$. We mention that the parameter $m$ in this case coincides with the parameter $m$ for the translations-only case                of Figure 20. The parameter $n$ coincides in the rhombic case; in the rectangular case, it is twice as big. As mentioned, the groups of type                  are mirrors of the groups of type                 , and we need not discuss them separately. Generators for                 ,                  and                 . Whenever a mirror line exists (cm and pm), we choose the origin of the coordinate system on such a line; otherwise (pg), we place it on an axis of glide reflection.
With these conventions, the groups can be generated by the generators listed in Table 7. 7.8 The torus swapturn groups, type           $\scriptstyle\circlearrowleft$      By Lemma 7.3, the lattice of translations must be a square grid. The left part of Figure 24 shows how we parameterize a square grid on the torus. We take the sides $a\geq 0$ and $b\geq 0$ of the grid rectangle spanned by the two points $(0,0)$ and $(2\pi,0)$, measured in grid units. Since $(0,b)$ leads to the same grid rectangle as $(b,0)$, we require $a\geq 1$. Conjugation by        reflects the grid at the principal diagonal. Since the grid is symmetric under $90^{\circ}$ rotations, this has the same effect as reflection at a vertical axis, and it is easy to see that such a reflection swaps the parameters $a$ and $b$. Thus, $(a,b)$ and $(b,a)$ describe the same group, and we can assume $a\geq b$ without loss of generality. The number of grid points, i.e., the size of the translational subgroup, is $a^{2}+b^{2}$, and the order is $4(a^{2}+b^{2})$. The right part of Figure 24 shows the various centers of 2-fold and 4-fold rotations, and a typical orbit. This corresponds to the wallpaper group p4. The grid is generated by the two orthogonal vectors $(\alpha_{1},\alpha_{2})=2\pi(\frac{a}{c^{2}},\frac{b}{c^{2}})$ and $(\alpha_{1},\alpha_{2})=2\pi(\frac{b}{c^{2}},-\frac{a}{c^{2}})$, where $c=\sqrt{a^{2}+b^{2}}$.
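The count $a^{2}+b^{2}$ for the translational subgroup can be confirmed by brute force. The sketch below is an illustrative check, not part of the text: it measures angles in units of $2\pi$, uses exact rational arithmetic, and closes the two generating vectors under addition modulo 1.

```python
from fractions import Fraction

def lattice_size(a, b):
    # translation lattice of the swapturn group, with angles in units of
    # 2*pi: generated by (a, b)/c^2 and (b, -a)/c^2, where c^2 = a^2 + b^2
    c2 = a*a + b*b
    gens = [(Fraction(a, c2), Fraction(b, c2)),
            (Fraction(b, c2), Fraction(-a, c2))]
    seen = {(Fraction(0), Fraction(0))}
    frontier = list(seen)
    while frontier:  # close the generators under addition modulo 1
        p = frontier.pop()
        for g in gens:
            q = ((p[0] + g[0]) % 1, (p[1] + g[1]) % 1)
            if q not in seen:
                seen.add(q)
                frontier.append(q)
    return len(seen)

# the translational subgroup has a^2 + b^2 elements,
# matching the stated group order 4*(a^2 + b^2)
for a in range(1, 5):
    for b in range(0, a + 1):
        assert lattice_size(a, b) == a*a + b*b
```

The closure terminates because the generators are rational multiples of the full turn, so the generated subgroup of the torus is finite.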
If we choose the origin at the center of a $4$-fold rotation induced by a swapturn, then $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\raise 0.55pt\hbox{$\scriptstyle\circlearrowleft$}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{a,b}$ can be generated by $$[\exp\tfrac{(-a-b)\pi i}{a^{2}+b^{2}},\exp\tfrac{(a-b)\pi i}{a^{2}+b^{2}}],[\exp\tfrac{(a-b)\pi i}{a^{2}+b^{2}},\exp\tfrac{(a+b)\pi i}{a^{2}+b^{2}}],*[-j,1].$$ 7.9 Groups that contain two orthogonal reflections, type           $+$      and           $\times$      As in the case of                 , we distinguish, for each axis separately, whether there are mirror reflections or only glide reflections. We know that the glide reflection case is inconsistent with the rhombic lattice (cf. Section 7.7.1). Hence, we have the following cases, see Figure 25. • The grid of translations is a rhombic grid. In this case, both axis directions must be mirrors: c2mm. • The grid of translations is a rectangular grid. In this case each axis direction can be a mirror direction or a glide direction: – p2mm: two mirror directions – p2mg: one mirror direction and one glide direction – p2gg: two glide directions In p2mg, the two families of invariant lines are distinguishable: one family of parallel lines consists of mirror lines, whereas the perpendicular family has only glide reflections. Thus, there are two different types, where the two directions change roles. However, for           $+$     , we need not distinguish two versions of $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$+$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{p2mg}}$, because conjugation with        maps one to the other.
For           $\times$     , on the other hand, the two versions are distinct. They are mirror images. We distinguish $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{p2mg}}$, where the mirror lines are parallel to the principal diagonal $\varphi_{2}=+\varphi_{1}$, and $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{p2gm}}$, where the mirror lines are parallel to the secondary diagonal direction $\varphi_{2}=-\varphi_{1}$.161616 This is in accordance with previous editions of the International Tables of X-Ray Crystallography, which explicitly provided variations of the symbols for different “settings” [21, Table 6.1.1, p. 542 in the 1952/1969 edition]: short symbol pmg, full symbol p2mg, or p2gm for the other setting. The parameters $m$ and $n$ have the same meaning as in the corresponding groups                  and                 . These groups contain torus flips, as the product of two perpendicular reflections. We choose the origin on the center of a $2$-fold rotation induced by a torus flip. For the groups c2mm, we place the origin at the intersection of two mirror lines. Then the groups can be generated by the generators given in Table 8.
This means that the lattice is a square lattice. It appears as a rectangular lattice in one pair of perpendicular directions and as a rhombic lattice in the other directions. Thus, there are only two cases for the translation lattice: The square $n\times n$ lattice with $n^{2}$ translations (the upright grid “U”, Figure 26a), and its rhombic extension with $2n^{2}$ translations (the slanted grid “S”, Figure 26b). Let us first consider the slanted case, see Figure 26b. The lattice appears as a rhombic lattice for the           $+$      directions. From the point of view of the subgroups of type           $+$     , we know that this means that the “glide reflection” case is excluded (cf. the discussion in Section 7.7.1). There must be mirror reflections in the horizontal and vertical axes. For the           $\times$      directions, the lattice appears as a rectangular lattice. According to Section 7.9 we can have the cases mirror/mirror, mirror/glide, glide/glide. But since $90^{\circ}$ rotations are included, the mixed mirror/glide case is impossible. Two cases remain, which we call $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\hbox to 0.0pt{\hss$+$\hss}\kern 0.0pt\hbox to 0.0pt{\hss$\times$\hss}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p4mm}\mathrm{S}}$ and $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\hbox to 0.0pt{\hss$+$\hss}\kern 0.0pt\hbox to 0.0pt{\hss$\times$\hss}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p4gm}\mathrm{S}}$. The latter is shown in Figure 26b. 
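The two lattice counts just mentioned ($n^{2}$ translations for the upright grid "U" and $2n^{2}$ for its rhombic extension "S") can be verified directly. In units of $\pi/n$, so that the full period $2\pi$ corresponds to $2n$ units, the upright grid is generated by $(2,0)$ and $(0,2)$, and the slanted grid by $(1,1)$ and $(1,-1)$. The following sketch (plain Python; the helper name is our own choice) counts the resulting translation subgroups of $(\mathbb{Z}_{2n})^{2}$:

```python
def span(gens, mod):
    """Additive closure of the generators in (Z_mod)^2."""
    group = {(0, 0)}
    frontier = {(0, 0)}
    while frontier:
        frontier = {((a + u) % mod, (b + v) % mod)
                    for (a, b) in frontier for (u, v) in gens} - group
        group |= frontier
    return group

n = 5
upright = span({(2, 0), (0, 2)}, 2 * n)          # grid "U"
slanted = span({(1, 1), (1, 2 * n - 1)}, 2 * n)  # grid "S"; (1,-1) taken mod 2n

print(len(upright), len(slanted))  # n^2 = 25 and 2*n^2 = 50
```

The upright lattice is an index-2 sublattice of its rhombic extension, in agreement with the doubling of the translation count.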
When the lattice appears as a square lattice for the           $+$      directions, the two pairs of directions           $+$      and           $\times$      change roles, and we have two more groups, $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\hbox to 0.0pt{\hss$+$\hss}\kern 0.0pt\hbox to 0.0pt{\hss$\times$\hss}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p4mm}\mathrm{U}}$ and $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\hbox to 0.0pt{\hss$+$\hss}\kern 0.0pt\hbox to 0.0pt{\hss$\times$\hss}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p4gm}\mathrm{U}}$. The first one is shown in Figure 26a. The groups $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\hbox to 0.0pt{\hss$+$\hss}\kern 0.0pt\hbox to 0.0pt{\hss$\times$\hss}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p4mm}}$ have mirrors in all four directions, whereas the groups $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\hbox to 0.0pt{\hss$+$\hss}\kern 0.0pt\hbox to 0.0pt{\hss$\times$\hss}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p4gm}}$ have mirrors in two directions only. To list the generators for the full torus groups, we choose the origin of the coordinate system on the center of a $4$-fold rotation induced by a swapturn, see Table 8. This concludes the discussion of the toroidal groups. 
The reader who wishes to test their understanding of these classes might try, as an exercise, to count all groups of order 100, see Appendix C. 7.11 Duplications As we have seen, every subgroup of a group $\pm[D_{2m}\times D_{2n}]$ has an invariant torus. So far, we have analyzed the groups that leave a fixed torus invariant. We have already mentioned that some subgroups have more than one invariant Clifford torus, and this leads to duplications. Unfortunately, when it comes to weeding out duplications, all classifications (including the classic classification) become messy.¹⁷ The difficulty caused by these ambiguous transformations, in particular in connection with achiral groups, was already acknowledged by Hurley [23, p. 656–7]. We analyze the situation as follows. Every orientation-preserving transformation is of the form $R_{\alpha_{1},\alpha_{2}}$, with $-\pi\leq\alpha_{1},\alpha_{2}\leq\pi$. If $\alpha_{1}\neq\pm\alpha_{2}$, there is a unique pair of absolutely orthogonal invariant planes, and hence, there is a unique invariant Clifford torus on which the transformation appears as a torus translation. We call this torus the primary invariant torus. Our strategy is to analyze the situation backwards. We look at all orientation-preserving transformations that are not torus translations, write them in the form $R_{\alpha_{1},\alpha_{2}}$, and determine the translation vector $(\alpha_{1},\alpha_{2})$ by which they would appear on their primary invariant torus. The result is summarized in the following proposition. The torus translations that lead to ambiguity are shown in Figure 27: Proposition 7.8. The orientation-preserving transformations that have more than one invariant torus are the following: (a) Simple half-turns of the form $\operatorname{diag}(-1,-1,1,1)$. On their primary torus, they appear as a torus translation by $(\pi,0)$ or $(0,\pi)$. There is an infinite family of alternate tori for which they are interpreted as torus flips or torus swaps.
(b) Double rotations $R_{\alpha,\pi\pm\alpha}$. On an alternate torus, they appear as reflections or glide reflections associated to torus swaps        or         . (c) Left and right rotations $R_{\alpha,\pm\alpha}$, including $\mathrm{id}$ and $-\mathrm{id}$. (For $\alpha=\pm\pi/2$, these fall also under case (b).) A left rotation $R_{\alpha,\alpha}$ with $\alpha\neq\pm\pi/2$ appears as a torus translation by $(\alpha,\alpha)$ or by $(-\alpha,-\alpha)$ on every invariant torus. Similarly, a right rotation $R_{\alpha,-\alpha}$ with $\alpha\neq\pm\pi/2$ appears as a torus translation by $(\alpha,-\alpha)$ or by $(-\alpha,\alpha)$ on every invariant torus. Proof. The orientation-preserving transformations that are not torus translations are      $\cdot$ (torus flips) and        and         (reflections and glide reflections associated to torus swaps). Every torus flip is a half-turn, and these are covered in case (a). Let us look at reflections and glide reflections associated to the torus swaps        . The torus swap        at the principal diagonal is the transformation $[i,k]$. Both $i$ and $k$ are pure quaternions, in accordance with the fact that        is a half-turn. The general torus swap of type        is obtained by combining $[i,k]$ with an arbitrary torus translation $[\exp\beta_{l}i,\exp\beta_{r}i]$: $$[i\exp\beta_{l}i,k\exp\beta_{r}i]=[\exp(\tfrac{\pi}{2}i)\exp\beta_{l}i,k(\cos\beta_{r}+i\sin\beta_{r})]=[\exp((\tfrac{\pi}{2}+\beta_{l})i),k\cos\beta_{r}+j\sin\beta_{r}]$$ The right component $k\cos\beta_{r}+j\sin\beta_{r}$ is still a unit quaternion (rotation angle $\pi/2$), and hence the right rotation $[1,\exp\beta_{r}i]$ has no effect on the type of the transformation. This is in accordance with the fact that, on the $\varphi_{1},\varphi_{2}$-torus, a right rotation is a translation perpendicular to the reflection axis of        , whose effect is just to move the reflection axis. 
The left rotation, however, changes the rotation angle from $\pi/2$ to $\pi/2+\beta_{l}$. The result is a rotation of type $R_{\pi+\beta_{l},\beta_{l}}$. As a torus translation $R_{\alpha_{1},\alpha_{2}}$, it lies on the line $\alpha_{1}=\alpha_{2}+\pi$ (and $\alpha_{1}=\alpha_{2}-\pi$, considering that angles are taken modulo $2\pi$), see Figure 27. The operations of type         are the mirrors of        , and hence they appear on the reflected lines $\alpha_{1}=-(\alpha_{2}\pm\pi)$. Left and right rotations have infinitely many invariant tori, but cause no confusion for our classification, because a left rotation will appear as the same left rotation on any invariant torus (possibly with an inverted angle), except when it falls under case (b). ∎ We note the curious fact that the operations that don’t have a unique invariant torus coincide with the operations whose squares are left or right rotations. Corollary 7.9. A group may have more than one invariant torus only if the translational subgroup contains only elements on the diagonals and on the tilted square in Figure 27. This excludes from the search for duplications those groups for which the translational subgroup is sufficiently rich, i.e., when both parameters $m$ and $n$ are large. Still it leaves a large number of cases where one of the parameters is small. We present the list of duplications below. 7.11.1 List of Duplications As mentioned, we have imposed the stricter conditions on $m$ and $n$ (and $a$ and $b$) in Table 6 in order to exclude all duplications. As a rule, among equal groups, we have chosen the group with the larger subgroup of torus translations (with the chosen invariant torus) to stay in the table. Table 9 lists every group $G_{1}$ that is excluded from Table 6, together with a group $G_{2}$ to which it is conjugate, and a conjugation that converts the second group to the first one. 
The conjugations depend on the specific parameterizations that we have chosen and that were given with each class of groups discussed above, in particular in Tables 7 and 8. In this section, we use the notation $G_{1}\doteq G_{2}$ for groups that are geometrically the same, i.e., conjugate under an orientation-preserving transformation, and we reserve the sign “$=$” for groups that are equal in our chosen coordinate system. In some classes, the choice of the two parameters $m$ and $n$ is symmetric (e.g., $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$+$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2mm}}_{m,n}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$+$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2mm}}_{n,m}$). In those cases, we have achieved uniqueness by requiring $m\geq n$ in Table 6. Such symmetries between the parameters, and other general relations are listed first for each type of group in Table 9. This is followed by a list of groups with small parameters that are explicitly excluded in Table 6. We have made some simplifications to keep the table compact. 
As mentioned previously, we sometimes refer to groups $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}^{(s)}$ or $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}^{(s)}$ where the parameter $s$ lies outside the “legal” range (20), in order to avoid case distinctions. The parameter $s$ can be brought into that range by using the equalities $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}^{(s)}=\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}^{(s\pm m)}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}^{(-n-s)}$, and similarly for           $\cdot$     . 
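These identifications can be mechanized. Under the stated equalities, the orbit of $s$ consists of $s$ modulo $m$ together with $-n-s$ modulo $m$, so a canonical representative can be picked as in the sketch below. (The function name and the choice of the minimum as representative are ours; the actual range (20) used in the text may normalize differently.)

```python
def canonical_s(s, m, n):
    """Canonical representative of s under s -> s +/- m and s -> -n - s."""
    # Applying s -> -n - s twice returns to s, so the orbit mod m has
    # at most two elements.
    orbit = {s % m, (-n - s) % m}
    return min(orbit)

# The representative is invariant under both identifications:
print(canonical_s(3, 5, 7), canonical_s(3 + 5, 5, 7), canonical_s(-7 - 3, 5, 7))
```

All three calls agree, so parameters outside the legal range can safely be folded back before comparing groups.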
If the permissible range of parameters $s$ contains only one integer, we omit the parameter and denote the group simply by $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}$ or $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{m,n}$. In such a case, any choice of $s$ will lead to the same group. We have a few cases with more than two equal groups: $$\displaystyle\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{cm}}_{1,1}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{cm}}_{1,1}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{{(0)}}_{1,1}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 
1.0mu^{{(0)}}_{1,2}=\langle\operatorname{diag}(1,1,-1,-1)\rangle\text{ (order 2)}$$ $$\displaystyle\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{pm}}_{2,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{pm}}_{2,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{{(-1)}}_{2,1}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{{(0)}}_{2,2}=\langle\operatorname{diag}(1,1,-1,-1),\operatorname{diag}(-1,-1,1,1)\rangle\cong D_{4}\text{ (order 4)}$$ $$\displaystyle\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2gg}}_{2,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 
0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{pm}}_{4,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{pm}}_{2,4}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{(-2)}_{4,2}=\langle\operatorname{diag}(R_{\pi/2},R_{\pi/2}),\operatorname{diag}(R_{\pi/2},R_{-\pi/2})\rangle\text{ (order 8)}$$ $$\displaystyle\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2gm}}_{2,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{cm}}_{2,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{-45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{pm}}_{2,4}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 
1.0mu^{(-1)}_{2,2}\doteq\langle-\mathrm{id},\operatorname{diag}(1,-1,1,-1),\operatorname{diag}(R_{\pi/2},R_{-\pi/2})\rangle\text{ (order 8)}$$ $$\displaystyle\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2mg}}_{2,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{cm}}_{2,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hbox{\rotatebox{45.0}{\vrule height=8.91013pt,width=0.42pt}}}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{pm}}_{4,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{{(-2)}}_{4,1}\doteq\langle-\mathrm{id},\operatorname{diag}(1,-1,1,-1),\operatorname{diag}(R_{\pi/2},R_{\pi/2})\rangle\text{ (order 8)}$$ $$\displaystyle\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\hbox to 0.0pt{\hss$+$\hss}\kern 0.0pt\hbox to 0.0pt{\hss$\times$\hss}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p4gm\textrm{U}}}_{1}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 
6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$+$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2gg}}_{2,1}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$+$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2gg}}_{1,2}\doteq\langle\operatorname{diag}(-1,-1,1,1),\operatorname{diag}(1,1,-1,1),\operatorname{diag}(1,1,1,-1)\rangle\text{ (order 8)}$$ $$\displaystyle\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{c2mm}}_{2,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2mm}}_{4,2}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2mm}}_{2,4}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{{(-2)}}_{4,2}\text{ (order 16)}$$ To reduce case distinctions, some of these groups $G_{1}$ point to groups $G_{2}$ that are themselves excluded in Table 6, and which must be looked up again in Table 9. The conjugations in Table 9 were found by computer search for particular values of $m$. 
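The group orders claimed in this list are easy to verify by brute force: represent the generators as exact $4\times 4$ integer matrices and close them under multiplication, in the spirit of the computer search mentioned above. A minimal sketch (helper names are ours) confirms two of the orders:

```python
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4))
                 for i in range(4))

def closure(gens):
    """The (finite) group generated by gens, as a set of matrices."""
    group = set(gens)
    frontier = set(gens)
    while frontier:
        frontier = {matmul(w, g) for w in frontier for g in gens} - group
        group |= frontier
    return group

def diag4(a, b, c, d):
    return tuple(tuple(v if i == j else 0 for j, v in enumerate((a, b, c, d)))
                 for i in range(4))

def block(R1, R2):
    """Block matrix diag(R1, R2) for 2x2 blocks."""
    top = tuple(tuple(R1[i]) + (0, 0) for i in range(2))
    bot = tuple((0, 0) + tuple(R2[i]) for i in range(2))
    return top + bot

R90 = ((0, -1), (1, 0))    # rotation R_{pi/2}
R270 = ((0, 1), (-1, 0))   # rotation R_{-pi/2}

# <diag(1,1,-1,-1), diag(-1,-1,1,1)> has order 4:
print(len(closure({diag4(1, 1, -1, -1), diag4(-1, -1, 1, 1)})))
# <diag(R_{pi/2}, R_{pi/2}), diag(R_{pi/2}, R_{-pi/2})> has order 8:
print(len(closure({block(R90, R90), block(R90, R270)})))
```

The same closure routine extends to the other entries once their generators are written as matrices.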
In many cases, the conjugate group or the conjugacy mapping depends on the parity of some parameter. We tried to simplify the entries of the table by manually adjusting them. All conjugations were checked by computer for $m\leq 100$. When the groups are translated to the Conway-Smith classification using Table 6, the duplications have easy algebraic justifications: For example, $C_{2}$ and $D_{2}$ are obviously the same group. Also, $\bar{D}_{4}$ can be replaced by $D_{4}$, see Appendix G.1 for more information. 7.11.2 A duplication example By way of example, we treat one duplication in detail: $$\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,n}^{\textbf{c2mm}}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,2n}^{(\frac{n-1}{2})}\text{, for odd $n$.}$$ (23) Figure 28 shows the action of these groups on the torus for $n=5$. We can confirm that, in accordance with Corollary 7.9, the 10 torus translations of $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,10}^{(2)}$ lie only on a diagonal and on the line $\alpha_{1}+\alpha_{2}=\pm\pi$. 
The latter 5 translations become reflections and glide reflections in $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,5}^{\textbf{c2mm}}$. More precisely, in accordance with Figure 27, they are the reflections at the         diagonal (4 glide reflections and one reflection). The picture actually shows more glide-reflection and reflection axes than the order of the group would allow. The reason is that every glide reflection in this group can also be interpreted as a reflection, at a different axis. We now prove the conjugacy formally. Since these groups have the same order $4n$, it is enough to show that $G_{2}=\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,2n}^{(\frac{n-1}{2})}$ is contained in $G_{1}=\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,n}^{\textbf{c2mm}}$. We do this by checking that the generators of $G_{2}$, under conjugation by the element $h$ from Table 9, are elements of $G_{1}$.
Here are the generators we gave for these groups: $$\displaystyle G_{1}=\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,n}^{\textbf{c2mm}}$$ $$\displaystyle=\langle[1,1],[1,e^{\frac{i2\pi}{n}}],[-1,e^{\frac{\pi i}{n}}],[i,k],[-k,i]\rangle\text{\quad(see Table~{}\ref{tab:full-generators})}$$ $$\displaystyle G_{2}=\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,2n}^{(\frac{n-1}{2})}$$ $$\displaystyle=\langle[e^{-2\pi i},1],[-i,e^{\frac{\pi i}{2n}}],[j,j]\rangle=\langle[-i,e^{\frac{\pi i}{2n}}],[j,j]\rangle\text{\quad(see Section~{}\ref{sec:translations+flip})}$$ We have to choose different conjugations depending on the value of $n$ modulo $4$. 
• For $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,4m-1}^{\textbf{c2mm}}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,8m-2}^{(2m-1)}$, we do conjugation by $h_{1}=[1-j,1]$: $$\displaystyle[\tfrac{1+j}{2},1][-i,e^{\frac{\pi i}{8m-2}}][1-j,1]=[k,e^{\frac{\pi i}{8m-2}}]=[k,e^{\frac{i(14m-3)\pi}{8m-2}}]=[k,-i][1,e^{\frac{i2\pi}{4m-1}}]^{m}\in G_{1}$$ $$\displaystyle[\tfrac{1+j}{2},1][j,j][1-j,1]=[j,j]=[i,k][-k,i]\in G_{1}$$ • For $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,4m-3}^{\textbf{c2mm}}\doteq\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\cdot$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,8m-6}^{(2m-2)}$, we do conjugation by $h_{2}=[1+j,1]$: $$\displaystyle[\tfrac{1-j}{2},1][-i,e^{\frac{\pi i}{8m-6}}][1+j,1]=[-k,e^{\frac{\pi i}{8m-6}}]=[j,j][1,e^{\frac{i2\pi}{4m-3}}]^{m-1}[i,k]\in G_{1}$$ $$\displaystyle[\tfrac{1-j}{2},1][j,j][1+j,1]=[j,j]=[i,k][-k,i]\in G_{1}$$ We can also study this transformation geometrically: What happens to the torus under this coordinate transformation? 
On which other torus do the glide reflections of $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu_{1,n}^{\textbf{c2mm}}$ appear as torus translations? Indeed, there is another simple equation for a Clifford torus that is commonly used. We can transform our equation for the torus $\mathbb{T}$ as follows: $$\displaystyle x_{1}^{2}+x_{2}^{2}$$ $$\displaystyle=x_{3}^{2}+x_{4}^{2}$$ $$\displaystyle x_{2}^{2}-x_{4}^{2}$$ $$\displaystyle=x_{3}^{2}-x_{1}^{2}$$ $$\displaystyle(x_{2}-x_{4})(x_{2}+x_{4})$$ $$\displaystyle=(x_{3}+x_{1})(x_{3}-x_{1})$$ (24) $$\displaystyle\tilde{x}_{2}\tilde{x}_{4}$$ $$\displaystyle=\tilde{x}_{1}\tilde{x}_{3},$$ (25) with transformed coordinates $(\tilde{x}_{1},\tilde{x}_{2},\tilde{x}_{3},\tilde{x}_{4})$. This is, for example, how the torus is introduced in Coxeter [12, Eq. (4.41)], who has a separate section on “the spherical torus” [12, §4.4, p. 35–37]. Now, the coordinate change from (24) to (25) is precisely what the transformation $h_{1}=[1-j,1]$ in our example achieves: $[1-j,1]$ maps the quaternion units $(1,i,j,k)\equiv(x_{1},x_{2},x_{3},x_{4})$ to $(1+j,i-k,-1+j,i+k)\equiv(x_{1}+x_{3},x_{2}-x_{4},-x_{1}+x_{3},x_{2}+x_{4})=(\tilde{x}_{1},\tilde{x}_{2},\tilde{x}_{3},\tilde{x}_{4})$. Many conjugations in Table 9 are of this form. 
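The quaternion manipulations above are easy to double-check with exact integer arithmetic. The sketch below assumes the convention that a pair $[l,r]$ acts on a quaternion $x$ by $x\mapsto \bar{l}xr$ (this matches the unit images just stated), and that pairs multiply componentwise; the helper names are ours:

```python
# Quaternions as integer 4-tuples (w, x, y, z) = w + x*i + y*j + z*k.

def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

ONE, I, J, K = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

def act(l, r, x):
    # assumed convention: the pair [l, r] sends x to conj(l) * x * r
    return qmul(qmul(conj(l), x), r)

# [1-j, 1] maps the units (1, i, j, k) to (1+j, i-k, -1+j, i+k):
h = (1, 0, -1, 0)  # the quaternion 1 - j
images = [act(h, ONE, u) for u in (ONE, I, J, K)]
assert images == [(1, 0, 1, 0), (0, 1, 0, -1), (-1, 0, 1, 0), (0, 1, 0, 1)]

# Left component of [(1+j)/2, 1][j, j][1-j, 1]: (1+j) * j * (1-j) = 2j,
# so after dividing by 2 the product is [j, j], as claimed above.
left = qmul(qmul((1, 0, 1, 0), J), h)
assert left == (0, 0, 2, 0)

# [i, k][-k, i] = [j, j]: pairs multiply componentwise.
assert qmul(I, (0, 0, 0, -1)) == J and qmul(K, I) == J
print("all quaternion identities check out")
```

The same few lines suffice to check the remaining entries of Table 9 numerically for concrete parameter values.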
The reason why we have chosen the example (23) for manual confirmation is that it corresponds to one of two duplications in the Conway-Smith classification that are not literally mentioned there: $$\displaystyle+\tfrac{1}{4}[D_{4}\times\bar{D}_{4n}]$$ $$\displaystyle\doteq+\tfrac{1}{4}[D_{4}\times D_{4n}^{(1)}]\text{ for odd $n$.}$$ $$\displaystyle\pm\tfrac{1}{4}[D_{4}\times\bar{D}_{4n}]$$ $$\displaystyle\doteq\pm\tfrac{1}{4}[D_{4}\times D_{4n}^{(1)}]$$ The second equality appears in Table 9 as $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2mm}}_{2,2m}$ for odd $m$ and $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\times$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\mathbf{p2gm}}_{2,2m}$ for even $m$. The reason behind these duplications is discussed in Section G.1. 7.12 Comparison with the classification of Conway and Smith Looking at the right column of Table 6, we see that our classification and the classification of Conway and Smith [8] have some similarity in the rough categorization. For example the “mixed” groups of type $[C\times D]$ are the torus swap groups (type                 ). In the finer details, however, the two classifications are often quite at odds with each other. Groups that come from one geometric family correspond to different classes in the CS classification from the algebraic viewpoint, depending on parity conditions. On the other hand, some groups that belong together algebraically appear in different categories of our classification. 
While we acquired some understanding of the classic classification of the toroidal groups according to Conway and Smith [8], in particular, of the simplest case of the torus translation groups (type               , corresponding to $[C\times C]$, see Appendix G), most entries in the right column of Table 6 were filled with the help of a computer, by generating the groups from the specified generators, comparing them by the fingerprints described in Section 10.2, and recognizing patterns. One reason for the difficulty is the distinction between haploid and diploid groups, a term borrowed from biology by Conway and Smith [8]. A group is diploid if it contains the central reflection $-\mathrm{id}$; otherwise, it is haploid.¹⁸ Threlfall and Seifert [35, § 5] used the terms zweistufig (“two-step”) and einstufig (“one-step”) for these groups. In the classic classification, the diploid groups arise easily, but the haploid groups must be specially constructed as index-2 subgroups of diploid groups. Thus, the presence or absence of $-\mathrm{id}$ appears at the very beginning of the classic classification by quaternions. In the notation of [8], diploid and haploid groups are distinguished by the prefixes $\pm$ and $+$. For our geometric construction of the toroidal groups, this distinction is ephemeral. The central reflection $-\mathrm{id}$ is the torus translation $R_{\pi,\pi}$ in the center of the parameter square. It depends on some parity conditions of the translation parameters whether this element belongs to $G_{\Box}$.
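Such parity conditions amount to a lattice-membership test: whether $-\mathrm{id}=R_{\pi,\pi}$ lies in the translation lattice. Working in units of $\pi/(mn)$, so that $2\pi$ is $2mn$ units and the target $(\pi,\pi)$ is $(mn,mn)$, and assuming the rectangular $m\times n$ lattice for the pm/pg families and its rhombic extension for cm (our reading of the lattice conventions), a sketch of the check:

```python
def span(gens, mod):
    """Additive closure of the generators in (Z_mod)^2."""
    group = {(0, 0)}
    frontier = {(0, 0)}
    while frontier:
        frontier = {((a + u) % mod, (b + v) % mod)
                    for (a, b) in frontier for (u, v) in gens} - group
        group |= frontier
    return group

def diploid_rectangular(m, n):
    # lattice generated by (2*pi/m, 0) and (0, 2*pi/n), in units of pi/(m*n)
    mod = 2 * m * n
    return (m * n, m * n) in span({(2 * n, 0), (0, 2 * m)}, mod)

def diploid_rhombic(m, n):
    # rhombic extension, generated by (pi/m, pi/n) and (pi/m, -pi/n)
    mod = 2 * m * n
    return (m * n, m * n) in span({(n, m), (n, (-m) % mod)}, mod)

print(diploid_rectangular(2, 2), diploid_rectangular(2, 3))  # True False
print(diploid_rhombic(1, 3), diploid_rhombic(1, 2))          # True False
```

The outcomes reproduce the parity criteria worked out from Figure 23: the rectangular case requires both $m$ and $n$ even, the rhombic case requires $m$ and $n$ of the same parity.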
(For example, one can easily work out from Figure 23 that the groups $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hrule height=6.3pt,width=0.4pt}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{pm}}$ and $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hrule height=6.3pt,width=0.4pt}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{pg}}$ are diploid if $m$ and $n$ are even. The groups $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$\vbox{\hrule height=6.3pt,width=0.4pt}$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{cm}}$ are diploid if $m$ and $n$ have the same parity.) In elliptic geometry, where opposite points of $S^{3}$ are identified, the distinction between haploid and the corresponding diploid groups disappears, or in other words, only diploid groups play a role in elliptic space. 8 The polyhedral groups We will now explain the polyhedral groups, which are related to the regular 4-dimensional polytopes. The regular 4-dimensional polytopes have a rich and beautiful structure. They and their symmetry groups have been amply discussed in the literature, see for example [10, Chapters VIII and XIII], [15, §26, §27], and therefore we will be brief, except that we study in some more detail the groups that come in enantiomorphic pairs. Table 10 gives an overview. In Du Val’s enumeration of the achiral groups [15, p. 61], the descriptions of the orientation-reversing elements of the groups #41 $(T/V;T/V)^{*}$ and #42 $(T/V;T/V)^{*}_{-}$ are swapped by mistake.
We follow Goursat and Hurley and go with the convention that the group with the more natural choice of elements should be associated to the name without a distinguishing subscript. Du Val himself, in the detailed discussion of these groups [15, p. 73], follows the same (correct) interpretation. Table 16 in Appendix A lists these groups with generators and cross references to other classifications. We mention that pictures of the cube, the 120-cell, the 24-cell, and the bitruncated 24-cell (also known as the 48-cell, defined in Section 8.6.1) arise among the illustrations for the tubical groups, see Section 6.12. 8.1 The Coxeter notation for groups For the geometric description of the groups, we will use the notations of Coxeter, with adaptations by Conway and Smith [8, §4.4]. In the basic Coxeter group notation, a sequence of $n-1$ numbers $[p,q,\ldots,r,s]$ stands for the symmetry group of the $n$-dimensional polytope $\{p,q,\ldots,r,s\}$. This is generated by $n$ reflections $R_{1},\ldots,R_{n}$. Each reflection is its own inverse: $(R_{i})^{2}=1$, and any two adjacent reflections generate a rotation whose order is specified in the sequence: $(R_{1}R_{2})^{p}=(R_{2}R_{3})^{q}=\cdots=(R_{n-1}R_{n})^{s}=1$. Nonadjacent mirrors are perpendicular: $R_{i}R_{j}=R_{j}R_{i}$ for $|i-j|\geq 2$. $G^{+}$ denotes the chiral part of the group $G$, which contains products of an even number of reflections. When just one of the numbers $p,q,\ldots,r,s$ is even, say the one between $R_{k}$ and $R_{k+1}$, there are three further subgroups. The two subgroups $[^{+}p,q,\ldots,r,s]$ and $[p,q,\ldots,r,s^{+}]$ consist of words that use respectively $R_{1},\ldots,R_{k}$ and $R_{k+1},\ldots,R_{n}$ an even number of times. Their intersection is the index-4 subgroup $[^{+}p,q,\ldots,r,s^{+}]$. Coxeter’s original notation for $[^{+}p,q,\ldots]$ is $[p^{+},q,\ldots]$. A second pair of brackets, like in $[[3,3,3]]$, indicates a swap between a polytope and its polar, following [11].
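To make the notation concrete, the three-dimensional group $[3,4]$ (the symmetry group of the octahedron $\{3,4\}$) can be generated from three explicit mirrors and the defining relations checked by a short computation. The following Python sketch is our own illustration; the particular choice of mirror matrices is an assumption made here, not taken from the text:

```python
def mat_mul(A, B):
    """Multiply two 3x3 matrices given as tuples of tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def det(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
R1 = ((0, 1, 0), (1, 0, 0), (0, 0, 1))   # mirror swapping x1 and x2
R2 = ((1, 0, 0), (0, 0, 1), (0, 1, 0))   # mirror swapping x2 and x3
R3 = ((1, 0, 0), (0, 1, 0), (0, 0, -1))  # mirror negating x3

def power(A, n):
    P = I3
    for _ in range(n):
        P = mat_mul(P, A)
    return P

# the defining relations of [3,4]
assert power(R1, 2) == power(R2, 2) == power(R3, 2) == I3
assert power(mat_mul(R1, R2), 3) == I3
assert power(mat_mul(R2, R3), 4) == I3
assert power(mat_mul(R1, R3), 2) == I3   # nonadjacent mirrors commute

# generate the group: closure under right multiplication by the mirrors
group = {I3}
frontier = [I3]
while frontier:
    A = frontier.pop()
    for R in (R1, R2, R3):
        B = mat_mul(A, R)
        if B not in group:
            group.add(B)
            frontier.append(B)

assert len(group) == 48                       # order of [3,4]
assert sum(det(A) == 1 for A in group) == 24  # chiral part [3,4]^+
```

The closure computation reproduces the order 48 of $[3,4]$, and the determinant count confirms that the chiral part has index 2.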
Some further extensions of the notation will be needed for the axial groups in Section 9, see Table 15. In some cases, we have extended the Coxeter notations in an ad-hoc manner, allowing us to avoid other ad-hoc extensions of [8]. 8.2 Strongly inscribed polytopes We say that a polytope $P$ is strongly inscribed in a polytope $Q$ if every vertex of $P$ is a vertex of $Q$, and every facet of $Q$ contains a facet of $P$. Figure 29 shows two three-dimensional examples. This relation between $P$ and $Q$ is reversed under polarity: With respect to an origin that lies inside $P$, the polar polytope $Q^{\Delta}$ will be strongly inscribed in $P^{\Delta}$. In four dimensions, we will show two instances of this phenomenon where a rotated copy of the polar polytope $P^{\Delta}$ of a polytope $P$ can be strongly inscribed into $P$. Among the regular polytopes in three dimensions, there are just some degenerate cases, where every facet of $Q$ contains only an edge of $P$: In a cube $Q$, a regular tetrahedron $P$ can be inscribed, with the six edges of $P$ on the six square sides of $Q$. In a dodecahedron $Q$, a cube $P$ can be inscribed, with its twelve edges on the twelve pentagons of $Q$. The tetrahedron inscribed in a dodecahedron does not fall in this category, since its edges go through the interior of the dodecahedron. 8.3 Symmetries of the simplex The full symmetry group of the 4-simplex is $[3,3,3]$. The group $[[3,3,3]]$ additionally swaps (by negation) the simplex with its polar. The chiral versions are $[3,3,3]^{+}$ and $[[3,3,3]]^{+}$. The group $[[3,3,3]^{+}]$ allows the flip to the polar only in connection with a reversal of orientation. 8.4 Symmetries of the hypercube (and its polar, the cross-polytope) The full symmetry group of the hypercube is $[3,3,4]$. It is isomorphic to the semidirect product of coordinate permutations with sign flips $\{(\pm 1,\pm 1,\pm 1,\pm 1)\}\rtimes S_{4}$. This group has four subgroups of interest.
The cube has a natural 2-coloring of the vertices that gives alternating colors to adjacent vertices. One can check that the vertices of each color form a cross-polytope. This cross-polytope is strongly inscribed in the cube: Each facet of the hypercube contains exactly one (tetrahedral) facet of that cross-polytope. The subgroup $[3,3,4^{+}]$ contains those elements that preserve the 2-coloring. Equivalently, these are the elements that have an even number of sign changes. The subgroup $[^{+}3,3,4]$ contains those elements that have an even permutation of coordinates. It is isomorphic to $\{(\pm 1,\pm 1,\pm 1,\pm 1)\}\rtimes A_{4}$. The subgroup $[^{+}3,3,4^{+}]$ is their intersection. The subgroup $[3,3,4]^{+}$ contains the orientation-preserving transformations. These are the transformations where the parity of the sign changes matches the parity of the permutation. It is interesting to note that the 3-dimensional group $[3,4]$ closely mirrors the picture for $[3,3,4]$, see Table 11. Both in three and four dimensions, the “half-cube” is itself a regular polytope: in 3 dimensions, it is the regular tetrahedron, while in 4 dimensions, it is the cross-polytope. The subgroup $[3,4^{+}]=TO$ preserves the 2-coloring of the vertices, i.e. it contains all symmetries of the tetrahedron. Its subgroup $[^{+}3,4^{+}]=+T$ contains the orientation-preserving symmetries of the tetrahedron. The group $[^{+}3,4]=\pm T$ contains the orientation-preserving symmetries of the tetrahedron together with its central reflection. It is also characterized as those symmetries that subject the three space axes to an even permutation. The group $[3,4]^{+}$ contains all orientation-preserving transformations in $[3,4]$. For the groups $+T$ and $TO$ we have used alternate Coxeter names, which are equivalent to the standard ones, in order to highlight the analogy with 4 dimensions, cf. [6, p. 390]. 
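The orders of the subgroups just described can be confirmed by brute force. The following Python sketch is our own check, enumerating $[3,3,4]$ as pairs (coordinate permutation, sign pattern); the parity conditions are exactly those stated above:

```python
from itertools import permutations, product

def perm_parity(p):
    """0 for an even permutation (given as a tuple), 1 for an odd one."""
    p, parity = list(p), 0
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            parity ^= 1
    return parity

# [3,3,4]: every combination of a coordinate permutation with sign flips
elements = [(perm, signs) for perm in permutations(range(4))
            for signs in product((1, -1), repeat=4)]
assert len(elements) == 384                      # order of [3,3,4]

neg = lambda signs: signs.count(-1) % 2          # parity of sign changes

# [3,3,4^+]: even number of sign changes (preserves the 2-coloring)
assert sum(neg(s) == 0 for _, s in elements) == 192
# [^+3,3,4]: even permutation of the coordinates
assert sum(perm_parity(p) == 0 for p, _ in elements) == 192
# [^+3,3,4^+]: intersection of the two conditions (index 4)
assert sum(neg(s) == 0 and perm_parity(p) == 0 for p, s in elements) == 96
# [3,3,4]^+: parity of the sign changes matches parity of the permutation
assert sum(neg(s) == perm_parity(p) for p, s in elements) == 192
```

Each of the three index-2 subgroups has order 192, and their common refinement $[^{+}3,3,4^{+}]$ has index 4, as claimed.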
8.5 Symmetries of the 600-cell (and its polar, the 120-cell) The 120 quaternions of $2I$ form the vertices of a 600-cell $P_{600}=\{3,3,5\}$. These quaternions are the centers of the 120 dodecahedra of the polar 120-cell $Q_{120}=\{5,3,3\}$, which has 600 vertices. The full symmetry group of $P_{600}$ (or $Q_{120}$) is $[3,3,5]$. Its chiral version is $[3,3,5]^{+}$. The group has four interesting subgroups, which come in enantiomorphic versions. Under the left rotations by elements of $2I$, or in other words, under the group $\pm[I\times C_{1}]$, the 600 vertices of $Q_{120}$ decompose into five orbits, as shown by the five labels $A,B,C,D,E$ for the cell $F_{0}$ in Figure 29(a), cf. [15, Figure 22, p. 84]. We can regard this as a 5-coloring of the vertices. (The points of each color are labeled $X,X^{\prime},X^{\prime\prime},X^{\prime\prime\prime}$ according to the horizontal levels in this picture, but this grouping has otherwise no significance.) One can indeed check that the mapping from a pentagonal face to the opposite face with a left screw by $\pi/5$, as effected by the elements of $\pm[I\times C_{1}]$, preserves the coloring. The vertices of one color form a regular tetrahedron inscribed in a regular dodecahedron, and there are thus five ways to inscribe such a “left” tetrahedron in a regular dodecahedron. There is an analogous “right” 5-coloring by the orbits under $\pm[C_{1}\times I]$, and correspondingly, there are five ways of inscribing a “right” tetrahedron in a regular dodecahedron. One such tetrahedron is shown in Figure 29(b). (The unions of these five or ten tetrahedra inside a dodecahedron form nice nonconvex star-like polyhedral compounds, see [15, Figures 14 and 15a–b]. See also https://blogs.ams.org/visualinsight/2015/05/15/dodecahedron-with-5-tetrahedra/ from the AMS blog “Visual Insight”.)
The left and right tetrahedra are mirrors of each other, and they can be distinguished by looking at the paths of length 3 on the dodecahedron between vertices of a tetrahedron: These paths are either S-shaped zigzag paths (for left tetrahedra) or they have the shape of an inverted S (for right tetrahedra). Every color class consists of the points $2I\cdot p_{0}$ for some starting point $p_{0}$, and hence it forms a rotated copy $P_{600}^{\prime}$ of the 600-cell $P_{600}$. This polytope is strongly inscribed in $Q_{120}$: For each dodecahedron of $Q_{120}$, there is a unique left rotation in $\pm[I\times C_{1}]$ mapping $F_{0}$ to this dodecahedron, and in this way we get 120 images of the starting tetrahedron. Figure 29(a) shows these tetrahedra in three adjacent dodecahedra. (As a sanity check, one can perform a small calculation: A vertex is shared by four tetrahedra (one tetrahedron in each of the four dodecahedra meeting in the vertex), and this gives a consistent vertex count, since every tetrahedron has four vertices and $120\cdot 4/4=120$.) The red points in Figure 29(b) form part of an analogous 600-cell $P_{600}^{\prime R}$ spanned by right inscribed tetrahedra. Some additional edges of this $P_{600}^{\prime R}$, which don’t lie in the three dodecahedra that are shown, are drawn in brown. The group $\pm[I\times T]$ consists of those symmetries that simultaneously preserve the 120-cell $Q_{120}$ and its strongly inscribed “left” 600-cell $P_{600}^{\prime}$. To see this, consider the dodecahedral cell $F_{0}$ that is centered at the quaternion $1$. As mentioned, each left multiplication by an element of $2I$ maps $F_{0}$, together with its inscribed tetrahedron $AA^{\prime}A^{\prime\prime}A^{\prime\prime\prime}$ to a unique dodecahedral cell of $Q_{120}$ with the corresponding tetrahedron. To understand the full group, we have to consider those group elements that keep $F_{0}$ fixed.
$\pm[I\times T]$ consists of the elements $[l,r]$ with $(l,r)\in 2I\times 2T$. The transformation $[l,r]$ keeps $F_{0}$ fixed iff it maps $1$ to $1$, and this is the case iff $l=r$. These elements are the elements $[r,r]=[r]$ with $r\in 2T$, in other words, they form the tetrahedral group $\pm T$. And indeed, the symmetries of $F_{0}$ that keep the tetrahedron $AA^{\prime}A^{\prime\prime}A^{\prime\prime\prime}$ invariant form a tetrahedral group. We chose $[3,3,5]_{\frac{1}{5}L}^{+}$ as an ad-hoc extension of Coxeter’s notation for the group $\pm[I\times T]$, to indicate a $1/5$ fraction of the group $[3,3,5]^{+}$. Now, there is also the original 600-cell $P_{600}$, the polar of $Q_{120}$, having one vertex in the center of each dodecahedron. This gives rise to a larger group $[[3,3,5]_{\frac{1}{5}L}^{+}]=\pm[I\times O]$ where the two 600-cells $P_{600}$ and $P_{600}^{\prime}$ (properly scaled) are swapped. This group is not a subgroup of any other 4-dimensional point group. When the starting point $s$ is chosen in the center of the dodecahedral cell of $Q_{120}$, the polar orbit polytope of this group has 240 cells. Figure 31 shows such a cell $C$. The points of the orbit closest to $s$ are four vertices of the dodecahedron (say, those of color $A$, the red points in Figure 29(a)). They form a tetrahedral cell of $P_{600}^{\prime}$, and they are responsible for the rough tetrahedral shape of $C$. The centers of the twelve neighboring dodecahedra in $Q_{120}$ give rise to the twelve small triangular faces, which are the remainders of the twelve pentagons of the original dodecahedral cell, when the polar is not present. In addition, there are four neighboring cells that are adjacent through hexagonal faces, opposite the large 12-gons. They are centered at vertices of $P_{600}^{\prime}$.
Two of these are shown as red points in Figure 29(a), the point adjacent to $C$ in the lower cell $F_{9}$, and the point adjacent to $D^{\prime\prime\prime}$ in the upper cell $F_{1}$. The cell has chiral tetrahedral symmetry $+T$. In particular, it is not mirror-symmetric. In [16, Figure 9], this cell is shown together with a fundamental domain inside it. Incidentally, this cell (and the orbit polytope) coincides with that of the tubical group $\pm[I\times C_{4}]$ when the starting point is chosen on a two-fold rotation center (Figure 37). If we use the “right” 5-coloring we get the corresponding groups $[3,3,5]_{\frac{1}{5}R}^{+}=\pm[T\times I]$ and $[[3,3,5]_{\frac{1}{5}R}^{+}]=\pm[O\times I]$. See Figure 29(b). These four groups come in two enantiomorphic pairs. The two corresponding groups are mirrors of each other. (They are therefore metachiral groups in the terminology of Conway and Smith [8, §4.6].) 8.6 Symmetries of the 24-cell The 24 quaternions of $2T$ form the vertices of a regular 24-cell $P_{T}$. The complete symmetry group of $P_{T}$ is $[3,4,3]$, and its chiral version is $[3,4,3]^{+}$. The points of $P_{T}$ can be 3-colored: There are 8 vertices of $P_{T}$ whose coordinates are the permutations of $(\pm 1,0,0,0)$. They form a cross-polytope. The 16 remaining vertices are of the form $(\pm 1/2,\pm 1/2,\pm 1/2,\pm 1/2)$. They are the vertices of a 4-cube, and they can be naturally divided into two color groups of 8, as mentioned in Section 8.4. In total, we have three groups of 8 vertices, which we interpret as a 3-coloring of the vertices by the colors $a,b,c$, see Figure 32a. Every triangular face contains vertices from all three colors. Thus, every symmetry of $P_{T}$ induces a permutation of the colors. We can look at those symmetries for which the permutation of the colors is even. In other words, besides the identity, we allow only cyclic shifts. These form the subgroup $[3,4,3^{+}]$.
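The combinatorics of this 3-coloring can be verified directly from the coordinates. The following Python sketch is our own check; it uses the standard fact that, in these coordinates, the edges of the 24-cell join vertices at distance 1:

```python
from itertools import product

# the 24 vertices of the 24-cell P_T (circumradius 1)
cross = [tuple((s if i == j else 0) for i in range(4))
         for j in range(4) for s in (1, -1)]             # color a
cube = list(product((0.5, -0.5), repeat=4))              # colors b and c
color = {v: 'a' for v in cross}
for v in cube:                      # split the 4-cube by parity of signs
    color[v] = 'b' if sum(x < 0 for x in v) % 2 == 0 else 'c'
verts = list(color)
assert len(verts) == 24
assert [sum(color[v] == c for v in verts) for c in 'abc'] == [8, 8, 8]

def dist2(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v))

# edges of the 24-cell join vertices at squared distance 1
edges = [(u, v) for i, u in enumerate(verts) for v in verts[i + 1:]
         if abs(dist2(u, v) - 1) < 1e-9]
assert len(edges) == 96              # the 24-cell has 96 edges

# every edge joins two different colors; a triangular face, all of whose
# edges are bichromatic, therefore sees all three colors
assert all(color[u] != color[v] for u, v in edges)
```

Since the three edges of a triangle pairwise join differently colored endpoints, the bichromaticity of all 96 edges implies the claim that every triangular face contains vertices of all three colors.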
Another way to express this is to establish an orientation of the edges according to some cyclic ordering of the colors $a\to b\to c\to a$ (a coherent orientation [10, §8.3]). The subgroup $[3,4,3^{+}]$ consists of those elements that preserve this edge orientation. (This is analogous to the pyritohedral group $\pm T$ in three dimensions, which can also be described as preserving the orientation of the edges of the octahedron shown in Figure 32a.) The 24-cell is a self-dual polytope. In fact, the vertices of the polar polytope $P_{T_{1}}$ (properly scaled) are the quaternions in the coset of $2T$ in $2O$. If we add to $[3,4,3]$ the symmetries that swap $P_{T}$ and $P_{T_{1}}$, we get the group $[[3,4,3]]$, the symmetry group of the joint configuration $P_{O}=P_{T}\cup P_{T_{1}}$. Its chiral version is $[[3,4,3]]^{+}$. The subgroup $[[3,4,3]^{+}]$ contains the symmetries that exchange $P_{T}$ and $P_{T_{1}}$ only in combination with a reversal of orientation. This group is interesting, because it is achiral, but it contains no reflections. The polar polytope also has a three-coloring of its vertices. (One can give the partition explicitly in terms of the coordinates, as for $P_{T}$: The vertices of $P_{T_{1}}$ are the centers of the facets of $P_{T}$, properly scaled, and their coordinates $(x_{1},x_{2},x_{3},x_{4})$ are all permutations of the coordinates $(\pm 1,\pm 1,0,0)/\sqrt{2}$. The three color classes are characterized by the condition $|x_{1}|=|x_{2}|$, $|x_{1}|=|x_{3}|$, and $|x_{1}|=|x_{4}|$, respectively.) We can interpret this 3-coloring as a 3-coloring of the cells of $P_{T}$, which we denote by $A,B,C$. The group $[^{+}3,4,3]$ contains those symmetries of $P_{T}$ for which the permutation of the colors of the cells is even. This group is of course geometrically the same as $[3,4,3^{+}]$, but we can also have both conditions: $[^{+}3,4,3^{+}]$. 8.6.1 A pair of enantiomorphic groups Finally, we have two more groups, which are mirrors of each other. 
To understand these groups, let us look at the polar orbit polytope of $P_{O}=P_{T}\cup P_{T_{1}}$: The octahedral cells of the 24-cell shrink to truncated cubes with 6 regular octagons and 8 triangles as faces, see Figure 32b. This polytope is sometimes called the bitruncated 24-cell, or truncated-cubical tetracontaoctachoron. We will simply refer to it as the 48-cell. The small triangles are remainders from the triangular faces of the original octahedral cells of the 24-cell, which are centered at the points $P_{T}$. Figure 32b shows a cell of color $A$. The triangles lead to adjacent cells, colored $B$ or $C$, and we have labeled the triangles accordingly. The octagons lead to cells centered at points of $P_{T}$, and we have labeled them with the corresponding color $a$, $b$, or $c$. Figure 32c shows an adjacent “dual” cell of the 48-cell, centered at a point of color $c$. Note that these two cells are not attached in a straight way, but by a screw of $45^{\circ}$. We can enforce the screw to be a left screw by decorating each of the six octagonal faces with a diagonal, as shown in Figure 34. The group $\pm[O\times C_{1}]$ will map one selected cell to each cell by a unique left multiplication with an element of $2O$ and hence will carry the diagonal pattern to every truncated cube of the 48-cell. The diagonals on adjacent cells match: A left rotation that maps a cell to the adjacent cell performs a left screw by $45^{\circ}$, and one can check in Figure 34 that the screw that maps an octagon to the opposite octagon while maintaining the diagonal is a left screw. The group $\pm[O\times T]$ is the group that preserves the set of diagonals (ignoring the colors). This can be confirmed as in the case $\pm[I\times T]$ in Section 8.5: The group that fixes a cell should be the tetrahedral group $+T$, and indeed, the diagonal pattern of Figure 34 has tetrahedral symmetry: The diagonals connect only the $B$-triangles, and the $B$-triangles form a tetrahedral pattern. 
We have chosen the ad-hoc extension of Coxeter’s notation $[[^{+}3,4,3^{+}]]_{L}$ for the group $\pm[O\times T]$ to indicate that it extends the operations $[^{+}3,4,3^{+}]$ by a swap between $P_{T}$ and the polar polytope $P_{T_{1}}$, and this swap is effected by left rotations. Of course, there is a mirror pattern of Figure 34, which leads to the mirror group $\pm[T\times O]=[[^{+}3,4,3^{+}]]_{R}$, and these two groups are enantiomorphic. Analogies with three dimensions. As pointed out by Du Val [15, p. 71], there is a strong analogy between the symmetries of the different self-dual polytopes in three and in four dimensions, as shown in Table 12. The simplex is a self-dual regular polytope, both in 4 dimensions (Section 8.3) and in 3 dimensions. In 3 dimensions, moreover, the simplex and its polar form the cube, and thus we have used alternate Coxeter notations to highlight the analogy (opposite ones from Table 11, where the analogy with the cube is emphasized). Only five of the symmetries of the 24-cell and its polar are used. From the viewpoint of the cross-polytope, one could also match the group $\pm[T\times T]\cdot 2=[3,4,3^{+}]=[^{+}3,4,3]$ of order 576 with the pyritohedral group $\pm T=[^{+}3,4]$ of order 24, because they are both based on consistent edge orientations. A strongly inscribed polar polytope. The convex hull of the points $P_{O}=P_{T}\cup P_{T_{1}}$ is a polytope with 288 equal tetrahedral facets, which we call the 288-cell. It is polar to the 48-cell. We perform the same procedure as in Section 8.5 and split the vertices of the 48-cell into orbits under the action of $\pm[O\times C_{1}]$. We will see that this leads to another instance of a polytope with a strongly inscribed copy of its polar. However, we won’t get any new groups. The 48-cell has 288 vertices, and they are partitioned into 6 orbits of size 48, as shown in Figure 34, cf. Du Val [15, Figure 24, p.
85]: There is a natural partition of the colors into three pairs $R_{1},R_{2}$; $G_{1},G_{2}$; and $B_{1},B_{2}$, according to the opposite octagons to which the colors belong. (The partition of each pair into $R_{1}$ and $R_{2}$, etc., is arbitrary.) Indeed, one can check that the transition from an octagon to the opposite octagon with a left screw of $45^{\circ}$ preserves the six colors (indicated for the red colors by two corresponding crosses). Likewise, the transition from a triangle to the opposite triangle with a left screw of $60^{\circ}$ preserves the colors. Now, as in Section 8.5, the points of one color form a right coset of $2O$, and hence they form a rotated and scaled copy $P_{O}^{\prime}$ of the 288-cell $P_{O}$. This polytope is strongly inscribed in the 48-cell: Each truncated cube of the 48-cell contains one tetrahedron of $P_{O}^{\prime}$. Figure 35a shows one such tetrahedron, spanned by the vertices of color $R_{1}$. The geometry of this tetrahedron becomes clearer after rotating it by $45^{\circ}$ around the midpoints of the front and back octagons, as in Figure 35b. We see that the tetrahedron has four equal sides, whose length is the diagonal of the octagons, and two opposite sides of larger length, equal to the diagonal of a circumscribed square. The 2-faces are therefore congruent isosceles triangles. Such a tetrahedron is called a tetragonal disphenoid. (The side length of the “untruncated” cube is $\sqrt{8}-2\approx 0.8$, which equals the edge length of a circumscribed 8-gon around a unit circle. Hence the two long edges of the tetrahedra, highlighted in bold, have length $\sqrt{2}(\sqrt{8}-2)=4-\sqrt{8}\approx 1.17$. The four short edges have length $\sqrt{8(10-\sqrt{98})}\approx 0.9$, and the edge length of the 48-cell is $6-\sqrt{32}\approx 0.34$.)
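The decimal approximations quoted in the footnote can be checked numerically; the only outside fact used in this small sanity computation of ours is the standard formula $2\tan(\pi/8)$ for the side of a regular octagon circumscribed about a unit circle:

```python
from math import sqrt, tan, pi

# side of a regular octagon circumscribed about the unit circle
assert abs(2 * tan(pi / 8) - (sqrt(8) - 2)) < 1e-12
assert abs((sqrt(8) - 2) - 0.8) < 0.03

# long edges of the disphenoid: sqrt(2)*(sqrt(8)-2) simplifies to 4-sqrt(8)
assert abs(sqrt(2) * (sqrt(8) - 2) - (4 - sqrt(8))) < 1e-12
assert abs((4 - sqrt(8)) - 1.17) < 0.005

# the remaining quoted decimals
assert abs(sqrt(8 * (10 - sqrt(98))) - 0.9) < 0.005
assert abs((6 - sqrt(32)) - 0.34) < 0.005
```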
The symmetry group of the 48-cell together with its strongly inscribed 288-cell $P_{O}^{\prime}$ is the tubical group $\pm[O\times D_{4}]$, because the symmetry group of the disphenoid inside the truncated cube is only the vierergruppe $D_{4}$, consisting of half-turns through edge midpoints. We can try to start with the rotated tetrahedra of Figure 35b, spanned by two opposite diagonals used for the decoration in Figure 34, hoping to recover the group $\pm[O\times T]$. However, this tetrahedron contains vertices of two colors $B_{1}$ and $B_{2}$, and its orbit will thus contain the union of the orbits $B_{1}$ and $B_{2}$. Inside each truncated cube, the convex hull forms a square antiprism, as shown in Figure 35c. (The convex hull contains 48 such antiprisms plus 192 tetrahedral cells, for a total of 240 facets.) 9 The axial groups These are the finite subgroups of the direct product $\mathrm{O}(3)\times\mathrm{O}(1)$. The subgroup $\mathrm{O}(1)$ operates on the 4-th coordinate $x_{4}$, and we denote its elements by $\mathrm{O}(1)=\{+x_{4},-x_{4}\}$. Here $+x_{4}$ is the identity, and $-x_{4}$ denotes the reflection of the 4-th coordinate. Let $G$ be such an axial group. Let $G_{3}\leqslant\mathrm{O}(3)$ be the “projection” of $G$ on $\mathrm{O}(3)$. That is, $$G_{3}:=\{\,g\in\mathrm{O}(3)\mid(g,+x_{4})\in G\text{ or }(g,-x_{4})\in G\,\}.$$ If $G_{3}$ itself is a 3-dimensional axial group, i.e. $G_{3}\leqslant\mathrm{O}(2)\times\mathrm{O}(1)$, then we may call $G$ a doubly axial group. In this case, we prefer to regard $G$ as a toroidal group in $\mathrm{O}(2)\times\mathrm{O}(1)\times\mathrm{O}(1)\leqslant\mathrm{O}(2)\times\mathrm{O}(2)$ and classify it as such. (These groups are the subgroups of $\mskip 1.0mu\vbox{\hrule\hbox{\vrule\kern 0.6pt$\vbox{\kern 0.6pt\hbox{$\vbox{\hrule\hbox{\vrule height=6.3pt\kern 6.3pt\vrule}\hrule}\kern-0.4pt\kern-6.3pt\hbox to 6.3pt{\hss$+$\hss}\kern 0.4pt$} \kern 0.6pt}$\kern 0.6pt\vrule}\hrule}\mskip 1.0mu^{\textbf{p2mm}}_{m,2}$.)
Hence from now on, we assume that $G_{3}$ is not an axial 3-dimensional group, i.e., we assume that $G_{3}\leqslant O(3)$ is one of the seven polyhedral 3-dimensional groups. These are well-understood, and thus the axial groups are quite easy to classify. There are 21 axial groups (excluding the doubly axial groups), and their full list is given below in Table 15, with references to other classifications from the literature. Together with the polyhedral groups in Table 10, these groups exhaust all entries in [8, Tables 4.2 and 4.3] except the toroidal groups. Table 16 in Appendix A lists them with generators and cross references to other classifications. Note that the product $\mathrm{O}(3)\times\mathrm{O}(1)$ used here is different from the product $\pm[L\times R]$ on which the classic classification is based. Both are direct products in the group-theoretic sense, but $\mathrm{O}(3)\times\mathrm{O}(1)$ is a direct sum, a “Cartesian” product in a straightforward geometric sense, consisting of pairs of independent transformations in orthogonal subspaces, whereas the product $\pm[L\times R]$, which is specific to $\mathrm{SO}(4)$, refers to the representation $[l,r]$ by pairs of quaternions, which have by themselves a significance as operations $[l]$ and $[r]$ in $\mathrm{SO}(3)$. We will now derive the axial groups systematically. Let $G_{3}^{+x_{4}}\leqslant\mathrm{O}(3)$ be the subgroup of $G_{3}$ of those elements that don’t negate the 4-th coordinate. That is, $$G^{+x_{4}}_{3}:=\{\,g\in\mathrm{O}(3)\mid(g,+x_{4})\in G\,\}.$$ The subgroup $G^{+x_{4}}_{3}$ is either equal to $G_{3}$, or it is an index-2 subgroup of $G_{3}$. If $G^{+x_{4}}_{3}=G_{3}$, there are two cases, which are both easy: we can form the “pyramidal” group $G_{3}\times\{+x_{4}\}$, which does not move the 4-th dimension at all, or the full “prismatic” group $G_{3}\times\{+x_{4},-x_{4}\}$. 
This gives two axial groups for each three-dimensional polyhedral group $G_{3}\leqslant\mathrm{O}(3)$, and they are listed in Table 13, together with their “CS names” following Conway and Smith [8], and their “Coxeter names”. The prismatic groups are never chiral. The pyramidal group $G_{3}\times\{+x_{4}\}$ is chiral iff $G_{3}$ is: These are the groups $+I$, $+O$, and $+T$. We are left with the case that $G_{3}^{+x_{4}}$ is an index-2 subgroup $H$ of $G_{3}$. In this case, the group $G$ is uniquely determined by $H$ and $G_{3}$: It consists of the elements $(g,+x_{4})$ for $g\in H$ and $(g,-x_{4})$ for $g\in G_{3}-H$. We denote this group as “$H$ in $G_{3}$”. As an abstract group, it is isomorphic to $G_{3}$. There are seven index-2 containments among the three-dimensional polyhedral groups. (See [8, Figures 3.9 and 3.10] for an overview of all index-2 containments in $\mathrm{O}(3)$.) They lead to seven “hybrid axial groups”, which are listed in Table 14. There are several methods by which such an index-2 containment can be constructed, and we indicate in the table which methods are applicable: 1. Chirality: $G_{3}^{+x_{4}}$ is the chiral part of an achiral group $G_{3}$. In this case, the resulting group will be chiral, because the orientation-reversing elements of $G_{3}$ are composed with the reflection of the axis. In other words, $G$ is the chiral part $(G_{3}\times\{+x_{4},-x_{4}\})^{+}$ of the prismatic group $G_{3}\times\{+x_{4},-x_{4}\}$. 2. Center: $G_{3}^{+x_{4}}$ does not contain the central reflection. In this case, an index-2 extension $G_{3}$ of $G_{3}^{+x_{4}}$ can always be obtained by adjoining the central reflection (in $\mathbb{R}^{3}$). The resulting group “$G_{3}^{+x_{4}}$ in $G_{3}$” is equivalently thought of as simply adjoining the central reflection (in $\mathbb{R}^{4}$) to $G_{3}^{+x_{4}}$. These groups can be recognized as having their Coxeter names prefixed with “$2.$”.
$G$ is achiral iff $G_{3}^{+x_{4}}$ is achiral, and in this case, the construction is simultaneously a case of the chirality method. 3. Alternation: This applies to the octahedral groups, which are symmetries of the cube. The vertices of the cube can be two-colored. The subgroup consists of those transformations that preserve the coloring. 4. Edge orientation: There is only one case where this applies, namely the pyritohedral group $\pm T$ as a subgroup of the full octahedral group $\pm O$. The edges of the octahedron can be coherently oriented in such a way that the boundary of every face is a directed cycle. The subgroup consists of those transformations that preserve this orientation (cf. the use of the edge orientation for the 24-cell and its polar, Section 8.6). Often, the same result can be obtained by two methods. For example, $TO$ in $\pm O$ results both from alternation and from center. The group “$G_{3}^{+x_{4}}$ in $G_{3}$” is chiral if and only if $G_{3}^{+x_{4}}$ is chiral and $G_{3}$ is achiral, because the elements of $G_{3}\setminus G_{3}^{+x_{4}}$ are flipped by the $x_{4}$-reflection. These are the cases of the form “$+G$ in $\pm G$” in the table, plus the group “$+T$ in $TO$”. The situation is very much analogous to the construction of the achiral groups in $\mathrm{O}(3)$ from the chiral groups in $\mathrm{SO}(3)$ and their index-2 subgroups in [8, §3.8], except that Conway and Smith prefer to extend by the algebraically simpler central inversion $-\mathrm{id}$ instead of the geometrically more natural reflection of the axial coordinate. The maximal axial groups are $\pm\frac{1}{60}[I\times I]\cdot 2=2.[3,5]$ and $\pm\frac{1}{24}[O\times O]\cdot 2=2.[3,4]$. Hence, the axial groups can be characterized as the symmetries of a 4-dimensional prism over an icosahedron or over an octahedron, and the subgroups of these. (This includes, however, the doubly axial groups, which we have classified under the toroidal groups.)
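As a concrete illustration of the “$H$ in $G_{3}$” construction and of the chirality method, one can take $G_{3}=[3,4]$ with $H=[3,4]^{+}$ and check by machine that the resulting hybrid group is exactly the chiral part of the prismatic group. The following Python sketch is ours; the signed-permutation encoding of $[3,4]$ is an assumption made for illustration:

```python
from itertools import permutations, product

def parity(p):
    """Sign of a permutation tuple: +1 even, -1 odd."""
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def det3(g):
    perm, signs = g
    s = parity(perm)
    for x in signs:
        s *= x
    return s

def compose3(a, b):
    """(a o b): apply b first, then a; g acts as x[i] -> signs[i]*x[perm[i]]."""
    (p1, s1), (p2, s2) = a, b
    return (tuple(p2[p1[i]] for i in range(3)),
            tuple(s1[i] * s2[p1[i]] for i in range(3)))

# G3 = [3,4], the 48 signed permutations of the three coordinate axes
G3 = [(p, s) for p in permutations(range(3))
      for s in product((1, -1), repeat=3)]
H = [g for g in G3 if det3(g) == 1]            # H = [3,4]^+, index 2

# hybrid axial group "H in G3": pair H with +x4, and G3 \ H with -x4
G = {(g, 1) for g in H} | {(g, -1) for g in G3 if det3(g) == -1}
assert len(G) == 48

# closure under composition, so G really is a group
mul = lambda a, b: (compose3(a[0], b[0]), a[1] * b[1])
assert all(mul(a, b) in G for a in G for b in G)

# every element preserves orientation in O(4), and G is precisely the
# chiral part of the prismatic group G3 x {+x4, -x4}
assert all(det3(g) * e == 1 for (g, e) in G)
prismatic = {(g, e) for g in G3 for e in (1, -1)}
assert G == {x for x in prismatic if det3(x[0]) * x[1] == 1}
```

The last assertion verifies the statement of method 1: composing the orientation-reversing elements of $G_{3}$ with the reflection of the axis yields the chiral part $(G_{3}\times\{+x_{4},-x_{4}\})^{+}$.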
We mention that, among the $3\times 7=21$ axial groups, there are 7 chiral ones and 14 achiral ones. Among the polyhedral groups, there are 14 chiral ones. We have no explanation for the frequent appearance of the magic number 7 and its multiples. 10 Computer calculations We used the help of computers for investigating the groups and checking the results, as well as for the preparation of the figures and tables. We used SageMath [34] and its interface to the GAP [19] software for group-theoretic calculations. The computer code is available at https://github.com/LaisRast/point-groups. 10.1 Representation of transformations and groups We represent the orthogonal transformations $[l,r]$ and $*[l,r]$ by the quaternion pair $(l,r)$ and a bit for indicating orientation reversal. In a group, each transformation is represented twice, by the equivalent pairs $(l,r)$ and $(-l,-r)$. We used two different representations for quaternions: For the elements of $2I$, $2O$, and $2T$, the quaternions $x_{1}+x_{2}i+x_{3}j+x_{4}k$ are represented in the natural way with precise algebraic coefficients, using SageMath’s support for algebraic extension fields. For the elements of $2D_{2n}$, we used a tailored representation: These elements are of the form $e_{n}^{s}$ or $e_{n}^{s}j$, and we represent and manipulate them using the fraction $s/n$, and a bit that indicates whether the factor $j$ is present. (An exact algebraic representation would have required extension fields of arbitrarily high degree.) The left group and the right group don’t have to use the same representation: For elements of tubical groups, like $[l,r]\in\pm[I\times C_{n}]$, each of $l$ and $r$ uses its own appropriate representation. 10.2 Fingerprinting For preparing a catalog of groups, it is useful to have some easily computable invariants. We used the number of elements of each geometric type as a fingerprint.
This technique was initiated by Hurley [23] in his classification of the 4-dimensional crystallographic groups. We first discuss the classification of the individual 4-dimensional orthogonal transformations, as introduced in Section 3.1. Every orientation-preserving orthogonal transformation can be written as a block diagonal matrix $R_{\alpha_{1},\alpha_{2}}$ of two rotation matrices (1). We must be aware of other angle parameters $R_{\alpha_{1}^{\prime},\alpha^{\prime}_{2}}$ that describe geometrically the same operation, in other words, that are conjugate by an orientation-preserving  transformation (see Section 7.3.3). If we swap the two invariant coordinate planes $(x_{1},x_{2})\leftrightarrow(x_{3},x_{4})$, this is an orientation-preserving transformation, and it turns $R_{\alpha_{1},\alpha_{2}}$ into $R_{\alpha_{2},\alpha_{1}}$. A simultaneous reflection in both coordinate planes ($x_{1}\leftrightarrow x_{2}$ and $x_{3}\leftrightarrow x_{4}$) is also orientation-preserving, and it turns $R_{\alpha_{1},\alpha_{2}}$ into $R_{-\alpha_{1},-\alpha_{2}}$. Thus, $R_{\alpha_{1},\alpha_{2}}\doteq R_{\alpha_{2},\alpha_{1}}\doteq R_{-\alpha_{1},-\alpha_{2}}\doteq R_{-\alpha_{2},-\alpha_{1}}$. On the other hand, $R_{\alpha_{1},\alpha_{2}}$ and $R_{\alpha_{1},-\alpha_{2}}$ are distinct unless one of the angles is $0$ or $\pm\pi$. They are mirrors of each other. The orientation-reversing transformations $\bar{R}_{\alpha}$ of (2) are characterized by a single angle $\alpha$. Since the simultaneous negation of $x_{1}$ and $x_{4}$ turns $\bar{R}_{\alpha}$ into $\bar{R}_{-\alpha}$, the parameter $\alpha$ can be normalized to the range $0\leq\alpha\leq\pi/2$. Since the angles are rational multiples of $\pi$, it is possible to encode the data about the operation into a short code. 
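The equivalences $R_{\alpha_{1},\alpha_{2}}\doteq R_{\alpha_{2},\alpha_{1}}\doteq R_{-\alpha_{1},-\alpha_{2}}$ can be turned into a canonical form by taking the smallest member of the equivalence class. A minimal sketch, with angles stored as rational multiples of $\pi$ and reduced modulo $2$ (the helper name is our own):

```python
from fractions import Fraction as F

def canon_angles(a1, a2):
    # Canonical representative of the class
    # {(a1,a2), (a2,a1), (-a1,-a2), (-a2,-a1)},
    # each angle given as a fraction of pi, reduced to [0, 2).
    cands = [(a1, a2), (a2, a1), (-a1, -a2), (-a2, -a1)]
    return min((x % 2, y % 2) for x, y in cands)
```

With this normalization, $R_{\alpha_{1},\alpha_{2}}$ and its mirror $R_{\alpha_{1},-\alpha_{2}}$ generally receive different canonical forms, matching the discussion above.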
By collecting the codes of the elements in a group into a string, we obtained a “fingerprint” of the group, which we used as a key for our catalog. Footnote 22: Here are some details: We actually use the quaternion pair $[l,r]$ for computing the code for a rotation: If $[l,1]$ and $[1,r]$ are rotations by $a\pi$ and $b\pi$, respectively, we use the pair of rational numbers $(a,b)$ with $0\leq a,b\leq 1$. The pair $[-l,-r]$, which represents the same rotation, gives the pair $(1-a,1-b)$, and hence we normalize by requiring that $a<b$ or $a=b\leq 1/2$. For example, the group $\boxbar^{\mathrm{pg}}_{2,4}$ has the fingerprint 0|0:2 0|1:2 1|1/4:4 1|3/4:4 1|1/2:4 *1/2:16. We tried to make the code concise while keeping it readable. The term /4 in 1|3/4:4 is a common denominator for both components, and hence 1|3/4 stands for the pair $(a,b)=(\frac{1}{4},\frac{3}{4})$, denoting a rotation of the form $[\exp\frac{\pi}{4},\exp\frac{3\pi}{4}]\doteq R_{-\pi/2,\pi}$. The number :4 after the colon denotes the multiplicity. Since our group representation contains both pairs $[l,r]$ and $[-l,-r]$ for each rotation, the multiplicity is always overcounted by a factor of 2: the group actually contains only two operations $R_{-\pi/2,\pi}$. (The reader may wish to identify them as torus translations of this group, see Figure 23.) The symbol 0|0 denotes the identity. The orientation-reversing transformations are written with a star. The code *$a$ with a fraction $a$ denotes $\bar{R}_{(1-a)\pi}$. In our example, *1/2:16 denotes eight operations of the form $\bar{R}_{\pi/2}$. The sum of the written multiplicities is 32, in accordance with the fact that the group has order $32/2=16$.
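The normalization of a code pair $(a,b)$ against its equivalent $(1-a,1-b)$, as described in the footnote, can be written as follows (a sketch; `canon_code` is our own name):

```python
from fractions import Fraction as F

def canon_code(a, b):
    # Of the two equivalent pairs (a, b) and (1-a, 1-b), keep the one
    # with a < b, or with a = b <= 1/2.
    if a < b or (a == b and a <= F(1, 2)):
        return (a, b)
    return (1 - a, 1 - b)
```

For instance, the pair $(\frac{3}{4},\frac{1}{4})$ is replaced by its equivalent $(\frac{1}{4},\frac{3}{4})$, the representative that appears in the fingerprint above.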
Experimentally, in all cases that we encountered, this method was sufficient to distinguish groups up to conjugacy. (As reported below, we considered, from the infinite families of groups, at least all groups of order up to 100.) The classification of the elements by Hurley [23] is almost equivalent, except that it disregards the orientation: He classified a transformation by the triplet of coefficients $(c_{3},c_{2},c_{0})$ of its characteristic equation $\lambda^{4}-c_{3}\lambda^{3}+c_{2}\lambda^{2}-c_{1}\lambda+c_{0}=0$: the trace $c_{3}$, the second invariant $c_{2}$, and the determinant $c_{0}$. Since all eigenvalues have absolute value 1, the linear coefficient $c_{1}$ is determined by the others through the formula $c_{1}=c_{0}c_{3}$. The Hurley triplet determines the eigenvalues and thus the geometric conjugacy type and the rotation angles $\alpha_{1},\alpha_{2}$, but only up to orientation. $R_{\alpha_{1},\alpha_{2}}$ and $R_{\alpha_{1},-\alpha_{2}}$ have the same spectrum and the same Hurley symbol.

The Hurley symbol. Hurley was interested in the crystallographic groups, and the operations in these groups must have integer coefficients in their characteristic polynomial. This restricts the operations to a finite set. Hurley denoted them by 24 letters (the Hurley symbols). They were also used in the monumental classification of the four-dimensional crystallographic space groups by Brown, Bülow, Neubüser, Wondratschek, and Zassenhaus [4]. Brown et al. refined the classification by splitting the groups into conjugacy classes under the group operations, resulting in the Hurley pattern.
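The linear dependence among the coefficients can be checked numerically: the characteristic polynomial of $R_{\alpha_{1},\alpha_{2}}$ is the product of the polynomials $\lambda^{2}-2\cos\alpha\,\lambda+1$ of its two blocks, and in the alternating-sign convention above, the relation comes out as $c_{1}=c_{0}c_{3}$ for both orientation classes. A small sketch (our own helper names):

```python
import math

def polymul(p, q):
    # Multiply polynomials given as coefficient lists, lowest degree first.
    out = [0.0] * (len(p) + len(q) - 1)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            out[a + b] += pa * qb
    return out

def coeffs(p):
    # Read off (c3, c2, c1, c0) from a monic quartic written in the
    # convention l^4 - c3*l^3 + c2*l^2 - c1*l + c0.
    p0, p1, p2, p3, p4 = p
    return (-p3, p2, -p1, p0)

def rotation_charpoly(a1, a2):
    # Characteristic polynomial of R_{a1,a2}: the product of the
    # polynomials of the two 2x2 rotation blocks.
    return polymul([1, -2 * math.cos(a1), 1], [1, -2 * math.cos(a2), 1])

def reversing_charpoly(a):
    # Orientation-reversing case: eigenvalues +1, -1, e^{+ia}, e^{-ia}.
    return polymul([-1, 0, 1], [1, -2 * math.cos(a), 1])
```

Here the determinant $c_{0}$ is $+1$ in the orientation-preserving case and $-1$ in the orientation-reversing case.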
It may happen that several operations are geometrically the same but not conjugate to each other by a transformation of the group that is under consideration. Footnote 23: For example, the group 21/03 in [4] of order 12 has the Hurley pattern 1*1I, 1*1E, 2*3E, 1*2S’, 1*2B; in our classification, it corresponds to two enantiomorphic groups, $\boxtimes^{\mathbf{c2mm}}_{1,3}$ and $\boxtimes^{\mathbf{c2mm}}_{3,1}$. The fingerprints of these groups are 0|0:2 0|2/3:4 1|1/2:14 3|5/6:4 and 0|0:2 1|3/6:4 1|3/3:4 1|1/2:14. Both groups contain 7 half-turns (code 1|1/2, Hurley symbol E). The second group, for example, is actually also a torus flip group: $\boxtimes^{\mathbf{c2mm}}_{3,1}\doteq\boxdot_{3,2}$. In this representation, it has 6 flip operations, which are half-turns. In addition, it contains the torus translation $R_{\pi,0}$, which is another half-turn. This half-turn is not conjugate to the other half-turns by operations of the group.
It forms a conjugacy class of its own, as indicated by the code 1*1E in the Hurley pattern. The 6 flip operations split into two conjugacy classes of size 3, as indicated by the code 2*3E. Brown et al. [4, p. 9] report that their classification, which is more refined than ours but in another respect coarser, since it does not distinguish enantiomorphic groups, was also found to be sufficient to characterize the crystallographic point groups uniquely (up to mirror congruence). We could use the data in the Tables of [4] to match them with our classification. The results are tabulated in Tables 17–18 in Appendix D.

10.3 Computer checks

As mentioned, the classic approach to the classification following Goursat’s method yields the chiral groups, and with the exception of the toroidal groups, they are obtained quite painlessly. However, the achiral groups must be found and classified as index-2 extensions of the chiral groups. This task has been carried out by Du Val [15] and Conway and Smith [8], but they only gave the results. Du Val [15, p. 61] explicitly lists the orientation-reversing elements of each achiral group. Conway and Smith [8, Tables 4.1–4.3] provide generating elements for each group. A detailed derivation is not presented in the literature. The considerations about the extension from chiral groups to achiral ones are only briefly sketched by Conway and Smith [8, p. 51–52], see Figures 54–55. Since we found this situation unsatisfactory, we ran a brute-force computer check. We generated all subgroups of the groups $\pm[I\times I]$, $\pm[O\times O]$ and $\pm[T\times T]$ and their achiral extensions. No missing groups were discovered. More details are given below. For the achiral extension of the subgroups of $\pm[C_{n}\times C_{n}]$ and $\pm[D_{2n}\times D_{2n}]$, we have supplanted the classic classification by our own classification as toroidal groups. Nevertheless, we ran some computer checks also for these groups, see Section 10.5.
10.4 Checking the achiral polyhedral and axial groups

For each group $\pm[I\times I]$, $\pm[O\times O]$ and $\pm[T\times T]$ in turn, we generated all subgroups. We kept only those subgroups for which both the left and the right subgroup are the full group $2I$, $2O$, or $2T$, respectively. (For an achiral group, we must extend a group whose left group is equal to its right group.) For each obtained subgroup, we identified the possible extending elements, using the considerations of Section 3.5. Each achiral group was classified by its fingerprint (the conjugacy types of its elements), and for each class, we managed to find geometric conjugations to show that all groups in that class are geometrically the same. We mention some details for the largest group $[I\times I]$. The group $\pm[I\times I]$ was represented by its double cover $2I\times 2I$, and converted to a permutation group, in order to let GAP generate the subgroups. There are 19,987 subgroups in total, and they were found in about 5 minutes. 14,896 of these subgroups contain the pair $(-1,-1)$, which is necessary to have a double cover of a rotation group in $\pm[I\times I]$, and only 241 of these groups have the left and right subgroups equal to $2I$. These represented the group $\pm[I\times I]$ itself, and 60 different copies of each of the groups $\pm\frac{1}{60}[I\times I]$, $\pm\frac{1}{60}[I\times\bar{I}]$, $+\frac{1}{60}[I\times I]$, $+\frac{1}{60}[I\times\bar{I}]$. For each of the 241 groups, we tried to extend it by an element $*[1,c]$ in all possible ways, following Proposition 3.2. Actually, it is easy to see that elements $c$ and $c^{\prime}=cx$ that are related by an element $x$ in the kernel lead to the same extension, and thus they need not be tried separately. This leads to 361 distinct groups. Again there are 60 representatives of each of the six achiral groups with fraction $\frac{1}{60}$, plus one for the group $\pm[I\times I]\cdot 2$ itself.
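As a small illustration of this kind of computation (a toy stand-in for the SageMath/GAP machinery, with our own helper names), the closure of a set of quaternions under multiplication can be computed by repeated products. For the generators $i$ and $\omega=\frac{1}{2}(-1+i+j+k)$, this reproduces the binary tetrahedral group $2T$ of order 24:

```python
from fractions import Fraction as F
from itertools import product

def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def closure(gens):
    # Smallest set containing gens that is closed under multiplication;
    # for finitely many elements of a group, this is the generated subgroup.
    elems = set(gens)
    while True:
        new = {qmul(p, q) for p, q in product(elems, elems)} - elems
        if not new:
            return elems
        elems |= new

i = (F(0), F(1), F(0), F(0))
omega = (F(-1, 2), F(1, 2), F(1, 2), F(1, 2))
group_2T = closure({i, omega})
```

Starting from exact rational coordinates, the closure stabilizes after a few rounds; for $2I$ and $2O$ the coordinates involve $\sqrt{5}$ and $\sqrt{2}$, which is why the actual computation uses algebraic number fields.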
Since we searched for conjugacies in a systematic but somewhat ad-hoc manner, it took about half a week for the computer to show that all 60 groups in each class are geometrically the same. With hindsight, the multiplicity of 60 is not surprising, since there are 60 conjugacies that map the elements of $2I$ to themselves.

10.5 Checking the toroidal groups

The toroidal groups form an infinite family, and hence we can only generate them up to some limit. We set the goal of checking all chiral toroidal groups up to order 200 and all achiral groups up to order 400. For this purpose, we generated all groups $\pm[D_{n}\times D_{n}]\cdot 2$ (for even $n$) and $\pm[C_{n}\times C_{n}]\cdot 2$ in the range $100<n\leq 200$, together with their subgroups. For generating the subgroups, we took a different approach than for the polyhedral groups: We constructed a permutation group representation of the achiral group and computed all its subgroups. We took all subgroups, regardless of whether the left and right group is the full group $C_{n}$ or $D_{n}$. For each chiral group up to order 200 and each achiral group up to order 400 that was generated, we checked that it is conjugate to one of the known groups according to our classification. We also checked whether all known toroidal groups within these size bounds are found. This turned out to be the case, with a few exceptions.
The exceptions were the chiral groups $\boxbslash^{\mathrm{cm}}_{m,n}$, $\boxbslash^{\mathrm{cm}}_{n,m}$, $\boxslash^{\mathrm{cm}}_{m,n}$, and $\boxslash^{\mathrm{cm}}_{n,m}$, for 13 pairs $(m,n)=(3,17),(3,19),\ldots,(7,13),(9,11)$ of orders $2mn$ between 100 and 200. The reason that these groups were missed is that they are of the form $+\frac{1}{2}[D_{2m}\times C_{2n}]\leqslant+\frac{1}{2}[D_{2m}\times D_{4n}]$, and the smallest group $\pm[D_{n^{\prime}}\times D_{n^{\prime}}]\cdot 2$ that contains them has $n^{\prime}=4\cdot\mathrm{lcm}(m,n)$, which exceeds 200. The group with the largest number of subgroups was $\pm[D_{192}\times D_{192}]\cdot 2$. It has 1,361,642 subgroups. For 1,249,563 of these groups, the order was within the limits.
This computation requires a workstation with large memory, on the order of 100 gigabytes. The whole computation took about 10 days.

11 Higher dimensions

In the classification of Theorem 1.1, there are categories that we expect in any dimension: the polyhedral groups, which are related to the regular polytopes, the toroidal groups, and the axial groups, which come from direct sums of lower-dimensional groups. On the other hand, the tubical groups are more surprising. They rely on the 2:1 covering $\mathrm{SO}(4)\stackrel{{\scriptstyle 2:1\,}}{{\longrightarrow}}\mathrm{SO}(3)\times\mathrm{SO}(3)$, which provides a different product structure in terms of lower-dimensional groups than the direct sum. The scarcity of regular polytopes in high dimensions might be an indication that these groups are not very exciting. On the other hand, the root systems $E_{6}$, $E_{7}$, and $E_{8}$ in 6, 7, and 8 dimensions promise some richer structure in certain dimensions. In five dimensions, the orientation-preserving case has been settled by Mecchia and Zimmermann [30], see [37, Corollary 2]:

Theorem 11.1. The finite subgroups of the rotation group $\mathrm{SO}(5)$ are (i) subgroups of $\mathrm{O}(4)\times\mathrm{O}(1)$ or $\mathrm{O}(3)\times\mathrm{O}(2)$ (the reducible case); (ii) subgroups of the symmetry group $(\mathbb{Z}_{2})^{5}\rtimes S_{5}$ of the hypercube; (iii) or isomorphic to $A_{5}$, $S_{5}$, $A_{6}$ or $S_{6}$. (This includes symmetries of the simplex and its polar.)

The irreducible representations of the groups in (iii) can be looked up in character tables in books on representation theory. It would be interesting to know what the 5-dimensional representations are in geometric terms (besides the symmetries of the simplex). This theorem gives only the chiral groups, but in odd dimensions like 5, it is in principle straightforward to derive the achiral groups from the chiral ones: All one needs to know are the chiral groups and their index-2 subgroups.
See [8, §3.8] for the three-dimensional case. Briefly, one can say that nothing unexpected happens for the point groups in 5 dimensions.

Six dimensions. The richest part of the 4-dimensional groups was the toroidal groups, which have an invariant Clifford torus. The sphere $S^{5}$ contains an analogous three-dimensional torus $$x_{1}^{2}+x_{2}^{2}=x_{3}^{2}+x_{4}^{2}=x_{5}^{2}+x_{6}^{2}=\tfrac{1}{3}.$$ A group that leaves this torus invariant behaves similarly to a three-dimensional space group, involving translations, reflections, and rotations in terms of torus coordinates $\varphi_{1},\varphi_{2},\varphi_{3}$. Thus, the three-dimensional space groups will make their appearance in the classification of 6-dimensional point groups. The situation in 4 dimensions was similar: We have studied the toroidal groups in analogy to the wallpaper groups (the two-dimensional space groups). In contrast to the situation in the plane, a 6-fold rotation in 3-space is not inconsistent with the requirement that the lattice of translations contains a cubical lattice. Thus, we may expect that all of the 230 three-dimensional space groups show up in the 6-dimensional point groups. (In one dimension lower, we have another instance of this phenomenon: The frieze groups appear as the 3-dimensional axial point groups.) Thus, a classification of the point groups in 6 dimensions will be much more laborious than in 5 dimensions. It has already been observed by Carl Hermann in 1952 [22, p. 33], in connection with the crystallographic groups, that “going up from an odd dimension to the next higher even one leads by far to more surprises than the opposite case”.

References

[1] Simon L. Altmann. Hamilton, Rodrigues, and the quaternion scandal. Mathematics Magazine, 62(5):291–308, 1989. doi:10.1080/0025570X.1989.11977459.
[2] Thomas F. Banchoff. Torus decompositions of regular polytopes in 4-space. In Marjorie Senechal, editor, Shaping Space, pages 257–266. Springer, 2013.
doi:10.1007/978-0-387-92714-5_20.
[3] Marcel Berger. Geometry II. Springer Science & Business Media, 2009.
[4] Harold Brown, Rolf Bülow, Joachim Neubüser, Hans Wondratschek, and Hans Zassenhaus. Crystallographic Groups of Four-Dimensional Space. Wiley, New York, 1978.
[5] Christopher Cashen. Quasi-isometries between tubular groups. Groups, Geometry, and Dynamics, 4(3):473–516, 2010. doi:10.4171/GGD.
[6] John H. Conway, Heidi Burgiel, and Chaim Goodman-Strauss. The Symmetries of Things. A K Peters, 2008.
[7] John H. Conway, Ronald H. Hardin, and Neil J. A. Sloane. Packing lines, planes, etc.: packings in Grassmannian spaces. Experimental Mathematics, 5(2):139–159, 1996. doi:em/1047565645.
[8] John H. Conway and Derek A. Smith. On Quaternions and Octonions. CRC Press, 2003.
[9] H. S. M. Coxeter. Symmetrical definitions for the binary polyhedral groups. In Finite Groups, volume 1 of Proc. Sympos. Pure Math., pages 64–87. Amer. Math. Soc., 1959.
[10] H. S. M. Coxeter. Regular Polytopes. Dover, 3rd edition, 1973.
[11] H. S. M. Coxeter. Regular and semi-regular polytopes. II. Math. Z., 188:559–591, 1985. doi:10.1007/BF01161657.
[12] H. S. M. Coxeter. Regular Complex Polytopes. Cambridge Univ. Press, 2nd edition, 1991.
[13] James Cruickshank and Seamus Kelly. Rearrangement inequalities and the alternahedron. Discrete & Computational Geometry, 35(2):241–254, 2006. doi:10.1007/s00454-005-1199-6.
[14] Paul de Medeiros and José Figueroa-O’Farrill. Half-BPS M2-brane orbifolds. Advances in Theoretical and Mathematical Physics, 16(5):1349–1408, 2012. doi:10.4310/ATMP.2012.v16.n5.a1.
[15] Patrick Du Val. Homographies, Quaternions and Rotations. Clarendon Press, 1964.
[16] William D. Dunbar. Nonfibering spherical 3-orbifolds. Transactions of the American Mathematical Society, 341(1):121–142, 1994. doi:10.2307/2154616.
[17] Erik Friese and Frieder Ladisch. Affine symmetries of orbit polytopes. Advances in Mathematics, 288:386–425, 2016.
[18] Erik Friese and Frieder Ladisch.
Classification of affine symmetry groups of orbit polytopes. Journal of Algebraic Combinatorics, 48(3):481–509, 2018. URL: https://rdcu.be/cDHw3, doi:10.1007/s10801-017-0804-0.
[19] The GAP Group. GAP – Groups, Algorithms, and Programming, Version 4.11.1, 2021. URL: https://www.gap-system.org.
[20] Edouard Goursat. Sur les substitutions orthogonales et les divisions régulières de l’espace. Annales scientifiques de l’É.N.S. 3$\,{}^{e}\!$ série, 6:9–102, 1889. doi:10.24033/asens.317.
[21] Norman F. M. Henry, editor. International tables for X-ray crystallography, Vol. 1, Symmetry Groups. Kynoch Press, Birmingham, 2nd edition, 1952.
[22] C. Hermann. Translationsgruppen in $n$ Dimensionen. In H. O’Daniel, editor, Zur Struktur und Materie der Festkörper, pages 24–33. Springer-Verlag, Berlin, Göttingen, Heidelberg, 1952. doi:10.1007/978-3-662-29427-7_2.
[23] A. C. Hurley. Finite rotation groups and crystal classes in four dimensions. Mathematical Proceedings of the Cambridge Philosophical Society, 47(4):650–661, 1951. doi:10.1017/S0305004100027109.
[24] A. C. Hurley. Finite rotation groups and crystal classes in four dimensions: II. Revised tables and projection of groups of antisymmetry in three dimensions. In Per Olov Löwdin, editor, Quantum Theory of Atoms, Molecules, and the Solid State: A Tribute to John C. Slater, pages 571–586. Academic Press, 1966.
[25] Heuna Kim and Günter Rote. Congruence testing of point sets in 4 dimensions, March 2016. arXiv:1603.07269.
[26] Heuna Kim and Günter Rote. Congruence testing of point sets in 4-space. In Sándor Fekete and Anna Lubiw, editors, 32nd International Symposium on Computational Geometry (SoCG 2016), volume 51 of Leibniz International Proceedings in Informatics (LIPIcs), pages 48:1–48:16. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPIcs.SOCG.2016.48.
[27] Christian Lange and Marina A. Mikhaîlova. Classification of finite groups generated by reflections and rotations.
Transformation Groups, 21:1155–1201, 2016. doi:10.1007/s00031-016-9385-6.
[28] David W. Lyons. An elementary introduction to the Hopf fibration. Mathematics Magazine, 76(2):87–98, 2003. doi:10.2307/3219300.
[29] M. A. Maerchik. Finite groups generated by pseudoreflections in four-dimensional Euclidean space. Trudy Kirgiz Gos. Univ. Ser. Mat. Nauk, 11:66–72, 1976. (In Russian.)
[30] Mattia Mecchia and Bruno Zimmermann. On finite groups acting on homology 4-spheres and finite subgroups of SO(5). Topology and its Applications, 158(6):741–747, 2011. doi:10.1016/j.topol.2011.01.017.
[31] J. L. Nicolas and G. Robin. Majorations explicites pour le nombre de diviseurs de $n$. Canadian Mathematical Bulletin, 26(4):485–492, 1983. doi:10.4153/CMB-1983-078-5.
[32] G. de B. Robinson. On the orthogonal groups in four dimensions. Mathematical Proceedings of the Cambridge Philosophical Society, 27(1):37–48, 1931. doi:10.1017/S0305004100009312.
[33] Henry Segerman and Saul Schleimer. Puzzling the 120-cell. Notices of the AMS, 62(11):1309–1316, 2015. doi:10.1090/noti1297.
[34] The Sage Developers. SageMath, the Sage Mathematics Software System (Version 9.5), 2022. URL: https://www.sagemath.org.
[35] W. Threlfall and H. Seifert. Topologische Untersuchung der Diskontinuitätsbereiche endlicher Bewegungsgruppen des dreidimensionalen sphärischen Raumes. Math. Annalen, 104:1–70, 1931. doi:10.1007/BF01457920.
[36] W. Threlfall and H. Seifert. Topologische Untersuchung der Diskontinuitätsbereiche endlicher Bewegungsgruppen des dreidimensionalen sphärischen Raumes (Schluß). Math. Annalen, 107:543–586, 1933. doi:10.1007/BF01448910.
[37] Bruno P. Zimmermann. On finite groups acting on spheres and finite subgroups of orthogonal groups. Sib. Èlektron. Mat. Izv., 9:1–12, 2012. URL: http://mi.mathnet.ru/eng/semr/v9/p1, arXiv:1108.2602.
Appendix A Generators for the polyhedral and axial groups

Table 16 gives a complete summary of the polyhedral (Table 10) and axial groups (Table 15), following the numbering by Goursat [20], as extended to the haploid groups by Du Val [15], together with a set of generators for each group. The axial groups can be recognized as having only two numbers different from 2 in their Coxeter name. Our adaptations of Du Val’s names were explained in Table 15 and footnote 19 on p. 19. The top part contains the chiral groups (#20–#32) and the bottom part the achiral ones (#39–#51). Footnote 24: A similar table, containing some four-dimensional reflection groups and their subgroups, appears in Coxeter [11, p. 571], with correspondences between Coxeter’s own notation and Du Val’s names. The very first entry in that table, $[3,3,2]^{+}$, mistakenly refers to Du Val’s group #21 $(T/C_{2};T/C_{2})=\pm\frac{1}{12}[T\times T]$, while it is actually #26${}^{\prime\prime}$ $(O/C_{1};O/C_{1})^{\prime\prime}=+\frac{1}{24}[O\times\overline{O}]$. The fifth entry, $[3,3,2]$, refers to Du Val’s group $(O/C_{1};O/C_{1})^{*}$, while it should actually be $(O/C_{1};O/C_{1})^{*}_{-}$, or more precisely #44${}^{\prime\prime}$ $(O/C_{1};O/C_{1})^{*\prime\prime}_{-}=+\frac{1}{24}[O\times\overline{O}]\cdot 2_{1}$. The confusing ambiguity of Du Val’s names for the groups 44${}^{\prime}$ and 44${}^{\prime\prime}$ mentioned in the caption of Table 15 was apparently not realized by Coxeter. Where appropriate, we include a reference to the numbering of crystallographic point groups according to Brown, Bülow, Neubüser, Wondratschek, Zassenhaus (BBNWZ) [4], see also Appendix D.
In addition to the quaternions defined in (6) in Section 3.7, the following elements are used for generating the groups:
$$\bar{\omega}=\tfrac{1}{2}(-1-i-j-k)\quad\text{(order 3)}$$
$$i_{I}^{\dagger}=\tfrac{1}{2}\bigl(i+\tfrac{-\sqrt{5}-1}{2}j+\tfrac{-\sqrt{5}+1}{2}k\bigr)\quad\text{(order 4)}\quad(26)$$
$$i_{I}^{\prime}=\tfrac{1}{2}\bigl(-\tfrac{\sqrt{5}-1}{2}i-\tfrac{\sqrt{5}+1}{2}j+k\bigr)\quad\text{(order 4)}\quad(27)$$
$\bar{\omega}$ is simply the conjugate quaternion of $\omega$. We tried to reduce the number of generators by trial and error, confirming by computer that the generated groups did not change. For a few groups, the groups given by Conway and Smith are not identical to the groups of Du Val, and our table lists both possibilities. Conway and Smith [8, Tables 4.2–4.3] specified the five groups of type $[I\times\bar{I}]$ (#32, #32${}^{\prime}$ and #51–#51${}^{\prime\prime}$) by the generating set “$[\omega,\omega],[i_{I},\pm i_{I}^{\prime}]$”, possibly extended by $*$ or $-{*}$ for the achiral groups, but they did not define what $i_{I}^{\prime}$ is. (Footnote 25: Five years later, the tables were almost literally reproduced in another book [6, Chapter 26], still without a definition of $i_{I}^{\prime}$.) We tried all 120 elements of $2I$, and it turned out that (27) is the only value that works in this way. We don’t see how we could have predicted precisely this element, and we have no explanation for it. Du Val [15], on the other hand, specifies generators for these five groups in terms of the quaternion $i_{I}^{\dagger}$ defined in (26), which is obtained by flipping the sign of $\sqrt{5}$ in the expression for $i_{I}=\tfrac{1}{2}(i+\tfrac{\sqrt{5}-1}{2}j+\tfrac{\sqrt{5}+1}{2}k)$. This alternative choice generates a group $2I^{\dagger}$ that is different from $2I$.
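The stated orders of these elements can be verified with a quick floating-point check of quaternion powers (a numeric sanity check only; the helper names are ours):

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qpow(q, n):
    out = (1.0, 0.0, 0.0, 0.0)
    for _ in range(n):
        out = qmul(out, q)
    return out

def is_close(p, q, tol=1e-9):
    return all(abs(x - y) < tol for x, y in zip(p, q))

s5 = math.sqrt(5)
omega_bar = (-0.5, -0.5, -0.5, -0.5)                  # (1/2)(-1-i-j-k)
i_I_dag   = (0.0, 0.5, (-s5 - 1) / 4, (-s5 + 1) / 4)  # eq. (26)
i_I_prime = (0.0, -(s5 - 1) / 4, -(s5 + 1) / 4, 0.5)  # eq. (27)
```

Both $i_{I}^{\dagger}$ and $i_{I}^{\prime}$ are unit quaternions with zero real part, so their squares are $-1$ and they have order 4, while $\bar{\omega}$ has real part $-\frac{1}{2}$ and order 3.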
With this setup, it is not possible to use the simple extending elements $*$ and $-{*}$ for the three achiral extensions #51–#51${}^{\prime\prime}$: For example, the square of the element ${*}[i_{I}^{\dagger},i_{I}]$ is $[i_{I}i_{I}^{\dagger},i^{\dagger}_{I}i_{I}]$ with $i_{I}i^{\dagger}_{I}=\frac{1}{4}+\frac{\sqrt{5}}{4}(i+j-k)$, and this element is in neither of the groups $2I$ or $2I^{\dagger}$. Du Val [15, p. 55–56] gives a thorough and transparent exposition of these groups and explains why they represent the symmetries of the 4-simplex. For the axial groups of type $\frac{1}{12}[T\times\bar{T}]$ (#40 and #40${}^{\prime}$), the natural generators from an algebraic viewpoint involve the quaternion $\bar{\omega}$, and these were chosen by Conway and Smith. However, the axis that is kept invariant by the groups is then spanned by the quaternion $j-k$. With $*[i_{O},\pm i_{O}]$ as the orientation-reversing generator, the invariant axis becomes the real axis, and only in this representation are the groups subgroups of the larger axial group $\pm\frac{1}{24}[O\times O]$ (#44).

Appendix B Orbit polytopes for tubical groups with special starting points

We show polar orbit polytopes for the tubical groups of cyclic type with all choices of special starting points. Each subsection considers a left tubical group $G$ together with a representative $f$-fold rotation center $p$ of $G^{h}$, corresponding to an entry in Table 3. The particular data are given in the caption. In addition, we indicate the subgroup $H$ of $G$ of elements that preserve $K_{p}$. An alternate group refers to an index-2 dihedral-type supergroup of $G$ that, for an appropriate starting point on $K_{p}$, produces the same orbit as $G$. Two of these groups were already illustrated in the main text (Figures 12 and 13), and we follow the same conventions as in these figures: On the top left, we show the $G^{h}$-orbit polytope of $p$, and on the top right the spherical Voronoi diagram of that orbit.
Then we show the cells of the polar $G$-orbit polytopes of a starting point on $K_{p}$, for different values of $n$, in increasing order of the size of the orbit. For each cell, we indicate the values of $n$, and in addition, the counterclockwise angle (as seen from the top) by which the group rotates the cell as it proceeds to the next cell above. A blue vertical line indicates the cell axis, the direction towards the next cell along $K_{p}$. For small values of $n$, this axis sometimes exits through a vertex or an edge of the cell, but for large enough $n$ it goes through the top face where the next cell is attached. When the same orbit arises for several values of $n$, then the specified rotation angle is the unique valid angle only for the smallest value $n_{0}$ that is given. For a larger value $n=n_{0}f$, this can be combined with arbitrary multiples of an $f$-fold rotation. For example, in Figure 36, we have the same cell for $n=5$ and $n=15$. The specified rotation angle $(\frac{1}{3}+\frac{1}{30})\cdot 2\pi$ is the unique valid angle between consecutive cells in the group $\pm[I\times C_{5}]$, but in the larger group $\pm[I\times C_{15}]$, it can be combined with all multiples of $\frac{2}{3}\pi$. That is, all three rotation angles $\frac{1}{15}\pi$, $(\frac{2}{3}+\frac{1}{15})\pi$, and $(\frac{4}{3}+\frac{1}{15})\pi$ are valid. In some cases, such as $n=18$, the angle is never unique, and this is indicated by a free parameter $k$ in the angle specification, which can take any integer value. By observing the rotation angles for the successive cells in the figures, one can recognize the pattern that they follow. 
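The arithmetic in this example is easy to check with exact fractions (angles measured in units of $2\pi$; a minimal sketch):

```python
from fractions import Fraction as F

# Base rotation angle (1/3 + 1/30)·2pi for n = 5, in units of 2pi:
base = F(1, 3) + F(1, 30)

# In the larger group with n = 15, arbitrary multiples of the 3-fold
# rotation (one third of a full turn) may be added; reduce mod 1 turn:
angles = sorted((base + m * F(1, 3)) % 1 for m in range(3))
```

In units of $\pi$ (twice the values above), the three angles are $\frac{1}{15}\pi$, $(\frac{2}{3}+\frac{1}{15})\pi$, and $(\frac{4}{3}+\frac{1}{15})\pi$, as stated in the text.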
B.1 $\pm[I\times C_{n}]$

B.1.1 $\pm[I\times C_{n}]$, 3-fold rotation center

$n=1,3$: $(\frac{2}{3}+\frac{1}{6})\cdot 2\pi$; $n=2,6$: $(\frac{1}{3}+\frac{1}{12})\cdot 2\pi$; $n=9$: $(\frac{k}{3}+\frac{1}{18})\cdot 2\pi$; $n=4,12$: $(\frac{2}{3}+\frac{1}{24})\cdot 2\pi$; $n=5,15$: $(\frac{1}{3}+\frac{1}{30})\cdot 2\pi$; $n=18$: $(\frac{k}{3}+\frac{1}{36})\cdot 2\pi$; $n=7,21$: $(\frac{2}{3}+\frac{1}{42})\cdot 2\pi$; $n=8,24$: $(\frac{1}{3}+\frac{1}{48})\cdot 2\pi$; $n=27$: $(\frac{k}{3}+\frac{1}{54})\cdot 2\pi$; $n=10,30$: $(\frac{2}{3}+\frac{1}{60})\cdot 2\pi$; $n=11,33$: $(\frac{1}{3}+\frac{1}{66})\cdot 2\pi$; $n=36$: $(\frac{k}{3}+\frac{1}{72})\cdot 2\pi$; $n=13,39$: $(\frac{2}{3}+\frac{1}{78})\cdot 2\pi$; $n=14,42$: $(\frac{1}{3}+\frac{1}{84})\cdot 2\pi$; $n=45$: $(\frac{k}{3}+\frac{1}{90})\cdot 2\pi$.

B.1.2 $\pm[I\times C_{n}]$, 2-fold rotation center

$n=1,2$: $(\frac{1}{2}+\frac{1}{4})\cdot 2\pi$; $n=4$: $(\frac{k}{2}+\frac{1}{8})\cdot 2\pi$; $n=3,6$: $(\frac{1}{2}+\frac{1}{12})\cdot 2\pi$; $n=8$: $(\frac{k}{2}+\frac{1}{16})\cdot 2\pi$; $n=5,10$: $(\frac{1}{2}+\frac{1}{20})\cdot 2\pi$; $n=12$: $(\frac{k}{2}+\frac{1}{24})\cdot 2\pi$; $n=7,14$: $(\frac{1}{2}+\frac{1}{28})\cdot 2\pi$; $n=16$: $(\frac{k}{2}+\frac{1}{32})\cdot 2\pi$; $n=9,18$: $(\frac{1}{2}+\frac{1}{36})\cdot 2\pi$; $n=20$: $(\frac{k}{2}+\frac{1}{40})\cdot 2\pi$; $n=11,22$: $(\frac{1}{2}+\frac{1}{44})\cdot 2\pi$; $n=24$: $(\frac{k}{2}+\frac{1}{48})\cdot 2\pi$; $n=13,26$: $(\frac{1}{2}+\frac{1}{52})\cdot 2\pi$; $n=28$: $(\frac{k}{2}+\frac{1}{56})\cdot 2\pi$; $n=15,30$: $(\frac{1}{2}+\frac{1}{60})\cdot 2\pi$.

B.2 $\pm[O\times C_{n}]$

B.2.1 $\pm[O\times C_{n}]$, 4-fold rotation center

$n=1,2,4$: $(\frac{3}{4}+\frac{1}{8})\cdot 2\pi$; $n=8$: $(\frac{k}{4}+\frac{1}{16})\cdot 2\pi$; $n=3,6,12$: $(\frac{1}{4}+\frac{1}{24})\cdot 2\pi$; $n=16$: $(\frac{k}{4}+\frac{1}{32})\cdot 2\pi$; $n=5,10,20$: $(\frac{3}{4}+\frac{1}{40})\cdot 2\pi$; $n=24$: $(\frac{k}{4}+\frac{1}{48})\cdot 2\pi$; $n=7,14,28$: $(\frac{1}{4}+\frac{1}{56})\cdot 2\pi$; $n=32$: $(\frac{k}{4}+\frac{1}{64})\cdot 2\pi$; $n=9,18,36$: $(\frac{3}{4}+\frac{1}{72})\cdot 2\pi$; $n=40$: $(\frac{k}{4}+\frac{1}{80})\cdot 2\pi$; $n=11,22,44$: $(\frac{1}{4}+\frac{1}{88})\cdot 2\pi$; $n=48$: $(\frac{k}{4}+\frac{1}{96})\cdot 2\pi$; $n=13,26,52$: $(\frac{3}{4}+\frac{1}{104})\cdot 2\pi$; $n=56$: $(\frac{k}{4}+\frac{1}{112})\cdot 2\pi$; $n=15,30,60$: $(\frac{1}{4}+\frac{1}{120})\cdot 2\pi$.

B.2.2 $\pm[O\times C_{n}]$, 3-fold rotation center

$n=1,3$: $(\frac{2}{3}+\frac{1}{6})\cdot 2\pi$; $n=2,6$: $(\frac{1}{3}+\frac{1}{12})\cdot 2\pi$; $n=9$: $(\frac{k}{3}+\frac{1}{18})\cdot 2\pi$; $n=4,12$: $(\frac{2}{3}+\frac{1}{24})\cdot 2\pi$; $n=5,15$: $(\frac{1}{3}+\frac{1}{30})\cdot 2\pi$; $n=18$: $(\frac{k}{3}+\frac{1}{36})\cdot 2\pi$; $n=7,21$: $(\frac{2}{3}+\frac{1}{42})\cdot 2\pi$; $n=8,24$: $(\frac{1}{3}+\frac{1}{48})\cdot 2\pi$; $n=27$: $(\frac{k}{3}+\frac{1}{54})\cdot 2\pi$; $n=10,30$: $(\frac{2}{3}+\frac{1}{60})\cdot 2\pi$; $n=11,33$: $(\frac{1}{3}+\frac{1}{66})\cdot 2\pi$; $n=36$: $(\frac{k}{3}+\frac{1}{72})\cdot 2\pi$; $n=13,39$: $(\frac{2}{3}+\frac{1}{78})\cdot 2\pi$; $n=14,42$: $(\frac{1}{3}+\frac{1}{84})\cdot 2\pi$; $n=45$: $(\frac{k}{3}+\frac{1}{90})\cdot 2\pi$.

B.2.3 $\pm[O\times C_{n}]$, 2-fold rotation center

$n=1,2$: $(\frac{1}{2}+\frac{1}{4})\cdot 2\pi$; $n=4$: $(\frac{k}{2}+\frac{1}{8})\cdot 2\pi$; $n=3,6$: $(\frac{1}{2}+\frac{1}{12})\cdot 2\pi$; $n=8$: $(\frac{k}{2}+\frac{1}{16})\cdot 2\pi$; $n=5,10$: $(\frac{1}{2}+\frac{1}{20})\cdot 2\pi$; $n=12$: $(\frac{k}{2}+\frac{1}{24})\cdot 2\pi$; $n=7,14$: $(\frac{1}{2}+\frac{1}{28})\cdot 2\pi$; $n=16$: $(\frac{k}{2}+\frac{1}{32})\cdot 2\pi$; $n=9,18$: $(\frac{1}{2}+\frac{1}{36})\cdot 2\pi$; $n=20$: $(\frac{k}{2}+\frac{1}{40})\cdot 2\pi$; $n=11,22$: $(\frac{1}{2}+\frac{1}{44})\cdot 2\pi$; $n=24$: $(\frac{k}{2}+\frac{1}{48})\cdot 2\pi$; $n=13,26$: $(\frac{1}{2}+\frac{1}{52})\cdot 2\pi$; $n=28$: $(\frac{k}{2}+\frac{1}{56})\cdot 2\pi$; $n=15,30$: $(\frac{1}{2}+\frac{1}{60})\cdot 2\pi$.

B.3 $\pm\frac{1}{2}[O\times C_{2n}]$

B.3.1 $\pm\frac{1}{2}[O\times C_{2n}]$, 3-fold rotation center

$n=1,3$: $(\frac{2}{3}+\frac{1}{6})\cdot 2\pi$; $n=2,6$: $(\frac{1}{3}+\frac{1}{12})\cdot 2\pi$; $n=9$: $(\frac{k}{3}+\frac{1}{18})\cdot 2\pi$; $n=4,12$: $(\frac{2}{3}+\frac{1}{24})\cdot 2\pi$; $n=5,15$: $(\frac{1}{3}+\frac{1}{30})\cdot 2\pi$; $n=18$: $(\frac{k}{3}+\frac{1}{36})\cdot 2\pi$; $n=7,21$: $(\frac{2}{3}+\frac{1}{42})\cdot 2\pi$; $n=8,24$: $(\frac{1}{3}+\frac{1}{48})\cdot 2\pi$; $n=27$: $(\frac{k}{3}+\frac{1}{54})\cdot 2\pi$; $n=10,30$: $(\frac{2}{3}+\frac{1}{60})\cdot 2\pi$; $n=11,33$: $(\frac{1}{3}+\frac{1}{66})\cdot 2\pi$; $n=36$: $(\frac{k}{3}+\frac{1}{72})\cdot 2\pi$; $n=13,39$: $(\frac{2}{3}+\frac{1}{78})\cdot 2\pi$; $n=14,42$: $(\frac{1}{3}+\frac{1}{84})\cdot 2\pi$; $n=45$: $(\frac{k}{3}+\frac{1}{90})\cdot 2\pi$.

B.3.2 $\pm\frac{1}{2}[O\times C_{2n}]$, 2-fold rotation center

$n=1$: $(\frac{k}{2}+\frac{1}{2})\cdot 2\pi$; $n=3$: $(\frac{k}{2}+\frac{1}{6})\cdot 2\pi$; $n=2$: $(\frac{1}{2}+\frac{1}{8})\cdot 2\pi$; $n=5$: $(\frac{k}{2}+\frac{1}{10})\cdot 2\pi$; $n=7$: $(\frac{k}{2}+\frac{1}{14})\cdot 2\pi$; $n=4$: $(\frac{1}{2}+\frac{1}{16})\cdot 2\pi$; $n=9$: $(\frac{k}{2}+\frac{1}{18})\cdot 2\pi$; $n=11$: $(\frac{k}{2}+\frac{1}{22})\cdot 2\pi$; $n=6$: $(\frac{1}{2}+\frac{1}{24})\cdot 2\pi$; $n=13$: $(\frac{k}{2}+\frac{1}{26})\cdot 2\pi$; $n=15$: $(\frac{k}{2}+\frac{1}{30})\cdot 2\pi$; $n=8$: $(\frac{1}{2}+\frac{1}{32})\cdot 2\pi$.

B.4 $\pm[T\times C_{n}]$

B.4.1 $\pm[T\times C_{n}]$, 3-fold rotation center

$n=1,3$: $(\frac{2}{3}+\frac{1}{6})\cdot 2\pi$; $n=2,6$: $(\frac{1}{3}+\frac{1}{12})\cdot 2\pi$; $n=9$: $(\frac{k}{3}+\frac{1}{18})\cdot 2\pi$; $n=4,12$: $(\frac{2}{3}+\frac{1}{24})\cdot 2\pi$; $n=5,15$: $(\frac{1}{3}+\frac{1}{30})\cdot 2\pi$; $n=18$: $(\frac{k}{3}+\frac{1}{36})\cdot 2\pi$; $n=7,21$: $(\frac{2}{3}+\frac{1}{42})\cdot 2\pi$; $n=8,24$: $(\frac{1}{3}+\frac{1}{48})\cdot 2\pi$; $n=27$: $(\frac{k}{3}+\frac{1}{54})\cdot 2\pi$; $n=10,30$: $(\frac{2}{3}+\frac{1}{60})\cdot 2\pi$; $n=11,33$: $(\frac{1}{3}+\frac{1}{66})\cdot 2\pi$; $n=36$: $(\frac{k}{3}+\frac{1}{72})\cdot 2\pi$; $n=13,39$: $(\frac{2}{3}+\frac{1}{78})\cdot 2\pi$; $n=14,42$: $(\frac{1}{3}+\frac{1}{84})\cdot 2\pi$; $n=45$: $(\frac{k}{3}+\frac{1}{90})\cdot 2\pi$.

B.4.2 $\pm[T\times C_{n}]$, 2-fold rotation center

$n=1,2$: $(\frac{1}{2}+\frac{1}{4})\cdot 2\pi$; $n=4$: $(\frac{k}{2}+\frac{1}{8})\cdot 2\pi$; $n=3,6$: $(\frac{1}{2}+\frac{1}{12})\cdot 2\pi$; $n=8$: $(\frac{k}{2}+\frac{1}{16})\cdot 2\pi$; $n=5,10$: $(\frac{1}{2}+\frac{1}{20})\cdot 2\pi$; $n=12$: $(\frac{k}{2}+\frac{1}{24})\cdot 2\pi$; $n=7,14$: $(\frac{1}{2}+\frac{1}{28})\cdot 2\pi$; $n=16$: $(\frac{k}{2}+\frac{1}{32})\cdot 2\pi$; $n=9,18$: $(\frac{1}{2}+\frac{1}{36})\cdot 2\pi$; $n=20$: $(\frac{k}{2}+\frac{1}{40})\cdot 2\pi$; $n=11,22$: $(\frac{1}{2}+\frac{1}{44})\cdot 2\pi$; $n=24$: $(\frac{k}{2}+\frac{1}{48})\cdot 2\pi$; $n=13,26$: $(\frac{1}{2}+\frac{1}{52})\cdot 2\pi$; $n=28$: $(\frac{k}{2}+\frac{1}{56})\cdot 2\pi$; $n=15,30$: $(\frac{1}{2}+\frac{1}{60})\cdot 2\pi$.

B.5 $\pm\frac{1}{3}[T\times C_{3n}]$

B.5.1 $\pm\frac{1}{3}[T\times C_{3n}]$, 3-fold (type I) rotation center

$n=1$: $(\frac{k}{3}+\frac{1}{2})\cdot 2\pi$; $n=4$: $(\frac{k}{3}+\frac{1}{8})\cdot 2\pi$; $n=2$: $(\frac{2}{3}+\frac{1}{12})\cdot 2\pi$; $n=7$: $(\frac{k}{3}+\frac{1}{14})\cdot 2\pi$; $n=3$: $(\frac{1}{3}+\frac{1}{18})\cdot 2\pi$; $n=10$: $(\frac{k}{3}+\frac{1}{20})\cdot 2\pi$; $n=13$: $(\frac{k}{3}+\frac{1}{26})\cdot 2\pi$; $n=5$: $(\frac{2}{3}+\frac{1}{30})\cdot 2\pi$; $n=16$: $(\frac{k}{3}+\frac{1}{32})\cdot 2\pi$; $n=6$: $(\frac{1}{3}+\frac{1}{36})\cdot 2\pi$; $n=19$: $(\frac{k}{3}+\frac{1}{38})\cdot 2\pi$; $n=22$: $(\frac{k}{3}+\frac{1}{44})\cdot 2\pi$.

B.5.2 $\pm\frac{1}{3}[T\times C_{3n}]$, 3-fold (type II) rotation center
$n=2$: $(\frac{k}{3}+\frac{1}{4})\cdot 2\pi$; $n=1$: $(\frac{1}{3}+\frac{1}{6})\cdot 2\pi$; $n=5$: $(\frac{k}{3}+\frac{1}{10})\cdot 2\pi$; $n=8$: $(\frac{k}{3}+\frac{1}{16})\cdot 2\pi$; $n=3$: $(\frac{2}{3}+\frac{1}{18})\cdot 2\pi$; $n=11$: $(\frac{k}{3}+\frac{1}{22})\cdot 2\pi$; $n=4$: $(\frac{1}{3}+\frac{1}{24})\cdot 2\pi$; $n=14$: $(\frac{k}{3}+\frac{1}{28})\cdot 2\pi$; $n=17$: $(\frac{k}{3}+\frac{1}{34})\cdot 2\pi$; $n=6$: $(\frac{2}{3}+\frac{1}{36})\cdot 2\pi$; $n=20$: $(\frac{k}{3}+\frac{1}{40})\cdot 2\pi$; $n=7$: $(\frac{1}{3}+\frac{1}{42})\cdot 2\pi$.

B.5.3 $\pm\frac{1}{3}[T\times C_{3n}]$, 2-fold rotation center

$n=1,2$: $(\frac{1}{2}+\frac{1}{4})\cdot 2\pi$; $n=4$: $(\frac{k}{2}+\frac{1}{8})\cdot 2\pi$; $n=3,6$: $(\frac{1}{2}+\frac{1}{12})\cdot 2\pi$; $n=8$: $(\frac{k}{2}+\frac{1}{16})\cdot 2\pi$; $n=5,10$: $(\frac{1}{2}+\frac{1}{20})\cdot 2\pi$; $n=12$: $(\frac{k}{2}+\frac{1}{24})\cdot 2\pi$; $n=7,14$: $(\frac{1}{2}+\frac{1}{28})\cdot 2\pi$; $n=16$: $(\frac{k}{2}+\frac{1}{32})\cdot 2\pi$; $n=9,18$: $(\frac{1}{2}+\frac{1}{36})\cdot 2\pi$; $n=20$: $(\frac{k}{2}+\frac{1}{40})\cdot 2\pi$; $n=11,22$: $(\frac{1}{2}+\frac{1}{44})\cdot 2\pi$; $n=24$: $(\frac{k}{2}+\frac{1}{48})\cdot 2\pi$.

Appendix C The number of groups of given order

We will see that the number of groups of order $N$ is always at least $N/2$, and at most $O(N^{2})$.
If $N$ is an odd prime, there are exactly $(N+3)/2$ groups, namely the torus translation groups $\square_{1,N}^{(s)}$ for $0\leq s\leq(N-1)/2$ and $\square_{N,1}^{((1-N)/2)}$. The richest class of groups is that of the toroidal groups, and among them, the most numerous are the torus translation groups, of type $\square$: For each divisor $m$ of $N$, there are $\sim n/2$ groups $\square_{m,n}^{(s)}$, where $n=N/m$. Thus, the number of groups is about $1/2$ times the sum $\sigma(N)$ of divisors of $N$, which is bounded by $N^{1+\frac{1+O(1/\log\log N)}{\log_{2}\ln N}}\leq N^{2}$ [31]. The upper bound of $O(N^{2})$ is very weak; the actual bound is slightly superlinear. The number of groups of type $\square\cdot$ is of similar magnitude, provided that $N$ is even. For all the other types, there is at most one group for every divisor of $N$, except for the swapturn groups, whose number is related to the number of integer points on the circle $a^{2}+b^{2}=N/4$, and this number is at most $N$. From all the remaining classes of groups (tubical, polyhedral, or axial), there can be only a constant number of groups of a given order.

The number of groups of order 100. As an exercise, let us compute the number of point groups of order $N=100$.
We proceed through the toroidal classes of groups in Table 6 one by one. For the pure translation groups of type $\square$, we can write $100=mn=1\cdot 100=2\cdot 50=4\cdot 25=5\cdot 20=10\cdot 10=20\cdot 5=25\cdot 4=50\cdot 2=100\cdot 1$ with accordingly $50+26+13+10+6+3+2+2+1=113$ choices of $s$, see the remark after (20) in Section 7.5. For the flip groups of type $\square\cdot$ of order $100=2mn$, we have to factor 50 instead of 100. The possibilities are $50=1\cdot 50=2\cdot 25=5\cdot 10=10\cdot 5=25\cdot 2=50\cdot 1$ with $25+13+5+3+1+1=48$ choices of $s$. For the swap groups $\square^{\mathbf{pm}}_{m,n}$ of order $4mn$, we have to split $25=mn$ into two factors $m$ and $n$, each larger than 1. There is one possibility: $25=5\times 5$. For the groups $\square^{\mathbf{pg}}_{m,n}$, only the first factor $m$ must be larger than 1. This gives 2 choices. For $\square^{\mathbf{cm}}_{m,n}$ of order $2mn$, $mn=50$ must be split into two factors of the same parity. This is impossible since $mn\equiv 2\pmod{4}$.
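As a sanity check on these totals, the per-factorization counts quoted above can be reproduced by a short Python sketch. The rule used below for the number of admissible parameters $s$ is inferred from the counts listed in the text, not taken from the remark after (20) in Section 7.5, which is not restated here:

```python
def s_choices(m, n):
    # Number of admissible parameters s for the factorization N = m*n.
    # Rule inferred from the counts quoted in the text (an assumption):
    # floor(n/2) + 1 if n is odd or m is even, and n/2 otherwise.
    return n // 2 + 1 if n % 2 == 1 or m % 2 == 0 else n // 2

def translation_groups(N):
    # Sum over all ordered factorizations N = m * n.
    return sum(s_choices(m, N // m) for m in range(1, N + 1) if N % m == 0)

print(translation_groups(100))  # -> 113, the translation-group count
print(translation_groups(50))   # -> 48, the flip-group count
# The same rule reproduces the count (N+3)/2 for odd primes N (Appendix C):
assert all(translation_groups(p) == (p + 3) // 2 for p in (3, 5, 7, 11, 13))
```

The nine summands $50+26+13+10+6+3+2+2+1$ and the six summands $25+13+5+3+1+1$ come out exactly as in the text.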
Thus, in total we have 3 swap groups of this type, and clearly the same number, 3, of swap groups of the mirrored type. Finally, for the full torus swap groups, almost all types have order $8mn$, which cannot equal 100. We only need to consider the groups of type $\square^{\mathbf{c2mm}}_{m,n}$, of order $4mn$. We have to split $100/4=25$ into two factors $\geq 3$ of the same parity. There is one possibility: $25=5\times 5$. In total, we get $113+48+3+3+1=168$ chiral toroidal groups of order 100.

Let us turn to the achiral groups: For the reflection groups, we have to consider all factorizations $100=2mn$ (types $\mathbf{pm}$ and $\mathbf{pg}$) or $100=4mn$ (type $\mathbf{cm}$). This gives $2\times\sigma_{0}(50)+\sigma_{0}(25)=2\times 6+3=15$ groups, where $\sigma_{0}$ denotes the number of divisors of a number. For the full reflection groups $\square{+}$, we have to consider all factorizations $100=4mn$ or $100=8mn$, respectively, where in one case ($\mathbf{p2mg}$) we distinguish the order of the factors. We get $2+3+2+0=7$ possibilities. For general $N$, there are $2\lceil\sigma_{0}(\frac{N}{4})/2\rceil+\sigma_{0}(\frac{N}{4})+\lceil\sigma_{0}(\frac{N}{8})/2\rceil$ full reflection groups of order $N$, where $\sigma_{0}(x)=0$ if $x$ is not an integer. For the swapturn groups $\square{\scriptstyle\circlearrowleft}$, we must have $100=4(a^{2}+b^{2})$ with $a\geq b\geq 0$. There are two possibilities: $(a,b)=(5,0)$ or $(4,3)$. For the full torus groups $\square{+}{\times}$, the order would have to be a multiple of 8; so there are no such groups of order $100$. In total, we get $15+7+2=24$ achiral toroidal groups of order 100, and 192 toroidal groups altogether.
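The achiral tally can be cross-checked directly from the divisor and circle-point counts used above; in the following sketch, `sigma0` and `circle_points` are our own helper names:

```python
from math import ceil, isqrt

def sigma0(x):
    # Number of divisors; 0 if x is not a positive integer.
    if x != int(x) or x < 1:
        return 0
    x = int(x)
    return sum(1 for d in range(1, x + 1) if x % d == 0)

def circle_points(c):
    # Integer points (a, b) with a >= b >= 0 on the circle a^2 + b^2 = c.
    if c != int(c):
        return 0
    c = int(c)
    return sum(1 for b in range(isqrt(c) + 1)
               for a in range(b, isqrt(c) + 1) if a * a + b * b == c)

N = 100
reflection = 2 * sigma0(N / 2) + sigma0(N / 4)           # types pm, pg, cm
full_reflection = (2 * ceil(sigma0(N / 4) / 2)
                   + sigma0(N / 4) + ceil(sigma0(N / 8) / 2))
swapturn = circle_points(N / 4)                          # a^2 + b^2 = 25
print(reflection, full_reflection, swapturn)             # -> 15 7 2
assert reflection + full_reflection + swapturn == 24
# Together with the 168 chiral groups: 192 toroidal groups of order 100.
```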
$N=100$ does not occur as the order of any of the other types of groups. So 192 is the total number of 4-dimensional point groups of order $100$.

Enantiomorphic pairs. As an advanced exercise, we can ask how many of the 168 chiral groups of order 100 are their own mirror image. For the groups of type $\square$, we are looking for a lattice of translations of size 100 that has an orientation-reversing symmetry. If it is symmetric with respect to a horizontal axis, then, according to Lemma 7.7, the possibilities are an $m\times n$ rectangular grid of $mn$ points or a rhombic grid of $2mn$ points. In this case, it is also symmetric with respect to a vertical axis. Thus, we have to split $100=mn$ and $50=mn$ into two factors $m$ and $n$. The order of the factors plays no role, because the reflection swaps the factors. We have 5 possibilities for $100=1\cdot 100=2\cdot 50=4\cdot 25=5\cdot 20=10\cdot 10$ and 3 possibilities for $50=1\cdot 50=2\cdot 25=5\cdot 10$, which gives $5+3=8$ possibilities in total. (Alternatively, adding a vertical and a horizontal mirror to such a translational subgroup will produce a group of type $\square^{\mathbf{p2mm}}$ or $\square^{\mathbf{c2mm}}$. So we can equivalently count the groups of these types of order $4N=400$.) There is also the possibility that the lattice is symmetric with respect to a swapturn operation $\scriptstyle\circlearrowleft$.
The number of these groups equals the number of groups of type $\square{\scriptstyle\circlearrowleft}$ of order $4N=400$. It can be computed as the number of integer points $(a,b)$ on the circle $100=a^{2}+b^{2}$ with $a\geq b\geq 0$. There are two possibilities: $(10,0)$ and $(8,6)$. We have overcounted the lattices that are symmetric with respect to both $+$ and $\scriptstyle\circlearrowleft$, in other words, the upright or slanted square lattices. There is one lattice of this type: the $10\times 10$ upright lattice. In total, $8+2-1=9$ groups among the 113 groups of type $\square$ are equal to their own mirror image. For the groups of type $\square\cdot$, we can repeat the same game, except that we are looking for a translation lattice of half the size, 50. For the lattices with $+$ symmetry, we have 3 possibilities for $50=1\cdot 50=2\cdot 25=5\cdot 10$, and 2 possibilities for $25=1\cdot 25=5\cdot 5$, giving $3+2=5$ possibilities in total. There are two possibilities for $50=a^{2}+b^{2}$ with $a\geq b\geq 0$: $(7,1)$ and $(5,5)$. We have to subtract 1 for the slanted $5\times 5$ grid, for a total of $5+2-1=6$ groups among the 48 flip groups. The mirrors of the swap groups of one kind are the swap groups of the other kind, and hence none of them is its own mirror image. The groups of type $\square^{\mathbf{c2mm}}$ are easy to handle: the two parameters $m$ and $n$ must be equal. We have one such group, $\square^{\mathbf{c2mm}}_{5,5}$. In total, $9+6+1=16$ chiral groups are their own mirror images. The remaining $168-16=152$ chiral groups consist of enantiomorphic pairs.

The number of groups of order 7200. To look at a more interesting example, let us count the groups of order 7200.
The count of toroidal groups follows the same calculation as above, and it amounts to 19,319 chiral and 216 achiral groups. In addition, we have 22 tubical groups: $\pm[I\times C_{60}],\pm[I\times D_{60}],\pm[O\times C_{150}],\pm[O\times D_{150}],\pm[T\times C_{300}],\pm[T\times D_{300}],\pm\frac{1}{2}[O\times D_{300}],\pm\frac{1}{2}[O\times\overline{D}_{300}],\pm\frac{1}{2}[O\times C_{300}],\pm\frac{1}{6}[O\times D_{900}],\pm\frac{1}{3}[T\times C_{900}],$ and their mirrors. Finally, there is one polyhedral group $\pm[I\times I]$. In total, we have $19{,}319+22+1=19{,}342$ chiral groups and 216 achiral ones.

The number of groups of order at most $M$. While the number of groups of a given order $N$ fluctuates between a linear lower bound and a slightly superlinear upper bound, the “average number” can be estimated quite precisely: We have seen that the number of groups of order $N$ is of order $\Theta(\sigma(N))$, where $\sigma(N)$ is the sum of divisors of $N$. If we look at all groups of order at most $M$, we can sum over all potential divisors $d$ and get $$\sum_{N=1}^{M}\sigma(N)=\sum_{d=1}^{M}d\lfloor M/d\rfloor=\Theta(M^{2}).$$ Thus, the number of four-dimensional groups of order at most $M$ is $\Theta(M^{2})$. The majority of these groups are chiral, but the achiral ones are also numerous: there is essentially one swapturn group for each integer point $(a,b)$ in the disk $a^{2}+b^{2}\leq M/4$, with roughly a factor 8 of overcounting of symmetric points, and already this gives $\Theta(M)$ achiral groups.

Appendix D The crystallographic point groups

Brown, Bülow, Neubüser, Wondratschek, and Zassenhaus classified the four-dimensional crystallographic space groups in 1978 [4]. They grouped them by the underlying point groups (geometric crystal classes, or $\mathbb{Q}$-classes), and assigned numbers to these groups. The crystallographic point groups are characterized as having some lattice that they leave invariant.
There are 227 crystallographic point groups, sorted into 33 crystal systems according to the holohedry, i.e., the symmetry group of the underlying lattice. Tables 17–18 give a reference from the 227 groups in the list of [4, Table 1C, pp. 79–260] to our notation (for the toroidal groups) or Conway and Smith’s notation (for the remaining groups). When appropriate, we list two enantiomorphic groups. The first classification of the four-dimensional crystallographic point groups was obtained by Hurley in 1951 [23], see Section 10.2. A few mistakes were later corrected [24]. All these groups are subgroups of only four maximal groups:

• $31/07=\pm\frac{1}{60}[I\times\bar{I}]\cdot 2=[[3,3,3]]$ (the simplex and its polar, order 240)

• $33/16=\pm\frac{1}{2}[O\times O]\cdot 2=[3,4,3]$ (the 24-cell, order 1152). Taking the permutations of $(\pm 1,\pm 1,0,0)$ as the vertices of a 24-cell, this set generates a lattice, and this lattice is invariant under the group. The symmetries of the hypercube/cross-polytope, $32/21=\pm\frac{1}{6}[O\times O]\cdot 2=[3,3,4]$, are contained in this group as a subgroup.

• $30/13=\square^{\mathbf{p4mm}\textrm{U}}_{6}=\pm\frac{1}{2}[\bar{D}_{12}\times\bar{D}_{12}]\cdot 2$, order 288. The invariant lattice is the Cartesian product of two hexagonal plane lattices.

• $20/22=\square^{\mathbf{p2mm}}_{6,4}=\pm\frac{1}{24}[D_{24}\times D_{24}^{(5)}]\cdot 2^{(0,0)}$, order 96.
The invariant lattice is the Cartesian product of a hexagonal lattice and a square lattice.

The last three items in Table 18 are the “pseudo crystal groups” of Hurley [24]: each such group consists of transformations that can individually occur in crystallographic groups, but as a whole it is not a crystallographic group. All its proper subgroups are crystallographic groups.

Appendix E Geometric interpretation of oriented great circles

Section 4.1.2 introduced the notation $\vec{K}_{p}^{q}$ to denote oriented great circles on $S^{3}$. Here we give a geometric interpretation of the orientation. In fact, we will give two equivalent geometric interpretations. However, in the boundary cases $p=q$ and $q=-p$, one or the other of the interpretations loses its meaning, and only by combining both interpretations do we get a consistent definition that covers all cases. We start from the definition (7) of $K_{p}^{q}$ as the set of rotations $[x]$ that map $p$ to $q$ in $S^{2}$. The centers $r$ of these rotations lie on the bisecting circle $B(p,q)$ between $p$ and $q$. In Figure 48, we have drawn $p$ and $q$ on the equator, with $p$ east of $q$. If we observe the clockwise rotation angle $\varphi$ as $r$ moves along $B(p,q)$, we see that $\varphi$ has two extrema: if the angular distance between $p$ and $q$ is $\alpha$, the minimum clockwise angle $\varphi=\alpha$ is achieved when $r$ is at the North Pole, and the maximum $2\pi-\alpha$ is achieved at the South Pole. The poles bisect $B(p,q)$ into two semicircles, the near semicircle and the far semicircle, according to the distance from $p$ and $q$. To define an orientation, we let $r$ move continuously on $B(p,q)$; see Figure 49 for an illustration on a small patch of $S^{2}$.
We make the movement in such a way that (i) the rotation center $r$ moves in the counterclockwise direction around $p$; (ii) simultaneously, the clockwise rotation angle $\varphi$ increases when $r$ is on the near semicircle and decreases when $r$ is on the far semicircle. In Figure 49, as $r$ moves from $r_{1}$ to $r_{2}$ along the thick arrow, the angle $\varphi$ increases from $\varphi_{1}$ to $\varphi_{2}$. These rules define an orientation of $B(p,q)$. When we want to transfer this orientation to $K_{p}^{q}$, we must be aware of the $2:1$ relation between quaternions $x=\cos\frac{\varphi}{2}+r\cdot\sin\frac{\varphi}{2}$ and rotations $[x]$ of $S^{2}$. The angle $\varphi$ is defined only up to multiples of $2\pi$, and hence a rotation corresponds to two opposite quaternions $x$ and $-x$. Thus, there are two ways of defining a continuous dependence of $x$ on $r$ via $\varphi$. Both possibilities lead to the same orientation of $K_{p}^{q}$, but we can select one of them by restricting $\varphi$ to the interval $0\leq\varphi<2\pi$. Once this mapping is chosen, two opposite points $r$ and $-r$ on $B(p,q)$, which define the same rotation $[r]$ of $S^{2}$, correspond to opposite quaternions $x$ and $-x$ on $K_{p}^{q}$. (The easiest way to check this is for the midpoint of $p$ and $q$ in Figure 48 and the opposite point. Both have the same rotation angle $\varphi=\pi$. Generally, the transition from $\varphi$ to $2\pi-\varphi$ changes the sign of $\cos\frac{\varphi}{2}$ and leaves $\sin\frac{\varphi}{2}$ unchanged.) Thus, as $r$ traverses $B(p,q)$, $x$ traverses $K_{p}^{q}$ once, and this traversal defines the orientation $\vec{K}_{p}^{q}$. The rules break down in the degenerate situations when $q=\pm p$. Luckily, in each situation, there is one rule that still works. • When $p=q$, the only rotation centers are $r=p$ and $r=-p$. In this case, we can maintain rule (ii): We consider increasing rotation angles around $r=p$.
• When $p=-q$, the rotation angle $\varphi=180^{\circ}$ is constant, but we can stick to rule (i): The rotation centers $r$ lie on the circle $B(p,-p)$ that has $p$ and $-p$ as poles, and we let them move counterclockwise around $p$. Considering the definition (7) of $K_{p}^{q}$, it is actually surprising that $K_{p}^{q}$ makes a smooth transition when $q$ approaches $p$: the locus $B(p,q)$ of rotation centers changes discontinuously from a circle to a set of two opposite points. When $p$ and $q$ are exchanged with $-p$ and $-q$, everything changes its direction: a counterclockwise movement of $r$ around $p$ becomes a clockwise movement when seen from $-p$, and $r$ is on the near semicircle of $p$ and $q$ if it is on the far semicircle of $-p$ and $-q$. Thus, $\vec{K}_{-p}^{-q}$ has the opposite orientation.

Appendix F Subgroup relations between tubical groups

Figure 50 shows the subgroup structure between different tubical groups. Some types are included multiple times with different parameters to indicate common supergroups. However, all the types appear at least once with the parameter “$n$”. (Those are the ones shown in red.)

Appendix G Conway and Smith’s classification of the toroidal groups

We describe the parameterization of the lattice translations for the Conway–Smith classification of the groups of types $\pm[C\times C]$ and $+[C\times C]$ in geometric terms and relate them to our torus translation groups (type $\square$). This might be interesting for readers who want to study the classic classification of the toroidal groups and understand the connections. As before, we describe the groups in terms of the lattice of torus translations in the $(\alpha_{1},\alpha_{2})$ coordinate system, see Figure 51. We put the origin at the top right corner $(2\pi,2\pi)$ because the left rotation $[e_{m},1]$ is a shift by $\pi/m$ along the negative $\alpha_{1}=\alpha_{2}$ axis. This is the axis for the left rotations, and we call it the $L$-axis.
The right rotations move on the $\alpha_{2}=-\alpha_{1}$ axis in the southeast direction, and we call this the $R$-axis. We first describe the diploid groups $\pm\frac{1}{f}[C_{m}^{(s)}\times C_{n}]$ and relate them to our groups $\square_{m^{\prime},n}^{(s^{\prime})}$. The left and right groups are determined by the grid formed by drawing $\pm 45^{\circ}$ lines through all points. If $2m$ grid lines cross the $L$-axis between $(0,0)$ and $(2\pi,2\pi)$, then the left group is $C_{m}$. Similarly, if there are $2n$ grid intervals on the $R$-axis between $(2\pi,2\pi)$ and $(4\pi,0)$ (or equivalently, on the $-45^{\circ}$ diagonal of the square), the right group is $C_{n}$. The translation vectors on these diagonals form the left kernel $C_{m/f}$ and the right kernel $C_{n/f}$. The factor $f$ is determined by the number of grid steps along the diagonal from one point to the next. In the picture, these are $f=5$ steps. The parameter $m^{\prime}$ for our parameterization is hence $2m/f$. The kernels span a slanted rectangular grid; one rectangular box of this grid is shaded in the picture. In terms of grid lines, the diagonal is an $f\times f$ square, and it contains exactly one point per grid line of either direction, for a total of $f$ points (counting the four corners only once). In geometric terms, Conway and Smith parameterize the lattice by looking at the first grid line below the $L$-axis, as in our parameterization. They measure $s$ as the number of grid steps to the first lattice point, starting from the $R$-axis in the southwest direction. The number $s$ must be relatively prime to $f$, because otherwise additional points on the $R$-axis would be generated.
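The coprimality requirement can be checked by brute force. The sketch below is ours, not from the text: it encodes a translation $[e_{m}^{a},e_{n}^{b}]$ by its exponent pair $(a,b)$ modulo $(2m,2n)$ (consistent with $e_{m}$ being a rotation of order $2m$, as the shift by $\pi/m$ above suggests), generates the lattice from $[e_{m}^{f},1]$ and $[e_{m}^{s},e_{n}]$, and lists which right exponents occur on the $R$-axis:

```python
def r_axis_points(m, n, f, s):
    # Closure of the lattice generated by the exponent pairs of
    # [e_m^f, 1] and [e_m^s, e_n], taken mod (2m, 2n).  A point lies on
    # the R-axis iff its left exponent vanishes; we return the right
    # exponents (mod 2n) of all such points.
    pts = {(0, 0)}
    frontier = [(0, 0)]
    gens = [(f % (2 * m), 0), (s % (2 * m), 1 % (2 * n))]
    while frontier:
        a, b = frontier.pop()
        for da, db in gens:
            p = ((a + da) % (2 * m), (b + db) % (2 * n))
            if p not in pts:
                pts.add(p)
                frontier.append(p)
    return sorted(b for a, b in pts if a == 0)

m, n, f = 15, 5, 5  # hypothetical values matching the f = 5 steps above
print(r_axis_points(m, n, f, s=1))  # coprime s: only multiples of f
print(r_axis_points(m, n, f, s=5))  # gcd(s, f) = 5: extra points appear
```

For $s=1$ (coprime to $f$), only the multiples of $f$ appear on the $R$-axis; for $s=5$, every exponent appears, confirming that a common factor of $s$ and $f$ would generate additional points on the $R$-axis.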
By contrast, the parameter $s^{\prime}$ in our setup (Figure 20) is effectively measured in the same units along the same diagonal line, but starting from the intersection with the $\alpha_{1}$-axis, in the northeast direction. Our parameterization is simpler because we don’t specify in advance the number of points on the $R$-axis. This allows us to choose $s^{\prime}$ freely within some range. The group $\pm\frac{1}{f}[C_{m}^{(s)}\times C_{n}]$ is therefore generated by the translation vectors $[e_{m}^{f},1]$ along the $L$-axis, $[1,e_{n}^{f}]$ along the $R$-axis, and the additional vector $[e_{m}^{s},e_{n}]$. (The second generator $[1,e_{n}^{f}]$ is actually redundant because $[e_{m}^{s},e_{n}]^{f}[e_{m}^{f},1]^{-s}=[1,e_{n}^{f}]$.) For our group $\square_{m^{\prime},n}^{(s^{\prime})}$, the parameter $n$ is the same, and $m^{\prime}=2m/f$. The parameter $s^{\prime}$ can be computed as follows. Choose generators for $\pm\frac{1}{f}[C_{m}^{(s)}\times C_{n}]$ as in Figure 20. These generators are then $t_{1}=(\frac{f\pi}{m},\frac{f\pi}{m})$ and $t_{2}=(\tfrac{\pi}{n}-\frac{s\pi}{m}+\frac{f\pi}{m},-\tfrac{\pi}{n}-\frac{s\pi}{m}+\frac{f\pi}{m})$. Comparing them with the generators in Proposition 7.5, we get $s^{\prime}=\frac{-m+(f-s)n}{f}$. As mentioned in footnote 14, we have swapped the roles of the left and right groups with respect to Conway and Smith’s convention, to get a closer correspondence. In the original convention of Conway and Smith, one considers the group $\pm\frac{1}{f}[C_{m}\times C_{n}^{(s)}]$, whose third generator is $[e_{m},e_{n}^{s}]$. This group is the mirror of the group $\pm\frac{1}{f}[C_{n}^{(s)}\times C_{m}]$. A haploid group $+\frac{1}{f}[C_{m}^{(s)}\times C_{n}]$ exists if both $m/f$ and $n/f$ are odd.
We modify the first generator to $[e_{m}^{2f},1]$. This omits every other point on the $L$-axis (and on every line parallel to it) and thus avoids the point $(\pi,\pi)=-\mathrm{id}$. In addition to being relatively prime to $f$, $s$ must be odd, because otherwise, since $[e_{m}^{s},e_{n}]^{n}[e_{m}^{2f},1]^{-n/f\cdot s/2}=[1,e_{n}^{n}]=[1,-1]$, we would nevertheless generate the point $(\pi,\pi)=-\mathrm{id}$. Reflection in the $L$-axis gives the same group. Hence $\pm\frac{1}{5}[C_{15}^{(4)}\times C_{5}]\doteq\pm\frac{1}{5}[C_{15}^{(1)}\times C_{5}]=\square_{6,5}^{(1)}$, and $+\frac{1}{5}[C_{15}^{(9)}\times C_{5}]\doteq+\frac{1}{5}[C_{15}^{(1)}\times C_{5}]=\square_{3,5}^{(2)}$. This reflection changes the parameter $s$ to $f-s$ for the diploid groups and to $2f-s$ for the haploid groups. To eliminate these duplications, the parameter $s$ should be constrained to the interval $0\leq s\leq f/2$ for the diploid groups and $0\leq s\leq f$ for the haploid groups. As mentioned in footnote 15, these constraints are not stated in Conway and Smith. This concerns the last four entries of [8, Table 4.2], see Figure 53. With the help of the geometric picture of Figure 51 for the parameterization of Conway and Smith, one can give a geometric interpretation to the conditions $s=fg\pm 1$ of [8, pp.
52–53] for the last 4 lines of Table 4.3: The condition $s=fg-1$ expresses the fact that a square lattice is generated, as is necessary for the torus swapturn groups $\boxed{\circlearrowleft}$ (type $[D\times D]\cdot\bar{2}$). The condition $s=fg+1$ characterizes a rectangular lattice, as required for the groups of type $\square$ and $\boxed{+}$. (Accordingly, for the two types of groups $\pm[C\times C]\cdot 2^{(\gamma)}$ and $+[C\times C]\cdot 2^{(\gamma)}$ in the upper half of [8, p. 53], the condition $s=fg-1$ must be corrected to $s=fg+1$, see Figure 56.)

G.1 Index-4 subgroups of $D_{4m}$

There is one ambiguity that is notorious for causing oversights and omissions. It arises when the group $C_{m}$ is used as an index-4 subgroup of $D_{4m}$. $D_{4m}$ is the chiral symmetry group of a regular $2m$-gon $P_{2m}$ in space. In Figure 52 we show such a $2m$-gon with an alternating 2-coloring of its vertices. $C_{m}$ is the normal subgroup of rotations around the principal axis, perpendicular to the polygon, by multiples of $2\pi/m$ (those that respect the coloring). $C_{m}$ has three cosets in $D_{4m}$: the “cyclic coset” $C_{m}^{\prime}$ of rotations by odd multiples of $\pi/m$ (those that swap the coloring), and two “half-turn cosets” $C_{m}^{0}$ and $C_{m}^{1}$. One of these contains the half-turns through the vertices of $P_{2m}$ (the dashed axes, keeping the colors), and the other the half-turns through the edge midpoints of $P_{2m}$ (the dotted axes, swapping colors). However, when we rotate $P_{2m}$ by $\pi/(2m)$, the involved groups and subgroups don’t change, and hence we see that $C_{m}^{0}$ and $C_{m}^{1}$ are geometrically the same, whereas $C_{m}^{\prime}$ is clearly distinguishable (unless $m=1$).
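The ambiguity can be made concrete with a short enumeration. The sketch below uses our own labels ('cyc' for the cyclic coset, 'h0' and 'h1' for the two half-turn cosets) and classifies each bijection between the nontrivial cosets of the two quotient groups by whether the cyclic cosets are matched; since h0 and h1 are geometrically interchangeable on each side, only these two classes are geometrically distinct:

```python
from itertools import permutations

# Our own labels for the three nontrivial cosets of C_m in D_4m.
cosets = ("cyc", "h0", "h1")

def cyclic_matched(perm):
    """True if this bijection of nontrivial cosets sends 'cyc' to 'cyc'."""
    return dict(zip(cosets, perm))["cyc"] == "cyc"

all_matchings = list(permutations(cosets))
matched = [p for p in all_matchings if cyclic_matched(p)]
unmatched = [p for p in all_matchings if not cyclic_matched(p)]

# Six bijections overall, but swapping h0 <-> h1 on either side
# collapses them into just two geometrically distinct cases.
print(len(all_matchings), len(matched), len(unmatched))  # -> 6 2 4
```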
The case of the index-4 subgroups $C_{m}$ and $C_{n}$ of $D_{4m}$ and $D_{4n}$ is denoted in Conway and Smith [8] by the notation $\frac{1}{4}[D_{4m}\times D_{4n}]$, possibly with some decoration to distinguish different cases. The actual group is determined by an isomorphism between the cosets of $D_{4m}/C_{m}$ and $D_{4n}/C_{n}$. For this there are two possibilities. (a) The cyclic coset $C_{m}^{\prime}$ is matched with the cyclic coset $C_{n}^{\prime}$. (b) The cyclic cosets $C_{m}^{\prime}$ and $C_{n}^{\prime}$ are not matched to each other. Goursat’s omission. In the earliest enumeration by Goursat from 1889, the less natural possibility (b) was overlooked. This was noted by Threlfall and Seifert in 1931 [35, footnote 9 on p. 14] (referring to Goursat’s work: “Gruppen dieser Substitutionen – mit unseren Paargruppen 1-isomorph – sind mit einer Ausnahme (§ 4 S. 18 Fußnote und § 4 S. 22) vollständig angegeben.” Groups of these substitutions – which are 1-isomorphic to our pair groups – are completely specified with one exception, see § 4 p. 18 footnote 13 and § 4 p. 22. In fact, in footnote 13 on p. 18, they use two such groups as an example of groups with equal normal subgroups $L_{0}$ and $R_{0}$ that are different already as abstract groups. It is curious that Threlfall and Seifert, in the same paper, when they came to the actual classification, overlooked this class of groups again. They noted the gap themselves and filled it in part II [36, pp. 585–586, Appendix II, Note 5].) and by Hurley in 1951 [23, bottom of p. 652] (“In the course of this calculation we find that Goursat has omitted one family of groups. This omission appears to have passed unnoticed by subsequent writers.”), who consequently extended the classification by adding an additional class XIII${}^{\prime}$ of groups to Goursat’s list. Du Val [15] followed Goursat and omitted case (b) again. A missed duplication in Conway and Smith.
Conway and Smith [8] denote case (b) by adding a bar to the second factor as follows: $$\pm\tfrac{1}{4}[D_{4m}\times\bar{D}_{4n}]\text{ or }+\tfrac{1}{4}[D_{4m}\times\bar{D}_{4n}]$$ When $n=1$, the distinction between cases (a) and (b) disappears. $D_{4}$ is the Vierergruppe, whose nontrivial operations are half-turns around three perpendicular axes, and these elements are geometrically indistinguishable. Conway and Smith express this succinctly in the concluding sentence of their classification (see Figure 56): “In the last eight lines, it is always permissible to replace $D_{2}$ by $C_{2}$ and $\bar{D}_{4}$ by $D_{4}$.” However, this formulation in connection with the choice of notation might lead an unwary reader into a trap. (Besides, the rule should also apply to entries that are not in the last eight lines of the tables. Accordingly, the constraint $n\geq 2$, which is stated for five of the eleven tubical groups in Table 2, should also be applied to the corresponding groups in [8, Table 4.1]. For the group $+\frac{1}{2}[D_{2m}\times C_{2n}]$ in the penultimate line of Table 4.1, the obvious condition that $m$ and $n$ should be odd was forgotten. This omission has already been noted by Medeiros and Figueroa-O’Farrill [14, p. 1405].) The choice (b) of an alternative mapping between the index-4 cosets in $\frac{1}{4}[D_{4m}\times D_{4n}]$ is not a property associated to $D_{4n}$ and its chosen normal subgroup, and it would be more appropriate to add the bar to the $\times$ operator or to the whole expression. The distinction disappears when at least one of $D_{4m}$ and $D_{4n}$ is $D_{4}$, and hence the bar can also be removed in a case like $[D_{4}\times\bar{D}_{4n}]$ when the first factor is $D_{4}$. This duplication example has been treated in detail in Section 7.11.2. Conway and Smith use the bar notation $\bar{D}_{4n}$ also for something different, namely in the index-2 case, for example in $\pm\frac{1}{2}[O\times\bar{D}_{4n}]$, see Table 2.
It indicates that, as the kernel $R_{0}$ (or $L_{0}$) of $D_{4n}$, the normal subgroup $D_{2n}$ is used, as opposed to $C_{2n}$. Also in this case, the distinction disappears for $n=1$, but this time, it is a property of the group $D_{4n}$ and its normal subgroup, and hence the notation of attaching the bar to $D_{4n}$ causes no confusion. Another duplication in Conway and Smith. Our computer check unveiled another duplication in Conway and Smith’s classification. It concerns the groups $\boxed{+}^{\mathbf{p2mg}}_{m,n}$ for $m=n$: $$\boxed{+}^{\mathbf{p2mg}}_{n,n}\doteq\pm\tfrac{1}{4}[D_{2n}\times D_{2n}^{(1)}]\cdot 2^{(1,0)}\doteq\pm\tfrac{1}{4}[D_{2n}\times D_{2n}^{(1)}]\cdot 2^{(1,1)}\quad\text{for even }n$$ $$\boxed{+}^{\mathbf{p2mg}}_{n,n}\doteq+\tfrac{1}{2}[D_{2n}\times D_{2n}^{(1)}]\cdot 2^{(0,0)}\doteq+\tfrac{1}{2}[D_{2n}\times D_{2n}^{(1)}]\cdot 2^{(0,2)}\quad\text{for odd }n$$ Neither of these duplications is warranted according to the equalities listed in [8, pp. 52–53]. For example, for $\pm\tfrac{1}{4}[D_{2n}\times D_{2n}^{(s)}]\cdot 2^{(\alpha,\beta)}$ in the first line, we need a transition from $(\alpha,\beta)=(1,0)$ to $(\alpha,\beta)=(1,1)$. In this example, $f=2$ and $g=0$.
The only rule according to [8, bottom of p. 52] that allows this change is the transition from $\langle s,\alpha,\beta\rangle$ to $\langle s+f,\alpha,\beta-\alpha\rangle$ (see Figure 55), but it comes with a simultaneous change of $s$ from $s=1$ to $s+f=3$. The parameter $s$ is regarded modulo $2f=4$. We did not investigate the reason for this duplication. Since $f=2$ in both cases, it may have to do with “…  the easy cases when $f\leq 2$, which we exclude” [8, p. 52, line 2], see Figure 55. The book of Conway and Smith [8] is otherwise a very nice book on topics related to quaternions and octonions, but it suffers from a concentration of mistakes near the end of Chapter 4, in particular, concerning the achiral groups. As an “erratum” to [8, §4], we attach in Figures 53–56 the Tables 4.1–4.2 and the last three pages of Chapter 4 of [8] with our additional explanations and corrections, as far as we could ascertain them, but we certainly did not fix all problems.
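Duplication checks of this kind can be sketched for the simpler torus translation groups $\pm\frac{1}{f}[C_{m}^{(s)}\times C_{n}]$ (without the $\cdot 2$ extension that the groups above carry; the actual computer check covers those as well). The encoding below is our own: a point $(a,b)$ stands for $(a\pi/m,\,b\pi/n)$ on the torus, the generators follow the description after Figure 20, and a coordinate reflection stands in for the reflection in the $L$-axis, up to the choice of axes. It confirms the equivalence $\pm\frac{1}{5}[C_{15}^{(4)}\times C_{5}]\doteq\pm\frac{1}{5}[C_{15}^{(1)}\times C_{5}]$ noted in Appendix G:

```python
def lattice(m, n, f, s):
    """Subgroup of (Z/2m) x (Z/2n) generated by (f, 0), (0, f), (s, 1),
    where a point (a, b) stands for the torus point (a*pi/m, b*pi/n).
    Brute-force closure; fine for small examples."""
    gens = [(f % (2 * m), 0), (0, f % (2 * n)), (s % (2 * m), 1)]
    pts = {(0, 0)}
    frontier = [(0, 0)]
    while frontier:
        a, b = frontier.pop()
        for ga, gb in gens:
            p = ((a + ga) % (2 * m), (b + gb) % (2 * n))
            if p not in pts:
                pts.add(p)
                frontier.append(p)
    return pts

def reflect(pts, m, n):
    """Coordinate reflection (a, b) -> (a, -b) on the torus."""
    return {(a, (-b) % (2 * n)) for a, b in pts}

# The lattice for s = 4 is the mirror image of the lattice for s = 1.
m, n, f = 15, 5, 5
assert reflect(lattice(m, n, f, 4), m, n) == lattice(m, n, f, 1)
print(len(lattice(m, n, f, 1)))  # -> 60 translations
```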
Cognitive Argumentation and the Suppression Task

Emmanuelle-Anna Dietz Saldanha${}^{\text{a}}$, Antonis Kakas${}^{\text{b}}$ (the authors are mentioned in alphabetical order)

${}^{\text{a}}$ International Center for Computational Logic, TU Dresden, Germany, [email protected]
${}^{\text{b}}$ Department of Computer Science, University of Cyprus, Nicosia, Cyprus, [email protected]

Abstract

This paper addresses the challenge of modeling human reasoning, within a new framework called Cognitive Argumentation. This framework rests on the assumption that human logical reasoning is inherently a process of dialectic argumentation and aims to develop a cognitive model for human reasoning that is computational and implementable. To give logical reasoning a human cognitive form, the framework relies on cognitive principles, based on empirical and theoretical work in Cognitive Science, to suitably adapt a general and abstract framework of computational argumentation from AI. The approach of Cognitive Argumentation is evaluated with respect to Byrne’s suppression task, where the aim is not only to capture the suppression effect between different groups of people but also to account for the variation of reasoning within each group. Two main cognitive principles are particularly important to capture human conditional reasoning and explain the participants’ responses: (i) the interpretation of a condition within a conditional as sufficient and/or necessary and (ii) the mode of reasoning, either predictive or explanatory. We argue that Cognitive Argumentation provides a coherent and cognitively adequate model for human conditional reasoning that allows a natural distinction between definite and plausible conclusions, exhibiting the important characteristics of context-sensitive and defeasible reasoning.

1 Introduction

How do humans reason? What conclusions do they draw?
Can we provide a satisfactory explanation to these questions by means of a coherent, computational and cognitively adequate model? A computational model that is able to reproduce human reasoning in a faithful way while at the same time accounting for the variability in the reasoning observed across the population? In this paper we aim to contribute to addressing this challenge by formulating and studying human reasoning within the framework of Cognitive Argumentation. This is a framework built by synthesizing the general and abstract theory of computational argumentation from AI with cognitive principles born out of empirical and theoretical findings of Cognitive Science. We will examine how Cognitive Argumentation could give a new underlying formal basis for human reasoning through an in-depth and extensive analysis of Byrne’s (1989) suppression task, one of the most well-known psychological experiments on human (conditional) reasoning. But why argumentation? Humans reason with knowledge whose generic form is that of an association or link between different pieces of information. In contrast to formal logical reasoning, which is strict and rigid and rarely matches human reasoning at large, argumentation provides a more flexible logical framework, both in the representation of knowledge, which is closer to the generic form of associations, and in the actual process of reasoning to conclusions. It is a framework well suited for handling conflicting and dynamically changing information, as is indeed the case in our everyday human-level reasoning. Support for argumentation exists from the early work of Aristotle on dialectic syllogistic reasoning to numerous works in Cognitive Science and Philosophy in the last century and, more recently, in AI. Two recent results provide direct support in favour of the alternative of argumentation for human reasoning.
Firstly, there is now strong evidence from Cognitive Psychology in various studies, brought together in the work of Mercier and Sperber (2011), that humans arrive at conclusions and justify these by arguments. Arguments are the means for human reasoning, and the process of reasoning is one of evaluation, elaboration and acceptance or rejection of arguments. Secondly, recent results have shown that such an argumentative form of reasoning, or Argumentation Logic as it is called in [Kakas, Mancarella & Toni 2018; Kakas 2019], can be arranged to give, as a special case, a reasoning process that is completely equivalent to classical logical entailment. Hence the departure of argumentation from formal logic is not radical, but instead one that uniformly encompasses both formal and informal reasoning.

1.1 Cognitive Argumentation

Cognitive Argumentation emerges out of the synthesis of formal argumentation theory in AI and empirical and theoretical studies of the psychology of reasoning from Cognitive Psychology and Philosophy. Formal argumentation in AI provides a good computational basis for argumentation, but for this to become a cognitive model for human reasoning it needs to be informed and guided by cognitive principles of human thinking that have been emerging out of studies in Cognitive Psychology over many decades now. Cognitive Argumentation therefore puts an emphasis on accommodating empirical observations of human reasoning and letting this phenomenology guide its development. In realizing Cognitive Argumentation we need to consider how cognitive principles affect and, to a certain extent, determine the construction and evaluation of arguments.
The construction of arguments would be based on cognitive argument schemes (a central notion in the study of argumentation [Toulmin 1958; Pollock 1995; Walton 1996]) that capture the typical common sense knowledge about our physical world or about our human behavior, physical, mental or emotional. Cognitive principles should also guide the selection of which cognitively valid argument schemes to actually use under the different dynamically changing conditions of the environment in which the argumentation reasoning process takes place. This selection depends on a process of awareness of relevant argument schemes under the current circumstances, which in turn is guided by belief biases and other extra-logical assumptions made by humans. The dynamic nature of real-life human reasoning presents a major challenge for any cognitive model of human reasoning. New information can have a major effect on the reasoning, and hence any computational cognitive model needs to be properly immersed in the external environment, adapting to new and changing contextual information. Thus in Cognitive Argumentation the form of arguments and the relation between them will be context-dependent, allowing the ensuing reasoning via argumentation to be context-sensitive, e.g. influencing which arguments to consider and the intensity of the process of reasoning. The methodological aim of Cognitive Argumentation is to study different cases of human reasoning in order to incrementally inform and extend the framework into an increasingly more general cognitive model for human reasoning. The approach of Cognitive Argumentation has already been applied and tested on one such example concerning how humans reason with Aristotelian Syllogisms [Dietz Saldanha & Kakas 2019].
This is an important first case, as here humans are asked to reason as close as possible to a formal setting, and a cognitive model would need to match together the formal and non-formal human reasoning that is observed to occur. Cognitive Argumentation performs well in doing this, as attested in the recent first Syllogism Challenge 2017 (https://www.cc.uni-freiburg.de/modelingchallenge/challenge-2017).

1.2 Study and Structure of Paper

In this paper, we will consider a second example of human reasoning, in a very different setting from that of syllogistic reasoning, namely that of informal or common sense reasoning with everyday conditional information. The aim is to formulate human conditional reasoning within Cognitive Argumentation and test this by examining how it can capture the experimental results of the suppression task [Byrne 1989; Dieussaert, Schaeken, Schroyens & D’Ydewalle 2000] in a coherent and complete way. The larger aim is to use this study to probe more deeply the framework of Cognitive Argumentation in order to understand more generally how to build and apply this framework. A large amount of literature exists that investigates the experimental results of the suppression task, as well as some formal approaches that suggest how to model the task (e.g. [Stenning & van Lambalgen 2008; Dietz, Hölldobler & Ragni 2012]). These works mostly concentrate on understanding the suppression effect between the different groups. Yet, the results contain more information. In particular, we can identify different kinds of majorities, ranging from close to average to almost all participants. The experiment thus poses the additional challenge of providing an explanation for the (significant) majorities and the variation of conclusions drawn amongst them.
We will see that this is indeed the case in Cognitive Argumentation, accounting not only for the effect of suppression but also for these variations among the population participating in the same groups. The formal framework of Cognitive Argumentation will be presented in Section 4. This will be defined as an instance of preference-based argumentation [Kakas, Mancarella & Dung 1994; García & Simari 2004; Prakken & Sartor 1997; Modgil & Prakken 2013; Amgoud, Dimopoulos & Moraitis 2008; Kakas & Moraitis 2003], suitably adapted for the task of capturing human logical reasoning, and whose acceptability semantics has the degree of flexibility needed for the informal nature of human reasoning. After a brief introduction of the suppression task in Section 2, we will analyze a set of relevant cognitive principles, with particular attention to the cognitive links between conditionals and argument schemes (Section 3). Section 5 presents an analysis of argumentative reasoning for all cases of the suppression task and evaluates this against the observed experimental data, accounting for the suppression effect and the significant variation of individual responses in the same case. Section 6 introduces COGNICA, a web-based system for automating the process of human conditional reasoning through Cognitive Argumentation. The paper ends with a general discussion of human reasoning via argumentation (Section 7), summarizing its essential elements and the main challenges that lie ahead.

2 The Suppression Task

The suppression task [Byrne 1989] is a well-known psychological study on human reasoning.
The experimental setting was as follows: Three groups of participants were asked to derive conclusions given variations of a set of premises. Group I was given the following two premises (the participants received the natural language sentences but not the abbreviated notation on the right-hand side):

If she has an essay to finish, then she will study late in the library. ($e\leadsto\ell$)
She has an essay to finish. ($e$)

The participants were asked what necessarily had to follow, assuming that the above two premises were true. They could choose between the following three answer possibilities (here and in the sequel, we will denote with an overbar the negation or complement of an atomic statement, e.g. $\overline{e}$ and $\overline{\ell}$ denote the negation of $e$ and the negation of $\ell$, respectively):

She will study late in the library. ($\ell$)
She will not study late in the library. ($\overline{\ell}$)
She may or may not study late in the library. ($\ell$ or $\overline{\ell}$)

In this first group, 96% of the participants concluded that She will study late in the library. In addition to the above two premises for Group I, Group II was given the following premise:

If she has a textbook to read, then she will study late in the library. ($t\leadsto\ell$)

Still, 96% of the participants concluded that She will study late in the library. Finally, Group III received, together with the two premises of Group I, additionally the following premise:

If the library stays open, then she will study late in the library. ($o\leadsto\ell$)

In this group only 38% concluded that She will study late in the library: the conclusion drawn in the previous groups was suppressed in Group III. The results of this experiment show that previously drawn conclusions seem to be suppressed given (appropriate) additional information, i.e. participants seemed to reason non-monotonically in a context-sensitive way.
A natural explanation why participants in Group III did not necessarily conclude that She will study late in the library is that they were not sure whether The library stays open, which is a necessary requirement for her to study late in the library. In the first two groups the majority of the participants did not have this doubt, as they had not been made aware of the possibility that the library may not be open. In this paper, we will show how this experiment and its observed data can be naturally understood by formalizing human reasoning in terms of building supporting arguments for conclusions and defending such arguments against their counterarguments. To reason in terms of argumentation, we can construct an argument based on $e$ and $e\leadsto\ell$, which supports the conclusion $\ell$. In Groups I and II, the only argument that we can construct for $\overline{\ell}$ consists of hypothesizing $\overline{\ell}$ itself. This forms a counterargument to the above argument for $\ell$. But according to a relative preference or strength of the explicitly stated premises, the argument of $e$ and $e\leadsto\ell$ can defend against $\overline{\ell}$, but not vice versa, as $\overline{\ell}$ is a weaker argument. The left graph in Figure 1, representing Group I, shows at the bottom this winning argument. To distinguish explicitly stated premises from those not stated explicitly, we will call the latter hypothetical premises, denoting them with a prefix of ‘Hyp’. At the top of the figure we see another argument supporting $\ell$, namely the hypothesis of $\ell$. The figure shows how this is attacked by the hypothesis argument for $\overline{\ell}$ and how $\ell$ can defend back against this, as there is no preference (or strength difference) between these two hypotheses.
Given that the hypothesis argument for $\overline{\ell}$ cannot defend against its counterargument, $e$ and $e\leadsto\ell$, we have a good quality, or acceptable as it is normally called in AI, argument for $\ell$ but not for $\overline{\ell}$. The middle part of Figure 1 shows the case for Group II, where we can construct another argument for $\ell$ based on the hypothesis of $t$ and $t\leadsto\ell$. In both groups, $\ell$ is the only statement supported by acceptable arguments. This corresponds to the majority’s conclusion that She will study late in the library holds, i.e. that this is a definite conclusion. For Group III, the case is different. The participants might have become aware of the common sense knowledge that If the library does not stay open, then she will not (be able to) study late in the library ($\overline{o}\leadsto\overline{\ell}$). This, together with the possibility, or the hypothesis, that The library does not stay open ($\overline{o}$), gives an argument that supports the conclusion $\overline{\ell}$. Furthermore, it seems that this argument is at least as preferred as the argument supporting $\ell$ based on $e$ and $e\leadsto\ell$. Hence now we also have an acceptable argument for $\overline{\ell}$, and those human participants that reason with this argument are prevented from deriving that She will study late in the library as a definite conclusion. For them, it is only a plausible conclusion. Figure 1 illustrates this in the right-most part, where although the same argument, $e$ together with $e\leadsto\ell$, continues to defeat the hypothetical argument for $\overline{\ell}$, it does not defeat the argument built from $\overline{o}$ and $\overline{o}\leadsto\overline{\ell}$, as seen at the bottom of the figure. Both these arguments can defend against each other, and hence they are both acceptable. In total, Byrne (1989) reported the experimental results of twelve cases of the suppression task.
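The attack-and-defence analysis of Figure 1 can be mimicked by a toy computation. The sketch below is our own drastic simplification, not the actual framework of Section 4: every argument gets a numeric strength (2 for arguments built from stated premises or strong common sense knowledge, 1 for pure hypotheses), arguments with complementary claims attack each other, and an argument is acceptable when it is at least as strong as each of its attackers:

```python
def neg(claim):
    """Complement of a claim, e.g. 'l' <-> 'not l'."""
    return claim[4:] if claim.startswith("not ") else "not " + claim

def acceptable(arg, args):
    """An argument is acceptable if it is at least as strong as
    every argument attacking it (i.e. every argument whose claim
    is the complement of its own)."""
    attackers = [b for b in args if b["claim"] == neg(arg["claim"])]
    return all(arg["strength"] >= b["strength"] for b in attackers)

def status(claim, args):
    """'definite' if only the claim has an acceptable supporting
    argument, 'plausible' if its complement has one as well."""
    pro = any(acceptable(a, args) for a in args if a["claim"] == claim)
    con = any(acceptable(a, args) for a in args if a["claim"] == neg(claim))
    if pro and not con:
        return "definite"
    return "plausible" if pro else "not supported"

group1 = [
    {"name": "e, e~>l",    "claim": "l",     "strength": 2},
    {"name": "Hyp(l)",     "claim": "l",     "strength": 1},
    {"name": "Hyp(not l)", "claim": "not l", "strength": 1},
]
# Group III adds the equally preferred argument from Hyp(not o).
group3 = group1 + [
    {"name": "Hyp(not o), not o ~> not l", "claim": "not l", "strength": 2},
]
print(status("l", group1))  # -> definite
print(status("l", group3))  # -> plausible (suppression)
```

Running this reproduces the pattern: in Group I the claim $\ell$ is definite, while adding the Group III argument built from $\overline{o}$ and $\overline{o}\leadsto\overline{\ell}$ demotes it to merely plausible.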
For each of the three groups, four different cases of reasoning were considered by combining their general knowledge with one of the following pieces of factual information:

She has an essay to finish. ($e$)
She will study late in the library. ($\ell$)
She does not have an essay to finish. ($\overline{e}$)
She will not study late in the library. ($\overline{\ell}$)

Table 1 shows for each group (column 1) the conditional information they received (column 2), together with the factual information for each of the four cases (columns 3 to 6). In each row we can see the percentage of responses by the participants in the group corresponding to that row of the table. Those in gray are the responses demonstrating the suppression effect. The majority’s responses in Group II diverge in two cases from the majority’s responses in Groups I and III: when participants received the information that She does not have an essay to finish ($\overline{e}$), only 4% concluded that She will not study late in the library ($\overline{\ell}$), and when they received the information that She will study late in the library, only 13% concluded that She has an essay to finish. Contrary to these cases, the suppression effect for Group III took place when participants received the information that She has an essay to finish (only 38% concluded that She will study late in the library), or when they were given the fact that She will not study late in the library (only 33% concluded that She does not have an essay to finish). We will use this experiment to motivate and test our model of Cognitive Argumentation for human reasoning by examining how it can uniformly capture the experimental results in all twelve cases, accounting for the suppression effect as well as the variation of responses within each group.

3 Cognitive Principles

Humans make various (implicit) assumptions while reasoning, many of which are not necessarily valid under (formal) classical logic.
We will specify such (typically) extra-logical properties and formalize them as cognitive principles to help us develop a framework of argumentation that is in tune with human reasoning.

3.1 Maxim of Quality

According to Grice’s (1975) conversational implicature, humans communicate according to a cooperation principle. The maxim of quality states that humans try to be truthful, and thus information that we get in conversation is assumed to be true and trusted. In the context of the suppression task, for example, this principle implies that what the experimenter states, e.g. She has an essay to finish ($\mathit{e}$), is believed to be true by the participants: it is trusted as strong information that does not need to be questioned, or is questioned only in an extreme case. Accordingly, we will establish a (strong) factual argument scheme to encompass this principle.

3.2 Maxim of Relevance

People consider different scenarios depending on whether they have been made aware of alternative options while reasoning [Sperber & Wilson 1995; Byrne 2005]. This awareness may not be through some direct and explicit mention of the alternative. Nevertheless, considering Grice’s (1975) maxim of relevance, it seems natural to consider (and account for) the possibility of these alternatives, as the participants might believe that, otherwise, this information would not have been mentioned, e.g. in a dialogue. We can capture this cognitive principle of considering different awareness- (or relevance-) driven possibilities through a (weak) hypothesis argument scheme. Hence for information that we are made aware of and not given explicitly as factual information, people can still construct various context-dependent hypotheses supporting statements concerning this information.
As there is no direct evidence that these hypotheses hold, they are only plausible; hence the hypothesis argument scheme is weaker than other argument schemes based on explicitly given information.

3.3 Conditional Reasoning

Byrne (2005) distinguishes between different types of conditionals and conditions, assumed to be perceived by humans in situations like those of the selection task [Wason 1968; Griggs & Cox 1982] and the suppression task [Dietz Saldanha, Hölldobler & Rocha 2017]. We will extend this distinction and propose canonical associations related to different types of conditions. In particular, we will introduce and distinguish between prediction and explanatory associations and related argument schemes. Consider the following conditional:

If I need milk, then I will buy milk. ($\mathit{need}\leadsto\mathit{buy}$)

The condition I need milk can be understood as sufficient, in the sense that if the condition holds, then this forms a support for the consequent, I will buy milk, to hold as well (modus ponens). On the other hand, the negation of the condition, I don’t need milk, seems to be a plausible support for the negation of the consequent, I will not buy milk (denying the antecedent). Thus the condition can also be understood as necessary for the consequent to hold. Consider now, additionally to ($\mathit{need}\leadsto\mathit{buy}$), the following conditional:

If my mother asks me to get her milk, then I will buy milk. ($\mathit{asks}\leadsto\mathit{buy}$)

Both conditions in ($\mathit{need}\leadsto\mathit{buy}$) and ($\mathit{asks}\leadsto\mathit{buy}$), which are independent of each other, are separately sufficient for the consequent to hold. However, the negation of either of these conditions alone is not enough to conclude the negation of the consequent, I will not buy milk.
Only the negation of both conditions together gives sufficient support to conclude the negation of the consequent. Therefore, individually, the conditions in ($\mathit{need}\leadsto\mathit{buy}$) and ($\mathit{asks}\leadsto\mathit{buy}$) are not necessary conditions. Now that there is a second way to bring about the consequent, the condition I need milk has lost its necessary property. Let us now assume that, in addition to ($\mathit{need}\leadsto\mathit{buy}$) and ($\mathit{asks}\leadsto\mathit{buy}$), we are also given the following conditional:

If I have enough money, then I will buy milk. ($\mathit{money}\leadsto\mathit{buy}$)

By this conditional ($\mathit{money}\leadsto\mathit{buy}$) we are made aware of the possibility of a situation where, even in the case where I need milk or my mother asks me to get her milk, I might not buy milk, because possibly I don’t have enough money. Having enough money is a necessary condition for the consequent: without it the consequent cannot hold, i.e. I cannot buy milk, no matter what other (sufficient) conditions might hold at the time. Also, in comparison with the above cases, we might consider this a strong necessary condition, in the sense that it is very unlikely for it to lose its necessary property. On the other hand, the condition of ($\mathit{money}\leadsto\mathit{buy}$) cannot be considered a sufficient condition: even if I have enough money, I might not buy milk. The distinction between the two different types of conditions, sufficient and necessary, shows up when we consider explanations of the consequent and its negation. Assume that we are given the information that I did not buy milk.
($$\overline{\mathit{buy}}$$) It is reasonable, given ($\mathit{need}\leadsto\mathit{buy}$) and ($\mathit{asks}\leadsto\mathit{buy}$) (without ($\mathit{money}\leadsto\mathit{buy}$)), to conclude the negation of the conditions of both conditionals, namely that I did not need milk and my mother did not ask me to get her milk (modus tollens). Adding the conditional ($\mathit{money}\leadsto\mathit{buy}$) to the context of reasoning would not extend this conjunction but would result in a disjunctive addition of the negation of the new (necessary) condition: Either (I do not need milk and my mother does not ask me to get her milk) or I do not have enough money. Hence the observation of the negation of the consequent can be explained by the negation of a necessary condition (e.g. I do not have enough money) or by assuming that there is “no reason” for the consequent to hold, resulting in a more complex explanation, namely that none of the sufficient conditions can hold. In contrast, if we are given the positive information that a consequent holds, e.g. I buy milk, then this can be simply explained by any one of the sufficient conditions for the consequent, e.g. either by I need milk or by my mother asks me to get her milk (affirming the consequent). It is important to note that typically we will not consider that two such sufficient conditions, together, form an explanation. In fact, we typically consider that different explanations are incompatible with each other, except perhaps in very exceptional cases where many different reasons can hold together. Hence we will only accept one, either I need milk or my mother asks me to get her milk, to explain the consequent I buy milk, but not both together. Similarly, when we are explaining the negation of the consequent, e.g. I did not buy milk, we will only accept one of the explanations, either I do not have enough money or there is “no reason”, i.e. I did not need milk and my mother did not ask me to get her milk.
Hence different explanations are in general considered to be in tension with each other. They are competing or contrasting alternatives, as implied for example by the maxim of “Inference to the best explanation” (see e.g. [Lipton, 2003; Ruben, 1990]). The process of explanation is not merely to find a reason why something holds, but also to establish that this, and not some other reason, is indeed why it holds. In [Kelley, 1973; Sloman, 1994] a cognitive principle of explanatory discounting is identified, which assumes that alternative explanations are in conflict with each other, so that support for one explanation results in diminishing support, thus countersupport, against alternative explanations. Human explanatory reasoning also follows a principle of simplicity by choosing simple explanations depending on the context at hand. Hence even when a “logically complete” explanation would contain a conjunction, such as the explanation I did not need milk and my mother did not ask me to get her milk, we would consider these conjuncts as separate explanations drawn from the observation, depending on the context of reasoning. In one context, when we are buying milk for ourselves, we would explain not buying milk by I did not need milk and conclude this, without necessarily also considering the explanation my mother did not ask me to get her milk. Finally, we note that depending on the nature of the condition, sufficient or necessary, we can draw conclusions in a secondary predictive mode from factual information about the consequent. Observing the negation of the consequent can lead us to predictively conclude the negation of any of its sufficient conditions. This can be seen as a “modus tollens” conclusion via Reductio ad Absurdum, based on the fact that when a sufficient condition holds its consequent also has to hold.
Furthermore, in some cases these conclusions can be the same as conclusions drawn in a secondary explanatory mode, by considering that the negation of a sufficient condition is an explanation of the observed failure of the consequent to hold in the particular context in which we are reasoning. On the other hand, observing that the consequent holds can lead us to predictively conclude that a necessary condition holds, e.g. observing I buy milk, we can conclude that I have enough money. This follows from the way a necessary condition is understood, i.e. I have enough money must necessarily hold for the consequent to hold (affirming the consequent). As discussed previously, cognitively, an explanation is required to discriminate between different possible alternatives. However, a necessary condition cannot be considered as a possible explanation for the consequent because it always holds and hence it does not offer any discriminatory information. Predictions of several necessary conditions, as opposed to different explanations, do not compete with each other [Fernbach et al., 2010] and hence they can hold together when we are given that the consequent holds. Summarizing, we note that the interpretation of conditions within conditionals as sufficient or necessary, and the possible conclusions drawn either as predictions or explanations, depend on the context in which these are considered. This context-sensitive nature of interpretation and reasoning can vary among the population, depending on the background knowledge of conditionals that each individual has or is made aware of in a scenario of discourse. 3.4 Canonical Associations of Condition and Consequence We will now establish canonical associations of different types of conditions with respect to predictions and explanations, which in turn will correspond to argument schemes that will form the basis for the argumentative reasoning.
Consider a condition and a consequence coming from some conditional: “if condition then consequence”. We establish the following rule associations between a condition and a consequent (associations are written with $\leadsto$ instead of $\rightarrow$ to emphasize their defeasible nature): 1. Predictions: The canonical predictive association for a sufficient condition: $$\displaystyle\mbox{cond}\leadsto\mbox{consq}$$ (suff_p) The canonical predictive association for a necessary condition: $$\displaystyle\overline{\mbox{cond}}\leadsto\overline{\mbox{consq}}$$ (necc_p) 2. Explanations: The canonical explanatory association for a necessary condition: $$\displaystyle\overline{\mbox{consq}}\leadsto\overline{\mbox{cond}}$$ (necc_e) The canonical explanatory association for a sufficient condition: $$\displaystyle\mbox{consq}\leadsto\mbox{cond}$$ (suff_e) Note that these explanatory associations are the reverse of the predictive ones. 3. Secondary Associations: (a) Secondary Predictions: The secondary association (which corresponds to the contrapositive of the prediction association) for a sufficient condition is: $$\displaystyle\overline{\mbox{consq}}\leadsto\overline{\mbox{cond}}$$ (sec_suff_p) The secondary association for a necessary condition is: $$\displaystyle\mbox{consq}\leadsto\mbox{cond}$$ (sec_necc_p) (b) Secondary Explanations: The secondary explanatory association for a sufficient condition is: $$\displaystyle\overline{\mbox{consq}}\leadsto\overline{\mbox{cond}}$$ (sec_suff_e) (c) Exogenous Explanations: Psychological experiments (e.g. [Fernbach et al., 2010]) show that humans are sometimes likely to come up with alternative causes that do not appear within the given context. These exogenous explanations can be captured via associations which link the consequent (positive or negative) with an exogenous explanation (represented by $\mbox{exo}(\mbox{consq})$ and $\mbox{exo}(\overline{\mbox{consq}})$ respectively).
$$\displaystyle\mbox{consq}\leadsto\mbox{exo}(\mbox{consq})\quad\mbox{and}\quad\overline{\mbox{consq}}\leadsto\mbox{exo}(\overline{\mbox{consq}})$$ (exo_e) 4. Strength of Associations: Predictive associations from necessary conditions (necc_p) are stronger than conflicting associations from sufficient conditions (suff_p). This reflects the strength of a pragmatic disabling condition over a motivational enabling condition for the same consequent. 5. Incompatibility: Explanatory associations are typically mutually incompatible. For example, if there is more than one explanatory sufficient condition for the consequence, then these explanations are incompatible with each other and of equal strength. Table 3 provides a summary of the (in)compatibility of explanations. Note that exogenous explanations are by their nature in conflict with other explanations: people introduce them only when they are in doubt about other explanations. Following the discussion above, Table 2 gives a summary of the condition types together with their canonical predictions and explanations with respect to the given facts. It is important to note that these canonical predictions and explanations are not meant to necessarily represent definite conclusions, but rather plausible conclusions that are cognitively admissible in human reasoning. Table 4 shows, as an example, the canonical associations that apply for the suppression task. For the cases of Group I and III the condition She has an essay to finish can be interpreted both as sufficient and necessary. For some part of the population this may only be a sufficient condition, in which case the associations shown in parentheses in the columns for Groups I and III will not apply. In Group II, She has an essay to finish is no longer considered as necessary due to the presence of a second sufficient condition, She has a textbook to read.
They are both considered in the whole population of Group II only as sufficient conditions. 4 Cognitive Argumentation We will now define the formal framework of Cognitive Argumentation for human reasoning. This framework will encompass the conditional associations and other cognitive principles as argument schemes together with their strength relation. It will be based on the standard framework of abstract argumentation in AI [Dung, 1995], as realized in preference-based structured argumentation (see e.g. [Kakas et al., 1994; Prakken & Sartor, 1997; Kakas & Moraitis, 2003; Modgil & Prakken, 2013]). Before presenting its technical definition we will first summarize, informally, its essential elements and how logical reasoning is realized through argumentation. 4.1 Reasoning via Argumentation In argumentation, the essential and general structure of knowledge is an argument scheme: a structure that simply associates or links two pieces of information, the premises with the claim or position of the scheme. Arguments are built from (argument) schemes, and support the corresponding particular instances of the positions of these schemes. Reasoning in argumentation is then a process of analysis of alternatives, e.g. a conclusion and its negation, by a consideration of different arguments for and against the various competing alternatives. In comparison with other classical logical approaches, reasoning via argumentation is an explicit process of examining the alternatives, either at the level of the final conclusion we are interested in or at the level of other information, e.g. premises of arguments, that supports the alternative possible conclusions.
More concretely, in a framework of argumentation-based reasoning the essential structural elements are the argument schemes, a notion of relative conflict and a notion of relative strength between argument schemes, and thus between arguments formed from them. Reasoning is then a process of dialectic argumentation where, starting from an argument supporting a position of interest, we consider arguments that, under the comparative conflict relation, compete, e.g. arguments supporting incompatible positions, and examine how we can defend against such counterarguments through arguments which are stronger, or at least not weaker, than the counterarguments. Arguments that can defend themselves against their counterarguments are called acceptable, and the positions that they support are plausible conclusions. When in addition no such acceptable argument can be constructed for the complement of a position, then this forms a definite conclusion. Definite conclusions correspond to logical conclusions in formal systems of logic: they are certain and undisputed. On the other hand, plausible conclusions are useful in informal human reasoning, indicating that a conclusion possibly holds. We can therefore notice that in its general form, reasoning via argumentation is close to model construction, similar to that in mental model theory [Johnson-Laird, 1980; Johnson-Laird et al., 2004]. Any framework for human reasoning, when put into practice, needs to be influenced by other, possibly extra-logical, factors that play a significant role in the reasoning process by giving it a cognitive form. In computational terms we can think of this as cognitive-based heuristics that humans have learned to use to make their reasoning effective.
Examples of such extra-logical factors that govern the shape of human reasoning when this is understood via argumentation are: awareness of the relevant argument schemes in the current context of reasoning, recognition and selection of relatively strong arguments, and the bounded application of the dialectic process by concentrating only on some of the possible counterarguments. The challenge is to apply the process of dialectic argumentation in a dynamic and context-sensitive way that focuses on the parts of the knowledge pertinent to the reasoning task at hand. 4.2 Argumentation Logic Framework Within the formal argumentation logic framework, the atomic statements in natural language can be represented as propositional variables. (For simplicity of presentation we will only consider propositional argumentation frameworks.) Given a propositional logical language, $\mathcal{L}$, the set of propositional variables in $\mathcal{L}$ is denoted by $\mathcal{P}_{\mathcal{L}}$. For the “milk example” of Section 3.4, $\mathcal{P}_{\mathit{milk}}$ consists of variables representing I need milk, I will buy milk, My mother asks me to get her milk and I have enough money, as follows: $$\mathcal{P}_{\mathit{milk}}=\{\mathit{need},\ \mathit{asks},\ \mathit{buy},\ \mathit{money}\}.$$ The negation of $\mathcal{P}_{\mathcal{L}}$ is denoted by $\neg\mathcal{P}_{\mathcal{L}}=\{\overline{x}\mid x\in\mathcal{P}_{\mathcal{L}}\}$, where $\overline{x}=\neg x$. In general, $\overline{L}=\neg A$ if $L=A$ and $\overline{L}=A$ if $L=\neg A$. Accordingly, the negation of $\mathcal{P}_{\mathit{milk}}$ is given by: $$\neg\mathcal{P}_{\mathit{milk}}=\{\overline{\mathit{need}},\ \overline{\mathit{asks}},\ \overline{\mathit{buy}},\ \overline{\mathit{money}}\}.$$ We will be interested in reasoning within a cognitive state $\mathcal{S}=(\mathcal{F},\mathcal{A})$ where $\mathcal{F}$ is a set of facts and $\mathcal{A}$ is an awareness set.
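The literals and the cognitive state introduced above can be encoded very directly. The following Python fragment is our own minimal sketch (not code from the paper): literals are strings, negation is a "~" prefix, and a cognitive state is a pair of a fact set and an awareness set, here instantiated for illustration with the fact that I need milk.

```python
# Minimal sketch of the propositional machinery of Section 4.2 (our own
# encoding, for illustration only).

def neg(lit):
    """Complement of a literal: neg('buy') -> '~buy', neg('~buy') -> 'buy'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

P_milk = {"need", "asks", "buy", "money"}      # propositional variables
neg_P_milk = {neg(x) for x in P_milk}          # their negations

# Cognitive state S = (F, A): facts are literals, awareness holds variables.
facts = {"need"}                               # e.g. "I need milk" is given
awareness = {"need", "asks", "buy", "money"}

# Every concept occurring in a fact must also be in the awareness set.
assert all((f[1:] if f.startswith("~") else f) in awareness for f in facts)
```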
Both these elements of a cognitive state are linked to the environment of the reasoner: the first consists of explicit factual information that the environment provides, while the second consists of the concepts that the reasoner is made aware of by the environment. We have $\mathcal{F}\subseteq(\mathcal{P}_{\mathcal{L}}\cup\neg\mathcal{P}_{\mathcal{L}})$ and $\mathcal{A}\subseteq\mathcal{P}_{\mathcal{L}}$. Note that the awareness of concepts in $\mathcal{A}$ does not necessarily mean that we are aware whether they hold or not, but simply that they and knowledge about them might be relevant to the reasoning at hand. Also, any concept for which we have a fact in $\mathcal{F}$ belongs to $\mathcal{A}$, i.e. if $A\in\mathcal{F}$ or $\overline{A}\in\mathcal{F}$, then $A\in\mathcal{A}$. Given a propositional language, $\mathcal{L}$, an argumentation logic framework is a triple $\mathcal{A}_{\mathcal{L}}=\langle\mathcal{A}s,\mathcal{C},\succ\rangle$ where $\mathcal{A}s$ is a set of argument schemes, $\mathcal{C}$ is a conflict relation on $\mathcal{A}s$, typically induced by the notion of conflict in the language $\mathcal{L}$, and $\succ$ is a binary strength relation on $\mathcal{A}s$. An (argument) scheme, $\textsf{as}\in\mathcal{A}s$, is a tuple of the form $\textsf{as}=(\textsf{pre},\textsf{pos})$ where the precondition pre and the position pos are (sets of) statements in the language $\mathcal{L}$. For instance, an argument scheme will link subsets of propositional variables, i.e. $\textsf{pre},\textsf{pos}\subseteq(\mathcal{P}_{\mathcal{L}}\cup\neg\mathcal{P}_{\mathcal{L}})$. Argument schemes were introduced as stereotypical reasoning patterns that are typically non-deductive and non-monotonic [Pollock, 1995; Walton, 1996]. In general, they allow us to link the information in pre with that of pos.
Usually argument schemes are parametric, so that a scheme can be applied for different values of its parameters to construct arguments. We normally say that pre are the premises on which the position pos is supported by an argument constructed through the scheme $\textsf{as}=(\textsf{pre},\textsf{pos})$. We will use argument schemes to capture the canonical associations motivated by the cognitive principles in Section 3. Recall the principle of the Maxim of Quality in Section 3.1, under which what is given as a premise (e.g. by the experimenter) is taken to hold. Accordingly, we introduce the following fact scheme: $${\textsf{fact}}(L)=(\emptyset,L)\in\mathcal{A}s\quad\mbox{ if }\quad L\in\mathcal{F}.$$ Given the requirement that $L$ needs to be in $\mathcal{F}$, the fact scheme is only applicable when indeed $L$ is a fact in the cognitive state. Similarly, for the principle of the Maxim of Relevance in Section 3.2, where everything we are made aware of can possibly hold or not, we introduce a corresponding hypothesis scheme as follows: $${\textsf{hyp}}(A)=(\emptyset,A)\in\mathcal{A}s\quad\mbox{ and }\quad{\textsf{hyp}}(\overline{A})=(\emptyset,\overline{A})\in\mathcal{A}s\quad\mbox{ if }\quad A\in\mathcal{A}.$$ Here we require that the concept referred to in $A$ or $\overline{A}$ needs to appear in the awareness set in order for the argument scheme to be applicable: once we are aware of a concept we can hypothesize that it holds or that it does not hold. In Section 3.4 we have presented how humans consider different types of conditions in relation to a consequent of interest and thus different associations between them.
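The state-dependent fact and hypothesis schemes can be sketched as simple guarded constructors. The fragment below is our own illustration (the function names and the "~"-prefix encoding of literals are assumptions, not from the paper): a scheme is a (premises, position) pair, and the guard checks applicability against the cognitive state.

```python
# Sketch of the fact and hypothesis schemes of Section 4.2 (our own encoding).

def neg(lit):
    """Complement of a literal under the '~'-prefix encoding."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def fact_scheme(L, facts):
    """fact(L) = (empty premises, L), applicable only if L is a fact."""
    return (frozenset(), L) if L in facts else None

def hyp_schemes(A, awareness):
    """hyp(A) and hyp(~A), applicable once the concept A is in the awareness set."""
    if A not in awareness:
        return []
    return [(frozenset(), A), (frozenset(), neg(A))]

# Cognitive state used for illustration: I need milk is given as a fact.
facts = {"need"}
awareness = {"need", "asks", "buy", "money"}
```

Applying `fact_scheme("asks", facts)` yields `None`, reflecting that the fact scheme is not applicable when the literal is not in the fact set, while `hyp_schemes("money", awareness)` yields both hypothesis schemes for the concept.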
The canonical prediction and explanation associations, as summarized in Table 2, are straightforwardly represented by the following argument schemes: $$\begin{array}{rllcrll}{\textsf{suff\_p}}&=&(\mbox{cond},\mbox{consq})&\quad&{\textsf{suff\_e}}&=&(\mbox{consq},\mbox{cond})\\ {\textsf{sec\_suff\_p}}&=&(\overline{\mbox{consq}},\overline{\mbox{cond}})&\quad&{\textsf{sec\_suff\_e}}&=&(\overline{\mbox{consq}},\overline{\mbox{cond}})\\ {\textsf{necc\_p}}&=&(\overline{\mbox{cond}},\overline{\mbox{consq}})&\quad&{\textsf{necc\_e}}&=&(\overline{\mbox{consq}},\overline{\mbox{cond}})\\ {\textsf{sec\_necc\_p}}&=&(\mbox{consq},\mbox{cond})&\quad&{\textsf{exo\_e}}(\mbox{consq})&=&(\mbox{consq},\mbox{exo}(\mbox{consq}))\\ &&&\quad&{\textsf{exo\_e}}(\overline{\mbox{consq}})&=&(\overline{\mbox{consq}},\mbox{exo}(\overline{\mbox{consq}}))\end{array}$$ Depending on the given conditionals and their type of condition, the respective schemes will be in $\mathcal{A}s$, and applicable for the construction of arguments.
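As an illustration only (our own encoding, not the paper's), the table of canonical schemes can be generated mechanically from a conditional and the type of its condition; the scheme names below follow the labels used in the text.

```python
# Generating the canonical argument schemes of Table 2 for one conditional
# "cond ~> consq" (our own illustrative sketch).

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def canonical_schemes(cond, consq, kind):
    """Return {scheme_name: (premise, position)} for one conditional."""
    if kind == "sufficient":
        return {
            "suff_p":     (cond, consq),            # canonical prediction
            "suff_e":     (consq, cond),            # canonical explanation
            "sec_suff_p": (neg(consq), neg(cond)),  # secondary prediction
            "sec_suff_e": (neg(consq), neg(cond)),  # secondary explanation
        }
    if kind == "necessary":
        return {
            "necc_p":     (neg(cond), neg(consq)),
            "necc_e":     (neg(consq), neg(cond)),
            "sec_necc_p": (consq, cond),
        }
    raise ValueError(kind)

# The milk example: need ~> buy (sufficient), money ~> buy (necessary).
print(canonical_schemes("need", "buy", "sufficient")["suff_p"])   # ('need', 'buy')
print(canonical_schemes("money", "buy", "necessary")["necc_p"])   # ('~money', '~buy')
```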
Consider the milk example from Section 3.4. For the two conditionals with sufficient conditions, ($\mathit{need}\leadsto\mathit{buy}$) and ($\mathit{asks}\leadsto\mathit{buy}$), the following two prediction schemes apply: $${\textsf{suff\_p}}(\mathit{need}\leadsto\mathit{buy})=(\mathit{need},\mathit{buy})\quad\mbox{ and }\quad{\textsf{suff\_p}}(\mathit{asks}\leadsto\mathit{buy})=(\mathit{asks},\mathit{buy}).$$ For the conditional with the necessary condition, ($\mathit{money}\leadsto\mathit{buy}$), where I have enough money is a necessary condition for I will buy milk, and thus, if I don’t have enough money, then I will not buy milk, the following prediction scheme applies: $${\textsf{necc\_p}}(\overline{\mathit{money}}\leadsto\overline{\mathit{buy}})=(\overline{\mathit{money}},\overline{\mathit{buy}}).$$ We also have explanation schemes from the explanation associations, e.g.: $${\textsf{suff\_e}}(\mathit{buy}\leadsto\mathit{need})=(\mathit{buy},\mathit{need})\quad\mbox{ and }\quad{\textsf{necc\_e}}(\overline{\mathit{buy}}\leadsto\overline{\mathit{money}})=(\overline{\mathit{buy}},\overline{\mathit{money}}).$$ These schemes are parametric on the propositional variables of the given language in which the various conditions and consequences are expressed.
By choosing a set of values for the parameters we say that we apply the argument scheme to construct an individual argument. An argument $\Delta$, is any (non-empty) set of individual arguments. Let us now formalize how an argument supports a position (claim or conclusion). Given a cognitive state $\mathcal{S}=(\mathcal{F},\mathcal{A})$, an individual argument $a$ supports $L$ iff $a={\textsf{\small fact}}(L)$ or $a={\textsf{\small hyp}}(L)$. More generally, given a cognitive state $\mathcal{S}=(\mathcal{F},\mathcal{A})$, an argument $\Delta$ supports $L$ iff either 1. there is an individual argument, $a\in\Delta$, such that $a$ supports $L$, or 2. there is an individual argument $a=(\{L_{1},\dots,L_{k}\},\small\textsf{pos})\in\Delta$, such that $L\in\small\textsf{pos}$ and there are $a_{1},\dots,a_{k}\in\Delta$ such that {$a_{1},\dots,a_{k}$} supports each one of $L_{1},\dots,L_{k}$. We will say that $\Delta$ minimally supports $L$ iff there is no $\Delta^{\prime}\subset\Delta$ such that $\Delta^{\prime}$ supports $L$. Reasoning to a conclusion within an argumentation logic framework is based on considering arguments that support the conclusion and other related statements (e.g. that support the premises of a conclusion). It is important to note that the base case of the above definition of support has as a consequence that the argumentative reasoning is strongly grounded on the current cognitive state of the reasoner. All arguments need to eventually be based on information that comes from the cognitive state. Then as the cognitive state changes the relevant arguments can change. Consider again the above example with $\mathcal{S}^{\prime}=(\mathcal{F}^{\prime},\mathcal{A}^{\prime})=(\{\mathit{% need}\},\{\mathit{need},\ \mathit{asks},\ \mathit{buy},\ \mathit{money}\})$. 
We can construct the argument $$\begin{array}[]{lll}\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{% \leadsto}\mathit{buy}}=\{{\textsf{\small fact}}(\mathit{need}),{\textsf{\small suff% \_p}}(\mathit{need}\leadsto\mathit{buy})\},\end{array}$$ that supports the position that I will buy milk ($\mathit{buy}$). Note that the fact scheme $$\begin{array}[]{lll}{\textsf{\small fact}}(\mathit{need})&=&(\emptyset,\mathit% {need}),\end{array}$$ is grounded in the above specified state $\mathcal{S}^{\prime}$ because $\mathit{need}\in\mathcal{F}$, which in turn, is the premise of ${\textsf{\small suff\_p}}(\mathit{need}\leadsto\mathit{buy})$, that supports $\mathit{buy}$. 4.3 Attack and Defense between Arguments The last two elements of an argumentation logic framework, $\mathcal{A}_{\mathcal{L}}=\langle\mathcal{A}s,\mathcal{C},\succ\rangle$, are used to define the notions of attack and defense amongst arguments, on which we build the semantics of good quality or acceptable arguments then the semantics of argumentation based reasoning. The second element in $\mathcal{A}_{\mathcal{L}}=\langle\mathcal{A}s,\mathcal{C},\succ\rangle$, $\mathcal{C}$, denotes a conflict relation which is used to specify when arguments conflict with each other. The conflict relation is typically based on a conflict relation defined in the language, $\mathcal{L}$, of the argumentation framework expressing which type of statements are in (direct) conflict with each other. Hence when arguments support conflicting positions then they are in conflict with each other and we say that they form a counter argument of each other. The conflict relation can also contain elements expressing explicitly that two individual arguments are in conflict because the argument schemes that they are based on cannot be applied together. Arguments are required to be conflict-free, i.e. they cannot support simultaneously conflicting positions, e.g. 
both $L$ and $\overline{L}$, or contain schemes that are explicitly in conflict. The conflict relation directly defines the notion of attack between two arguments: an argument $\Delta^{\prime}$ attacks, or is a counterargument of, another argument $\Delta$, iff there exists an $L$ such that $\Delta$ supports $L$ and $\Delta^{\prime}$ supports $\overline{L}$, or they contain individual arguments whose argument schemes are in conflict. To illustrate these notions, let us assume that we are given the current state $$\mathcal{S}^{\prime\prime}=(\mathcal{F}^{\prime\prime},\mathcal{A}^{\prime\prime})=(\{\mathit{need},\overline{\mathit{money}}\},\{\mathit{need},\ \mathit{asks},\ \mathit{buy},\ \mathit{money}\}),$$ i.e. we are given the (factual) information that I need milk and I do not have enough money. Together with the argument $\Delta^{\mathit{need}}_{\mathit{need}\leadsto\mathit{buy}}$, which we considered above and which supports $\mathit{buy}$, we can also construct another argument $$\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}\leadsto\overline{\mathit{buy}}}=\{{\textsf{fact}}(\overline{\mathit{money}}),{\textsf{necc\_p}}(\overline{\mathit{money}}\leadsto\overline{\mathit{buy}})\},$$ which supports $\overline{\mathit{buy}}$. Note that this argument is grounded in $\mathcal{S}^{\prime\prime}$, because ${\textsf{fact}}(\overline{\mathit{money}})$ is grounded in $\mathcal{F}^{\prime\prime}$. As $\mathit{buy}$ and $\overline{\mathit{buy}}$ are in conflict with each other, these two arguments form counterarguments of each other. The third element in $\mathcal{A}_{\mathcal{L}}=\langle\mathcal{A}s,\mathcal{C},\succ\rangle$, $\succ$, is a binary strength relation among the argument schemes. This relation is required to be strictly non-reflexive, i.e. it does not specify any argument scheme as stronger than itself.
Informally, it is meant to capture the relative strength among argument schemes: Given two argument schemes as and $\small\textsf{as}^{\prime}$, $\small\textsf{as}\succ\small\textsf{as}^{\prime}$ means as is stronger than $\small\textsf{as}^{\prime}$. In the following, we will assume, as is typically the case, that schemes are only comparable to each other, when they are in conflict, i.e. they support opposing positions or their schemes are in conflict. Recall the discussion in Section 3.4: ($\mathit{money}\leadsto\mathit{buy}$) blocks a possible prediction of ($\mathit{need}\leadsto\mathit{buy}$), because If I do not have enough money, then I can not buy milk even if If I need milk. We therefore assumed a cognitive principle where the associations from necessary conditions are stronger. This is captured by: $$\begin{array}[]{lll}{\textsf{\small necc\_p}}(\overline{\mathit{money}}% \leadsto\overline{\mathit{buy}})&\succ&{\textsf{\small suff\_p}}(\mathit{need}% \leadsto\mathit{buy}).\end{array}$$ If this is the only strength relation defined on the individual arguments occurring in $\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}\overset{\textit{% }}{\leadsto}\overline{\mathit{buy}}}$ and $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$, then we will consider $\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}\overset{\textit{% }}{\leadsto}\overline{\mathit{buy}}}$ as a stronger argument than $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$, in the sense that $\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}\overset{\textit{% }}{\leadsto}\overline{\mathit{buy}}}$ contains an individual argument that is stronger than some individual argument in $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$ but not vice versa. 
Following the discussion in Section 3.1 and Section 3.2, factual schemes are stronger than any other (opposing) scheme, and hypotheses schemes are weaker than any other (opposing) scheme. For the running example, the strength of facts principle gives us (among others), the following relations: $$\begin{array}[]{l}{\textsf{\small fact}}(\mathit{buy})\succ{\textsf{\small necc% \_p}}(\overline{\mathit{money}}\leadsto\overline{\mathit{buy}}),\quad{\textsf{% \small fact}}(\mathit{buy})\succ{\textsf{\small hyp}}(\overline{\mathit{buy}})% ,\\ {\textsf{\small fact}}(\overline{\mathit{buy}})\succ{\textsf{\small suff\_p}}(% \mathit{need}\leadsto\mathit{buy}),\hskip 28.452756pt{\textsf{\small fact}}(% \overline{\mathit{buy}})\succ{\textsf{\small hyp}}(\mathit{buy}).\end{array}$$ The notion of defense between arguments is similar to that of attack, but where now we additionally require that, informally, the defending argument is not of lower strength than the argument it is defending against. In other words, for an argument $\Delta$ to defend against $\Delta^{\prime}$, $\Delta$ must be stronger than $\Delta^{\prime}$ or at least of the same strength. Formally, given $\langle\mathcal{A}s,\mathcal{C},\succ\rangle$, an argument $\Delta$ defends against another argument $\Delta^{\prime}$ iff there exists an $L$ and $\Delta_{min}\subseteq\Delta$, $\Delta^{\prime}_{min}\subseteq\Delta^{\prime}$ such that 1. $\Delta_{min}$, $\Delta^{\prime}_{min}$ minimally support $L$, $\overline{L}$ respectively, or $\Delta_{min}$, $\Delta^{\prime}_{min}$ minimally support the claim of an argument $a\in\Delta_{min}$ , $a^{\prime}\in\Delta^{\prime}_{min}$ respectively, such that (the argument schemes) $a$ and $a^{\prime}$ are in conflict, and 2. if there exists $a^{\prime}\in\Delta^{\prime}_{min}$, $a\in\Delta_{min}$ such that $a^{\prime}\succ a$ then there exists $b\in\Delta_{min}$, $b^{\prime}\in\Delta^{\prime}_{min}$, such that $b\succ b^{\prime}$. 
In this definition, the first condition simply requires that $\Delta$ is in conflict with $\Delta^{\prime}$, while the second condition uses the strength relation on the individual arguments to lift this to a strength relation on sets of individual arguments. The particular way that this lifting is done can vary in different argumentation frameworks. Here we adopt a specific choice based on a “weakest link” principle that gives a liberal form of defense. (This choice has proven useful in a variety of different problems in AI that were studied under the particular GORGIAS preference-based argumentation framework [Kakas et al., 1994; Kakas et al., 2019], on which we are more particularly basing our framework of Cognitive Argumentation.) It says that if an argument $\Delta$ contains at least one individual argument which is stronger than an individual argument in another conflicting argument $\Delta^{\prime}$, then $\Delta$ can defend against $\Delta^{\prime}$. Otherwise, $\Delta$ can only defend against $\Delta^{\prime}$ if $\Delta$ does not contain any weaker individual elements, i.e. if the individual arguments in the two arguments are non-comparable under the strength relation. In the milk example above, given $\mathcal{S}^{\prime\prime}$, $\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}\leadsto\overline{\mathit{buy}}}$ defends against $\Delta^{\mathit{need}}_{\mathit{need}\leadsto\mathit{buy}}$, but not vice versa. 4.4 Acceptability of Arguments and Conclusions We are now ready to give the formal notion of good quality or acceptable arguments and to use it to formulate an argumentation-based form of reasoning. Informally, an acceptable argument is one that can defend against all its counterarguments.
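The support, attack and weakest-link defense relations just discussed can be checked mechanically. The following is a runnable sketch under our own simplified encoding (not the authors' implementation): an individual argument is a (scheme, premises, position) triple, and the strength relation is assumed to compare only schemes that support opposing positions, with necc_p stronger than suff_p as in Section 3.4.

```python
# Sketch of attack and "weakest link" defense for the two milk arguments
# under S'' (our own simplified encoding).

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def supports(delta):
    """All literals supported by the set of individual arguments in delta."""
    derived, changed = set(), True
    while changed:
        changed = False
        for scheme, pre, pos in delta:
            if pre <= derived and pos not in derived:
                derived.add(pos)
                changed = True
    return derived

def attacks(d1, d2):
    """d1 attacks d2 iff the two arguments support complementary literals."""
    return any(neg(l) in supports(d2) for l in supports(d1))

# Assumed strengths, compared only between schemes with opposing positions.
RANK = {"fact": 3, "necc_p": 2, "suff_p": 1, "hyp": 0}

def stronger(a, b):
    (ka, _, pa), (kb, _, pb) = a, b
    return pa == neg(pb) and RANK[ka] > RANK[kb]

def defends(d1, d2):
    """Weakest-link defense: d1 conflicts with d2 and is not strictly weaker."""
    if not attacks(d1, d2):
        return False
    d1_beats = any(stronger(a, b) for a in d1 for b in d2)
    d2_beats = any(stronger(b, a) for a in d1 for b in d2)
    return d1_beats or not d2_beats

# The two arguments of the running example, grounded in S''.
d_need = {("fact", frozenset(), "need"),
          ("suff_p", frozenset({"need"}), "buy")}
d_nomoney = {("fact", frozenset(), "~money"),
             ("necc_p", frozenset({"~money"}), "~buy")}

print(defends(d_nomoney, d_need))  # True: necc_p is stronger than suff_p
print(defends(d_need, d_nomoney))  # False
```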
Formally, given an argumentation logic framework, $\mathcal{A}_{\mathcal{L}}=\langle\mathcal{A}s,\mathcal{C},\succ\rangle$, and a cognitive state $\mathcal{S}=(\mathcal{F},\mathcal{A})$, an argument $\Delta$ is acceptable or admissible in $\mathcal{A}_{\mathcal{L}}(\mathcal{S})$ iff $\Delta$ is conflict-free and $\Delta$ defends against all arguments attacking $\Delta$. A statement, $L$, is acceptable or a credulous conclusion of $\mathcal{A}_{\mathcal{L}}(\mathcal{S})$ iff there exists an acceptable argument $\Delta$ in $\mathcal{A}_{\mathcal{L}}(\mathcal{S})$ that supports $L$. Furthermore, $L$ is a skeptical conclusion of $\mathcal{A}_{\mathcal{L}}(\mathcal{S})$ iff $L$ is a credulous conclusion of $\mathcal{A}_{\mathcal{L}}(\mathcal{S})$ and $\overline{L}$ is not a credulous conclusion of $\mathcal{A}_{\mathcal{L}}(\mathcal{S})$, i.e. there is no acceptable argument supporting $\overline{L}$. Credulous and skeptical conclusions represent plausible (possible) and definite (certain) conclusions, respectively. The existence of an acceptable argument supporting a conclusion makes this only a possible conclusion: there is a good reason to conclude it, but there could also be just as good reasons to conclude something contrary to it, and thus we cannot be certain or definite in our conclusion. Only when such possibilities of the contrary do not exist can we have high confidence in a definite conclusion.
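The taxonomy of conclusions can be summarised schematically. In the following illustrative sketch (not from the paper), `has_acceptable` is an assumed mapping from each literal to whether some acceptable argument supports it; computing that mapping is the job of the argumentation machinery defined above.

```python
# Classify a literal as a skeptical, credulous, or unsupported
# conclusion, given which literals have acceptable supporting arguments.
def negate(lit):
    """Complement of a literal; "~" marks negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def status(lit, has_acceptable):
    if not has_acceptable.get(lit, False):
        return "not a conclusion"   # no acceptable argument supports lit
    if has_acceptable.get(negate(lit), False):
        return "credulous"          # plausible: the contrary is also supported
    return "skeptical"              # definite: no acceptable contrary argument

# Milk example under S'': only ~buy is supported by an acceptable argument.
acc = {"~buy": True, "buy": False}
print(status("~buy", acc))  # skeptical
print(status("buy", acc))   # not a conclusion
```

A literal whose complement is also credulously supported is only plausible; skeptical conclusions require the contrary to have no acceptable support at all.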
Let us consider again the cognitive state $\mathcal{S}^{\prime\prime}$ and the arguments $\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}\overset{\textit{% }}{\leadsto}\overline{\mathit{buy}}}$ and $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$: $$\begin{array}[]{l@{\hspace{0.1cm}}l@{\hspace{0.1cm}}ll@{\hspace{0.1cm}}l@{% \hspace{0.1cm}}l}\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}% \overset{\textit{}}{\leadsto}\overline{\mathit{buy}}}\hskip 2.845276pt&\mbox{ % (minimally) supports }\hskip 2.845276pt&\overline{\mathit{buy}}\quad\quad\mbox% { and }\quad\quad\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{% \leadsto}\mathit{buy}}&\mbox{ (minimally) supports }\hskip 2.845276pt&\mathit{% buy}.\hskip 2.845276pt\end{array}$$ There is an individual argument in $\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}\overset{\textit{% }}{\leadsto}\overline{\mathit{buy}}}$, namely ${\textsf{\small necc\_p}}(\overline{\mathit{money}}\leadsto\overline{\mathit{% buy}})$, which is stronger than an argument in $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$, namely  ${\textsf{\small suff\_p}}(\mathit{need}\leadsto\mathit{buy})$, but not vice versa. Therefore, $\Delta^{\overline{\mathit{money}}}_{\overline{\mathit{money}}\overset{\textit{% }}{\leadsto}\overline{\mathit{buy}}}$ defends against $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$, but not vice versa. Accordingly, $\overline{\mathit{buy}}$ is acceptable but $\mathit{buy}$ is not acceptable in $\mathcal{A}_{\mathit{milk}}(\mathcal{S}^{\prime\prime})$. Therefore $\overline{\mathit{buy}}$ is a skeptical or definite conclusion of $\mathcal{A}_{\mathit{milk}}(\mathcal{S}^{\prime\prime})$. 
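The defense relation at work in this example can be sketched in a few lines of Python. This is an illustrative encoding, not the paper's implementation: an individual argument is a (scheme, claim) pair, two individual arguments are comparable only when their claims are complementary, the scheme ranking fact > necc_p > suff_p > hyp mirrors the preferences of the running example, and condition 1 (conflict between the two arguments) is assumed to hold already.

```python
# Illustrative sketch of the "weakest link" defense test (condition 2).
# The numeric scheme ranking is an assumption mirroring the running example.
RANK = {"fact": 3, "necc_p": 2, "suff_p": 1, "hyp": 0}

def negate(lit):
    """Complement of a literal; "~" marks negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def stronger(a, b):
    """a is stronger than b: their claims are complementary and a's
    scheme ranks strictly higher than b's."""
    (scheme_a, claim_a), (scheme_b, claim_b) = a, b
    return claim_a == negate(claim_b) and RANK[scheme_a] > RANK[scheme_b]

def defends_against(delta, delta_prime):
    """If delta_prime holds an individual argument stronger than one in
    delta, then delta must in turn hold one stronger than one in
    delta_prime; otherwise (non-comparability) the defense succeeds."""
    if any(stronger(a2, a1) for a2 in delta_prime for a1 in delta):
        return any(stronger(b1, b2) for b1 in delta for b2 in delta_prime)
    return True

# Milk example under S'': each argument is its own minimal support.
no_money = {("fact", "~money"), ("necc_p", "~buy")}  # supports ~buy
need     = {("fact", "need"),   ("suff_p", "buy")}   # supports buy

print(defends_against(no_money, need))  # True: necc_p(~buy) beats suff_p(buy)
print(defends_against(need, no_money))  # False: nothing counters necc_p(~buy)
```

As in the text, the only comparable pair of individual arguments is necc_p versus suff_p (the two fact arguments claim non-complementary literals), so the money-based argument defends against the need-based one but not vice versa.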
Note that technically an acceptable argument $\Delta$ for a conclusion $L$ may need to contain arguments to defend against counterarguments on the subset of $\Delta$ that minimally supports $L$. To illustrate this, consider again $\mathcal{S}^{\prime}$, which is similar to $\mathcal{S}^{\prime\prime}$, but does not contain the fact $\overline{\mathit{money}}$. Note that $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$ would not be acceptable by itself. The argument: $$\Delta_{\overline{\mathit{money}}\overset{\textit{n}}{\leadsto}\overline{% \mathit{buy}}}=\{{\textsf{\small hyp}}(\overline{\mathit{money}}),{\textsf{% \small necc\_p}}(\overline{\mathit{money}}\leadsto\overline{\mathit{buy}})\},$$ forms an attack against which $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$ cannot defend against by itself (because $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}}$ does not contain any individual argument that is stronger than some individual argument in $\Delta_{\overline{\mathit{money}}\overset{\textit{n}}{\leadsto}\overline{% \mathit{buy}}}$). However, $$\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}% ,m}=\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{% buy}}\cup\{{\textsf{\small hyp}}(\mathit{money})\}$$ is an acceptable argument (for $\mathit{buy}$) because $\Delta^{\mathit{need}}_{\mathit{need}\overset{\textit{}}{\leadsto}\mathit{buy}% ,m}\mbox{ defends against }\Delta_{\overline{\mathit{money}}\overset{\textit{n% }}{\leadsto}\overline{\mathit{buy}}}$ by defending against the conclusion $\overline{\mathit{money}}$ of $\Delta_{\overline{\mathit{money}}\overset{\textit{n}}{\leadsto}\overline{% \mathit{buy}}}$ since the individual arguments ${\textsf{\small hyp}}(\mathit{money})$ and ${\textsf{\small hyp}}(\overline{\mathit{money}})$ are of equal strength under $\succ$. 
4.5 Dialectic Argumentation Process The semantics of acceptable arguments, as defined above, affords a natural and simple dialectic argumentation process for constructing such arguments.777This process has been studied extensively in the literature of argumentation in AI (see e.g. [Dung, Kowalski and Toni, 2006; Kakas and Toni, 1999]). We will describe this process informally and briefly point out its important links to cognitive argumentation. The basic steps are as follows:

Step 1: construct a root argument supporting a conclusion of interest;
Step 2: consider a counterargument against the root argument;
Step 3: find a defense argument against the counterargument;
Step 4: check that this defense argument is not in conflict with the root argument;
Step 5: add this defense argument to the root argument, and repeat from Step 2, i.e. consider another counterargument to the now extended root argument.

This is a form of debate between supporting a position and trying to defeat it. Carrying out the process until there are no counterarguments in Step 2 that have not already been considered clearly results in an extended root argument that is an acceptable argument supporting the conclusion of interest. Let us illustrate the dialectic argumentation process with the running example, where the cognitive state is $\mathcal{S}^{\prime}$ (as above). The position of interest is $\mathit{buy}$. In Step 1 we construct a root argument, $$\Delta^{\mathit{need}}_{\mathit{need}\leadsto\mathit{buy}}=\{{\textsf{\small fact}}(\mathit{need}),{\textsf{\small suff\_p}}(\mathit{need}\leadsto\mathit{buy})\},\mbox{ supporting }\mathit{buy}.$$ In Step 2, we check whether there are counterarguments against this argument.
We can construct the following counterargument: $$\Delta_{\overline{\mathit{money}}\overset{\textit{n}}{\leadsto}\overline{\mathit{buy}}}=\{{\textsf{\small hyp}}(\overline{\mathit{money}}),{\textsf{\small necc\_p}}(\overline{\mathit{money}}\leadsto\overline{\mathit{buy}})\},$$ which is grounded in $\mathcal{S}^{\prime}$, because $\mathit{money}\in\mathcal{A}^{\prime}$. Can we find (Step 3) a defense against $\Delta_{\overline{\mathit{money}}\overset{\textit{n}}{\leadsto}\overline{\mathit{buy}}}$? Note that $\Delta^{\mathit{need}}_{\mathit{need}\leadsto\mathit{buy}}$ cannot itself defend against $\Delta_{\overline{\mathit{money}}\overset{\textit{n}}{\leadsto}\overline{\mathit{buy}}}$, as shown at the end of Section 4.4. Nevertheless, a defense is given by the hypothesis argument ${\textsf{\small hyp}}(\mathit{money})=(\emptyset,\mathit{money})$, which we can add (Step 4 and Step 5) to $\Delta^{\mathit{need}}_{\mathit{need}\leadsto\mathit{buy}}$. This new extended argument is denoted $\Delta^{\mathit{need}}_{\mathit{need}\leadsto\mathit{buy},m}$. Returning to Step 2, we look for other counterarguments. Such a counterargument is given by the hypothesis argument ${\textsf{\small hyp}}(\overline{\mathit{buy}})=(\emptyset,\overline{\mathit{buy}})$, which is trivially defended against by the root argument, $\Delta^{\mathit{need}}_{\mathit{need}\leadsto\mathit{buy}}$. In other words, there is no need to find a different defense argument and further extend the current root argument. The dialectic argumentation process for constructing acceptable arguments can be given a tree structure. This can be illustrated by figures that show how the various arguments attack and defend each other. The left of Figure 2 shows the above example.
On the right of Figure 2, the same process is shown for the non-acceptability of $\Delta^{\mathit{need}}_{\mathit{need}\leadsto\mathit{buy}}$ given $\mathcal{S}^{\prime\prime}$, i.e. when $\overline{\mathit{money}}\in\mathcal{F}^{\prime\prime}$. Here and in the following figures, (temporarily) acceptable arguments are highlighted in gray and non-acceptable arguments are in white. $\uparrow$ shows attacks and/or weak defenses, i.e. defenses that are also defended against by the argument they are defending against. $\Uparrow$ shows strong defenses, i.e. defenses that cannot be defended against by the argument they are defending against. In many cases these strong defenses determine the (final) acceptability of arguments. Although the description of the dialectic argumentation process is relatively simple, one can easily notice that it is computationally intensive. This is because a choice is required in each of the main steps, Step 1, Step 2 and Step 3. What guides these choices? In particular, within the context of Cognitive Argumentation, what cognitive principles can be used to control the process? For instance, in Step 1, where we construct the initial root argument, it seems natural to choose relatively strong arguments first, i.e. ones based on facts rather than hypotheses. People would not start by making hypotheses when they can choose other, stronger supporting arguments based on factual information. In Step 2, it is important to note that we only need to consider attacks that are strictly stronger than the root argument, avoiding the consideration of weaker arguments as attacks, as these are already defendable by the root argument. In Step 3, as in Step 1, we would prefer strong defenses, thus minimizing the extra counterattacks that we would have to consider when we extend the root argument by the defense argument in Step 5.
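The control flow of Steps 1 to 5, including the repeat loop, can be rendered schematically. In this sketch the callables `counterattacks`, `find_defense` and `conflict_free` stand in for the framework-specific machinery (they are assumptions of the sketch, not definitions from the paper), and arguments are modelled as sets of individual argument labels.

```python
# Schematic rendering of the dialectic process (Steps 1-5).
def build_acceptable(root, counterattacks, find_defense, conflict_free):
    """Extend `root` until it defends against every counterargument;
    return the extended argument, or None if no usable defense is
    found along this line of the dialectic."""
    delta = set(root)                 # Step 1: the root argument
    considered = set()
    while True:
        pending = [c for c in counterattacks(delta) if c not in considered]
        if not pending:
            return delta              # no unconsidered counterarguments remain
        attack = pending[0]           # Step 2: pick a counterargument
        considered.add(attack)
        defense = find_defense(attack, delta)               # Step 3
        if defense is None or not conflict_free(delta | set(defense)):
            return None               # Step 4 fails: no usable defense
        delta |= set(defense)         # Step 5: extend, then repeat from Step 2

# Demo mirroring the running example: the attack via hyp(~money) is
# defended by adding hyp(money) to the root argument.
attacks = lambda d: [] if "hyp(money)" in d else [frozenset({"hyp(~money)", "necc_p"})]
result = build_acceptable(frozenset({"fact(need)", "suff_p"}),
                          attacks,
                          lambda c, d: {"hyp(money)"},
                          lambda d: True)
print(sorted(result))
```

The cognitive principles discussed above would enter this loop as ordering heuristics: which root argument to start from, which pending attack to take first, and which of several defenses to prefer.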
A thorough analysis and study of this second group of cognitive principles, which are directly related to the argumentative reasoning process, is outside the scope of the present paper. 5 The Suppression Task We are now ready to apply the framework of Cognitive Argumentation to the suppression task. To start with, we need to construct the argumentation logic framework, $\langle\mathcal{A}s,\mathcal{C},\succ\rangle$, with the (argument) schemes, $\mathcal{A}s$, that are relevant to the knowledge that participants have in the suppression task. Table 5 gives an overview of these schemes, showing how they are motivated by the cognitive principles introduced in the previous Sections 3 and 4. Note that the factual scheme is only shown for the properties $\mathit{e}$ and $\ell$, as this is sufficient for the experimental setup (where the participants are only given facts on these two properties), whereas the hypothesis argument scheme can be applied to any property in the language used in the experiment. Motivated by the observations in Section 3, the preference or strength relation, $\succ$, among these schemes in $\langle\mathcal{A}s,\mathcal{C},\succ\rangle$ is specified as follows:

1. Fact schemes are stronger than any other conflicting scheme.
2. Hypothesis schemes are weaker than any other conflicting scheme.
3. Prediction schemes established from necessary conditions (necc_p schemes) are stronger than conflicting schemes established from sufficient conditions (suff_p schemes).

The third part of this strength relation, as discussed before, reflects the strength of a pragmatic disabling condition over a motivational enabling condition for the same consequent. We are left to give the middle component, $\mathcal{C}$, of the realization of the cognitive argumentation framework, $\langle\mathcal{A}s,\mathcal{C},\succ\rangle$, for the suppression task.
This conflict relation, $\mathcal{C}$, is defined through the complement relation of negation in the propositional language of the suppression task. Hence schemes (and arguments constructed from them) which support complementary positions are in conflict with each other (and the corresponding arguments form counterarguments of each other). In addition, we consider a second element of the conflict relation: explanation schemes with the same premises but different positions can be in conflict with each other, following the general Table 3 in Section 3.4. For example, ${\textsf{\small suff\_e}}(\overline{\ell}\leadsto\overline{e})$ and ${\textsf{\small necc\_e}}(\overline{\ell}\leadsto\overline{o})$ are in conflict with each other, as they produce different explanations for the same observation $\overline{\ell}$ (i.e. She did not study late in the library). On the other hand, ${\textsf{\small suff\_e}}(\overline{\ell}\leadsto\overline{e})$ and ${\textsf{\small suff\_e}}(\overline{\ell}\leadsto\overline{t})$ are not in conflict with each other. Rather, both $\overline{e}$ and $\overline{t}$ form constituent parts of an explanation for $\overline{\ell}$. Table 6 provides an overview of the (in)compatibility of the explanatory schemes of the suppression task, according to Table 3. Finally, we note that the cognitive state, $\mathcal{S}=(\mathcal{F},\mathcal{A})$, for each group depends on what is factually stated in the experiment and what the participants in each different group are made aware of, as presented in Section 3.2. Hence, for example, for Group I the awareness set is $\mathcal{A}=\{e,\ell\}$, whereas for Group II $\mathcal{A}=\{e,\ell,t\}$, and for Group III $\mathcal{A}=\{e,\ell,o\}$. These sets determine which schemes can be applied within the dialectic argumentation process.
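For illustration, the strength relation over the prediction-related schemes admits a direct encoding. The sketch below is an assumption of ours, not code from the paper: `prevails(a, b)` reads as "scheme a is stronger than scheme b" for two schemes already known to be in conflict, and the explanation schemes of Table 6 are omitted for brevity.

```python
# The three strength principles for conflicting schemes, as stated above.
def prevails(scheme_a, scheme_b):
    """scheme_a beats scheme_b, assuming the two schemes are in conflict."""
    if scheme_a == "fact":
        return scheme_b != "fact"     # 1. facts beat any other scheme
    if scheme_b == "hyp":
        return scheme_a != "hyp"      # 2. hypotheses lose to any other scheme
    if scheme_a == "hyp" or scheme_b == "fact":
        return False
    # 3. necessary-condition predictions beat sufficient-condition ones
    return scheme_a == "necc_p" and scheme_b == "suff_p"

print(prevails("fact", "hyp"))        # True
print(prevails("necc_p", "suff_p"))   # True
print(prevails("suff_p", "necc_p"))   # False
```

The relation is a strict partial order: two fact schemes, or two hypothesis schemes, never dominate each other.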
5.1 Cognitive Adequacy for the Suppression Task Data Participants were asked to select between three alternatives about a (natural language) statement $L$: (i) $L$ holds, (ii) ${L}$ does not hold and (iii) $L$ may or ${L}$ may not hold. We can see that the first two options refer to a definite conclusion for or against $L$, while the third option refers to a plausible conclusion about $L$ or its complement. It is thus important to note that the experimental conditions encourage the participants to think and reason both about definite and plausible conclusions. What is then a suitable evaluation method of a theoretical model for capturing the experimental data that evaluates the cognitive adequacy of the model? Given that Cognitive Argumentation contains both forms of definite and plausible conclusions, this allows us to set up a criterion of evaluation of the cognitive adequacy that can probe deeply the reasoning of the approach. This criterion will be based on comparing the percentage of the participants’ responses for a position (e.g. for She will study late in the library ($\ell$)) with the existence of acceptable arguments for the position that is asked and the existence of acceptable arguments for its complement. In particular, we will examine if in each case of the experiment: (i) there is an acceptable argument for that position asked but not for its negation (i.e. we have a skeptical definite conclusion), or whether (ii) there is an acceptable argument for that position and for the negation of that position (i.e. we have a credulous plausible conclusion). This distinction will then be required to qualitatively correspond to variations in the observed percentage of answers within the population across the three groups but also the variation within each group as follows: 1. If the population agrees on a position overwhelmingly ($>90\%$), then the position asked should follow in a skeptical, definite way. 2. If there is no overwhelming majority, i.e. 
the percentage of the population answering for the position is less than $90\%$, then there should exist acceptable arguments for both the position and its negation, i.e. both should follow credulously. The second case (2.) reflects the fact that when there is no clear majority for the position asked, there are significant parts of the population that recognize the possibility of the opposite. In other words, although these participants may be able to build acceptable arguments for the position asked (just like those participants that have answered the question in a definite way), they also recognize the possibility of the complement of the position holding, by being able to also build an acceptable argument for the complement. As only the percentages for the answer in question were reported, and not the percentages of all answers, we cannot conclude that the participants who did not choose the answer in question were prevented from choosing any answer in a definite way. Even though unlikely, it is possible that these participants chose (skeptically) the complement answer (ignoring the counterarguments). We will now show, in each of the four cases of the experiment (where different factual information is given to participants in all three groups), how we can capture the reported experimental results within this framework of cognitive argumentation. Each of the four cases will be presented in a separate subsection. In these we will introduce the main arguments that can be constructed for and against the property that is asked, and show which ones are acceptable by analyzing the relevant dialectic argumentation processes. These will be illustrated by figures that show how the various arguments attack and defend each other, following the definitions introduced in Section 4.5.
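The evaluation criterion can be summarised in a small sketch. The helper names are hypothetical; the 90% threshold and the mapping from acceptability to response patterns follow the two cases stated above.

```python
# Map the model's conclusions for a position and its negation to the
# response pattern it predicts, and compare with the observed percentage.
def predicted_pattern(acceptable_pos, acceptable_neg):
    if acceptable_pos and not acceptable_neg:
        return "skeptical"   # expect an overwhelming majority (> 90%)
    if acceptable_pos and acceptable_neg:
        return "credulous"   # expect a split population (<= 90%)
    return "unsupported"

def cognitively_adequate(percent_for_position, acceptable_pos, acceptable_neg):
    pattern = predicted_pattern(acceptable_pos, acceptable_neg)
    if percent_for_position > 90:
        return pattern == "skeptical"
    return pattern == "credulous"

# Group I, first case: 96% answered "She will study late in the library".
print(cognitively_adequate(96, True, False))  # True
# Group III, first case (suppression): only 38% gave the definite answer.
print(cognitively_adequate(38, True, True))   # True
```

A qualitative check of this form is what the following subsections carry out case by case, by constructing the acceptable arguments for each group.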
On Labeling Arguments, Schemes, Attacks and Acceptability In the sequel, we will use superscripts and subscripts in the name of an argument, $\Delta$, to indicate the scheme types applied for the construction of the argument: Superscripts denote fact schemes (if applicable), whereas subscripts will denote all other schemes. A (curly) arrow used as a subscript will indicate the association captured by an argument scheme. Arrows are labeled by $\overset{\textit{\tiny s}}{\leadsto}$ referring to schemes based on a sufficient condition, or as $\overset{\textit{\tiny n}}{\leadsto}$ referring to schemes based on a necessary condition. For the figures, the interpretation of arrows and acceptability is as in Section 4.5: Acceptable arguments are highlighted in gray and non-acceptable arguments stay white. $\uparrow$ shows attacks and/or weak defenses, i.e. arguments which can also be defended against by the argument they are attacking or defending against. $\Uparrow$ shows strong attacks and/or defenses, i.e. arguments which cannot be defended against by the argument they are attacking or defending against. 5.2 She has an essay to finish In the first case all three groups were given the factual information, She has an essay to finish ($e$), and were asked whether She will study late in the library ($\ell$). In Group I, participants were assumed to be only aware of $e$ and $\ell$ as no other property is involved in the conditional information or question that they were given. Thus, the cognitive state that represents this group of participants is $\mathcal{S}=(\{e\},\{e,\ell\})$. Figure LABEL:fig:stepbystep:e:lib gives step-by-step the dialectic construction of the main (i.e. stronger and more cognitively plausible) arguments for $\ell$ and $\overline{\ell}$ in Group I. 888In this figure and the following ones, the numbers in parentheses below the graphs refer to the steps in the dialectic argumentation process as specified in Section 4.5. 
(1, $\dots)$ denotes for which position the argument is being constructed in step 1. The (strongest) argument supporting $\ell$ is given by combining the fact scheme for $e$ together with the sufficient prediction scheme for $\ell$ (Figure LABEL:fig:stepbystep:e:lib.1, $\ell$): $$\begin{array}[]{lll}\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}&% =&\{{\textsf{\small fact}}(\mathit{e}),{\textsf{\small suff\_p}}(e\leadsto\ell% )\}.\end{array}$$ It is easy to recognize this as a modus ponens argument expressed here in an argumentation perspective. For supporting $\overline{\ell}$ the main argument is constructed by applying the necessary prediction scheme for $\overline{\ell}$ with the hypothesis scheme for $\overline{e}$ (Figure LABEL:fig:stepbystep:e:lib.1, $\overline{\ell}$): $$\begin{array}[]{lll}\Delta_{\overline{e},\overline{e}\overset{\textit{n}}{% \leadsto}\overline{\ell}}&=&\{{\textsf{\small hyp}}(\overline{e}),{\textsf{% \small necc\_p}}(\overline{e}\leadsto\overline{\ell})\}.\end{array}$$ These two arguments attack each other but as the figures show only $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}$ is able to defend against the attack via the argument $\Delta^{e}=\{{\textsf{\small fact}}(\mathit{e})\}$ that it contains. In fact, $\Delta_{\overline{e},\overline{e}\overset{\textit{n}}{\leadsto}\overline{\ell}}$ is immediately defeated by the stronger argument $\Delta^{e}$ which attacks $\Delta_{\overline{e},\overline{e}\overset{\textit{n}}{\leadsto}\overline{\ell}}$ on the hypothesis part it contains and for which there is no defense. Consequently, $\ell$ is an acceptable (plausible) conclusion whereas $\overline{\ell}$ is not. Combining the two results for $\ell$ and $\overline{\ell}$ we see that $\ell$ is a skeptical conclusion: the modus ponens argument of $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}$ prevails. 
This conforms with our criterion of evaluation, to reflect with a skeptical conclusion the overwhelming majority of responses for She will study late in the library in this first group (96%/88%)999Here, and in the sequel, the first percentage refers to the results in [Byrne, 1989], and the second percentage refers to the results in [Dieussaert, Schaeken, Schroyens and D’Ydewalle, 2000].. For Group II, the argumentation analysis is essentially the same as for Group I. The new awareness of She has a textbook to read ($t$) does not have a significant effect. In particular, it does not introduce any new arguments supporting $\overline{\ell}$, and hence $\ell$ remains a skeptical conclusion, as required by the overwhelming majority (96%/93%) also in Group II. For Group III, the cognitive state of its participants is $\mathcal{S}=(\{e\},\{e,\ell,o\})$. Differently from Group I and Group II, we can now construct another strong argument for $\overline{\ell}$, based on the hypothesis scheme for $\overline{o}$ together with the necessary prediction scheme for $\overline{\ell}$ (Figure 4, $\overline{\ell}$): $$\begin{array}[]{lll}\Delta_{\overline{o},\overline{o}\overset{\textit{n}}{\leadsto}\overline{\ell}}&=&\{{\textsf{\small hyp}}(\overline{o}),{\textsf{\small necc\_p}}(\overline{o}\leadsto\overline{\ell})\}.\end{array}$$ In Figure 4 (left) we see how $\Delta_{\overline{o},\overline{o}\overset{\textit{n}}{\leadsto}\overline{\ell}}$ attacks the modus ponens argument $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}$. This in turn is able to defend against $\Delta_{\overline{o},\overline{o}\overset{\textit{n}}{\leadsto}\overline{\ell}}$ with the help of $\Delta_{o}=\{{\textsf{\small hyp}}(o)\}$ by opposing the hypothesis part ${\textsf{\small hyp}}(\overline{o})$ inside $\Delta_{\overline{o},\overline{o}\overset{\textit{n}}{\leadsto}\overline{\ell}}$.
Thus $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}\cup\Delta_{o}$ can defend against all its attacks, and so it is an acceptable argument for $\ell$. On the other hand, Figure 4 (right) shows how $\Delta_{\overline{o},\overline{o}\overset{\textit{n}}{\leadsto}\overline{\ell}}$ supporting $\overline{\ell}$ can itself defend against its attack by $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}$, as ${\textsf{\small necc\_p}}(\overline{o}\leadsto\overline{\ell})$ in $\Delta_{\overline{o},\overline{o}\overset{\textit{n}}{\leadsto}\overline{\ell}}$ is stronger than ${\textsf{\small suff\_p}}(e\leadsto\ell)$ in $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}$. Hence $\Delta_{\overline{o},\overline{o}\overset{\textit{n}}{\leadsto}\overline{\ell}}$ is an acceptable argument for $\overline{\ell}$.101010The optimization of the dialectic argumentation process in Figure 4 (only with strong attacks) stops after (3-5). Summing up, we see that in Group III both $\ell$ and $\overline{\ell}$ are plausible (credulous) conclusions. This then accounts for the observed suppression effect, as here only 38%/60% concluded that She will study late in the library. It is likely that these participants constructed only the argument $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}$ for $\ell$ and hence concluded $\ell$ definitely. It seems likely that the other 62%/40% of the participants were able to construct both $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}$ and $\Delta_{\overline{o},\overline{o}\overset{\textit{n}}{\leadsto}\overline{\ell}}$ and hence were not (skeptically) sure that $\ell$ held. However, as already mentioned earlier, the percentages of the other two answer possibilities were not reported, and thus possibly some participants skeptically concluded that $\overline{\ell}$ held (ignoring $\Delta_{e\overset{\textit{s}}{\leadsto}\ell}^{\mathit{e}}$).
5.3 She does not have an essay to finish In this second case all three groups were given the fact that She does not have an essay to finish ($\mathit{\overline{e}}$) and were asked whether She will study late in the library ($\ell$). Similarly to the previous case, for Group I the corresponding cognitive state is $\mathcal{S}=(\{\overline{e}\},\{e,\ell\})$. Figure 5 (left) shows two arguments that support $\ell$, one based on the hypothesis scheme for $e$: $$\begin{array}[]{lll@{\hspace{2cm}}lll}\Delta_{e,e\overset{\textit{s}}{\leadsto% }\ell}&=&\{{\textsf{\small hyp}}(\mathit{e}),{\textsf{\small suff\_p}}(e% \leadsto\ell)\},\hskip 56.905512pt\end{array}$$ and another by simply hypothesizing $\ell$, namely the argument $\Delta_{\ell}=\{{\textsf{\small hyp}}(\ell)\}$. The first argument, $\Delta_{e,e\overset{\textit{s}}{\leadsto}\ell}$, is easily defeated (i.e. attacked with no defense against the attack) by the factual argument $\Delta^{\overline{\mathit{e}}}=\{{\textsf{\small fact}}(\overline{e})\}$ that attacks the weak part, ${\textsf{\small hyp}}(\mathit{e})$, of $\Delta_{e,e\overset{\textit{s}}{\leadsto}\ell}$. The other argument, $\Delta_{\ell}$, for $\ell$ is defeated by a strong argument supporting $\overline{\ell}$, constructed from the factual scheme for $\overline{e}$ and the ${\textsf{\small necc\_p}}(\overline{e}\leadsto\overline{\ell})$ scheme: $$\begin{array}[]{lll@{\hspace{2cm}}lll}\Delta_{\overline{e}\overset{\textit{n}}% {\leadsto}\overline{\ell}}^{\overline{\mathit{e}}}&=&\{{\textsf{\small fact}}(% \overline{e}),{\textsf{\small necc\_p}}(\overline{e}\leadsto\overline{\ell})\}% .\hskip 56.905512pt\end{array}$$ This is indeed a strong argument for $\overline{\ell}$ and as shown in Figure 5 (right) it is able to defend against its attacks. Hence $\overline{\ell}$ is an acceptable plausible conclusion whereas $\ell$ is not so. Thus $\overline{\ell}$ is a skeptical definite conclusion. 
This corresponds to what about half of the participants (46%/49%) concluded, namely that She will not study late in the library. This is similar for Group III (63%/49%). Note that this conclusion depends on the assumption that participants consider $e$ also as a necessary condition in $e\leadsto\ell$, which enables the necessary prediction scheme, ${\textsf{\small necc\_p}}(\overline{e}\leadsto\overline{\ell})$, that is in $\Delta_{\overline{e}\overset{\textit{n}}{\leadsto}\overline{\ell}}^{\overline{\mathit{e}}}$. For those participants who understand $e$ only as a sufficient condition, the argumentation process is depicted in Figure 6. Figure LABEL:fig:stepbystep:ne:lib:suff (left) shows that $\Delta_{\ell}$ can now only be attacked by the opposite hypothetical argument $\Delta_{\overline{\ell}}$, which it can easily defend against, and so $\ell$ is acceptable. Similarly, the only argument for $\overline{\ell}$ is based on the hypothesis scheme for $\overline{\ell}$. Figure 6 (right) shows that $\Delta_{\overline{\ell}}$ is acceptable when combined with $\Delta^{\overline{\mathit{e}}}$, which is needed as a defense against the second attack of $\Delta_{e,e\overset{\textit{s}}{\leadsto}\ell}$ (Figure 6.2b). Hence both $\ell$ and $\overline{\ell}$ are plausible conclusions when $e$ is understood only as a sufficient condition, which covers the other half of participants (54%/51%) that did not choose the definite conclusion She will not study late in the library. Let us now consider Group II, where a significant suppression effect is observed. Here, participants are additionally made aware of She might (not) have a textbook to read, where She has a textbook to read ($t$) is a sufficient condition for $\ell$. The cognitive state that represents this group is $\mathcal{S}=(\{\overline{e}\},\{e,\ell,t\})$.
Now $e$ in $e\leadsto\ell$ can no longer be understood as a necessary condition, because the second conditional makes participants aware of an alternative reason, $t$, for $\ell$. Hence, in the absence of ${\textsf{\small necc\_p}}(\overline{e}\leadsto\overline{\ell})$, we cannot construct a strong argument for $\overline{\ell}$. Consequently, the majority is more likely to construct, in the way we saw above for Group I, acceptable arguments for either conclusion, $\ell$ and $\overline{\ell}$, and thus both follow credulously. This accounts for the fact that the majority of the participants did not conclude a definite answer, i.e. only 4%/22% concluded that She will not study late in the library. 5.4 She will study late in the library In this third case, all groups were asked whether She has an essay to finish ($\mathit{e}$), given the factual information that She will study late in the library ($\ell$). Hence the arguments that we need to consider are not for $\ell$ or $\overline{\ell}$, as in the previous cases, but for $\mathit{e}$ and $\mathit{\overline{e}}$. In this case (as well as in the next case of the experiment, whose given factual information is She will not study late in the library), where the factual information concerns the consequent, it is natural to assume that a significant number of participants entered into an explanatory (or diagnostic) mode. In this mode, these participants tried to explain the factual observation in the context of the information that they were given in each group. We will therefore analyze both modes of reasoning, predictive and explanatory. As above, we will also consider the two cases where some participants understood $e$ in the conditional statement, $e\leadsto\ell$, as a sufficient and necessary condition and others understood $e$ only as sufficient, although this will not be significant for the explanatory mode.
5.4.1 Prediction mode: She has an essay to finish is necessary and sufficient For those participants in Group I who understood $e$ as sufficient and necessary in $e\leadsto\ell$, a (strong) argument for $e$ is based on the fact scheme for $\ell$ together with the secondary necessary prediction scheme for $e$: $$\begin{array}[]{lll}\Delta_{\ell\overset{\textit{n}}{\leadsto}e}^{\ell}&=&\{{% \textsf{\small fact}}(\ell),\mbox{sec\_necc\_p}(\ell\leadsto e)\}.\end{array}$$ Even though $$\begin{array}[]{lll}\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}% \overline{e}}&=&\{{\textsf{\small hyp}}(\overline{\ell}),\mbox{sec\_suff\_p}(% \overline{\ell}\leadsto\overline{e})\}\end{array}$$ attacks $\Delta_{\ell\overset{\textit{n}}{\leadsto}e}^{\ell}$, it can be easily defeated by the (strong) factual argument, $\Delta^{\ell}=\{{\textsf{\small fact}}(\ell)\}$, as Figure 7 (left) shows. Indeed, $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}$ is cognitively improbable, i.e. it is unlikely to be considered by someone exactly because it directly contradicts the known fact that She will study late in the library. Similarly, the other counterargument against $\Delta_{\ell\overset{\textit{n}}{\leadsto}e}^{\ell}$, namely $\Delta_{\overline{e}}=\{{\textsf{\small hyp}}(\overline{e})\}$ can be simply defended against by $\Delta_{\ell\overset{\textit{n}}{\leadsto}e}^{\ell}$ itself. Figure LABEL:fig:stepbystep:lib:ne:nec (right) shows that no argument for $\overline{e}$ is acceptable. In both cases, they have strong counterarguments ($\Delta_{\ell\overset{\textit{n}}{\leadsto}e}^{\ell}$ and $\Delta^{\ell}$, respectively), which they cannot defend against. Summing up, when $e$ in $e\leadsto\ell$ is understood also as a necessary condition, in a predictive mode or reasoning we can construct acceptable arguments only for $e$ and thus $e$ follows skeptically. 
This captures the observation that a significant part of the participants, 71%/53%, concluded that She has an essay to finish. 5.4.2 Explanatory mode: She has an essay to finish is only sufficient The participants in Group I (29%/47%) who were not sure whether She has an essay to finish holds may have entered an explanatory mode. In this mode, explanatory schemes can be applied to explain the given observation that She will study late in the library. Furthermore, participants who did not understand $e$ in $e\leadsto\ell$ as necessary may have considered that there is a possibility of some other, unknown explanation for $\ell$, thus using the exogenous explanation scheme. Such an alternative exogenous explanation can help to support the hypothesis opposite to an explanation: in this case, to support the hypothesis $\overline{e}$ opposing the explanation $e$. Figure 8 (left) shows the acceptability of the following explanatory argument supporting $e$: $$\begin{array}[]{lll}\Delta_{\ell\overset{\textit{s}}{\leadsto}e}^{\ell}&=&\{{\textsf{\small fact}}(\ell),{\textsf{\small suff\_e}}(\ell\leadsto e)\},\end{array}$$ which can be attacked by $$\begin{array}[]{lll}\Delta^{\ell}_{\ell\overset{\textit{s}}{\leadsto}\textsf{\tiny exo}}&=&\{{\textsf{\small fact}}(\ell),\mbox{exo\_e}(\ell\leadsto\small\mbox{exo}(l))\},\end{array}$$ an argument for an alternative explanation constructed via the explanation scheme, $\mbox{exo\_e}(l)=(\ell,\small\mbox{exo}(l))$. $\Delta_{\ell\overset{\textit{s}}{\leadsto}e}^{\ell}$ and $\Delta^{\ell}_{\ell\overset{\textit{s}}{\leadsto}\textsf{\tiny exo}}$ can defend against each other and thus $\Delta_{\ell\overset{\textit{s}}{\leadsto}e}^{\ell}$ is acceptable. Figure 8 (right) shows the acceptable argument supporting $\overline{e}$.
The weak hypothesis argument $\Delta_{\overline{e}}$ is defended with the help of $\Delta^{\ell}_{\ell\overset{\textit{s}}{\leadsto}\textsf{\tiny exo}}$ and hence $\Delta_{\overline{e}}\cup\Delta^{\ell}_{\ell\overset{\textit{s}}{\leadsto}\textsf{\tiny exo}}$ is acceptable for $\overline{e}$. In other words, the hypothesis that She does not have an essay to finish can stand as valid by assuming that there was some other, unknown reason for which She will study late in the library. Summarizing, we can construct acceptable arguments both for $\mathit{e}$ and for $\mathit{\overline{e}}$, and thus $\mathit{e}$ is only a plausible (credulous) conclusion. This reflects well the other significant part of the participants (29%/47%) who did not choose to answer that She has an essay to finish. Furthermore, we note that this split in the answers can be captured by attributing to one part of the participants (29%/47%) the possibility of an unknown exogenous explanation and to the other part (71%/53%) the exclusion of exogenous explanations. Let us now consider Group II, where a suppression effect is observed.
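The contrast between the skeptical outcome of the prediction mode and the credulous outcome here can also be sketched with a small, hypothetical encoding (again our illustrative simplification, not the paper's exact machinery), where a conclusion is classified as skeptical when only it is supported by an admissible set, and credulous when its opposite is supported too.

```python
from itertools import chain, combinations

# Hypothetical encoding of the explanatory-mode analysis for Group I above.
# 'E' = {fact(l), suff_e(l ~> e)}    explanatory argument for e
# 'X' = {fact(l), exo_e(l ~> exo)}   alternative exogenous explanation
# 'H' = {hyp(not e)}                 weak hypothesis argument for not_e
ARGS = ["E", "X", "H"]
SUPPORTS = {"E": "e", "X": None, "H": "not_e"}
# E and X are of equal strength and defeat each other; E defeats the weak H.
DEFEATS = {("E", "X"), ("X", "E"), ("E", "H")}

def admissible(S):
    """S is conflict-free and defends each member against every defeater."""
    if any((x, y) in DEFEATS for x in S for y in S):
        return False
    return all(any((z, x) in DEFEATS for z in S)
               for a in S for (x, y) in DEFEATS if y == a)

def holds(conclusion):
    """Some admissible set contains an argument supporting `conclusion`."""
    subsets = chain.from_iterable(
        combinations(ARGS, r) for r in range(1, len(ARGS) + 1))
    return any(admissible(set(S)) and
               any(SUPPORTS[a] == conclusion for a in S)
               for S in subsets)

def status(conclusion, opposite):
    if holds(conclusion):
        return "credulous" if holds(opposite) else "skeptical"
    return "unsupported"

print(status("e", "not_e"))  # credulous
```

Both {E} and {H, X} come out admissible (X defends H, and each of E and X defends against the other), so both $e$ and $\overline{e}$ are supported and $e$ is only credulous.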
In this group, for most participants (i) $e$ is unlikely to be considered a necessary condition and (ii) the possibility of an alternative explanation, such as $\small\mbox{exo}(l)$, for the observed fact is made explicit: as $t\in\mathcal{A}$, we can apply the explanatory scheme, ${\textsf{\small suff\_e}}(\ell\leadsto t)=(\ell,t)$, from which we can construct a new argument: $$\begin{array}[]{lll}\Delta^{\ell}_{\ell\overset{\textit{s}}{\leadsto}\textsf{\tiny t}}&=&\{{\textsf{\small fact}}(\ell),{\textsf{\small suff\_e}}(\ell\leadsto t)\}.\end{array}$$ We can then construct acceptable arguments for both $e$ and $\overline{e}$ in the same way as shown above by simply replacing $\Delta^{\ell}_{\ell\overset{\textit{s}}{\leadsto}\textsf{\tiny exo}}$ with this explicit form of an alternative explanation, $\Delta^{\ell}_{\ell\overset{\textit{s}}{\leadsto}\textsf{\tiny t}}$: the construction is completely analogous to that in Figure 8. Accordingly, $\mathit{e}$ and $\mathit{\overline{e}}$ are credulous conclusions for most participants, which reflects well the suppression effect in the second group, as there was no majority (only 13%/16%) that concluded that She has an essay to finish. Finally, for Group III we can apply a similar analysis as in Group I to account in an analogous way for the split in the answers: about half of the participants (54%/53%) concluded that She has an essay to finish. This is because the extra information given, or made aware of, in Group III, namely the (necessary) condition the library is open, does not offer any new explanatory argument for the given observation that She will study late in the library. 5.5 She will not study late in the library In the fourth and last case of the experiment, all groups were asked whether She has an essay to finish ($\mathit{e}$), given that She will not study late in the library ($\overline{\ell}$).
In this case of the experiment, there is a significant discrepancy between the results provided in [\citeauthoryearByrneByrne\APACyear1989] and in [\citeauthoryearDieussaert, Schaeken, Schroyens\BCBL \BBA D’YdewalleDieussaert \BOthers.\APACyear2000]: whereas in Group I and Group II in [\citeauthoryearByrneByrne\APACyear1989] above 90% answered that She does not have an essay to finish, in [\citeauthoryearDieussaert, Schaeken, Schroyens\BCBL \BBA D’YdewalleDieussaert \BOthers.\APACyear2000] only 69% gave that answer in the same groups. One way of explaining this difference is that in [\citeauthoryearByrneByrne\APACyear1989] (almost) all of the participants in these two groups reasoned in the prediction mode, whereas in [\citeauthoryearDieussaert, Schaeken, Schroyens\BCBL \BBA D’YdewalleDieussaert \BOthers.\APACyear2000] some participants in Group I and Group II might have reasoned in the explanation mode instead. As in the case of Section 5.4, we will analyze both modes of reasoning, predictive and explanatory, assuming that again it is natural for some participants to reason in explanatory mode, as the factual information given to them concerns the consequent of the conditional(s) in the general information (or context) with which they are asked to reason. The cognitive states for Group I, II and III are $(\{\overline{\ell}\},\{e,\ell\})$, $(\{\overline{\ell}\},\{e,\ell,t\})$ and $(\{\overline{\ell}\},\{e,\ell,o\})$, respectively. 5.5.1 Prediction Mode Let us first consider those participants that reason in the prediction mode and ask if they can build acceptable arguments for $e$ or $\overline{e}$.
For Group I, a cognitively plausible and strong argument for $\overline{e}$ is: $$\begin{array}[]{lll}\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}&=&\{{\textsf{\small fact}}(\overline{\ell}),\mbox{sec\_suff\_p}(\overline{\ell}\leadsto\overline{e})\},\end{array}$$ which is able to defend against any of its attacks (see Figure 9, right). Hence $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$ is an acceptable argument for $\overline{e}$. For supporting $e$, a possible argument consists of the hypothesis scheme for $e$ (Figure 9, left): $$\begin{array}[]{lll}\Delta_{e}&=&\{{\textsf{\small hyp}}(e)\}.\end{array}$$ However, $\Delta_{e}$ is attacked by $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$, against which $\Delta_{e}$ cannot defend, and so $\Delta_{e}$ is not acceptable. Note that this attack against $\Delta_{e}$ by $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$ reflects the informal counterargument that If she had an essay to finish, then she would study late in the library, but we are told that She will not study late in the library. In more formal terms, this is a Reductio ad Absurdum counterargument, rendering the hypothesis that She has an essay to finish not acceptable. Similarly, the argument $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$ for $\overline{e}$ (Figure 9, right) can be recognized as reasoning with modus tollens from an argumentation perspective. Hence, there is no acceptable argument for $e$, and thus $\overline{e}$ is a definite skeptical conclusion. This corresponds well with the high majority of the participants (92%) in [\citeauthoryearByrneByrne\APACyear1989] who concluded and answered that She does not have an essay to finish.
The case for Group II is analogous to that for Group I, conforming with the same observed result (96%). Also for Group III, participants who reason in a predictive mode will reach the same result of $\overline{e}$ being a skeptical conclusion. But this is not observed in the experimental data, where we have a significant reduction in the number of participants who answer that $\overline{e}$ holds. In addition, as mentioned above, in the second experiment in [\citeauthoryearDieussaert, Schaeken, Schroyens\BCBL \BBA D’YdewalleDieussaert \BOthers.\APACyear2000], even for Groups I and II the percentage of participants that answered that $\overline{e}$ holds is only 69%, and thus not all participants (skeptically) concluded $\overline{e}$. One way to account for these two observations is to consider that a significant number of participants in Group III, and similarly in Groups I and II in the [\citeauthoryearDieussaert, Schaeken, Schroyens\BCBL \BBA D’YdewalleDieussaert \BOthers.\APACyear2000] experiment, reasoned in an explanation mode, as we will describe below. 5.5.2 Explanatory Mode In an explanatory mode of reasoning we can explain the given factual information of $\overline{\ell}$ either using the explanatory argument scheme ${\textsf{\small necc\_e}}(\overline{\ell}\leadsto\overline{e})$ or $\mbox{sec\_suff\_e}(\overline{\ell}\leadsto\overline{e})$, depending on whether $e$ is understood as a necessary condition for $\ell$ or not.
Accordingly, we construct the following two arguments: $$\begin{array}[]{lll}\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{e}}^{\overline{\ell}}=\{{\textsf{\small fact}}(\overline{\ell}),{\textsf{\small necc\_e}}(\overline{\ell}\leadsto\overline{e})\}\quad\quad\mbox{and}\quad\quad\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}=\{{\textsf{\small fact}}(\overline{\ell}),\mbox{sec\_suff\_e}(\overline{\ell}\leadsto\overline{e})\}.\end{array}$$ Figure 10 (right) shows that $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{e}}^{\overline{\ell}}$ is acceptable. Analogously, $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$ is also acceptable (we can replace $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{e}}^{\overline{\ell}}$ with $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$). What about arguments supporting $e$? The hypothesis argument $\Delta_{e}$, as shown in Figure 10 (left), is not acceptable. But can we find another argument to defend against its attack? Analogously to the previous case in Section 5.4, we can defend $\Delta_{e}$ by considering alternative explanation schemes which are incompatible with those in the attacks $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$ and $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{e}}^{\overline{\ell}}$ (see Table 6 on the incompatibility between explanation schemes).
For Group I, the only other explanation is an exogenous one, through which we can build an alternative explanatory argument: $$\begin{array}[]{lll}\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{\textsf{\tiny exo}}}=\{{\textsf{\small fact}}(\overline{\ell}),{\textsf{\small necc\_e}}(\overline{\ell}\leadsto\small\mbox{exo})\}.\end{array}$$ Figure 10 (middle) then illustrates this defense through $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{\textsf{\tiny exo}}}$ and hence $\Delta_{e}\cup\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{\textsf{\tiny exo}}}$ forms an acceptable argument for $e$. Summarizing the analysis for Group I: for the participants that did not have an exogenous explanation, and thus $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{\textsf{\tiny exo}}}$, in mind, $\overline{e}$ follows skeptically, thus accounting for the experimental observation that 69% (in [\citeauthoryearDieussaert, Schaeken, Schroyens\BCBL \BBA D’YdewalleDieussaert \BOthers.\APACyear2000]) answered that $\overline{e}$ holds. But when $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{\textsf{\tiny exo}}}$ is considered as an argument, then both $e$ and $\overline{e}$ are credulous conclusions. This accounts for the other participants in Group I, likely the 31% in [\citeauthoryearDieussaert, Schaeken, Schroyens\BCBL \BBA D’YdewalleDieussaert \BOthers.\APACyear2000], who were not sure whether $\overline{e}$ holds. Let us now consider Group II.
In this case we have a second sufficient condition ($t$), and hence we can now construct two arguments based on the secondary sufficient explanation for $\overline{\ell}$: $$\begin{array}[]{lll}\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}=\{{\textsf{\small fact}}(\overline{\ell}),\mbox{sec\_suff\_e}(\overline{\ell}\leadsto\overline{e})\}\quad\quad\mbox{and}\quad\quad\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{t}}^{\overline{\ell}}=\{{\textsf{\small fact}}(\overline{\ell}),\mbox{sec\_suff\_e}(\overline{\ell}\leadsto\overline{t})\}.\end{array}$$ But unlike the third case in the previous subsection, $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$ and $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{t}}^{\overline{\ell}}$ are not incompatible with each other (see Table 3) but are rather components of an explanation that would apply in different contexts. Hence the existence of the second argument does not affect the acceptability of the arguments for $e$ or $\overline{e}$ in an explanatory mode as presented for Group I above. This then conforms with the fact that in this case the experimental results for Group II are identical to those for Group I. Finally, let us consider Group III. Participants were made aware of the possibility of an explicit or concrete alternative explanation for the given observation, namely that the reason that She will not study late in the library is that the library might not be open.
This means that participants can construct the argument $$\begin{array}[]{lll}\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{o}}^{\overline{\ell}}=\{{\textsf{\small fact}}(\overline{\ell}),{\textsf{\small necc\_e}}(\overline{\ell}\leadsto\overline{o})\}.\end{array}$$ As ${\textsf{\small necc\_e}}(\overline{\ell}\leadsto\overline{o})$ is incompatible with the explanatory schemes supporting $\overline{e}$, this new argument will defend against the arguments for $\overline{e}$, namely against $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{e}}^{\overline{\ell}}$ and $\Delta_{\overline{\ell}\overset{\textit{s}}{\leadsto}\overline{e}}^{\overline{\ell}}$. Hence $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{o}}^{\overline{\ell}}$ can take the place of $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{\textsf{\tiny exo}}}$ in the analysis of the explanatory argumentative reasoning in the previous groups (e.g. we can replace $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{\textsf{\tiny exo}}}$ with $\Delta_{\overline{\ell}\overset{\textit{n}}{\leadsto}\overline{o}}^{\overline{\ell}}$ in Figure 10). Furthermore, the acceptability of the explanatory arguments supporting $\overline{e}$ is not affected by the existence of this new explanatory argument supporting $\overline{o}$, as the two are of equal strength and can therefore defend against each other. Hence the suppression effect in Group III can be accounted for simply by assuming that a higher proportion of the participants (in comparison with Groups I and II) thought of an alternative explanation, i.e. other than that of $\overline{e}$, now that they are made explicitly aware of the possible explanation of the library not being open.
Only those participants who did not consider an alternative explanation concluded for sure that $\overline{e}$ holds, and indeed the proportion of participants that did so was significantly lower (33%/44%). 5.6 Summary of all cases of the BST experiments Table 7 gives a summary of the analysis of the cognitive argumentation reasoning for each of the four cases of the suppression task. The first column shows the given fact: $e$, $\overline{e}$, $\ell$ or $\overline{\ell}$, respectively. The second column refers to the groups. Columns three and four denote the conclusions within cognitive argumentation derived when participants are assumed to be in the predictive mode of reasoning, where ‘suff&necc’ and ‘suff’ mean that $e$ is understood as sufficient and necessary or only as sufficient, respectively. Note that whenever only one conclusion is listed in any of these four columns, the conclusion is skeptically entailed (definite); otherwise, it is credulously entailed (possible). This applies analogously for the explanatory mode of reasoning in columns five and six. In the four columns (columns 3 to 6), ‘-’ means that this case does not seem to be cognitively plausible. Finally, columns seven and eight show the experimental results from [\citeauthoryearByrneByrne\APACyear1989] and [\citeauthoryearDieussaert, Schaeken, Schroyens\BCBL \BBA D’YdewalleDieussaert \BOthers.\APACyear2000], so that we can compare them with our conclusions. The framed cells in columns 7 and 8 show where the suppression effect occurs. This table summarizes the analysis presented above and reveals the following two results by comparing the entries in columns 7 and 8 with the corresponding entries in columns 3-6: Suppression: Observed suppression coincides with the loss, in the suppression group, of skeptical conclusions drawn in the other two groups.
The conclusion at hand is exclusively skeptical or can be skeptical in the other two groups, whereas in the suppression group it is predominantly credulous. It is therefore more likely for participants in the suppression group to consider the conclusion only plausible and hence to avoid choosing the conclusion in their answer. Variation: Observed significant variation in the answers in any case and any group coincides with the conclusion at hand being credulous. Across all rows, whenever the percentages in columns 7 and 8 are overwhelmingly high we only have skeptical conclusions in the corresponding columns 3 to 6, and whenever the percentages are split we have credulous conclusions in columns 3 to 6. Hence the approach of Cognitive Argumentation offers a model that captures the suppression effect and also offers an account for the qualitative difference in the degree of certainty, across the population, of the conclusions drawn. These results stem from two important properties of Cognitive Argumentation: (i) its natural distinction between definite and plausible conclusions via the formal notions of skeptical and credulous conclusions, and (ii) its ability to adapt to new and different forms of information, resulting in a context-sensitive form of reasoning. 6 COGNICA – Cognitive Argumentation on the Web Systems of Cognitive Argumentation can easily be built using existing argumentation technology from AI. We have developed a web-based system, called COGNICA (http://cognica.cs.ucy.ac.cy/), based on the underlying technology of Gorgias [\citeauthoryearKakas, Moraitis\BCBL \BBA SpanoudakisKakas \BOthers.\APACyear2019] (http://www.cs.ucy.ac.cy/~nkd/gorgias/, http://gorgiasb.tuc.gr/). The general long-term aim of COGNICA is to build a cognitive reasoner with a simple and natural interface, so that it can be used easily by humans at large.
The purpose is to use this system (i) to evaluate the theoretical developments in the framework of Cognitive Argumentation and (ii) to support new experimental studies that would help with the further development of the framework. At this initial stage of development of COGNICA, our first aim is confined to testing the model of Cognitive Argumentation on human conditional reasoning and to confirming its realizability. The specific goal is to be able to reproduce the type of reasoning found in the psychological experiment of Byrne’s suppression task and to confirm the theoretical results presented in this paper. In designing COGNICA we have set the following functional requirements:
• Accept conditional common sense knowledge given in the form of controlled natural language.
• Reason with two levels of confidence: certain with definite answers and possible with plausible answers.
• Reason from observations to explanations.
• Reason to conclusions based on explanations accounting for the observations.
• Provide explanations to users on how the system has arrived at a certain definite (skeptical) or possible (credulous) conclusion.
The internal operation of the system is required to be completely transparent to the human user, with a natural interaction which does not require the user to have any knowledge of the underlying process of computational argumentation. To illustrate the system, let us consider how we would use it to test its reasoning behaviour in the case of Byrne’s suppression task. For each group we can enter, under a different heading or context, the general conditional information given to the group. Figure 12 shows the case of Group III.
As can be seen from this figure, conditional knowledge that is based on a sufficient condition is entered in a Whenever statement, directly linking the condition with the consequent, whereas knowledge based on a necessary condition is entered in the form of a When statement, expressing a link between the negation of the condition and the negation of the consequent. We can then select factual information and query the system for conclusions. For example, we can select that She has an essay to finish holds and ask whether She will study late in the library holds. Figure 13 shows this case, where the answer of the system is maybe, expressing that the system considers this as possible but not definite. The figure also shows an explanation of why this is so. The explanation is presented in a controlled natural language, giving the reasons for why the query may hold or may not hold, each of which reflects the acceptable arguments in the theory of Cognitive Argumentation for the query and its complement (see case 1, presented in Section 5.2). Similarly, we can give the system factual information, asking it to provide explanations for this and to predict whether some conditions would necessarily hold or not. As mentioned above, our future plans for developing and using the COGNICA system are to set up crowd-sourced experiments similar to those of Byrne’s suppression task, in order to gather information on how humans reason in different circumstances. The results of these experiments will then be integrated in our plan of developing further the framework of Cognitive Argumentation. Another interesting question to examine is how human users are affected by the system in their representation of, and reasoning about, the problem: (i) COGNICA explicitly asks for placing conditionals within a when or whenever sentence, which forces users to consciously distinguish between different types of conditions.
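The two statement forms described above can be illustrated with a small parsing sketch. This is our illustrative mock-up, not COGNICA's actual parser; the exact surface syntax (in particular the use of then as a separator) is an assumption made for this sketch:

```python
# Illustrative mock-up, not COGNICA's actual parser: map the two
# controlled-natural-language statement forms onto condition types.
# A "Whenever C then Q" statement encodes C as a sufficient condition for Q;
# a "When not-C then not-Q" statement encodes C as a necessary condition,
# via the link between the negated condition and the negated consequent.
# The use of "then" as a clause separator is an assumption of this sketch.

def parse_statement(text: str):
    text = text.strip().rstrip(".")
    for keyword, kind in (("Whenever ", "sufficient"), ("When ", "necessary")):
        if text.startswith(keyword):
            condition, sep, consequent = text[len(keyword):].partition(" then ")
            if not sep:
                raise ValueError("expected a 'then' clause")
            return (kind, condition, consequent)
    raise ValueError("unrecognized statement form")

kind, cond, cons = parse_statement(
    "Whenever she has an essay to finish then she will study late in the library")
print(kind)  # sufficient
```

A "When" statement such as When the library is not open then she will not study late in the library would then be tagged as expressing a necessary condition.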
(ii) COGNICA’s argumentative reasoning might affect the reasoning of the users as soon as they are presented with the explanation of how the system arrives at its conclusions. 7 Discussion We have formulated human conditional reasoning in terms of argumentation and evaluated our approach on the results of the suppression task. As Cognitive Argumentation can account for all cases of the task, it seems to form a good basis for developing a cognitively adequate model of human reasoning at large. 7.1 Cognitive Argumentation at Large Let us summarize the essential elements of Cognitive Argumentation. First, argument schemes provide a succinct form of knowledge representation, which allows us to uniformly capture facts, hypotheses and associations between information. Second, a strength relation between the schemes complements the representation; it has three important properties: it is partial, qualitative and context-sensitive. Third, reasoning is then captured via a dialectic argumentation process of attacks and defenses between arguments supporting (or not) conclusions of interest. Argument scheme associations are categorized not only by the types of conditions they involve but also by the modes of reasoning: whereas modus ponens and denial of the antecedent belong to a predictive mode, affirmation of the consequent and modus tollens belong to an explanatory (diagnostic) mode. This classification results in important cognitive distinctions in the reasoning. For instance, the discriminatory and competing nature of explanations in explanatory reasoning is absent in predictive reasoning. Together, these features help to match more closely the cognitive reasoning behaviour of humans in general. Individual differences in reasoning amount to choosing different arguments and to applying different degrees of scrutiny in determining the acceptability or validity of the chosen arguments. Another essential notion within CA is that of a cognitive state.
The cognitive state represents the current context of a limited subset of concepts, and of the knowledge associated to them, that we are (consciously) aware of and on which our attention in reasoning is currently focused. The dialectic argumentation process of reasoning is guided by this cognitive state, which also affects when to terminate the process, even if the process has not been exhaustively carried out. This is in accordance with human reasoning in everyday tasks, where weighing up all possibilities would be infeasible, and where decisions are therefore guided by heuristically restricting our consideration. It is important to notice that the local (conditional) knowledge in the current context is captured by argument schemes that have a simple form. We do not need to include explicitly in the premises of an argument scheme additional premises that would preclude exceptional or adversarial cases. In other words, whenever there is no need, and no awareness raised, to assume otherwise, the normal case is assumed. Consider the (modus ponens) scheme ${\textsf{\small suff\_p}}(e\leadsto\ell)$: the plethora of possible extra premises, such as She is not ill, The deadline has not passed, She has not decided to drop the course, or indeed The library is open, would only need to be accounted for whenever the current context brings them to the foreground. Whenever this is the case, they can be addressed through counterarguments from separate and equally simple argument schemes, as we have seen, for example, in the first case for Group III. 7.2 Future Work The major long-term challenge of CA as a cognitive model, particularly in an open setting of reasoning, lies in recognizing the context at hand within a dynamic environment, and in suitably adapting the argumentation framework when the context changes or is refined with further information. What happens if Group III receives the extra information that She has the library keys? Will this affect their conclusions, and how?
Does this only affect what argument schemes to consider, or does it also affect the strength relation between them? We therefore need to understand in detail how this cognitive state of humans gets formed (and altered) and how the current argumentation framework is populated with the relevant knowledge of argument schemes in the current context that we are in. This goal drives our future work, which will revolve around three aspects. Firstly, can we test (partially) the cognitive inferential adequacy of our CA model? If yes, how should the experimental setup be designed so that we can record how participants have arrived at, or are supporting, their conclusions? Can we challenge humans with new counterarguments that introduce extra elements into their cognitive state of awareness and monitor their reasoning process? Secondly, a major challenge is to link the whole framework and process of Cognitive Argumentation with natural language, and in particular with the way language is used to point to the current state of awareness and context of knowledge of argument schemes. COGNICA is a first step in this direction, starting from a controlled natural language user interface, which we plan to develop incrementally to be as close as possible to (free) natural language, employing the help of powerful existing NLP tools. Finally, in order to understand how CA relates to practical human reasoning, we plan to apply this framework to human decision making as it is studied in the field of Behavioural Economics, based on [\citeauthoryearKahneman \BBA TverskyKahneman \BBA Tversky\APACyear1979]. Everyday economic and, similarly, moral decisions taken by people at large have been observed to deviate not only from logically strict but also from rational forms of reasoning, following instead a heavily biased form of reasoning. As several studies have shown, small changes within the framing of a task, e.g.
saying saves 90 out of 100 rather than kills 10 out of 100, produce a very significant change in the final decisions of participants [\citeauthoryearKühbergerKühberger\APACyear1995]. Our initial investigation indicates that experimental observations of a significant change or reversal in a human decision can be accounted for in Cognitive Argumentation in a very similar manner as the suppression effect. 7.3 Related Work The framework of Cognitive Argumentation is built through a multidisciplinary approach, bringing together work from AI and Cognitive Science/Psychology. In both of these areas it was long recognized that human logical reasoning would need to transcend classical formal logic. In particular, reasoning should be non-monotonic, where conclusions may no longer hold when the context of information changes. The feature of non-monotonicity in logic-based AI was identified from the very start in the seminal work of McCarthy, “Programs with Common Sense” [\citeauthoryearMcCarthyMcCarthy\APACyear1995], and in work thereafter, with a biannual workshop series starting in 1978. An intense activity on defining new non-monotonic logics followed, using a variety of formal approaches. During this period, however, the original objective of modeling human commonsense reasoning shifted out of focus [\citeauthoryearMcDermottMcDermott\APACyear1987]. The proposed approaches were mostly theoretical and did not apply to real case studies, possibly because the expertise of cognitive scientists was not consulted. An important development in the study of non-monotonic logics in AI was the relatively recent introduction of formal argumentation. Several works, see e.g. [\citeauthoryearBondarenko, Dung, Kowalski\BCBL \BBA ToniBondarenko \BOthers.\APACyear1997], had shown that argumentation provides a uniform basis for reformulating most, if not all, existing proposals of non-monotonic logics.
Nevertheless, the study of argumentation was also primarily motivated by and confined to the formal aspects of the framework, under the same culture that capturing human reasoning is just a matter of finding first the right logical framework: a matter of pure logic. Exceptions to this perspective, where cognitive considerations play at least a limited role, are the study of legal reasoning through argumentation [\citeauthoryearPrakkenPrakken\APACyear2011] and the more recent work on reviews and debate analysis using argumentation technology [\citeauthoryearLawrence, Snaith, Konat, Budzynska\BCBL \BBA ReedLawrence \BOthers.\APACyear2017]. Within Cognitive Psychology/Science it was long recognized that formal logic would not serve as a good model for human reasoning. Several cognitive models have been proposed, motivated primarily by experimental results of observing human reasoning, thus trying to formulate theories that would fit the characteristics of human reasoning, such as its non-monotonic, defeasible nature. The list of these works is very long. We mention here only some, to which our approach comes closest. These include the work of [\citeauthoryearStenning \BBA van LambalgenStenning \BBA van Lambalgen\APACyear2008], with an approach linked to the non-monotonic nature of logic programming, the proposal of [\citeauthoryearPollockPollock\APACyear1987], where argumentation plays a significant role in a holistic proposal for cognition, similarly the approach of [\citeauthoryearMercier \BBA SperberMercier \BBA Sperber\APACyear2011], where argumentation is considered central to human reasoning, and several works on conditional reasoning whose exposition can be found in the book of [\citeauthoryearNickersonNickerson\APACyear2015].
More recent work, following the Logic Programming approach of [Stenning & van Lambalgen, 2008], is found in [Dietz Saldanha et al., 2018], where cognitive principles were adopted to customize the formal logical system, a methodology we also follow in this paper. A recent special issue of the Künstliche Intelligenz Journal, 33(2), 2019, on Cognitive Reasoning, surveys state-of-the-art challenges for understanding and automating human reasoning. Our approach accords closely with the framework of mental models [Johnson-Laird, 1983; Khemlani et al., 2018], with its emphasis on human reasoning as reasoning over possibilities. Argumentation is by nature a process of considering alternatives, and thus argumentative reasoning is inherently possibilistic. Building acceptable arguments in Cognitive Argumentation can be seen to correspond closely to the process of constructing mental (cognitive) models. Our mapping of the different types of conditions, sufficient or necessary, onto distinct argument schemes with separate identities, and the distinction between predictive and explanatory schemes (again drawn from the same conditional), can be reflected in the different possibilities of the mental models associated with conditionals. Having drawn this parallel, it must be noted that the theory of mental models includes extensive studies of how to recognize these distinctions from the form in which a conditional is expressed in natural language and from other pragmatics in the context of reasoning. This aspect is currently missing from our approach, which could benefit greatly from the existing work.
On another level of comparison, as we have seen, Cognitive Argumentation supports two types of conclusions with a qualitative difference in confidence or certainty: plausible or possible conclusions and definite conclusions. This stems from the relative strength between different argument schemes, which is qualitative, although it can be learned through a quantitative statistical analysis of past experiences [Michael, 2016]. As in the more recent work [Khemlani et al., 2018; Khemlani & Johnson-Laird, 2019] within the framework of mental models, such natural distinctions between the “degrees” of conclusions would correspond more closely to real-life human reasoning than the formal logical modalities of “possible” and “necessary” in Modal Logics. 8 Conclusion We have seen how human reasoning can be formulated within a framework of dialectic argumentation, called Cognitive Argumentation, where reasoning to conclusions is understood as a process of contemplating alternatives and the arguments that support them. The framework of Cognitive Argumentation is based on the theory of computational argumentation from relatively recent studies of argumentation in AI. It uses a variety of general cognitive principles of reasoning, identified in Cognitive Science and Philosophy over many decades, to “calibrate” the abstract and general framework from AI in order to adapt and apply it to informal human reasoning. The salient features of context-sensitive reasoning, variability of reasoning within the population, the distinction between definite and possible conclusions, and the defeasibility of reasoning all emerge naturally within the framework of Cognitive Argumentation.
Cognitive Argumentation has been validated as a cognitively adequate model of human reasoning by explaining well the reasoning behavior of participants in the celebrated experiments of the suppression task. Although we have concentrated here on reasoning with conditionals, the framework is general enough to accommodate wider forms of human reasoning by suitably extending it with argument schemes appropriate for new reasoning forms and shaping these with further relevant cognitive principles. This is a difficult and challenging task, but one that promises to be instructive in forming a better and more complete understanding of the relationship between argumentation and human reasoning at large. References
Amgoud, L., Dimopoulos, Y., & Moraitis, P. (2008). Making decisions through preference-based argumentation. In G. Brewka & J. Lang (Eds.), KR (pp. 113–123). AAAI Press.
Bondarenko, A., Dung, P. M., Kowalski, R. A., & Toni, F. (1997). An abstract, argumentation-theoretic approach to default reasoning. Artificial Intelligence, 93, 63–101.
Byrne, R. M. J. (1989). Suppressing valid inferences with conditionals. Journal of Memory and Language, 31, 61–83.
Byrne, R. M. J. (2005). The rational imagination: How people create alternatives to reality. MIT Press.
Dietz, E.-A., Hölldobler, S., & Ragni, M. (2012). A computational logic approach to the suppression task. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Conference of the Cognitive Science Society (COGSCI) (pp. 1500–1505). Cognitive Science Society.
Dietz Saldanha, E.-A., Hölldobler, S., & Mörbitz, R. (2018). The syllogistic reasoning task: Reasoning principles and heuristic strategies in modeling human clusters. In D. Seipel, M. Hanus, & S. Abreu (Eds.), Declarative Programming and Knowledge Management (Vol. 10997, pp. 149–165). Springer Nature Switzerland AG.
Dietz Saldanha, E.-A., Hölldobler, S., & Rocha, I. L. (2017). Obligation versus factual conditionals under the weak completion semantics. In S. Hölldobler, A. Malikov, & C. Wernhard (Eds.), Proceedings of the Young Scientist’s Second International Workshop on Trends in Information Processing (YSIP2). CEUR Workshop Proceedings.
Dietz Saldanha, E.-A., & Kakas, A. (2019). Cognitive argumentation for human syllogistic reasoning. KI - Künstliche Intelligenz, 33(3), 229–242.
Dieussaert, K., Schaeken, W., Schroyens, W., & D’Ydewalle, G. (2000). Strategies during complex conditional inferences. 6(2), 125–161.
Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77, 321–357.
Dung, P. M., Kowalski, R. A., & Toni, F. (2006). Dialectic proof procedures for assumption-based, admissible argumentation. Artificial Intelligence, 170(2), 114–159.
Fernbach, P. M., Darlow, A., & Sloman, S. A. (2010). Neglect of alternative causes in predictive but not diagnostic reasoning. Psychological Science, 21(3), 329–336.
García, A. J., & Simari, G. R. (2004). Defeasible logic programming: An argumentative approach. Theory and Practice of Logic Programming, 4(2), 95–138.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics (Vol. 3). New York: Academic Press.
Griggs, R. A., & Cox, J. R. (1982). The elusive thematic-materials effect in Wason’s selection task. British Journal of Psychology, 73(3), 407–420.
Johnson-Laird, P. N., Girotto, V., & Legrenzi, P. (2004). Reasoning from inconsistency to consistency. Psychological Review, 111(3), 640–661.
Johnson-Laird, P. (1980). Mental models in cognitive science. Cognitive Science, 4, 71–115.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kakas, A. C. (2019). Informalizing formal logic. Informal Logic, 39(2), 169–204.
Kakas, A. C., Mancarella, P., & Dung, P. M. (1994). The acceptability semantics for logic programs. In Proceedings of the 11th International Conference on Logic Programming (pp. 504–519).
Kakas, A. C., Mancarella, P., & Toni, F. (2018). On argumentation logic and propositional logic. Studia Logica, 106(2), 237–279.
Kakas, A. C., & Moraitis, P. (2003). Argumentation based decision making for autonomous agents. In Proceedings of the 2nd International Joint Conference on Autonomous Agents & Multiagent Systems (AAMAS) (pp. 883–890). ACM.
Kakas, A. C., Moraitis, P., & Spanoudakis, N. (2019). Gorgias: Applying argumentation. Argument and Computation, 10(1), 55–81.
Kakas, A. C., & Toni, F. (1999). Computing argumentation in logic programming. Journal of Logic and Computation, 9(4), 515–562.
Kelley, H. (1973). The processes of causal attribution. American Psychologist, 28(2), 107–128.
Khemlani, S. S., Byrne, R. M. J., & Johnson-Laird, P. N. (2018). Facts and possibilities: A model-based theory of sentential reasoning. Cognitive Science.
Khemlani, S. S., & Johnson-Laird, P. N. (2019). Why machines don’t (yet) reason like people. KI - Künstliche Intelligenz, 33(3), 219–228.
Kühberger, A. (1995). The framing of decisions: A new look at old problems. Organizational Behavior and Human Decision Processes, 62(2), 230–240.
Lawrence, J., Snaith, M., Konat, B., Budzynska, K., & Reed, C. (2017). Debating technology for dialogical argument: Sensemaking, engagement, and analytics. ACM Transactions on Internet Technology, 17(3), 24:1–24:23.
Lipton, P. (2003). Inference to the best explanation. Routledge.
McCarthy, J. (1995). Computation & intelligence. In G. F. Luger (Ed.) (pp. 479–492). American Association for Artificial Intelligence.
McDermott, D. (1987). A critique of pure reason. Computational Intelligence, 3(1), 151–160.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.
Michael, L. (2016). Cognitive reasoning and learning mechanisms. In AIC.
Modgil, S., & Prakken, H. (2013). A general account of argumentation with preferences. 195, 361–397.
Nickerson, R. (2015). Conditional reasoning: The unruly syntactics, semantics, thematics, and pragmatics of “if”. Oxford University Press.
Pollock, J. (1987). Defeasible reasoning. Cognitive Science, 11(4), 481–518.
Pollock, J. L. (1995). Cognitive carpentry: A blueprint for how to build a person. MIT Press.
Prakken, H. (2011). Logical tools for modelling legal argument: A study of defeasible reasoning in law. Springer.
Prakken, H., & Sartor, G. (1997). Argument-based extended logic programming with defeasible priorities. 7(1), 25–75.
Ruben, D.-H. (1990). Explaining explanation. Routledge.
Sloman, S. (1994). When explanations compete: The role of explanatory coherence on judgements of likelihood. Cognition, 52(1), 1–21.
Sperber, D., & Wilson, D. (1995). Relevance: Communication and cognition. Oxford: Blackwell Publishers.
Stenning, K., & van Lambalgen, M. (2008). Human reasoning and cognitive science.
Toulmin, S. (1958). The uses of argument. Cambridge University Press.
Walton, D. N. (1996). Argumentation schemes for presumptive reasoning.
Wason, P. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3), 273–281.
Percolation transition in the packing of bidispersed particles on curved surfaces Andrew M. Mascioli Department of Physics and Astronomy, Tufts University, 574 Boston Avenue, Medford, Massachusetts 02155, USA    Christopher J. Burke Department of Physics and Astronomy, Tufts University, 574 Boston Avenue, Medford, Massachusetts 02155, USA    Timothy J. Atherton [email protected] Department of Physics and Astronomy, Tufts University, 574 Boston Avenue, Medford, Massachusetts 02155, USA Abstract We study packings of bidispersed spherical particles on a spherical surface. The presence of curvature necessitates defects even for monodispersed particles; bidispersity either leads to a more disordered packing for nearly equal radii, or to a higher fill fraction when the smaller particles are accommodated in the interstices of the larger spheres. Variation in the packing fraction is explained by a percolation transition, as the chains of defects or scars previously discovered in the monodispersed case grow and eventually disconnect the neighbor graph. Bidispersed mixtures of hard spheres are an important elementary model of a glass transition (Stillinger and Debenedetti, 2013): at high temperature and low density they flow freely, while as temperature is reduced they become kinetically arrested and form rigid but highly disordered structures (Donev, 2004). At zero temperature and stress, a similar jamming transition to rigidity occurs as a function of density (Cates et al., 1998; Liu and Nagel, 2010), which in 2D tends to occur around a packing fraction of $\Phi=0.84$ (O’Hern et al., 2002; Reichhardt and Reichhardt, 2014). Jammed structures exhibit distinctive properties, including isostaticity: the average number of inter-particle contacts is the minimum number required for mechanical stability (Alexander, 1998). Powerful mathematical tools exist (Donev et al., 2004a) to classify jammed and glassy packings of hard particles according to a hierarchy, depending on whether individual particles, groups, or boundary deformations can unjam the system (Torquato and Stillinger, 2001). Sphere packings, the high-density and zero-temperature limit of these processes, have been extensively studied in both 2D and 3D Euclidean space (Liu and Nagel, 2010; Torquato and Stillinger, 2010; Donev, 2004; Lubachevsky et al., 1991), revealing a strong dimensional dependence: 2D monodispersed spheres tend to crystallize readily, because the locally dense hexagonal packing fills space; in 3D the locally dense tetrahedral packing cannot fill space, permitting a random close packed structure that is the subject of much debate (Torquato et al., 2000; Donev et al., 2004b; Kamien and Liu, 2007). Even in 2D, however, disorder can be induced in bidispersed systems. Molecular dynamics simulations have shown that there is a transition from order to disorder as the degree of bidispersity is increased (Hamanaka and Onuki, 2006; Sadr-Lahijany et al., 1997; Vermohlen and Ito, 1995; Watanabe et al., 2005), and statistical models of bidispersed particle packings have been used to predict the local features of disordered bidispersed packings (Hilgenfeldt, 2013; Richard et al., 2001). The degree of order or disorder can be measured by an order parameter such as the hexatic bond-orientational order (Nelson and Halperin, 1979). Crystalline order is geometrically frustrated on curved surfaces (Seung and Nelson, 1988): an incompatibility between the preferred hexagonal symmetry of the crystalline packing and the topology of the surface necessitates a minimal number of defects—particles with a number of neighbors other than 6—to accommodate the curvature.
For monodispersed particles, the packings are mainly crystalline, with a transition between isolated defects at small particle number and chains of defects or scars, akin to grain boundaries in bulk systems, that occur above a critical number of particles $N_{c}\approx 110$ and grow with system size (Bausch et al., 2003; Bowick et al., 2000). The scars may join in asterisk-like motifs (Bowick et al., 2000) and are aligned by anisotropic curvature (Burke et al., 2015). Jammed packings on spheres, or spherical codes, have recently been studied in multiple dimensions (Cohn et al., 2011). In this Letter, we investigate the packing of bidispersed particles on a spherical surface as a simple model of how glasses interact with curvature. We determine the packing fraction, connectivity and hexatic order parameter as a function of particle number $N$, fraction of large particles $\chi=N_{L}/N$ and bidispersity $b=\left(r_{1}-r_{2}\right)/\left(r_{1}+r_{2}\right)$, where $r_{1}$ and $r_{2}$ are the radii of the particles and $r_{1}\geq r_{2}$. By identifying topological defects from the neighbor graph, we show that variation in these parameters is explained by a percolation transition due to the growth and connectivity of the scar network, as well as by the possibility of commensurate local packings. Simulations—Packings with high coverage fraction were produced using a surface relaxation algorithm: $N$ spherical particles are initially placed using random sequential absorption with their centers of mass on a sphere of radius $R=1$. Particles are randomly assigned to two categories corresponding to larger and smaller radii, respectively. The simulation proceeds by, first, diffusion sweeps, in which particles are moved, in random order, by a distance drawn from a Gaussian distribution of width $\sigma=2r_{1}\times 10^{-3}$ in a random direction along the surface. Moves that cause overlap are rejected.
As the packing becomes dense, an adaptive step size is used to reduce the number of moves rejected due to overlap: $\sigma=10\langle s\rangle$, where $\langle s\rangle$ is the geometric mean of the separations between each particle and its three nearest neighbors. Secondly, surface relaxation moves slowly decrease the radius of the surface by an amount $\Delta R$, where initially $\Delta R=10^{-5}$. After the surface radius is reduced, particles are projected down onto the nearest point on the surface. After projection, a gradient descent minimization is run on the particles (where the interparticle energy is linear in the amount of overlap) until the overlap is undone. If the overlap cannot be undone, the surface relaxation move is undone, the particle positions are reset, and the simulation continues with $\Delta R$ set to $\Delta R/2$. Twenty diffusion sweeps are carried out between each surface relaxation step. The simulation halts when $\Delta R$ is reduced to $2^{-14}$ times its original value. Configurations produced by this procedure are referred to as arrested, because they remain metastable if the simulation is restarted; eventually, however, a Monte Carlo move will unjam the arrested configuration, potentially facilitating further relaxation and a consequent increase in the packing fraction. This process occurs in real glasses and is known as aging. Extending a powerful technique due to Donev et al. (2004a), we artificially age the arrested structures using a linear program to find and execute an unjamming motion of the particles and further relax the surface. Iterative unjamming and relaxation guides the packing toward a state that is collectively jammed with respect to movement of the particles and further relaxation.
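As an illustration, the shrink-project-retry schedule described above can be sketched as follows. This is a minimal toy version under our own naming and simplifications (a chord-distance overlap test, fixed step width, rejection instead of the gradient-descent overlap resolution, and no linear-programming unjamming); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def place(n, r, R):
    """Random sequential placement of n equal particles of radius r on a sphere of radius R."""
    pts = []
    while len(pts) < n:
        v = rng.normal(size=3)
        v *= R / np.linalg.norm(v)
        if all(np.linalg.norm(v - p) >= 2 * r for p in pts):
            pts.append(v)
    return np.array(pts)

def has_overlap(pos, r):
    """Overlap test using chord distances between particle centers."""
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return bool(np.any(d < 2 * r))

def relax(pos, r, R, dR0=1e-3, sweeps=5, sigma=1e-2):
    """Alternate diffusion sweeps with surface shrinking; halve the shrink step dR
    whenever shrinking would cause overlap, halting once dR < dR0 * 2**-14."""
    dR = dR0
    while dR > dR0 * 2.0**-14:
        for _ in range(sweeps):                      # diffusion sweeps
            for i in rng.permutation(len(pos)):
                old = pos[i].copy()
                trial = pos[i] + rng.normal(scale=sigma, size=3)
                pos[i] = trial * (R / np.linalg.norm(trial))
                if has_overlap(pos, r):              # reject overlapping moves
                    pos[i] = old
        shrunk = pos * ((R - dR) / R)                # project onto the smaller sphere
        if has_overlap(shrunk, r):
            dR /= 2                                  # undo the move and retry more gently
        else:
            pos, R = shrunk, R - dR
    return pos, R
```

Starting from a sparse placement, `relax` drives the radius down until the packing is arrested; the real algorithm additionally resolves small overlaps by gradient descent and uses the adaptive $\sigma=10\langle s\rangle$ rule.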
As we report elsewhere (Burke and Atherton, 2016), the convergence of this procedure is greatly accelerated by preconditioning the packing: attaching a short-range repulsive interaction to the particles beyond the hard inter-penetrability constraint and minimizing the corresponding energy by gradient descent. This procedure moves the particles into the center of the feasible region, from which the linear program is more effectively able to identify an unjamming motion. Each arrested structure was subjected to this artificial aging process to produce a corresponding ensemble of jammed structures. For monodispersed particles (Bausch et al., 2003), neighbors are assigned from a Voronoi tessellation (Aurenhammer, 1991) of the particle centers of mass, partitioning the surface into $N$ polygonal regions, each comprising the points closest to a particular particle. Two particles are neighbors if they share an adjacent edge of the Voronoi tessellation. Generalizing this construction to bidispersed particles with a weighted distance fails to uniquely assign all points on the surface to a particle; two proposed alternatives (Richard et al., 2001) are the radical tessellation and the navigation map, both of which recover the Voronoi tessellation in the limit of monodispersed spheres. The radical tessellation utilizes the radical plane as a separatrix between each pair of particles; the navigation map partitions the surface into regions closest to the surfaces of the particles rather than to their centers of mass. We found little difference between quantities calculated from these two constructions and use the radical tessellation exclusively in the remainder of the paper. From the radical tessellation, the adjoint neighbor graph was constructed for each packing and the coordination number determined for each particle.
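The point-assignment rule underlying the radical tessellation can be sketched with the power distance $|x-c|^{2}-r^{2}$ (function names are ours, and this is an illustrative point-classification sketch rather than the authors' tessellation code). For equal radii it reduces to the ordinary nearest-center Voronoi assignment, as noted above, since subtracting the same constant $r^{2}$ from every column leaves the argmin unchanged.

```python
import numpy as np

def power_assign(points, centers, radii):
    """Assign each sample point to the particle minimizing the power
    distance |x - c|^2 - r^2 (the radical-tessellation rule)."""
    d2 = np.sum((points[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.argmin(d2 - radii[None, :] ** 2, axis=1)

def voronoi_assign(points, centers):
    """Ordinary Voronoi assignment: nearest center of mass."""
    d2 = np.sum((points[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.argmin(d2, axis=1)
```

Enlarging one particle's radius only lowers its power distance, so its region can only grow, which is how the construction gives larger particles proportionally larger tessellation cells.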
Results and Discussion—For each value of bidispersity on the interval $b\in[0,1]$ with a resolution of $\Delta b=0.005$, an ensemble of 20 jammed configurations was generated with $\chi=1/2$ and for different numbers of particles $N$. The packing fraction $\Phi$, i.e. the fraction of the surface enclosed by the particles, was calculated for each configuration and is shown in Fig. 1. For particle numbers above about $N=200$, slight deviations from the monodispersed case immediately introduce disorder and reduce the packing fraction, as expected. Above a critical value of bidispersity $b_{c}\sim 0.1$, however, we see a transition and $\Phi$ increases, with an apparent shoulder at $b\approx 0.4$, up to a maximum value of $\Phi\approx 0.87$ at $b=b_{A}\sim 0.7$, and then decreases as $b\to 1$. For $N<200$, $\Phi$ increases monotonically up to a maximum at a slightly lower value of $b\sim 0.6$. In the lower inset of Fig. 1, we compare the packing fractions for 800 particles for the ensembles of arrested and jammed packings. It is clear that the arrested structures are slightly less efficiently packed, but the trends are identical. We find similar results for all $N$; this correspondence affirms that the trends are geometric in origin rather than due to variation in the performance of the algorithm at different $b$. The maximum at $b=b_{A}$ is immediately explicable: it corresponds to the special point at which the smaller particles fit exactly in the interstices between the larger particles, depicted in the upper inset of Fig. 1. We denote this the Apollonian point in reference to the tiling. Packings around and above $b_{A}$ appear mostly crystalline with the smaller particles separated into the interstices; the packing fraction at $b=1$ corresponds exactly to that for $N/2$ particles. No such immediate explanation is obvious for the low and medium bidispersity results, which appear to be well mixed; we therefore seek a more detailed understanding of the structure.
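The location of the Apollonian point can be estimated from flat-space geometry. The definition of the bidispersity parameter does not appear in this excerpt; the sketch below assumes $b=(r_{L}-r_{s})/(r_{L}+r_{s})$, which reproduces the quoted values, and should be read as a consistency check rather than the authors' derivation.

```python
import math

# Flat-space estimate of the Apollonian point. Assumption (not stated in this
# excerpt): the bidispersity is b = (r_L - r_s) / (r_L + r_s).
r_L = 1.0
# Centers of three mutually tangent large disks form an equilateral triangle
# of side 2*r_L; the interstitial small disk sits at its circumcenter.
circumradius = 2 * r_L / math.sqrt(3)
r_s = circumradius - r_L              # small disk tangent to all three
b_A_flat = (r_L - r_s) / (r_L + r_s)  # = sqrt(3) - 1 ~ 0.732

assert abs(b_A_flat - (math.sqrt(3) - 1)) < 1e-12
```

This gives $b_{A}=\sqrt{3}-1\approx 0.732$ in flat space, close to the reported $b_{A}\sim 0.7$; curvature will shift the exact location slightly.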
One structural measure that reflects the degree to which the packings are locally crystalline is the hexatic order parameter $\psi_{6}=\left\langle\exp(i6\theta_{i})\right\rangle$, where the average is taken over the neighboring particles. Fig. 2A shows this quantity calculated from the dataset as a function of $b$ and $N$. A maximum occurs for all $N$ at $b=0$ as expected; the value is reduced for smaller $N$, reflecting the disruption of crystallinity by the curvature. The hexatic order drops with $b$, reaches a minimum around $b\sim 0.45$, rises, and then forms a plateau above the Apollonian point, albeit at a value significantly lower than in the $b=0$ case, because here the large particles have a higher coordination number. Variation in $\psi_{6}$ is significantly attenuated for low $N$, where the influence of the curvature is stronger. To see whether hexatic order is replaced by other ordering, we calculated $n$-atic order parameters $\psi_{n}=\left\langle\exp(in\theta_{i})\right\rangle$ for $N=1600$ as a function of $b$; the results are plotted in Fig. 2B. In contrast to the hexatic order parameter, $\psi_{n}$ for $n\neq 6$ increases with $b$ from $b=0$; moreover, all $\psi_{n}$ exhibit a plateau above the Apollonian point, confirming the distinct nature of this regime. Two values, $n=8$ and $n=10$, have $\psi_{n}$ narrowly greater than $\psi_{6}$ for intermediate values of $b$ and possess maxima at $b=0.45$ and $b=0.6$ respectively. Examining the packings shows that this is due to the presence of octagonally and decagonally coordinated arrangements: a common and commensurate motif, depicted in the inset of Fig. 2B, in which four large and four small particles are arranged around a central large particle, is first allowed at $b=\sqrt{2}-1\approx 0.41$, which coincides with the position of the shoulder in the plot of $\Phi(b)$ in Fig. 1.
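The $n$-atic order parameters are evaluated per particle from the angles of the bonds to its neighbors. A minimal sketch using flat-space angles (on the sphere these would be measured in the particle's local tangent plane); the function name is a placeholder:

```python
import cmath, math

def psi_n(angles, n):
    """n-atic bond-orientational order parameter |<exp(i*n*theta)>| for one
    particle, given the angles of the bonds to its neighbors."""
    s = sum(cmath.exp(1j * n * t) for t in angles) / len(angles)
    return abs(s)

# A perfect hexagonal environment: six neighbors at 60-degree intervals.
hexagon = [k * math.pi / 3 for k in range(6)]
assert abs(psi_n(hexagon, 6) - 1.0) < 1e-12  # perfect hexatic order
assert psi_n(hexagon, 5) < 1e-12             # no 5-fold order
```

A perfect hexagonal environment gives $|\psi_{6}|=1$, while $\psi_{n}$ vanishes for any $n$ coprime to 6, which is why $\psi_{8}$ and $\psi_{10}$ isolate the competing motifs.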
A variety of similar motifs exist for $b$ around this value with the same coordination number but different mixtures of large and small neighboring particles, and these appear to cause the shoulder. It is interesting to note the significant decatic ordering: 10-fold rotational symmetry is incompatible with long-range order and is rarely seen in packings in flat space, with the exception of quasicrystals Shechtman et al. (1984); Fischer et al. (2011); Talapin et al. (2009). As long-range order is also incompatible with curvature, it appears that curvature may promote the increased 10-fold ordering. We now examine the coordination number directly. In Fig. 2C, we plot the average coordination number per particle, separated into large-large, large-small and small-small contacts and for different $N$. At infinitesimal $b$, each particle has six neighbors, three smaller and three larger on average. With increasing $b$, the number of large-small contacts per particle remains a constant value of three; larger particles gain more large neighbors while smaller particles lose small contacts. At the Apollonian point, the smaller particles are surrounded by three larger neighbors, while the larger particles are on average surrounded by six large neighbors and three smaller neighbors. For $b>b_{A}$, the coordination numbers remain constant, consistent with the discussion above where smaller particles are caged within the interstices of the larger particles. Smaller values of $N$ follow similar trends, but tend to have lower coordination numbers. Finally, we calculated the pair correlation function $g(s)$ that encodes a particle’s local environment; results are displayed in Fig. 2D. For $b=0$, we see persistent peaks at large $s$, indicative of long-range order, and a split second peak in agreement with previous studies in flat space Donev et al. (2005).
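On the sphere, the separations $s$ entering $g(s)$ are geodesic (great-circle) distances. A sketch of the raw separation computation, with hypothetical helper names; binning these values and normalizing by the area of each geodesic annulus then yields $g(s)$:

```python
import math

def geodesic(p, q, R=1.0):
    """Great-circle distance between two points on a sphere of radius R."""
    dot = sum(a * b for a, b in zip(p, q)) / R**2
    return R * math.acos(max(-1.0, min(1.0, dot)))  # clamp for rounding

def pair_separations(points, R=1.0):
    """All pairwise geodesic separations: the raw input to g(s)."""
    return sorted(geodesic(points[i], points[j], R)
                  for i in range(len(points))
                  for j in range(i + 1, len(points)))

# Antipodal points are pi*R apart; orthogonal points are (pi/2)*R apart.
pts = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
seps = pair_separations(pts)
assert abs(seps[-1] - math.pi) < 1e-12
assert abs(seps[0] - math.pi / 2) < 1e-12
```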
Increasing bidispersity slightly to $b=0.06$ causes the split peak to disappear, representing the disruption of local crystalline packing, but the long-range order persists. Proceeding to $b=0.13$, $g(s)$ is now flat, indicating that the long-range order has disappeared. This is our first indication that the minimum in $\Phi$ at $b_{c}$ observed in Fig. 1 is associated with a transition where long-range crystalline order is disrupted. One measure of the abundance of crystallinity is the fraction $\phi_{6}$ of particles that possess a coordination number of $6$. In Fig. 3A, we plot $1-\phi_{6}$ as a function of bidispersity, revealing a transition: as $b$ increases from zero, $1-\phi_{6}$ is approximately constant, then rises rapidly to unity, reaching a value of $\frac{1}{2}$ at $b=b_{p}\approx 0.15$. Above a bidispersity of $b\approx 0.5$, a vanishing fraction of particles possess six neighbors. These trends persist for all values of $N$ shown, but $1-\phi_{6}$ is larger at $b=0$ for small $N$, since topology mandates a minimal number of defects. To understand this transition further, it is necessary to examine the microstructural information encoded in the neighbor graphs, the adjoint graphs of the radical tessellation. We crudely separate the crystalline and non-crystalline components by deleting from a neighbor graph all vertices that have six neighbors, yielding the “non-hexatic” subgraph. Illustrative examples of these subgraphs are depicted in Fig. 3B. For $b=0$ the subgraph consists of small disconnected components corresponding to the previously-studied scars, which are essentially linear in morphology, with a small number of branches. As bidispersity increases to $b=0.1$, just below $b_{p}$, the connected subgraphs are still recognizably scar-like in nature, but have a branching morphology and are substantially longer.
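Extracting the non-hexatic subgraph and its connected components is straightforward. A sketch using a plain adjacency-set representation (an assumed data layout, not the authors' code):

```python
def non_hexatic_subgraph(adj):
    """Delete all vertices with exactly six neighbors from a neighbor graph
    (given as {vertex: set(neighbors)}) and return the induced subgraph."""
    keep = {v for v, nbrs in adj.items() if len(nbrs) != 6}
    return {v: adj[v] & keep for v in keep}

def components(adj):
    """Connected components of a graph via breadth-first search."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), [start]
        while queue:
            v = queue.pop()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

The sizes and connectivity of these components are exactly the quantities whose growth signals the percolation behavior analyzed in the figures.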
By $b=0.14$, close to $b_{p}$, the defect subgraph remains disconnected, but is now dominated by a few large connected graphs that are mostly linear with branches. Finally, above $b_{p}$ at $b=0.2$, the defect subgraph is now mostly a single connected structure with a small number of additional isolated defects; it is no longer branching, but has linear sections that link into a foam-like structure. For $b=0.3$, the defect subgraph retains this structure, but is more densely connected. The gradual growth and long-range connection of the non-hexatic subgraph due to bidispersity is therefore a percolation transition: as $b$ increases around $b_{p}$, the number of sites participating in the non-hexatic subgraph increases until they form a connected structure. Percolation transitions are well-studied Grimmett (1997). The canonical formulation is: given a network, and selecting a fraction $p$ of its sites, what is the probability that one of the selected sites belongs to a long-range connected structure? Clearly, the system under consideration cannot be precisely mapped onto this problem, because the neighbor graph changes with $b$. However, by averaging over all particle pairs in Fig. 2C we see that the mean coordination number remains 6 for all $b$. Thus, we examine the canonical percolation problem on the neighbor graph of a monodisperse packing of $N=800$ particles. From such a graph, we randomly select a fraction $p$ of the sites, and repeat this procedure to form $n$ trials. Plotted in Fig. 3C is the fraction of trials in which the selected components form a connected structure (gray line) and in which the remaining components retain their connectivity (black line). We compare this to the bidispersity percolation transition by placing the non-hexatic subgraph in correspondence with the selected subgraph in the random percolation model; the selected fraction is therefore $p=1-\phi_{6}$.
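The canonical site-percolation trial described above can be sketched directly on an adjacency-set graph. This is a minimal illustration with assumed names, not the authors' code; repeating it over many seeds and values of $p$ traces out curves of the kind plotted for the random model.

```python
import random

def selected_connected(adj, p, rng):
    """Select a fraction p of sites at random and report whether the
    selected sites induce a single connected subgraph."""
    nodes = list(adj)
    chosen = set(rng.sample(nodes, max(1, round(p * len(nodes)))))
    # Breadth-first search restricted to the chosen sites.
    comp, queue = set(), [next(iter(chosen))]
    while queue:
        v = queue.pop()
        if v in comp:
            continue
        comp.add(v)
        queue.extend((adj[v] & chosen) - comp)
    return comp == chosen

# On a complete graph any selection is connected; two disjoint edges are not.
K4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
assert selected_connected(K4, 0.5, random.Random(0)) is True
two_edges = {0: {1}, 1: {0}, 2: {3}, 3: {2}}
assert selected_connected(two_edges, 1.0, random.Random(1)) is False
```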
The fraction of connected hexatic and non-hexatic subgraphs at each value of $p$ is plotted as points in Fig. 3C, showing that the percolation thresholds are in good agreement. Notably, the hexatic subgraph becomes disconnected around $p\approx 0.4$, which occurs at $b\lesssim b_{p}=0.15$ in Fig. 3A. Percolation implies a growing lengthscale, so we also computed the size of the largest connected component of the selected and unselected subgraphs, plotted as solid lines in Fig. 3D. Again calculating corresponding values from the bidispersed neighbor graphs, shown as points in Fig. 3D, we see excellent agreement. We infer from this that the qualitative features of the bidispersity percolation transition are predicted by a random percolation transition on the monodisperse neighbor graph. To test this, we attempted to disrupt the transition by varying the fraction of large particles $\chi=N_{L}/N$, motivated by the idea that growth of the scars might be prevented if sufficiently few minority particles are present. The packing fraction for several values of $\chi$ is shown in Fig. 4A. Small $\chi$ leads to a dramatic enhancement of the packing fraction at the Apollonian point, but $\chi=0.9$ flattens it as well as suppresses the low-$b$ minimum. Examining the defect subgraphs, those for $\chi=0.9$ do not form connected structures. For a given $\chi$, the bidispersity determines $1-\phi_{6}$, the parameter that controls whether percolation occurs: this quantity, plotted in Fig. 4B, shows that for $\chi=0.9$, $1-\phi_{6}$ falls just short of the threshold $\sim 0.4$ for random percolation.
Conclusion—We have shown that the packing fraction of bidispersed packings of spheres on a spherical surface is determined by three influences: an Apollonian packing for $b\approx 0.73$ where small particles fit into the interstices of large particles produces a global maximum; commensurate eight and tenfold coordinated configurations of particles yield an inflexion point at $b\approx 0.41$; a minimum at $b\approx 0.1$ is due to the growth and percolation of “scars” previously observed in the monodispersed case. By adjusting the ratio of large particles, we have shown that preventing the percolation transition greatly attenuates the minimum. The growing lengthscale and critical fraction necessary for percolation were found to be in agreement with those for random percolation on the monodispersed neighbor graph. Acknowledgement—The authors thank the Research Corporation for Science Advancement for funding through a Cottrell Award. References Stillinger and Debenedetti (2013) F. H. Stillinger and P. G. Debenedetti, Annu. Rev. Condens. Matter Phys. 4, 263 (2013). Donev (2004) A. Donev, Journal of Applied Physics 95, 989 (2004). Cates et al. (1998) M. Cates, J. Wittmer, J.-P. Bouchaud,  and P. Claudin, Physical review letters 81, 1841 (1998). Liu and Nagel (2010) A. J. Liu and S. R. Nagel, Annual Review of Condensed Matter Physics 1, 347 (2010). O’Hern et al. (2002) C. O’Hern, S. Langer, A. Liu,  and S. Nagel, Physical Review Letters 88, 075507 (2002). Reichhardt and Reichhardt (2014) C. Reichhardt and C. O. Reichhardt, Soft matter 10, 2932 (2014). Alexander (1998) S. Alexander, Physics Reports 296, 65 (1998). Donev et al. (2004a) A. Donev, S. Torquato, F. H. Stillinger,  and R. Connelly, Journal of Computational Physics 197, 139 (2004a), arXiv:0208502 [cond-mat] . Torquato and Stillinger (2001) S. Torquato and F. H. Stillinger, Journal of Physical Chemistry 105, 11849 (2001). Torquato and Stillinger (2010) S. Torquato and F. H. 
Stillinger, Reviews of Modern Physics 82, 2633 (2010). Lubachevsky et al. (1991) B. Lubachevsky, F. Stillinger,  and E. Pinson, Journal of Statistical Physics 64, 501 (1991). Torquato et al. (2000) S. Torquato, T. M. Truskett,  and P. G. Debenedetti, Physical review letters 84, 2064 (2000). Donev et al. (2004b) A. Donev, S. Torquato, F. H. Stillinger,  and R. Connelly, Phys. Rev. E 70, 043301 (2004b). Kamien and Liu (2007) R. D. Kamien and A. J. Liu, Physical review letters 99, 155501 (2007). Hamanaka and Onuki (2006) T. Hamanaka and A. Onuki, Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 74, 1 (2006), arXiv:0605251 [cond-mat] . Sadr-Lahijany et al. (1997) M. R. Sadr-Lahijany, P. Ray,  and H. E. Stanley, Physical Review Letters 79, 3206 (1997), arXiv:9705218 [cond-mat] . Vermohlen and Ito (1995) W. Vermohlen and N. Ito,  51 (1995). Watanabe et al. (2005) H. Watanabe, S. Yukawa,  and N. Ito, Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 71, 1 (2005). Hilgenfeldt (2013) S. Hilgenfeldt, Philosophical Magazine 93, 4018 (2013). Richard et al. (2001) P. Richard, L. Oger, J. P. Troadec,  and a. Gervois, European Physical Journal E 6, 295 (2001). Nelson and Halperin (1979) D. Nelson and B. Halperin, Physical Review B 19 (1979). Seung and Nelson (1988) H. Seung and D. Nelson, Physical Review A 38, 1005 (1988). Bausch et al. (2003) A. Bausch, M. Bowick, A. Cacciuto, A. Dinsmore, M. Hsu, D. Nelson, M. Nikolaides, A. Travesset,  and D. Weitz, Science (New York, N.Y.) 299, 1716 (2003). Bowick et al. (2000) M. Bowick, D. Nelson,  and A. Travesset, Physical Review B 62, 8738 (2000). Burke et al. (2015) C. J. Burke, B. L. Mbanga, Z. Wei, P. Spicer,  and T. Atherton, Soft Matter  (2015), 10.1039/C5SM01118C. Cohn et al. (2011) H. Cohn, Y. Jiao, A. Kumar,  and S. Torquato, Geometry & Topology 15, 2235 (2011). Burke and Atherton (2016) C. J. Burke and T. J. Atherton, submitted , arXiv:1605.09478 (2016). Aurenhammer (1991) F. 
Aurenhammer, ACM Computing Surveys (CSUR) 23, 345 (1991). Shechtman et al. (1984) D. Shechtman, I. Blech, D. Gratias,  and J. W. Cahn, Phys. Rev. Lett. 53, 1951 (1984). Fischer et al. (2011) S. Fischer, A. Exner, K. Zielske, J. Perlich, S. Deloudi, W. Steurer, P. Lindner,  and S. Förster, Proceedings of the National Academy of Sciences 108, 1810 (2011). Talapin et al. (2009) D. V. Talapin, E. V. Shevchenko, M. I. Bodnarchuk, X. Ye, J. Chen,  and C. B. Murray, Nature 461, 964 (2009). Donev et al. (2005) A. Donev, S. Torquato,  and F. Stillinger, Physical Review E 71, 011105 (2005). Grimmett (1997) G. Grimmett, in Lectures on Probability Theory and Statistics (Springer, 1997) pp. 153–300.
Random Matrix approach to collective behavior and bulk universality in protein dynamics Raffaello Potestio SISSA - Scuola Internazionale Superiore di Studi Avanzati, Via Beirut 2/4 - 34151 Trieste, Italy    Fabio Caccioli SISSA - Scuola Internazionale Superiore di Studi Avanzati, Via Beirut 2/4 - 34151 Trieste, Italy Istituto Nazionale di Fisica Nucleare, sezione di Trieste, Italy    Pierpaolo Vivo ICTP - Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11 - 34151 Trieste, Italy (November 20, 2020) Abstract Covariance matrices of amino acid displacements, commonly used to characterize the large-scale movements of proteins, are investigated through the prism of Random Matrix Theory. Bulk universality is detected in the local spacing statistics of noise-dressed eigenmodes, which are well described by a Brody distribution with parameter $\beta\simeq 0.8$. This finding, supported by other consistent indicators, implies a novel quantitative criterion to single out the collective degrees of freedom of the protein from the majority of high-energy, localized vibrations. PACS: 02.50.Cw, 87.15.La Introduction. Proteins are biomolecules of essential importance for biological aspects ranging from structural (e.g. viral capsids, cytoskeleton) to biochemical ones (e.g. enzymes). The biological functionality of a protein often relies on its capability to undergo large-scale conformational changes (Bragg and Perutz, 1952; Zen et al., 2008). In order to characterize such motions, a convenient starting point is the Displacement Covariance Matrix (DCM) $\mathcal{C}_{ij}=\langle\delta r_{i}\delta r_{j}\rangle$, where $\delta r$ is the deviation of a protein’s backbone ($\mathrm{C}_{\alpha}$) atom from a given reference structure (in this notation $i,j$ indicate both amino acid and Cartesian indices) and $\langle\cdots\rangle$ indicates the ensemble average.
The eigenmodes of the DCM with the largest eigenvalues, corresponding to low-energy excitations, are responsible for large-scale, collective motions of the protein. These modes play a major role in the molecule’s functionality, and a handful of them account for a large fraction of the overall fluctuation (Noguti and Go, 1982). The other, more numerous, eigenmodes instead describe high-energy, localized vibrations which are more heavily affected by the fine details of structure and interactions. Atomistic force-field Molecular Dynamics (MD) is a widely employed tool to investigate the dynamical properties of a protein. The duration of an MD trajectory, however, poses a problem in the sampling of the phase space (García, 1992), since the essential space can vary from run to run. Nevertheless, the principal components are more robust than the high-energy sector of the spectrum and are consistent amongst independent simulations. It is therefore highly desirable to single out the statistically significant low-energy modes from the bulk of ‘noise-dressed’ ones. Far-reaching implications include the description of large-scale fluctuations in terms of a suitable set of collective coordinates Pontiggia et al. (2008) and the identification of a few relevant modes as a basis for coarse-grained models of structure and dynamics Potestio et al. (2009). The purpose of this Letter is to address this issue quantitatively. We propose a novel criterion for statistical significance based on the comparison with local eigenvalue statistics from Random Matrix Theory (RMT). Applications of RMT to biophysical issues are so far rather limited Ciliberti et al. (2006); Luo et al. (2006); Lacelle (1984); Majumdar and Nechaev (2005); Şener and Schulten (2002); Bandyopadhyay and Jalan (2007); Orland and Zee (2002). Our new approach relies on an ensemble of DCMs, rather than a single instance.
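The separation between a few collective modes and the noisy bulk comes directly from the eigendecomposition of the covariance of a trajectory. A deliberately tiny two-coordinate sketch (pure Python, closed-form $2\times 2$ eigenvalues; illustrative only, not the ENM machinery used in the paper):

```python
import math

def dcm_2d(traj):
    """Displacement covariance C_ij = <dr_i dr_j> for a trajectory of two
    scalar coordinates, traj = [(x, y), ...]. Returns (cxx, cxy, cyy)."""
    n = len(traj)
    mx = sum(p[0] for p in traj) / n
    my = sum(p[1] for p in traj) / n
    cxx = sum((p[0] - mx) ** 2 for p in traj) / n
    cyy = sum((p[1] - my) ** 2 for p in traj) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in traj) / n
    return cxx, cxy, cyy

def eigvals_2x2(cxx, cxy, cyy):
    """Eigenvalues of a symmetric 2x2 matrix, largest first."""
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(0.0, tr * tr / 4 - det))
    return tr / 2 + disc, tr / 2 - disc

# A strongly correlated 'collective mode': y rigidly follows x.
traj = [(t, 2 * t) for t in (-2, -1, 0, 1, 2)]
lam1, lam2 = eigvals_2x2(*dcm_2d(traj))
assert lam1 > 0 and lam2 < 1e-12  # the top mode absorbs the full trace
```

Here the single collective "mode" absorbs the entire trace, a toy analogue of the few large-eigenvalue modes that dominate a protein's fluctuations.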
This makes it possible to characterize, in a statistical sense, the low-energy subspace of a specific protein against a ‘null’ model with random correlations. The sharp onset of RMT-like level spacing statistics from the first few modes onwards, along with other consistent indicators, signals a clear-cut separation of the spectrum into two statistically incompatible regions, associated with discordant dynamical properties. Preliminaries. In order to collect a DCM ensemble we resort to exactly-solvable Elastic Network Models (ENM) Hinsen (1998), rather than other available strategies such as the calculation of the Hessian matrix (Brooks et al., 1995). Indeed, it is known Tirion (1996) that the low-energy modes of the spectrum, the most interesting for the present work, are well reproduced by ENM with lighter computational effort. Our matrix ensemble is obtained from an MD simulation carried out in Pontiggia et al. (2008) on E. coli Adenylate Kinase, an $N=214$ amino acid single-chain phosphotransferase protein Lou and Cukier (2006). During this 50 ns simulation, the molecule explores many different free energy minima, or substates. Within each substate, the protein fluctuates around a well-defined average structure. The 4000 configurations (MD frames) of the shortest substate (2 ns) are taken as reference structures for the anisotropic $\beta$-Gaussian network model ($\beta$-GM) Micheletti et al. (2004). This model retains only two centroids per amino acid: the $\mathrm{C}_{\alpha}$ atom and a bead representing the sidechain. The free energy profile is approximated, for small deviations from the reference structure, with a network of anisotropic springs of equal strength connecting all the centroids within the cutoff distance of 7.5 Å. Due to the quadratic nature of the interactions, the DCM $\mathcal{C}$ and the eigenspace $(\lambda_{k},|\lambda_{k}\rangle)$ of a given protein structure are obtained by inverting the resulting effective free energy matrix $\mathcal{M}$ (eq.
1-7 of Micheletti et al. (2004)) as $\mathcal{C}=\mathcal{M}^{-1}$, where $\mathcal{C}|\lambda_{k}\rangle=\lambda_{k}|\lambda_{k}\rangle$, with $\lambda_{k}>\lambda_{k+1}$. Note that the total number of Degrees of Freedom (DoF) is $3N-6=636$, since $\mathcal{M}$ always presents six null modes (corresponding to global symmetries of the model): the matrix inversion is obviously meaningful only within the subspace orthogonal to $\mathrm{ker}(\mathcal{M})$ Micheletti et al. (2004). The local spectral statistics (probing correlations on a scale comparable with the mean level spacing) of the obtained ensemble of DCM is then compared with the predictions for random correlation matrices. Analogous results (not included here) have been obtained on other substates of the same trajectory and on an MD simulation of G protein Pontiggia et al. (2007), a 56-residue protein, as a preliminary validation of the conclusions drawn in the present work on different case studies. Random Matrix Predictions. The Wishart-Laguerre (WL) Mehta (2004); Wishart (1928) ensemble of random covariances includes $N\times N$ matrices $\mathcal{W}$ of the form $\mathcal{W}=(1/T)\mathcal{X}\mathcal{X}^{\mathrm{t}}$, where $\mathcal{X}$ is an $N\times T$ matrix containing $N$ time series of $T$ independent elements drawn from a Gaussian distribution with zero mean and fixed variance, $\mathcal{X}\sim\mathrm{N}(0,\sigma^{2})$. Since $\mathcal{W}$ is the covariance matrix of a maximally random data set, it is usually the optimal candidate as a ‘null model’, with the lowest degree of built-in information, against which to compare empirical data. This program has been implemented on financial data Laloux et al. (1999), internet router networks Barthélemy et al. (2002), EEG data Šeba (2003) and atmospheric correlations Santhanam and Patra (2001), among others. The requirement of fixed data variance $\sigma^{2}$ guarantees rotational invariance and thus the exact solvability of WL.
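Sampling the WL null ensemble is direct from its definition. A small pure-Python sketch of $\mathcal{W}=(1/T)\mathcal{X}\mathcal{X}^{\mathrm{t}}$, checking the two defining properties of a covariance matrix (symmetry and positive semi-definiteness); collecting eigenvalue spacings from many such samples is what builds the reference spacing statistics:

```python
import random

def wishart(N, T, sigma=1.0, seed=0):
    """Sample W = (1/T) X X^t with X an N x T matrix of N(0, sigma^2) entries."""
    rng = random.Random(seed)
    X = [[rng.gauss(0.0, sigma) for _ in range(T)] for _ in range(N)]
    return [[sum(X[i][t] * X[j][t] for t in range(T)) / T
             for j in range(N)] for i in range(N)]

W = wishart(5, 50)
# Symmetry: W_ij = W_ji by construction.
assert all(abs(W[i][j] - W[j][i]) < 1e-12 for i in range(5) for j in range(5))
# Positive semi-definiteness: v^t W v = |X^t v|^2 / T >= 0 for any v.
v = [1.0, -2.0, 0.5, 3.0, -1.0]
q = sum(v[i] * W[i][j] * v[j] for i in range(5) for j in range(5))
assert q >= -1e-9
```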
Nevertheless, it makes the comparison inappropriate in the present case, where the empirical ‘data matrix’ $\tilde{\mathcal{X}}$ is not accessible and its row variances $\tilde{\sigma}^{2}_{i}$ (corresponding to the $\mathcal{C}_{ii}$ entries) turn out to be unevenly spread. We thus consider a slightly improved $\sigma$-WL model (non-invariant) in which different variances are randomly assigned to each data row. Details about this improved null model, unimportant for the present discussion, will be presented elsewhere; it suffices to say that the individual spacing statistics are rather insensitive to the distribution of variances allocated to the rows. As a typical example of local level statistics, we mainly consider the Individual Eigenvalue Spacing (IES) $s_{k}$ defined as Muller et al. (2006) $s_{k}=(\lambda_{k}-\lambda_{k+1})/\langle\lambda_{k}-\lambda_{k+1}\rangle$, where the average $\langle\cdot\rangle$ is taken over many samples and clearly $\langle s_{k}\rangle=1$ for any $k$. We numerically found that for $\sigma$-WL the IES is well approximated by the Brody (1973) one-parameter distribution, $p_{\beta}(s)=c_{\beta}(1+\beta)s^{\beta}\exp(-c_{\beta}s^{1+\beta})$, with $c_{\beta}=[\Gamma((2+\beta)/(1+\beta))]^{1+\beta}$, and $\beta\approx 0.84\pm 0.02$ (obtained from a one-parameter fit). Analytical predictions for the level distributions in non-invariant ensembles are generally lacking. The Brody distribution is the simplest and most commonly employed fitting formula for non-invariant RMT spacings (see e.g. Le Caer et al. (2007) and references therein), interpolating between the limiting Poisson $(\beta=0)$ and Wigner $(\beta=1)$ distributions. Results and Discussion. The RMT local statistics are used here as a null model for two sets of spectra: $\{\lambda_{k}^{(j)}\}$ (the $k$-th bare eigenvalue of the $j$-th DCM sample, $k=1$ being the largest) and $\{\mu_{k}^{(j)}:=\lambda_{k}^{(j)}(3N-6)/\mathrm{Tr}[\mathcal{C}^{(j)}]\}$.
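The Brody density with the quoted normalization $c_{\beta}=[\Gamma((2+\beta)/(1+\beta))]^{1+\beta}$ can be coded directly; a sketch verifying that it is normalized and reduces to the Poisson limit at $\beta=0$:

```python
import math

def brody_pdf(s, beta):
    """Brody spacing distribution p_beta(s) with the normalization c_beta
    quoted in the text; beta=0 gives Poisson, beta=1 the Wigner surmise."""
    c = math.gamma((2 + beta) / (1 + beta)) ** (1 + beta)
    return c * (1 + beta) * s ** beta * math.exp(-c * s ** (1 + beta))

# beta = 0 reduces to the Poisson distribution exp(-s).
assert abs(brody_pdf(1.3, 0.0) - math.exp(-1.3)) < 1e-12
# The density integrates to one (midpoint rule, beta = 0.8 as in the fit).
ds = 1e-3
norm = sum(brody_pdf((k + 0.5) * ds, 0.8) for k in range(int(15 / ds))) * ds
assert abs(norm - 1.0) < 2e-3
```

At $\beta=1$ the same formula gives the Wigner surmise $(\pi/2)\,s\,e^{-\pi s^{2}/4}$, since $c_{1}=[\Gamma(3/2)]^{2}=\pi/4$.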
The $\mu$’s are normalized so that their sum reproduces the number of DoF. The $k$-th eigenvector triples $\{\mathsf{v}_{k,x}^{(j)},\mathsf{v}_{k,y}^{(j)},\mathsf{v}_{k,z}^{(j)}\}$ are also extracted. The first quantity we analyzed is the average partial trace, or cumulative fraction of captured motion, defined as: $$f_{n}=\Big\langle\frac{1}{\mathrm{Tr}[\mathcal{C}]}\sum_{k=1}^{n}\lambda_{k}\Big\rangle\equiv\frac{1}{3N-6}\Big\langle\sum_{k=1}^{n}\mu_{k}\Big\rangle\qquad(1)$$ and plotted in Fig. 1. As expected, the first 3-4 eigenvalues capture more than $70\%$ of the protein’s overall mobility, and this value is typically larger in MD simulations (Amadei et al., 1993). The very narrow dispersion validates in a statistical sense the persistence of this feature during the MD simulation. In order to statistically characterize each eigenvalue we plot in Fig. 2 its relative dispersion (stdev/mean) vs. its index $k$: a low ratio signals a strong localization. The $\mu$’s display an almost constant ratio, suggesting a certain stability in the distribution of the fraction of total mobility captured by each mode. In comparison, the bare $\lambda$’s ratio rapidly decays to very low values after crossing the range spanned by the $\mu$’s approximately between the 3rd and the 4th eigenvalue, indicating a broad dispersion of the largest eigenvalues. In contrast with the different dispersion properties of absolute and normalized eigenvalues, the spacing distributions in the bulk show a remarkable universal pattern. In Figs. 3 and 4, the distributions of the $\lambda$’s and $\mu$’s are fitted with a Brody distribution. With a $\chi^{2}$ test, the Brody hypothesis can be rejected with high confidence ($1\%$ level) for the first 3 spacings in both cases, while the subsequent ones give overall quite a good agreement with a fit parameter $\beta=0.8\pm 0.1$ (standard error among the first 100 spacings).
The same $\beta$ (within the statistical bounds) fits the spacing distribution for the null $\sigma$-WL model, indicating a strong degree of randomness in the largest fraction of the DCM spectra. The analysis of level spacings is completed with a Kolmogorov-Smirnov (KS) test among all pairs of spacing distributions. Fig. 5 shows the color-coded values of the KS distances between the cumulative distributions. The same distribution appears to be closely shared by the spacings beyond the 4th, while no “partner” is found for the top four ones. Therefore, the sought significance criterion can be easily expressed as follows: there exists a transition between a few collective modes, characterized by a non-standard local level statistics, and the bulk of modes sharing the same quasi-universal distribution. In order to further validate these indications, in fig. 6 we report a color-coded table of KS distances among eigenvector distributions. The entry $(\ell,k)$ represents the KS distance between the $\mathsf{v}_{\ell}$ and $\mathsf{v}_{k}$ cumulative distributions, after a proper weighting of the three Cartesian components. Again, their distribution stabilizes only from the $5-6$th eigenvector onwards, while the first $4$ clearly display a very poor overlap with the full set. As a final check, we divided the MD trajectory in two halves and computed the DCM for each sub-trajectory. We then applied the method introduced in Pontiggia et al. (2008) to determine an optimal redefinition of the orthonormal basis vectors of the two essential spaces in order to quantify the degree of overlap between the two sets. Specifically, the redefined basis vectors in one set are ranked in order of decreasing overlap with the linear space spanned by the vectors in the other set. The statistical significance of the first modes is confirmed by the high overlap of the first few optimal eigenvector pairs. 
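The KS distances between pairs of spacing (or eigenvector-component) distributions compare empirical cumulative distribution functions. A compact two-sample implementation, written as a sketch rather than the authors' code:

```python
def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance: the maximum absolute
    difference between the two empirical cumulative distributions."""
    a, b = sorted(a), sorted(b)

    def ecdf(xs, t):
        return sum(1 for x in xs if x <= t) / len(xs)

    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in a + b)

assert ks_distance([1, 2, 3], [1, 2, 3]) == 0.0  # identical samples
assert ks_distance([0, 0, 0], [1, 1, 1]) == 1.0  # disjoint supports
```

A small distance between two rows of such a table indicates that the corresponding modes share the same underlying distribution, which is how the bulk modes are grouped together.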
This criterion, based on the properties of the essential space vectors rather than the eigenvalues, identifies about 4 relevant modes for the conserved subspace with a confidence level of about $90\%$ (see Fig. 7). Summary and Outlook. In the present paper we applied RMT tools to an ensemble of covariance matrices obtained in a biophysical context. Making use of an anisotropic ENM, applied to each configuration of an MD simulation, we produced a large ensemble of covariance matrices for Adenylate Kinase. The statistical properties of these DCM eigenspaces have been compared with universal RMT predictions, such as the Brody level spacing distribution of a suitable random ensemble. The present study highlights signatures of bulk universality and random-like behavior shared by all but the first 3-4 eigenmodes of the analyzed DCM ensemble. The consequence is a quantifiable separation between the few most significant modes, characterized by their own peculiar statistics, and the bulk of quasi-random ones. Likely implications include a more precise identification of the collective variables describing the large-scale, functionally relevant fluctuations of biological molecules. The marriage between RMT techniques and models of protein dynamics is expected to have a broad range of applications. Possible directions for future investigation include i) the temporal evolution of the significance pattern during the protein’s exploration of different substates; ii) the study of global spectral properties (such as density and higher-order correlation functions) of DCM within a single substate and along the full trajectory; iii) the correlation structure among functional sub-units of a single protein, involving only a finite fraction of highly connected components; and finally iv) independent validation of the method on other protein structures and using MD covariance matrices rather than ENM ones. Acknowledgments. We warmly thank C. Micheletti, F. Pontiggia, Y. Gerelli and G.
Akemann for helpful discussions and advice. We are indebted to G. Colombo for making the MD simulations performed in Pontiggia et al. (2007) available to us and to A. Liguori for a careful reading of the manuscript. FC acknowledges the grant 2007JHLPEZ (MIUR). References Bragg and Perutz (1952) W. L. Bragg and M. F. Perutz, Proc. Roy. Soc. A 213, 425 (1952). Zen et al. (2008) A. Zen, V. Carnevale, A. M. Lesk, and C. Micheletti, Protein Sci. 17, 918 (2008). Noguti and Go (1982) T. Noguti and N. Go, Nature 296, 776 (1982). García (1992) A. E. García, Phys. Rev. Lett. 68, 2696 (1992). Pontiggia et al. (2008) F. Pontiggia, A. Zen, and C. Micheletti, Biophys. J. 95, 5901 (2008). Potestio et al. (2009) R. Potestio, F. Pontiggia, and C. Micheletti, Biophys. J. 96, 4993 (2009). Ciliberti et al. (2006) S. Ciliberti, P. De Los Rios, and F. Piazza, Phys. Rev. Lett. 96, 198103 (2006). Luo et al. (2006) F. Luo et al., Phys. Lett. A 357, 420 (2006). Lacelle (1984) S. Lacelle, Biophys. J. 46, 181 (1984). Majumdar and Nechaev (2005) S. N. Majumdar and S. Nechaev, Phys. Rev. E 72, 020901(R) (2005). Şener and Schulten (2002) M. K. Şener and K. Schulten, Phys. Rev. E 65, 031916 (2002). Bandyopadhyay and Jalan (2007) J. N. Bandyopadhyay and S. Jalan, Phys. Rev. E 76, 026109 (2007). Orland and Zee (2002) H. Orland and A. Zee, Nucl. Phys. B 620, 456 (2002). Hinsen (1998) K. Hinsen, Proteins 33, 417 (1998). Brooks et al. (1995) B. R. Brooks, D. Janezic, and M. Karplus, J. Comp. Chem. 16, 1522 (1995). Tirion (1996) M. M. Tirion, Phys. Rev. Lett. 77, 1905 (1996). Lou and Cukier (2006) H. F. Lou and R. I. Cukier, J. Phys. Chem. B 110, 12796 (2006). Micheletti et al. (2004) C. Micheletti, P. Carloni, and A. Maritan, Proteins 55, 635 (2004). Pontiggia et al. (2007) F. Pontiggia, G. Colombo, C. Micheletti, and H. Orland, Phys. Rev. Lett. 98, 048102 (2007). Mehta (2004) M. L. Mehta, Random Matrices (Academic Press, 2004), 3rd ed. Wishart (1928) J. Wishart, Biometrika 20, 32 (1928).
Laloux et al. (1999) L. Laloux, P. Cizeau, J. P. Bouchaud, and M. Potters, Phys. Rev. Lett. 83, 1467 (1999).
Barthélemy et al. (2002) M. Barthélemy, B. Gondran, and E. Guichard, Phys. Rev. E 66, 056110 (2002).
Šeba (2003) P. Šeba, Phys. Rev. Lett. 91, 198104 (2003).
Santhanam and Patra (2001) M. S. Santhanam and P. K. Patra, Phys. Rev. E 64, 016102 (2001).
Muller et al. (2006) M. Muller et al., Phys. Rev. E 74, 041119 (2006).
Brody (1973) T. A. Brody, Lett. Nuovo Cimento 7, 482 (1973).
Le Caer et al. (2007) G. Le Caer, C. Male, and R. Delannay, Physica A 383, 190 (2007).
Amadei et al. (1993) A. Amadei, A. B. M. Linssen, and H. J. C. Berendsen, Proteins 17, 412 (1993).
A deconvolution map-making method for experiments with circular scanning strategies
D. L. Harrison 1,3, F. van Leeuwen 1, M. A. J. Ashdown 2,3
1 Institute of Astronomy, Madingley Road, Cambridge, CB3 0HA, UK
2 Astrophysics Group, Cavendish Laboratory, Madingley Road, Cambridge, CB3 0HE, UK
3 Kavli Institute for Cosmology
(Received: date / Revised version: date)
Key words: Methods: data analysis - cosmic microwave background
Abstract
Aims: To investigate the performance of a deconvolution map-making algorithm for an experiment with a circular scanning strategy, specifically in this case for the analysis of Planck data, and to quantify the effects of making maps using simplified approximations to the true beams.
Methods: We present an implementation of a map-making algorithm which allows the combined treatment of temperature and polarisation data, the removal of instrumental effects, such as detector time constants and finite sampling intervals, and the deconvolution of arbitrarily complex beams from the maps. This method may be applied to any experiment with a circular scanning strategy.
Results: Low-resolution experiments were used to demonstrate the ability of this method to remove the effects of arbitrary beams from the maps and to demonstrate the effects on the maps of ignoring beam asymmetries. Additionally, results are presented of an analysis of a realistic full-scale simulated data-set for the Planck LFI 30 GHz channel.
Conclusions: Our method successfully removes the effects of the beams from the maps, and although it is computationally expensive, the analysis of the Planck LFI data should be feasible with this approach.
1 Introduction
Recently, there has been a lot of activity in the development of map-making methods for the European Space Agency satellite, Planck (Tauber et al., 2010; Planck Collaboration et al., 2011).
Planck has been designed to produce high-resolution temperature and polarisation maps of the cosmic microwave background (CMB). It has detectors divided between 9 frequency channels sensitive to the frequency range of 30 to 857 GHz. These frequency channels are split between two instruments: the HFI and the LFI, the High and Low Frequency Instruments, respectively. Many of these map-making methods have been part of a coordinated development within the Planck collaboration, and have been tested using increasingly sophisticated simulated data (Poutanen et al., 2006; Ashdown et al., 2007a, b). The method presented here was developed with Planck in mind, but is applicable to any experiment with a circular scanning strategy. It should be noted that this method is not, as yet, part of the official data processing pipeline for the Planck project and has not been applied to the actual Planck data. Planck spins about its axis once per minute, and as the line of sight (LOS) of the centre of the focal plane is almost perpendicular to this spin axis, the path of each detector describes an almost great circle on the sky. The spin axis is repositioned at least once every hour, with the sequence of spin-axis positions defining the scanning strategy. The nominal path of the LOS for each rotation of the satellite reobserves the same almost-great circle on the sky, with the beams in the same orientations, for the duration of each pointing period. A pointing period is the period of time between two sequential repositionings of the satellite spin-axis. The nutation of the spin axis about its nominal position will produce variations in the LOS direction about the nominal path, changing the part of the sky which is observed.
Provided the displacement of the LOS from its nominal path remains small with respect to the beam, the roughly 60 circles corresponding to a single pointing period may be thought of as a one-dimensional ring on the sky and may be analysed together with our method. It should be noted that the LFI, with its larger beams, is inherently more robust to this issue than the HFI. If the effects of beam asymmetries are not accounted for in the map-making process, this will result in systematic errors in the maps and in the recovered power spectra. The systematic effects due to asymmetric beams have been assessed by Carretti et al. (2004), and a completely general assessment of the asymmetries was made by O’Dea et al. (2007). The map-making method described in this paper provides a mechanism to account for and correct arbitrary beam asymmetries, hence the term deconvolution map-making. Ours is not the only approach to this problem; Armitage & Wandelt (2004) have developed a method to account for the effects of the asymmetries, which is less computationally expensive than our method in the case of simple beam asymmetries, but is effectively restricted in the detail of the beam description that can be implemented. Here we may account for any degree of beam complexity. Both methods may also remove from the data the instrumental effects due to detector time constants and the finite sampling interval, whereby each data point is formed from the signal integrated over a small time interval. Removing the effects of the beam from the map could be extremely useful for the study of resolved objects, which would otherwise be distorted by the beam. Additionally, removing the distorting effects of the beam could aid in studies of the lensing of the CMB (Lewis & Challinor, 2006; Perotto et al., 2010).
We have followed two approaches to solving the map-making equations in the development of our deconvolution map-making method: a direct approach, from which the full noise covariance matrix may be recovered at low resolution, and an iterative approach which, although still computationally expensive, will allow the resolutions required for the analysis of the LFI data to be reached. Both our approaches may be used to analyse polarised data with arbitrary beams, and recover the underlying multipoles on the sky without the need to pixelate the data. We discuss the methods used and the implementation of our deconvolution map-making method in Section 2, from the preprocessing of the ring data in Section 2.1 to the reconstruction of the sky in Section 2.2. The simulations generated to fully test our method in the case of arbitrarily complex beams are described in Section 3, together with the results of this analysis. The results of the analysis of more realistic Planck data, used in a previous map-making comparison paper (Ashdown et al., 2007a), are described in Section 4.
2 Method and Implementation
Our method splits the data processing in a way which is natural in terms of how Planck acquires the data, allowing the time-ordered data (TOD) from each pointing period to be processed separately. The TOD corresponding to each pointing period may be reduced, without loss of information, to Fourier coefficients on the rings, as described below in Section 2.1. These Fourier coefficients, together with the mean pointing information for each ring, may then be analysed to recover the multipoles on the sphere, as outlined in Section 2.2.
2.1 TOD to Fourier Coefficients on rings
The time-ordered data for each pointing period is processed broadly as described in van Leeuwen et al. (2002). The implementation of this method on more realistic simulated data, however, has necessitated some modifications to the method presented in that paper.
In this section we outline the processing required to extract the Fourier coefficients from the TOD, highlighting these modifications. The position of the line-of-sight of a detector for a pointing period may be described in terms of the mean spin axis position and two angles, as shown in Figure 1. These angles are the opening angle, which is the angle between the spin axis position and the line-of-sight of the detector, and the phase, which defines the position around the ring from a given reference point. By assessing the value of the phase for each sample in the TOD, these data may be binned in phase, which effectively compresses the data by a factor of 40 to 50. If a suitable number of phase-bins is chosen, it is possible to recover the Fourier coefficients, which represent the TOD, from the phase-binned data with a negligible loss of accuracy as compared to evaluating them directly from the TOD, but with a significant reduction in the processing required. It should be noted that our implementation differs from that in van Leeuwen et al. (2002) as to the number of phase-bins required, and the number of moments which are included in equations (4)-(8) in that paper. Our investigation showed that including the $\rm{3^{rd}}$ order terms of the phase improved the recovery of the Fourier coefficients and that there was no loss of accuracy in reducing the number of phase-bins from $6n_{max}$ as used in van Leeuwen et al. (2002) to $4n_{max}$, where $n_{max}$ is the highest mode extracted. The value of $n_{max}$ chosen should be such that $n_{max}\geq\ell_{max}$, where $\ell_{max}$ is the desired value to which the sky multipoles are to be recovered. The above procedure successfully recovers the required Fourier coefficients with negligible loss of accuracy when the distribution of the TOD is relatively uniform in phase. 
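The phase-binning and Fourier-coefficient recovery described above can be illustrated with a toy one-ring model. This is a minimal sketch under our own assumptions: the function names, the simple bin-mean estimator, and the sample counts are illustrative, and the paper's implementation additionally includes higher-order moment corrections (up to 3rd order in phase) within each bin, which are omitted here.

```python
import numpy as np

def phase_bin(phases, tod, n_bins):
    """Bin TOD samples in phase; return bin centres and the mean signal per bin."""
    edges = np.linspace(0.0, 2 * np.pi, n_bins + 1)
    idx = np.digitize(phases, edges) - 1
    counts = np.bincount(idx, minlength=n_bins)
    sums = np.bincount(idx, weights=tod, minlength=n_bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, sums / counts

def ring_fourier_coeffs(centres, binned, n_max):
    """Recover Fourier modes up to n_max from phase-binned data by least squares."""
    cols = [np.ones_like(centres)]
    for n in range(1, n_max + 1):
        cols += [np.cos(n * centres), np.sin(n * centres)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, binned, rcond=None)
    return coeffs  # ordered as [c0, a1, b1, a2, b2, ...]

# Toy ring: 3000 noise-free samples of a signal with modes up to n_max = 4
rng = np.random.default_rng(0)
phases = np.sort(rng.uniform(0, 2 * np.pi, 3000))
signal = 1.0 + 0.5 * np.cos(2 * phases) - 0.3 * np.sin(4 * phases)
n_max = 4
# 4 * n_max phase bins, the number the text reports as sufficient
centres, binned = phase_bin(phases, signal, 4 * n_max)
coeffs = ring_fourier_coeffs(centres, binned, n_max)
```

The bin means slightly attenuate the highest modes (which is what the moment corrections in the real pipeline compensate for), but the input amplitudes are recovered to a few per cent at a fraction of the cost of fitting the raw TOD.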
If there is a resonance between the spin and sampling rates then the distribution of the data in phase will no longer be uniform, as samples from a previous rotation will occur at the same location in phase as the current rotation. The performance of this method at recovering the Fourier coefficients from data with a non-uniform distribution in phase has been investigated. The degree of non-uniformity in phase, in terms of the maximum gap present between two subsequent phase-ordered samples, which can be tolerated before the effect on the recovered values of the Fourier coefficients becomes significant with respect to the noise on the data, was assessed. Should this phase-gap size be exceeded, the Fourier coefficients will be evaluated directly. In the case of the simulations described in Section 4, this meant that 5% of the rings were not phase-binned, with the Fourier coefficients being evaluated directly from the TOD for these rings. Although, where possible, we phase-bin the TOD to reduce the processing requirement, the effect of this binning is negligible, as the Fourier coefficients recovered from the phase-binned rings differ negligibly from those evaluated directly from the TOD. Therefore there can be no effects in our data analysis due to the binning or pixelisation of the data, which is one way in which our method differs from that of Armitage & Wandelt (2004).
2.2 Sky reconstruction
The data in terms of the Fourier coefficients for a single ring, $d_{r}$, may be expressed as
$$d_{r}=R_{r}a+n_{r}$$ (1)
where $n_{r}$ represents the noise on the Fourier coefficients and $R_{r}$ is the coupling matrix, which describes the connection between the multipole moments $a_{\ell m}$ on the sphere, represented by the vector $a$, and the Fourier coefficients. The data from all the rings may be combined and analysed together:
$$d=Ra+n$$ (2)
where $R$ is the equivalent of the pointing matrix in conventional pixel-based map-making.
The maximum likelihood estimates for the multipoles on the sky may then be found by solving the matrix equation
$$\left(R^{T}N^{-1}R\right){\hat{a}}=R^{T}N^{-1}d$$ (3)
where $N$ is the noise covariance matrix of the Fourier coefficients, given by $\left<nn^{T}\right>$. The coupling matrix is derived in Challinor et al. (2002) and requires information on the detector orientations, opening angles and beam profiles, together with the mean spin axis positions for each ring. The coupling matrix is constructed as in Challinor et al. (2002), with the exception of accounting for those effects on the data which occur around the rings, such as those due to sampling intervals and the detector time-constants: removing these effects involves adjusting the Fourier coefficients for each pointing period, and so is more naturally included in the TOD-to-Fourier-coefficients code rather than in the sky reconstruction code. As shown by Challinor et al. (2002), the correlations in the noise between different Fourier modes on the rings are expected to be negligible. This expectation was verified, and we therefore treat the noise covariance matrix, $N$, for those data as diagonal. This method could use the full noise covariance for each ring, making $N$ block diagonal; however, this is not necessary, as gaps in the data, due to glitches or otherwise, should not occur in any preferential location, hence the number of samples per phase bin should be on average the same. This will ensure the near stationarity of the noise at the ring level, and a negligible level of noise correlations between Fourier modes. In the case of an experiment which does not have this redundancy, this method would still be applicable provided that a block diagonal $N$ is used. The presence of $1/f$ noise in the data results in striping in the maps, due to a different offset on each ring (Burigana et al., 1999; Delabrouille, 1998).
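In a small dense setting, the maximum-likelihood solve of equation (3) with a diagonal noise covariance reduces to a few lines. The sketch below is purely illustrative: the problem sizes and the random stand-in for the coupling matrix are our own assumptions, not the real spherical-multipole coupling.

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_alm = 200, 12                 # toy: Fourier coefficients on rings, sky multipoles
R = rng.standard_normal((n_data, n_alm))  # random stand-in for the coupling matrix
a_true = rng.standard_normal(n_alm)       # toy sky multipoles
sigma = 0.01 * rng.uniform(1.0, 2.0, n_data)  # per-coefficient noise rms
d = R @ a_true + sigma * rng.standard_normal(n_data)

# Diagonal N, as argued in the text for the noise on the Fourier coefficients
Ninv = np.diag(1.0 / sigma**2)
lhs = R.T @ Ninv @ R                    # the (non-sparse) normal matrix of eq. (3)
rhs = R.T @ Ninv @ d
a_hat = np.linalg.solve(lhs, rhs)       # maximum-likelihood estimate of the multipoles
```

With more data rows than unknowns and well-behaved noise, `a_hat` recovers `a_true` to within the noise-induced scatter; the paper's difficulty is purely one of scale, since the real normal matrix has of order $\ell_{max}^{4}$ elements.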
If the knee frequency is low then by removing the zero-frequency Fourier coefficients from our analysis we may ‘destripe’ the maps, effectively removing the contribution of the $1/f$ noise and projecting out correlated noise on time-scales longer than a ring. Given that the noise covariance matrix is diagonal, the zero-frequency Fourier coefficients may be removed from the analysis by setting to zero the diagonal elements of the inverse noise covariance matrix which correspond to these coefficients. This is equivalent to increasing the noise on these coefficients to infinity, and hence their contribution to the recovered signal is completely removed. An alternative approach is to introduce an additional parameter for every ring and solve for the offsets produced by the $1/f$ noise, as well as recovering the signal multipoles. As one would expect, there is no change in the recovered values of the signal multipoles using this approach, so it should only be used if the offsets themselves are required. If the knee frequency is not sufficiently low to isolate the $1/f$ noise in the zero-frequency modes, then the approach presented here can be extended to deal with any arbitrary noise power-spectrum on the rings, by suitably weighting the higher-frequency modes. Unfortunately, the matrix $\left(R^{T}N^{-1}R\right)$, the inversion of which is required to obtain the solution for the spherical multipoles, is non-sparse and large, with of the order of $\ell_{max}^{4}$ elements. This effectively limits the maximum value of $\ell$ to which the analysis may proceed, as the computational requirements needed to produce a solution scale as ${\mathcal{O}}\left(\ell_{max}^{6}\right)$ for a direct method and ${\mathcal{O}}\left(\ell_{max}^{4}\right)$ for iterative methods such as the preconditioned conjugate-gradient method which we have implemented.
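The destriping trick of zeroing the inverse-noise weights of the zero-frequency coefficients can be demonstrated in a toy system of our own construction, in which one arbitrary offset per ring stands in for the low-knee-frequency $1/f$ noise:

```python
import numpy as np

rng = np.random.default_rng(2)
n_rings, modes_per_ring, n_alm = 40, 6, 8
n_data = n_rings * modes_per_ring
R = rng.standard_normal((n_data, n_alm))   # toy coupling matrix
a_true = rng.standard_normal(n_alm)

# The zero-frequency coefficient of each ring is the first entry of its block
zero_freq = np.arange(0, n_data, modes_per_ring)

d = R @ a_true + 0.01 * rng.standard_normal(n_data)
d[zero_freq] += rng.uniform(-5.0, 5.0, n_rings)  # large 1/f offset, one per ring

w = np.full(n_data, 1.0 / 0.01**2)  # inverse noise variance per coefficient
w[zero_freq] = 0.0                  # weight zero = infinite noise on the offsets
Ninv = np.diag(w)
a_hat = np.linalg.solve(R.T @ Ninv @ R, R.T @ Ninv @ d)
```

Despite offsets hundreds of times larger than the noise, the recovered multipoles are unaffected, because the contaminated coefficients carry zero weight in the solve.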
It should be noted that there is an implicit assumption of full-sky coverage and a minimum of $4\times\ell_{max}$ rings; should this not be the case, the condition number of $\left(R^{T}N^{-1}R\right)$ will increase and in practice not all multipoles will be recoverable. Since the coupling matrix is already non-sparse, there is no increase in the computational expense of our code if we choose to analyse the data with beams whose expansion in spherical multipoles contains arbitrarily high values of $m$, in contrast to the method of Armitage & Wandelt (2004).
2.2.1 Iterative Method
In order to reach the values of $\ell$ required for an analysis of Planck data, it was necessary to develop an iterative method for solving equation (3) for which a parallel implementation is possible. Due to its large memory requirement, the coupling matrix, $R$, must be stored over multiple processors, as a copy on each processor would be prohibitively expensive in terms of memory used; even stored once, the size of $R$ becomes prohibitive for large values of $\ell_{max}$ for the full mission analysis. In order to reduce the memory requirements, $R$ is evaluated as needed and only the part which corresponds to a single ring is stored in memory at any one time. This reduces the memory requirement to ${\mathcal{O}}\left(\ell_{max}^{3}\right)$. The division of the storage and calculation of $R$ between the processors should be such that it minimises the amount of data which must be exchanged between them. This requirement may be met by storing and evaluating $R$ in terms of the sub-matrices corresponding to individual $\ell$ values. This division of the processing has implications for the scaling of the code with the number of processors used, which is described in Appendix A.
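The memory-saving scheme described above, in which the coupling matrix is never held in full but evaluated and applied one ring at a time, can be sketched as follows. This is a minimal serial sketch under our own assumptions: `coupling_block` is a deterministic random stand-in for the per-ring coupling sub-matrix, and the sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rings, modes_per_ring, n_alm = 50, 8, 10
a_vec = rng.standard_normal(n_alm)

def coupling_block(ring):
    """Hypothetical stand-in for evaluating the coupling matrix of one ring on demand."""
    block_rng = np.random.default_rng(100 + ring)  # deterministic per ring
    return block_rng.standard_normal((modes_per_ring, n_alm))

def apply_normal_operator(x, weights):
    """Apply R^T N^{-1} R to x while holding only one ring's block in memory."""
    y = np.zeros_like(x)
    for ring in range(n_rings):
        Rr = coupling_block(ring)                 # evaluated as needed, then discarded
        y += Rr.T @ (weights[ring] * (Rr @ x))
    return y

weights = np.full(n_rings, 4.0)  # uniform inverse noise variance per ring
y = apply_normal_operator(a_vec, weights)

# Cross-check against the explicit dense product (affordable only at toy sizes)
R_full = np.vstack([coupling_block(r) for r in range(n_rings)])
Ninv = 4.0 * np.eye(n_rings * modes_per_ring)
assert np.allclose(y, R_full.T @ Ninv @ R_full @ a_vec)
```

This matrix-vector product is exactly what an iterative solver needs at each step, which is why the per-ring evaluation reduces the memory footprint without changing the solution.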
We use a preconditioner, the purpose of which is to achieve a reduction in the number of iterations required at the expense of a small increase in the computational cost of each iteration, in terms of an additional matrix-vector multiplication (Golub & van Loan, 1996). Our chosen preconditioner is the diagonal of the matrix $R^{T}N^{-1}R$, which meets these criteria. The implementation of the preconditioned conjugate-gradient method described here was parallelised using the Message-Passing Interface (MPI), and hence is capable of being run on both shared-memory machines and clusters.
2.2.2 Direct Method
Our direct method for solving equation (3) takes advantage of the fact that, in the case of a diagonal $N$, it is possible to pre-whiten the data, $d$, and the coupling matrix, $R$, so that equation (3) reduces to a standard least-squares equation, which may be solved by QR decomposition and backsubstitution. The direct method performs the QR decomposition using Householder transformations (van Leeuwen, 2007) to reduce $R$ to an upper triangular form, $U$. This method allows the processing of $R$ to be split into sections in terms of rows, provided that the number of rows of $R$ being processed together is larger than the number of columns of $R$. Our implementation uses this property to ensure that the subsection of data being processed, together with its corresponding section of the coupling matrix, will fit within the available memory. As each subsection of data is processed, the upper triangular matrix, $U$, is further refined and updated. Once $U$ has been found, the $a_{\ell m}$ may be evaluated through backsubstitution. Additionally, $U^{-1}$ may also be found through backsubstitution, and hence the full noise covariance matrix for the $a_{\ell m}$ may be evaluated.
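The pre-whitening and QR steps of the direct method can be sketched in a dense toy version. The sketch below uses `numpy.linalg.qr` (which is also Householder-based) on the whole pre-whitened matrix at once; the toy sizes and the random coupling matrix are our own assumptions, and the paper's implementation instead processes the rows in blocks, updating $U$ incrementally.

```python
import numpy as np

rng = np.random.default_rng(4)
n_data, n_alm = 300, 10
R = rng.standard_normal((n_data, n_alm))   # toy coupling matrix
a_true = rng.standard_normal(n_alm)
sigma = rng.uniform(0.01, 0.03, n_data)    # diagonal noise rms
d = R @ a_true + sigma * rng.standard_normal(n_data)

# Pre-whiten: divide each row by its noise rms (valid because N is diagonal),
# turning eq. (3) into an ordinary least-squares problem
Rw = R / sigma[:, None]
dw = d / sigma

# Householder QR reduces Rw to upper-triangular U; back-substitution gives a_hat
Q, U = np.linalg.qr(Rw)
a_hat = np.linalg.solve(U, Q.T @ dw)       # triangular solve (back-substitution)

# The noise covariance of a_hat follows from U^{-1}: cov = U^{-1} U^{-T}
Uinv = np.linalg.inv(U)
cov = Uinv @ Uinv.T
```

The QR route gives the same estimate as the normal equations but with better numerical conditioning, and the triangular factor yields the full multipole covariance almost for free, which is the selling point of the direct method at low resolution.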
This covariance matrix may be useful as an input to the hybrid approach of Efstathiou (2005) for power spectrum or parameter estimation, which uses a direct likelihood evaluation at low multipoles and pseudo-$C_{\ell}$ estimators for high multipoles. The direct approach was also parallelised using MPI, for reasons of code portability. Additionally, it makes use of the subroutines which calculate the sub-matrices of R for the individual values of $\ell$, and hence the direct method will be subject to the same scaling conditions on the performance with the number of processors used, described in Appendix A, as the iterative method. 3 Complex-beam simulations As a validation step, in order to demonstrate the ability of our method to remove the effects of any arbitrarily-complex beam and to test it independently of the TOD to Fourier coefficient step, it was necessary to generate our own set of simulations. The Fourier coefficients on the rings may be simulated directly by using equation (1). An arbitrary beam was generated based on the first author’s initials with the middle initial, l, being represented by an elliptical beam with major and minor full-width half-maximums (FWHMs) of 3 and 1 degrees, respectively. The largest sidelobe is -15.8 dB relative to the peak, and located 5.9 degrees from it. The set of Fourier coefficients on the rings, generated using this arbitrary beam, shall be referred to from now on as the complex-beam simulation, and this beam shall be referred to as the true beam. The data for the complex-beam simulation consists of 800 rings, for two polarised detector pairs, containing Fourier coefficients up to an $\ell_{max}$ of $200$. These simulations are noise free; no noise is added to the coefficients, and a suitably low-level of noise is used to produce the noise covariance matrix, $N$. To complete the set of beams with which to analyse the complex-beam simulation, the spherical multipoles for an elliptical and a circular beam were produced. 
The elliptical beam corresponds to the elliptical main lobe of the true beam, and the circular beam has a FWHM corresponding to the geometric mean of the two FWHMs of the elliptical beam. All the beams are defined by their spherical multipoles. However, for visualisation purposes, $T$, $Q$ and $U$ maps for these beams were produced. These maps, synthesised at the north pole of the coordinate system, may be seen in Figure 2.
3.1 Analysis of complex-beam simulations
The complex-beam simulation was analysed, using the iterative approach described in Section 2.2.1, with the three different beams described in Section 3 and visually represented in Figure 2. The residual multipoles of these analyses have been synthesised onto $T$, $Q$ and $U$ maps shown in Figures 3, 4 and 5, respectively. In each of these figures the input-sky multipoles, which are the same as those used in the Trieste simulations described in Section 4, are convolved with a circularly symmetric beam with a FWHM of $1^{\circ}$ and synthesised onto a map for comparison with the structures seen in the residual maps. An alternative assessment of the performance of the analyses is shown in Figure 6. This figure shows the fractional reconstruction errors in the power spectra, for $T$, $E$, and $B$, in the cases where the true, elliptical or circular beams are used in the deconvolution. It should be noted that these power spectra do not correspond to the CMB power spectrum, but to a combination of the CMB and all the simulated foregrounds, including point sources. In Figures 3 through 5, the maps synthesised from the residual multipoles from the analysis of the complex-beam simulation with the true beams may be seen to contain little power. Additionally, the low level of the fractional errors in the power spectra in Figure 6, produced from the analysis using the true beams, also confirms the successful removal of the beam effects from the data.
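The fractional reconstruction errors of the kind plotted in Figure 6 are computed directly from the multipoles: the power spectrum of the residual $a_{\ell m}$ is divided by the input power spectrum. A toy sketch, in which the real-valued multipole packing, the noise level, and the sizes are our own illustrative assumptions:

```python
import numpy as np

def power_spectrum(alm, lmax):
    """C_ell from a flat array of a_lm packed ell-by-ell, 2*ell+1 values per ell."""
    cl = np.zeros(lmax + 1)
    i = 0
    for ell in range(lmax + 1):
        n = 2 * ell + 1
        cl[ell] = np.sum(np.abs(alm[i:i + n]) ** 2) / n
        i += n
    return cl

rng = np.random.default_rng(5)
lmax = 16
n_alm = (lmax + 1) ** 2
a_in = rng.standard_normal(n_alm)                  # toy input-sky multipoles
a_rec = a_in + 0.05 * rng.standard_normal(n_alm)   # toy recovered multipoles

cl_in = power_spectrum(a_in, lmax)
cl_res = power_spectrum(a_rec - a_in, lmax)        # spectrum of the residual a_lm
frac_err = cl_res / cl_in                          # fractional reconstruction error
```

With a reconstruction error of 5% per multipole, the residual spectrum sits near $0.05^{2}$ of unit input power; in the paper's figures the analogous ratio rises with $\ell$ because deconvolution leaves the noise unsuppressed while the signal falls with the beam.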
Figures 3 through 5 also show how ignoring beam asymmetries may affect the maps. This is seen in the analyses using the elliptical and circular beams, where the residual maps show structure along the galactic plane and regions near compact objects.
4 Trieste simulations
As well as testing our method on the complex-beam simulations, as described in Section 3, we also used a set of simulations produced by the Planck CTP Working Group, which is concerned with the evaluation of map-making, power-spectrum and likelihood methods for Planck data. The CTP simulations were produced to enable the comparison of a number of map-making algorithms (Ashdown et al., 2007a), and they will be referred to throughout this paper as the Trieste simulations. They were generated using a simulations pipeline developed by the Planck collaboration with the purpose of providing simulated Planck data (Reinecke et al., 2006). One year of pointing and signal data for the LFI 30 GHz channel, which corresponds to 8784 rings of TOD, was generated. Each repositioning of the spin axis corresponds to one hour of data and to one ring of TOD, and this repositioning follows a cycloidal scanning strategy. The data were simulated for the two polarised detector pairs of which the LFI 30 GHz channel is comprised. Two different sets of beams, circular and elliptical, were used with these simulations, allowing the effects of different beams to be investigated. The circular beams are Gaussian beams with a full-width half-maximum (FWHM) of $32.5\arcmin$, which is the geometric mean of the two FWHMs of the elliptical beams. These elliptical beams are the best-fit elliptical approximation to the pre-launch LFI 30 GHz beams (Sandri et al., 2010), for each of the two detectors corresponding to each of the two horns. These data include the effects of variable spin velocity and nutation.
There is also the option of including sampling effects, where the effects of the finite sampling period of the detectors are taken into account. Here we use the most realistic data, which is, in this case, the TOD simulated using the elliptical beams, in which the effects of sampling are included. The following components of the signal are included in the simulations: the CMB; the diffuse Galactic foregrounds, including the synchrotron, free-free and dust emission; compact objects, such as Sunyaev-Zel’dovich (SZ) clusters and point sources; and both white noise and $1/f$ noise. The models and templates used for these foregrounds are described in Reinecke et al. (2006) and references therein; it should be noted that the templates used for the diffuse Galactic emission are extrapolations to Planck frequencies.
4.1 Analysis of Trieste data
In this section we present the results of the analysis, using the iterative approach of Section 2.2.1, of the most realistic sub-set of the Trieste simulations, which were created using the cycloidal scanning strategy and the elliptical beams, and included the effects of sampling. The 8784 simulated pointing periods, corresponding to one year of data, were processed and the sky multipoles were reconstructed up to $\ell_{max}=400$. It should be noted that the simulations contain signals at $\ell$ values greater than $\ell_{max}$. Given the beam sizes of the LFI 30 GHz detectors, and hence their much reduced sensitivity to power above $\ell=400$, this value of $\ell_{max}$ should be sufficient, at least in the case of diffuse signals. Indeed, curtailing the reconstruction acts like a low-pass filter, preventing the high-frequency noise, amplified by the deconvolution, from dominating the maps. The first set of simulated data to be analysed included contributions from the CMB, diffuse Galactic foregrounds, $1/f$ and white noise. The results of the analysis of these data may be seen in Figures 7, 8 and 9.
In these figures the $a_{\ell m}$ corresponding to the signals input to the simulations were used to generate a map of the input sky for comparison with the map produced from the recovered $a_{\ell m}$. In order to perform this comparison the input $a_{\ell m}$ were curtailed to the same value of $\ell_{max}$ that was used in the recovery of the $a_{\ell m}$ from the simulated data. The differences between these two sets of $a_{\ell m}$ were used to generate residual maps, in order to illustrate the differences between the recovered and the input sky. In Figures 7 – 9 it is seen that there has been a successful reconstruction of the input sky, with the only features visible in the residual maps being due to the noise modulated by the hit-count. The hit-count is determined by the scanning strategy and this is the cause of the lower level of the residuals seen at the ecliptic poles, where the coverage is very much enhanced over that of the rest of the sky. These residual maps also demonstrate that the remaining unrecovered power at higher multipoles has not interfered with the recovery of the lower multipoles. The second set of data was the same as the first except that the contributions from SZ and point sources were also included. These simulations were first analysed using the elliptical beams they were produced with, and then they were reanalysed assuming circular beams. The recovery of the $a_{\ell m}$ when using the elliptical beams results in residual maps indistinguishable from the residual maps for $Q$ and $U$ from the analysis of the first data set (shown in Figures 8 and 9). The residuals for the $T$ map, however, show that the recovery is not ideal in the region of bright point sources. The top panel of Figure 10 shows these residuals plotted over the same colour range as the residual map for $T$ from the analysis of the first data set.
This sensitivity to point-like objects is due to the resultant extra power at small scales, and the fact that the recovery is limited in $\ell$. The contribution of the point sources to the polarisation signal is small, so the $Q$ and $U$ maps remain unaffected. Increasing the value of $\ell_{max}$ used in the recovery of the $a_{\ell m}$ will remove these effects from the residual map. This sensitivity was not seen in the complex-beam simulations, with their relatively higher value of $\ell_{max}$ in comparison to the beam size. The residuals from the analysis of this simulated data set with the circular beams, equivalent to a standard pixel-based map-making analysis, are shown in Figures 10, 11 and 12. These residual maps are again plotted over the same colour range as the residual maps produced from the analysis of the first data set, shown in Figures 7 – 9. Recalling that the residual maps produced from the recovery using the elliptical beams were indistinguishable from the residual maps produced from the analysis of the first data set for the $Q$ and $U$ maps, it is seen that for all the maps there is an increase in the magnitude of the residuals along the Galactic plane and in the vicinity of beam-sized or smaller objects when circular beams are assumed. Figure 13 shows the residual $T$ map found from the analysis of the first data set with the circular beams; this also shows an increase in the level of the residuals. The bottom panels of Figures 10, 11, 12 and 13 show maps of the absolute difference between the maps recovered using the elliptical beams and the maps recovered using the circular beams. Given that the same set of simulated data is used in each case, all the differences must be due to the difference between the elliptical and circular beams. The smallest differences between these recovered maps are observed to be at the ecliptic poles.
This is due to the properties of the sky coverage in these regions, with multiple intersecting scans at many different orientations, and the fact that the circular beams used have FWHMs which are the geometric mean of the two FWHMs which describe each elliptical beam. It is noticeable that the magnitude of the additional errors in the recovery due to using the circular, and in this case incorrect, beams is typically larger than the residuals due to the noise in the data. The differences between the recoveries due to the different beams may easily be seen in Figure 14 which shows the fractional errors in the power spectrum of both data sets with both the elliptical and circular beams. This figure shows the ratio of the power spectra of the residual $a_{\ell m}$, which are formed from the difference between the input and recovered $a_{\ell m}$, and the input power spectra. These power spectra are seen to increase with increasing $\ell$. This increase is observed as the beams have been deconvolved and is due to the fact that the signal is suppressed by the beams whereas the noise is not. The dark blue (green) and light blue (red) points are from the analysis of the first (second) data set, with the circular and elliptical beams respectively. The first data set does not contain point or SZ sources whereas the second data set does, the input power spectra for the second data set will therefore have more power at smaller scales, especially in $T$, than that of the first data set. This leads to the smaller fractional errors seen for the analysis of the second data set, at higher multipoles in the $T$ power spectra. For the $E$ and $B$ power spectra the figure shows virtually no differences between the two different data sets, for each beam. The $B$ power spectra for the different beams are very similar, whereas there are clear differences between the $E$ power spectra formed using the different beams, as may be expected from Carretti et al. 
(2004), who showed that the $E$ power spectrum is coupled to the unpolarised sky through the beam, with the contamination peaking on the scale corresponding to the FWHM of the beam. In the case of the temperature power spectra the differences are only apparent at intermediate $\ell$-values for the case where there are no point sources, as at higher $\ell$-values the noise dominates. In contrast, the differences between the power spectra formed from the different beams using the second data set, which contained point sources, are visible for all $\ell$ values higher than $\ell\sim 70$. 5 Conclusions We have described a successful implementation of the map-making methodology developed in Challinor et al. (2002) and van Leeuwen et al. (2002). While the computational costs of our implementation are far from trivial, the ability to deconvolve any arbitrarily complex beam from the data may prove to be worth the computational expense, at least for an analysis of the Planck LFI channels. Due to their position at the edges of the focal plane, the LFI detectors are anticipated to have more elongated beams than their HFI counterparts. We have demonstrated that this method may successfully be applied in the presence of compact and point sources provided that the recovery proceeds to a value of $\ell_{max}$ at which there is little remaining unresolved signal. We have also shown how ignoring beam asymmetries affects the recovery of the Planck maps, both for temperature and polarisation. These effects are most noticeable in the Galactic plane region and near the locations of compact or point sources. We have shown that our method can produce maps without the distorting influence of the beams. This property may be especially useful for producing the best possible maps for Galactic science, where we have shown that ignoring beam asymmetries will result in larger distortions.
As the attractiveness of this method increases with increasing beam complexity, it may also be of use for proposed future experiments, which are likely to consist of many more detectors; some of these detectors will then lie further from the centre of the focal plane, resulting in more distorted beams. Additionally, the analysis of data from any future experiment would benefit from increases in computing performance likely to occur in the interim. Acknowledgements. This work was supported by STFC at the Cambridge Planck Analysis Centre, and utilised COSMOS VI, an Altix 3700 supercomputer, funded by SGI/Intel, HEFCE and PPARC. This research also used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Some of the results in this paper have been derived using the HEALPix (Górski et al., 2005) package. This paper has made use of simulated data produced by the CTP Working Group. References Armitage & Wandelt (2004) Armitage, C. & Wandelt, B. D. 2004, Phys. Rev. D, 70, 123007 Ashdown et al. (2007a) Ashdown, M. A. J., Baccigalupi, C., Balbi, A., et al. 2007a, A&A, 471, 361 Ashdown et al. (2007b) Ashdown, M. A. J., Baccigalupi, C., Balbi, A., et al. 2007b, A&A, 467, 761 Burigana et al. (1999) Burigana, C., Malaspina, M., Mandolesi, N., et al. 1999, ArXiv Astrophysics e-prints Carretti et al. (2004) Carretti, E., Cortiglioni, S., Sbarra, C., & Tascone, R. 2004, A&A, 420, 437 Challinor et al. (2002) Challinor, A. D., Mortlock, D. J., van Leeuwen, F., et al. 2002, MNRAS, 331, 994 Delabrouille (1998) Delabrouille, J. 1998, A&AS, 127, 555 Efstathiou (2005) Efstathiou, G. 2005, MNRAS, 356, 1549 Golub & van Loan (1996) Golub, G. H. & van Loan, C. F. 1996, Matrix computations, 3rd edition (London: The Johns Hopkins University Press) Górski et al. (2005) Górski, K. M., Hivon, E., Banday, A. J., et al.
2005, ApJ, 622, 759 Lewis & Challinor (2006) Lewis, A. & Challinor, A. 2006, Phys. Rep, 429, 1 O’Dea et al. (2007) O’Dea, D., Challinor, A., & Johnson, B. R. 2007, MNRAS, 292 Perotto et al. (2010) Perotto, L., Bobin, J., Plaszczynski, S., Starck, J., & Lavabre, A. 2010, A&A, 519, A4+ Planck Collaboration et al. (2011) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2011, ArXiv e-prints 1101.2022 Poutanen et al. (2006) Poutanen, T., de Gasperis, G., Hivon, E., et al. 2006, A&A, 449, 1311 Reinecke et al. (2006) Reinecke, M., Dolag, K., Hell, R., Bartelmann, M., & Enßlin, T. A. 2006, A&A, 445, 373 Sandri et al. (2010) Sandri, M., Villa, F., Bersanelli, M., et al. 2010, A&A, 520, A7+ Tauber et al. (2010) Tauber, J. A., Mandolesi, N., Puget, J., et al. 2010, A&A, 520, A1+ van Leeuwen (2007) van Leeuwen, F. 2007, Hipparcos, the New Reduction of the Raw Data, Astrophysics and Space Science Library. Vol. 350 edn. (Springer) van Leeuwen et al. (2002) van Leeuwen, F., Challinor, A. D., Mortlock, D. J., et al. 2002, MNRAS, 331, 975 Appendix A Implementation Details A.1 Scaling with number of processors This appendix describes how the method used to evaluate $R$ affects the scaling with the number of processors used in the parallel implementations of both the iterative and direct methods. The time taken per iteration of the code will be determined by the processor with the largest workload. It is therefore important that the load is balanced as equally as possible over the number of processors being used. As the code is parallelised by dividing the individual values of $\ell$ between the various processes, there must come a point at which using additional processors will no longer produce a decrease in the time required, as it becomes impossible to share the workload equally between the different processors. This situation is shown in the top panel in Figure 15, for an analysis up to $\ell_{max}=400$.
In this figure the black crosses represent the load on the processor with the maximum load, relative to the processor with the maximum load in the case when 32 processors are used. The red curve shows, for comparison, the reduction in the workload with increasing numbers of processors, assuming perfect load balancing. The bottom panel in Figure 15 shows the relationship between the maximum load and the time taken per iteration. Doubling the number of processors used halves the workload per processor, which in turn halves the time taken per iteration, up until the point at which the load balancing breaks down. The number of processors at which this occurs may be easily evaluated and is found to be equal to $\ell_{max}/2.8$.
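The saturation described above can be reproduced qualitatively with a toy model. The sketch below is illustrative only: it assumes each multipole $\ell$ carries a cost proportional to its $2\ell+1$ values of $m$ and that $\ell$-values are assigned greedily to the least-loaded processor; the actual cost model and assignment scheme of the implementation may differ.

```python
def lpt_assign(costs, nproc):
    """Greedy longest-processing-time scheduling: give each cost, largest
    first, to the currently least-loaded processor; return the maximum load."""
    loads = [0] * nproc
    for c in sorted(costs, reverse=True):
        i = loads.index(min(loads))
        loads[i] += c
    return max(loads)

lmax = 400
# Assumed cost model: multipole ell contributes 2*ell + 1 values of m.
costs = [2 * ell + 1 for ell in range(lmax + 1)]

for nproc in (32, 64, 128, 256):
    print(nproc, lpt_assign(costs, nproc))
```

Once the maximum load is pinned at the cost of the single largest $\ell$ (here $2\ell_{max}+1$), adding processors no longer reduces the time per iteration, which is the saturation behaviour seen in Figure 15.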
C*-envelopes of tensor algebras for multivariable dynamics Kenneth R. Davidson Pure Math. Dept. U. Waterloo Waterloo, ON  N2L–3G1 CANADA [email protected]  and  Jean Roydor Pure Math. Dept. U. Waterloo Waterloo, ON  N2L–3G1 CANADA [email protected] Abstract. We give a new, very concrete description of the C*-envelope of the tensor algebra associated to a multivariable dynamical system. In the surjective case, this C*-envelope is described as a crossed product by an endomorphism, and as a groupoid C*-algebra. In the non-surjective case, it is a full corner of such an algebra. We also show that when the space is compact, the C*-envelope is simple if and only if the system is minimal. Key words and phrases: multivariable dynamical system, C*-envelope, groupoid C*-algebra, crossed product by an endomorphism, minimal, simple 2000 Mathematics Subject Classification: 47L55, 47L40, 46L05, 37B20, 37B99 First author partially supported by an NSERC grant. 1. Introduction A multivariable dynamical system $(X,\sigma)$ is a locally compact Hausdorff space $X$ together with a family $\sigma=(\sigma_{1},\dots,\sigma_{n})$ of proper continuous maps from $X$ into itself. In [9], two natural universal operator algebras associated to this system were introduced. The more tractable one is the tensor algebra ${\mathcal{A}}(X,\sigma)$, which is the universal operator algebra generated by ${\mathrm{C}}_{0}(X)$ and $n$ isometries ${\mathfrak{s}}_{1},\dots,{\mathfrak{s}}_{n}$ with pairwise orthogonal ranges satisfying the covariance relations $$f{\mathfrak{s}}_{i}={\mathfrak{s}}_{i}(f\circ\sigma_{i})\quad\text{for all}\quad f\in{\mathrm{C}}_{0}(X)\text{ and }1\leq i\leq n.$$ In that paper, there is a description of the C*-envelope of ${\mathcal{A}}(X,\sigma)$ as the Cuntz–Pimsner algebra of an associated C*-correspondence. It also contains an explicit description of a norming family of boundary representations.
So in principle, a more explicit description of this C*-envelope should be available. When $n=1$, Peters [17] showed that the C*-envelope is a crossed product of a related dynamical system constructed from the original by a projective limit construction. In this paper, we show that a similar description is possible for $n\geq 2$. The C*-envelope is no longer a crossed product by an automorphism, but it is a crossed product by an endomorphism. It is also a groupoid C*-algebra of a related dynamical system. The C*-envelope $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}})$ of an operator algebra ${\mathcal{A}}$ is the unique minimal C*-algebra (up to isomorphism fixing the image of ${\mathcal{A}}$) generated by $j_{0}({\mathcal{A}})$, where $j_{0}:{\mathcal{A}}\to{\mathcal{B}}({\mathcal{H}})$ is a completely isometric isomorphism. This is characterized by the fact that if $j:{\mathcal{A}}\to{\mathcal{B}}({\mathcal{H}})$ is another completely isometric isomorphism, then there is a surjective $*$-homomorphism $q:\mathrm{C}^{*}(j({\mathcal{A}}))\twoheadrightarrow\mathrm{C}^{*}_{\text{e}}({\mathcal{A}})$ such that $qj=j_{0}$. The existence of the C*-envelope was conjectured by Arveson [3], and established in many cases. It was eventually proven by Hamana [13]. More recently, Dritschel and McCullough [12] provided a new proof. The main ingredient was the notion of a maximal dilation. A dilation $\pi$ of a representation $\rho$ is maximal if any further dilation of $\pi$ can only be accomplished by adding on a direct summand. They prove that every maximal dilation factors through the C*-envelope. In particular, if one starts with a completely isometric representation $\rho$ and constructs a maximal dilation $\pi$, then $\mathrm{C}^{*}(\pi({\mathcal{A}}))$ is the C*-envelope.
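As a finite-dimensional illustration of the compression relation $\rho(A)=P_{\mathcal{H}}\pi(A)|_{\mathcal{H}}$ behind the notion of dilation, the classical Halmos construction dilates a single contraction $A$ to a unitary $U=\begin{bmatrix}A&D_{A^{*}}\\ D_{A}&-A^{*}\end{bmatrix}$ on a doubled space, where $D_{A}=(I-A^{*}A)^{1/2}$ and $D_{A^{*}}=(I-AA^{*})^{1/2}$. This is only a one-step dilation of a single operator, not the representation dilations used in the paper, but the following sketch makes the compression concrete:

```python
import numpy as np

def psd_sqrt(M):
    """Positive square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def halmos_dilation(A):
    """Halmos unitary dilation U = [[A, D_{A*}], [D_A, -A*]] of a contraction A."""
    n = A.shape[0]
    I = np.eye(n)
    DA = psd_sqrt(I - A.conj().T @ A)   # defect operator of A
    DAs = psd_sqrt(I - A @ A.conj().T)  # defect operator of A*
    return np.block([[A, DAs], [DA, -A.conj().T]])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A /= 2 * np.linalg.norm(A, 2)  # scale so that ||A|| = 1/2 < 1
U = halmos_dilation(A)
```

Unitarity of $U$ rests on the intertwining identity $AD_{A}=D_{A^{*}}A$, and the top-left corner of $U$ is exactly $A$, i.e. $P_{\mathcal{H}}U|_{\mathcal{H}}=A$.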
Arveson’s definition of a boundary representation is equivalent to being maximal (in the sense that it is a maximal dilation of itself) and extending to an irreducible $*$-representation of the enveloping C*-algebra. Dritschel and McCullough do not produce irreducible representations, but this is not necessary to construct the C*-envelope. However these irreducible representations are the analogue of the Choquet boundary, and Arveson [4] shows that in the separable case, one can use a direct integral decomposition to show that there are sufficiently many boundary representations to construct the C*-envelope. While a sufficient family of boundary representations for ${\mathcal{A}}(X,\sigma)$ was constructed in [9], we find it convenient here to drop the irreducibility condition in order to have a larger family of representations to work with to construct the C*-envelope more explicitly. In the last section, we provide a direct proof that in the compact case, simplicity of the C*-envelope is equivalent to minimality of the dynamical system. Our proof is based on the representation of the C*-envelope as a crossed product ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$ by an endomorphism. The C*-algebra ${\mathfrak{B}}$ is an inductive limit of homogeneous C*-algebras. We provide an explicit description of the $\alpha$-invariant and $\alpha$-bi-invariant ideals of ${\mathfrak{B}}$. Then applying a result of Paschke [16] will yield simplicity. More general results of Schweizer [18] for Cuntz–Pimsner algebras of C*-correspondences could be used instead. 2. Preliminaries If $(X,\sigma)$ is a multivariable dynamical system, we form the (non-closed) covariance algebra ${\mathcal{A}}_{0}(X,\sigma)$ as the space of polynomials in $n$ indeterminates ${\mathfrak{s}}_{1},\dots,{\mathfrak{s}}_{n}$ with coefficients in ${\mathrm{C}}_{0}(X)$, where multiplication is determined by the covariance relations $f{\mathfrak{s}}_{i}={\mathfrak{s}}_{i}(f\circ\sigma_{i})$.
Let $\mathbb{F}_{n}^{+}$ be the free semigroup of all words in the alphabet $\{1,\dots,n\}$. If $w=i_{1}i_{2}\dots i_{k}$, we write $\sigma_{w}$ for the map $\sigma_{i_{1}}\circ\sigma_{i_{2}}\circ\dots\circ\sigma_{i_{k}}$; and we write ${\mathfrak{s}}_{w}={\mathfrak{s}}_{i_{1}}\dots{\mathfrak{s}}_{i_{k}}$. Then a typical element of ${\mathcal{A}}_{0}(X,\sigma)$ is a finite sum $\sum_{w\in\mathbb{F}_{n}^{+}}{\mathfrak{s}}_{w}f_{w}$ where the $f_{w}$ are arbitrary elements of ${\mathrm{C}}_{0}(X)$. The multiplication rule is just $({\mathfrak{s}}_{v}f)({\mathfrak{s}}_{w}g)={\mathfrak{s}}_{vw}(f\circ\sigma_{w})g$. A row contractive representation $\rho$ of ${\mathcal{A}}_{0}(X,\sigma)$ is a homomorphism into ${\mathcal{B}}({\mathcal{H}})$ such that the restriction to ${\mathrm{C}}_{0}(X)$ is a $*$-homomorphism and $\big{\|}\big{[}\rho({\mathfrak{s}}_{1})\ \dots\ \rho({\mathfrak{s}}_{n})\big{]}\big{\|}\leq 1$. The tensor algebra ${\mathcal{A}}(X,\sigma)$ is the universal operator algebra with these as its completely contractive representations. One can define the norm by taking a supremum over all such representations into a fixed infinite dimensional Hilbert space sufficiently large to admit a faithful representation of ${\mathrm{C}}_{0}(X)$. It is shown in [9] that every row contractive representation dilates to a row isometric representation. So it follows that the ${\mathfrak{s}}_{i}$ are isometries with pairwise orthogonal ranges. If $\rho$ is a (completely contractive) representation of an operator algebra ${\mathcal{A}}$ on a Hilbert space ${\mathcal{H}}$, we say that a representation $\pi$ of ${\mathcal{A}}$ on a Hilbert space ${\mathcal{K}}$ containing ${\mathcal{H}}$ is a dilation of $\rho$ if ${\mathcal{K}}$ decomposes as ${\mathcal{K}}={\mathcal{H}}_{-}\oplus{\mathcal{H}}\oplus{\mathcal{H}}_{+}$ so that $\pi(A)$ is upper triangular with respect to this decomposition, and $\rho(A)=P_{\mathcal{H}}\pi(A)|_{\mathcal{H}}$ for all $A\in{\mathcal{A}}$.
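The multiplication rule $({\mathfrak{s}}_{v}f)({\mathfrak{s}}_{w}g)={\mathfrak{s}}_{vw}(f\circ\sigma_{w})g$ can be exercised directly on a finite toy system. In the sketch below, elements of ${\mathcal{A}}_{0}(X,\sigma)$ are stored as dictionaries mapping words to functions on a three-point set; the maps and coefficient values are hypothetical, chosen only for illustration:

```python
def sigma_w(word, sigmas, x):
    """sigma_w = sigma_{i1} o sigma_{i2} o ... o sigma_{ik}: the last letter acts first."""
    for i in reversed(word):
        x = sigmas[i](x)
    return x

def multiply(a, b, sigmas, X):
    """Product in A_0(X, sigma): (s_v f)(s_w g) = s_{vw} (f o sigma_w) g.
    Elements are dicts {word (tuple): {x: value}}."""
    out = {}
    for v, f in a.items():
        for w, g in b.items():
            acc = out.setdefault(v + w, {x: 0 for x in X})
            for x in X:
                acc[x] += f[sigma_w(w, sigmas, x)] * g[x]
    return out

# Hypothetical three-point system with two maps (illustration only).
X = (0, 1, 2)
sigmas = {1: lambda x: (x + 1) % 3, 2: lambda x: 0}
a = {(1,): {0: 1, 1: 2, 2: 3}, (): {0: 1, 1: 0, 2: 1}}  # s_1 f1 + f2
b = {(2,): {0: 2, 1: 1, 2: 0}}                          # s_2 g
```

Since $\sigma_{vw}=\sigma_{v}\circ\sigma_{w}$ for concatenated words, this multiplication is associative, which can be checked numerically on small elements.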
As mentioned in the introduction, if the only dilations of $\rho$ have the form $\pi=\rho\oplus\pi^{\prime}$, then we say that $\rho$ is maximal. An easy way to obtain a representation of ${\mathcal{A}}(X,\sigma)$ is to pick a point $x\in X$ and define the orbit representation $\lambda_{x}$. This is defined on the Fock space $\ell^{2}(\mathbb{F}_{n}^{+})$, which has orthonormal basis $\{\xi_{w}:w\in\mathbb{F}_{n}^{+}\}$. $\mathbb{F}_{n}^{+}$ acts on this space by the left regular action $L_{v}\xi_{w}=\xi_{vw}$. Define $$\lambda_{x}(f)=\operatorname{diag}(f(\sigma_{w}(x)))\quad\text{and}\quad\lambda_{x}({\mathfrak{s}}_{i})=L_{i}\text{ for }1\leq i\leq n.$$ In general this is not maximal. Indeed, this is maximal if and only if $x$ is not in the range of any map $\sigma_{i}$. The representation $\Lambda_{X}=\bigoplus_{x\in X}\lambda_{x}$ is called the full Fock representation. In [9], it is shown that the full Fock representation is completely isometric. To obtain maximal representations, it is generally necessary to use an inductive limit construction. An infinite tail representation is given by an infinite sequence ${\mathbf{i}}=i_{0}i_{1}i_{2}\dots$ in the alphabet $\{1,\dots,n\}$ and a corresponding sequence of points ${\mathbf{x}}=\{x_{s}\in X:s\geq 0\}$ such that $\sigma_{i_{s}}(x_{s+1})=x_{s}$. We will call such a pair $({\mathbf{i}},{\mathbf{x}})$ an infinite tail for $(X,\sigma)$. For each $s\geq 0$, let ${\mathcal{H}}_{s}$ denote a copy of Fock space with basis $\{\xi^{s}_{w}:w\in\mathbb{F}_{n}^{+}\}$. Identify ${\mathcal{H}}_{s}$ with a subspace of ${\mathcal{H}}_{s+1}$ via $R_{i_{s}}$, where $R_{i_{s}}\xi^{s}_{w}=\xi^{s+1}_{wi_{s}}$. Consider the orbit representations $\lambda_{x_{s}}$ to be representations on ${\mathcal{H}}_{s}$. It is easy to see that $\lambda_{x_{s+1}}|_{{\mathcal{H}}_{s}}=\lambda_{x_{s}}$ for $s\geq 0$.
So we may define $\lambda_{{\mathbf{i}},{\mathbf{x}}}$ to be the inductive limit of the representations $\lambda_{x_{s}}$ on ${\mathcal{H}}=\overline{\bigcup_{s\geq 0}{\mathcal{H}}_{s}}$. This representation is always maximal. In [9], one required that the maximal representations also be irreducible. This was accomplished by insisting that the orbits consist of distinct points. For our purposes, it is convenient to ignore that requirement. One still obtains a family of maximal representations. Thus we have found two types of maximal representations. From these, we form two large representations of ${\mathcal{A}}(X,\sigma)$: $$\lambda_{X,1}:=\bigoplus_{x\in U}\lambda_{x}\quad\text{where }U=X\setminus\bigcup_{i=1}^{n}\sigma_{i}(X),\qquad\lambda_{X,2}:=\bigoplus\lambda_{{\mathbf{i}},{\mathbf{x}}}\quad\text{(summing over all possible infinite tails)},\qquad\lambda_{X}:=\lambda_{X,1}\oplus\lambda_{X,2}.$$ We can state a result which follows from [9, Corollary 2.8]. This is the case because $\Lambda_{X}$ is completely isometric, and $\lambda_{X}$ is a maximal dilation of $\Lambda_{X}$. Lemma 2.1. Let $(X,\sigma)$ be a multivariable dynamical system. Then $\lambda_{X}$ is a completely isometric maximal representation of ${\mathcal{A}}(X,\sigma)$. Consequently, $$\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))=\mathrm{C}^{*}(\lambda_{X}({\mathcal{A}}(X,\sigma))).$$ 3. The surjective case In this section, we suppose that $(X,\sigma)$ is surjective in the sense that $X=\bigcup_{i=1}^{n}\sigma_{i}(X)$. When $n=1$, Peters [17] used a projective limit construction on $(X,\sigma)$ to obtain a new space $\tilde{X}$ and a homeomorphism $\tilde{\sigma}$, together with a projection $p:\tilde{X}\to X$ such that $p\tilde{\sigma}=\sigma p$. We first define an analogue of this construction. Let ${\mathbf{n}}=\{1,\dots,n\}$ with the discrete topology.
Set $Y={\mathbf{n}}^{\mathbb{N}}\times X^{\mathbb{N}}$ with the product topology. Let $\tilde{X}$ be the subset of $Y$ consisting of all infinite tails for $(X,\sigma)$, namely $$\tilde{X}=\{({\mathbf{i}},{\mathbf{x}})\in Y:\sigma_{i_{k}}(x_{k+1})=x_{k}\text{ for }k\geq 0\}.$$ The continuity of the maps ensures that $\tilde{X}$ is closed in $Y$. If ${\mathbf{i}}=(i_{0},i_{1},i_{2},\dots)$, let $i{\mathbf{i}}=(i,i_{0},i_{1},\dots)$. Likewise, if ${\mathbf{x}}=(x_{0},x_{1},x_{2},\dots)$, let $(x,{\mathbf{x}})=(x,x_{0},x_{1},\dots)$. For $i=1,\dots,n$, we define $\tilde{\sigma}_{i}:\tilde{X}\to\tilde{X}$ by $$\tilde{\sigma}_{i}({\mathbf{i}},{\mathbf{x}})=\big{(}i{\mathbf{i}},(\sigma_{i}(x_{0}),{\mathbf{x}})\big{)}=\big{(}(i,i_{0},i_{1},\dots),(\sigma_{i}(x_{0}),x_{0},x_{1},\dots)\big{)}.$$ It is easy to see that $\tilde{\sigma}_{i}$ is a homeomorphism of $\tilde{X}$ onto $\tilde{X}_{i}$, where $$\tilde{X}_{i}=\{({\mathbf{i}},{\mathbf{x}})\in\tilde{X}:i_{0}=i\}.$$ Observe that $\tilde{X}$ is the disjoint union of the sets $\tilde{X}_{i}$ for $1\leq i\leq n$. Thus we have constructed a new multivariable dynamical system $(\tilde{X},\tilde{\sigma})$, which we call the covering system of $(X,\sigma)$. Also define a projection $p:\tilde{X}\to X$ by $p({\mathbf{i}},{\mathbf{x}})=x_{0}$. Given any $x_{0}\in X$, surjectivity ensures that there is at least one choice of $i_{0}$ and $x_{1}\in X$ so that $\sigma_{i_{0}}(x_{1})=x_{0}$. Recursively, one can construct a point $({\mathbf{i}},{\mathbf{x}})\in\tilde{X}$ so that $p(({\mathbf{i}},{\mathbf{x}}))=x_{0}$. So $p$ is surjective. It is easy to see that $\sigma_{i}p=p\tilde{\sigma}_{i}$ for $1\leq i\leq n$. Lemma 3.1. $\tilde{X}$ is (locally) compact when $X$ is (locally) compact. The projection $p:\tilde{X}\to X$ is a proper map. If $({\mathbf{i}},{\mathbf{x}})\in\tilde{X}$, then there is a unique infinite tail in $\tilde{X}$ beginning at this point. Thus $\tilde{\tilde{X}}=\tilde{X}$. Proof.
If $X$ is compact, then $\tilde{X}$ is a closed subset of the compact space $Y={\mathbf{n}}^{\mathbb{N}}\times X^{\mathbb{N}}$, and thus it is compact by Tychonoff’s theorem. However, if $X$ is not compact, then $Y$ will not be locally compact, and we need to be more careful. Let $X_{\infty}$ denote the one point compactification of $X$ obtained by adding a point $\infty$. Since each $\sigma_{i}$ is proper, it extends by continuity to a map $\overline{\sigma}_{i}$ on $X_{\infty}$ by setting $\overline{\sigma}_{i}(\infty)=\infty$. Form the compact space $\tilde{X}_{\infty}$. Then the only points $({\mathbf{i}},{\mathbf{x}})$ in $\tilde{X}_{\infty}$ for which any coordinate $x_{j}=\infty$ are the points $({\mathbf{i}},\boldsymbol{\infty})$ where $\boldsymbol{\infty}=(\infty,\infty,\dots)$. This is a compact set, and $\tilde{X}=\tilde{X}_{\infty}\setminus{\mathbf{n}}^{\mathbb{N}}\times\{\boldsymbol{\infty}\}$. Since compact Hausdorff spaces are normal, it follows that $\tilde{X}$ is locally compact. Let $K$ be a compact subset of $X$. Since each $\sigma_{i}$ is proper, the sets $K_{k}=\bigcup_{|w|=k}\sigma_{w}^{-1}(K)$ are compact for $k\geq 0$. Therefore $$p^{-1}(K)\subset{\mathbf{n}}^{\mathbb{N}}\times\prod_{k\geq 0}K_{k},$$ which is compact. Therefore $p$ is proper. If $({\mathbf{i}},{\mathbf{x}})\in\tilde{X}$, then the choice of the infinite tail beginning with this point is uniquely determined. This is because the maps $\tilde{\sigma}_{i}$ have disjoint ranges. Indeed, the sequence of maps is just given by ${\mathbf{i}}$ itself, and the points are $$\tilde{x}_{k}=\big{(}(i_{k},i_{k+1},\dots),(x_{k},x_{k+1},\dots)\big{)}.$$ Therefore the covering space of $\tilde{X}$ is canonically homeomorphic to $\tilde{X}$ itself via the projection map $\tilde{p}$. ∎ The new system has a number of advantages over the original.
In particular, it is possible to define an inverse map $\tau$ by $$\tau|_{\tilde{X}_{i}}=\tilde{\sigma}_{i}^{-1}\quad\text{for}\quad 1\leq i\leq n.$$ Clearly $\tau$ is everywhere defined on $\tilde{X}$ and is a local homeomorphism. For $w\in\mathbb{F}_{n}^{+}$ with $|w|\geq 1$, let $\tilde{X}_{w}=\tilde{\sigma}_{w}(\tilde{X})$. Observe that $\tilde{X}$ is the disjoint union of the clopen sets $\{\tilde{X}_{w}:|w|=k\}$ for each $k\geq 1$. Let $\chi_{w}$ denote the characteristic function of $\tilde{X}_{w}$. This does not lie in ${\mathrm{C}}_{0}(\tilde{X})$ if $\tilde{X}$ (and hence $X$) is not compact. But it does lie in the multiplier algebra. Also let $p_{k}:\tilde{X}\to X$ be given by $p_{k}({\mathbf{i}},{\mathbf{x}})=x_{k}$. Observe that $p_{k}=p\circ\tau^{k}$. Lemma 3.2. Let the generators for ${\mathcal{A}}(\tilde{X},\tilde{\sigma})$ be ${\mathfrak{t}}_{1},\dots,{\mathfrak{t}}_{n}$. For any $f\in{\mathrm{C}}_{0}(\tilde{X})$ and $w\in\mathbb{F}_{n}^{+}$, ${\mathfrak{t}}_{w}f{\mathfrak{t}}_{w}^{*}=\chi_{w}(f\circ\tau^{|w|})$. Proof. Let $k=|w|$. $$\chi_{w}(f\circ\tau^{k}){\mathfrak{t}}_{w}={\mathfrak{t}}_{w}(\chi_{w}\circ\tilde{\sigma}_{w})(f\circ\tau^{k}\circ\tilde{\sigma}_{w})={\mathfrak{t}}_{w}(1)(f\circ{\operatorname{id}})={\mathfrak{t}}_{w}f.$$ Hence ${\mathfrak{t}}_{w}f{\mathfrak{t}}_{w}^{*}=\chi_{w}(f\circ\tau^{k}){\mathfrak{t}}_{w}{\mathfrak{t}}_{w}^{*}$. In the compact case, we can set $f=1$ and see that ${\mathfrak{t}}_{w}{\mathfrak{t}}_{w}^{*}=\chi_{w}{\mathfrak{t}}_{w}{\mathfrak{t}}_{w}^{*}\leq\chi_{w}$. In general, this makes sense in the multiplier algebra.
Thus we have $$1=\sum_{|w|=k}{\mathfrak{t}}_{w}{\mathfrak{t}}_{w}^{*}\leq\sum_{|w|=k}\chi_{w}=1.$$ Hence ${\mathfrak{t}}_{w}{\mathfrak{t}}_{w}^{*}=\chi_{w}$. So the identity ${\mathfrak{t}}_{w}f{\mathfrak{t}}_{w}^{*}=\chi_{w}(f\circ\tau^{|w|})$ follows. ∎ When $n=1$, this consists of a single homeomorphism of $\tilde{X}$, which allows Peters to construct a C*-crossed product. When $n\geq 2$, the situation is still much improved. The first goal is to show that the C*-envelope is determined by this new system. Theorem 3.3. Let $(X,\sigma)$ be a surjective multivariable dynamical system, and let $(\tilde{X},\tilde{\sigma})$ be the associated covering system. Let ${\mathcal{A}}(X,\sigma)$ and ${\mathcal{A}}(\tilde{X},\tilde{\sigma})$ be the associated tensor algebras. Then $($i$)$ ${\mathcal{A}}(X,\sigma)$ can be embedded into ${\mathcal{A}}(\tilde{X},\tilde{\sigma})$ via a completely isometric homomorphism. $($ii$)$ $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))=\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(\tilde{X},\tilde{\sigma}))$. Proof. Let the generators of ${\mathcal{A}}(\tilde{X},\tilde{\sigma})$ be ${\mathfrak{t}}_{1},\dots,{\mathfrak{t}}_{n}$. Embed $C_{0}(X)$ into $C_{0}(\tilde{X})$ via $\rho(f)=f\circ p$. This is a $*$-monomorphism because $p$ is surjective.
Then we can embed the covariance algebra ${\mathcal{A}}_{0}(X,\sigma)$ into ${\mathcal{A}}(\tilde{X},\tilde{\sigma})$ by defining $\rho({\mathfrak{s}}_{i}f)={\mathfrak{t}}_{i}\rho(f)$ and extending to the homomorphism $$\rho\big{(}\sum{\mathfrak{s}}_{w}f_{w}\big{)}=\sum{\mathfrak{t}}_{w}(f_{w}\circ p).$$ The important observation is that $\rho$ satisfies the covariance relations $$\rho(f)\rho({\mathfrak{s}}_{i})=(f\circ p){\mathfrak{t}}_{i}={\mathfrak{t}}_{i}(f\circ p\circ\tilde{\sigma}_{i})={\mathfrak{t}}_{i}(f\circ\sigma_{i}\circ p)=\rho({\mathfrak{s}}_{i})\rho(f\circ\sigma_{i}).$$ Therefore by the universal property of the tensor algebra, this extends to a completely contractive representation of ${\mathcal{A}}(X,\sigma)$. Let us verify that this map is a complete isometry. We will use the fact that the full Fock representation is completely isometric. For each $x\in X$, choose $({\mathbf{i}},{\mathbf{x}})\in\tilde{X}$ so that $p(({\mathbf{i}},{\mathbf{x}}))=x$. Observe that $\lambda_{({\mathbf{i}},{\mathbf{x}})}\rho=\lambda_{x}$. Indeed, both representations send ${\mathfrak{s}}_{i}$ to the left shifts $L_{i}$, so it suffices to check what happens to ${\mathrm{C}}_{0}(X)$.
If $w=j_{k}j_{k-1}\dots j_{1}$, then $$p\tilde{\sigma}_{w}({\mathbf{i}},{\mathbf{x}})=\sigma_{w}p({\mathbf{i}},{\mathbf{x}})=\sigma_{w}(x_{0}).$$ So for any $f\in{\mathrm{C}}_{0}(X)$, $$\lambda_{({\mathbf{i}},{\mathbf{x}})}\rho(f)\xi_{w}=(f\circ p)(\tilde{\sigma}_{w}({\mathbf{i}},{\mathbf{x}}))\xi_{w}=f(\sigma_{w}(x_{0}))\xi_{w}=\lambda_{x}(f)\xi_{w}.$$ Whence it follows for all elements $A\in{\mathfrak{M}}_{m}({\mathcal{A}}_{0}(X,\sigma))$ that $$\|A\|=\big{\|}({\operatorname{id}}_{{\mathfrak{M}}_{m}}\otimes\Lambda_{X})(A)\big{\|}=\big{\|}({\operatorname{id}}_{{\mathfrak{M}}_{m}}\otimes\Lambda_{\tilde{X}})(({\operatorname{id}}_{{\mathfrak{M}}_{m}}\otimes\rho)(A))\big{\|}=\|({\operatorname{id}}_{{\mathfrak{M}}_{m}}\otimes\rho)(A)\|.$$ So this embedding is a complete isometry. To prove (ii), we use the representations $\lambda_{X}$ and $\lambda_{\tilde{X}}$. As these systems are surjective, we only need to consider infinite tail representations. If $({\mathbf{i}},{\mathbf{x}})\in\tilde{X}$, then the choice of the infinite tail beginning with this point is uniquely determined by Lemma 3.1. This representation $\lambda_{{\mathbf{i}},(\tilde{x}_{0},\tilde{x}_{1},\dots)}$ will be denoted by $\tau_{({\mathbf{i}},{\mathbf{x}})}$. Now if $x\in X$, the infinite tails beginning at $x$ are precisely the points $({\mathbf{i}},{\mathbf{x}})\in\tilde{X}$ such that $p(({\mathbf{i}},{\mathbf{x}}))=x$. Arguing exactly as in the previous paragraph, we see that $\tau_{({\mathbf{i}},{\mathbf{x}})}\rho=\lambda_{({\mathbf{i}},{\mathbf{x}})}$. By Lemma 2.1, we have $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(\tilde{X},\tilde{\sigma}))=\mathrm{C}^{*}(\lambda_{\tilde{X}}({\mathcal{A}}(\tilde{X},\tilde{\sigma})))$. Moreover the previous paragraph shows that $\lambda_{\tilde{X}}\rho\simeq\lambda_{X}$ yields a maximal completely isometric representation of ${\mathcal{A}}(X,\sigma)$ into this C*-algebra.
So $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is the C*-algebra generated by its image. So it suffices to demonstrate that this is the whole algebra. It is convenient to consider $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(\tilde{X},\tilde{\sigma}))$ as generated by ${\mathrm{C}}_{0}(\tilde{X})$ and $\{{\mathfrak{t}}_{i}f:1\leq i\leq n,\ f\in{\mathrm{C}}_{0}(\tilde{X})\}$. Observe that ${\mathfrak{t}}_{1},\dots,{\mathfrak{t}}_{n}$ are Cuntz isometries, meaning that they have orthogonal ranges which sum to the whole space. This is evident in each infinite tail representation. Moreover, one can see that ${\mathfrak{t}}_{i}{\mathfrak{t}}_{i}^{*}=\chi_{i}$. In the non-unital case, this makes sense in the multiplier algebra of $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(\tilde{X},\tilde{\sigma}))$, which contains ${\mathfrak{t}}_{1},\dots,{\mathfrak{t}}_{n}$ and all bounded continuous functions on $\tilde{X}$. This follows from Lemma 3.2 because $${\mathfrak{t}}_{i}{\mathfrak{t}}_{i}^{*}={\mathfrak{t}}_{i}1{\mathfrak{t}}_{i}^{*}=\chi_{i}{\mathfrak{t}}_{i}{\mathfrak{t}}_{i}^{*}\chi_{i}\leq\chi_{i}.$$ Since $1=\sum_{i}{\mathfrak{t}}_{i}{\mathfrak{t}}_{i}^{*}\leq\sum_{i}\chi_{i}=1$, we have equality. The subalgebra $\mathrm{C}^{*}(\rho({\mathcal{A}}(X,\sigma)))$ is generated by the algebra of functions $\{f\circ p:f\in{\mathrm{C}}_{0}(X)\}$ and $\{{\mathfrak{t}}_{i}(f\circ p):1\leq i\leq n,\ f\in{\mathrm{C}}_{0}(X)\}$. It suffices to show that all of ${\mathrm{C}}_{0}(\tilde{X})$ is in the smaller algebra. To this end, it is enough to show that the algebra $\bigcup_{w\in\mathbb{F}_{n}^{+}}{\mathfrak{t}}_{w}\rho({\mathrm{C}}_{0}(X)){\mathfrak{t}}_{w}^{*}$ is dense in ${\mathrm{C}}_{0}(\tilde{X})$.
Now by Lemma 3.2, for $|w|=k\geq 1$, $${\mathfrak{t}}_{w}(f\circ p){\mathfrak{t}}_{w}^{*}=\chi_{w}(f\circ p\circ\tau^{k}){\mathfrak{t}}_{w}{\mathfrak{t}}_{w}^{*}=\chi_{w}(f\circ p_{k}).$$ By the Stone–Weierstrass Theorem, it suffices to show that this subalgebra separates points and does not vanish anywhere. The latter is clear. If $({\mathbf{i}},{\mathbf{x}})\neq({\mathbf{j}},{\mathbf{y}})$, then either ${\mathbf{i}}\neq{\mathbf{j}}$ or for some $k\geq 0$, $x_{k}\neq y_{k}$. In the former case, there is a $k\geq 1$ so that the initial segment of ${\mathbf{i}}$ is a word $w$ which differs from the initial segment of ${\mathbf{j}}$. So choose $f\in{\mathrm{C}}_{0}(X)$ so that $f(x_{k})\neq 0$, and if $y_{k}\neq x_{k}$, make $f(y_{k})=0$. Then in either case, $$\big({\mathfrak{t}}_{w}(f\circ p){\mathfrak{t}}_{w}^{*}\big)({\mathbf{i}},{\mathbf{x}})=\chi_{w}({\mathbf{i}},{\mathbf{x}})f(x_{k})\neq 0\quad\text{and}\quad\big({\mathfrak{t}}_{w}(f\circ p){\mathfrak{t}}_{w}^{*}\big)({\mathbf{j}},{\mathbf{y}})=\chi_{w}({\mathbf{j}},{\mathbf{y}})f(y_{k})=0.$$ It follows that $\mathrm{C}^{*}(\rho({\mathcal{A}}(X,\sigma)))=\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(\tilde{X},\tilde{\sigma}))$. ∎ Remark 3.4. This proof and the various algebraic relations imply that $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is the closed span of $$\{{\mathfrak{t}}_{v}f{\mathfrak{t}}_{w}^{*}:v,w\in\mathbb{F}_{n}^{+},\ f\in{\mathrm{C}}_{0}(\tilde{X})\}.$$ For instance, suppose $\tilde{X}$ is compact.
When $\operatorname{span}\{{\mathfrak{t}}_{w}{\mathfrak{t}}_{w}^{*}:w\in\mathbb{F}_{n}^{+}\}$ is dense in ${\mathrm{C}}(\tilde{X})$ (or equivalently, when the characteristic functions of the sets $\tilde{X}_{w}$ for $w\in\mathbb{F}_{n}^{+}$ separate the points of $\tilde{X}$), then $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is isomorphic to the Cuntz algebra $\mathcal{O}_{n}$. Example 3.5. Consider $X=\{1,\dots,n\}^{\mathbb{N}}$ and for $1\leq i\leq n$, set $$\sigma_{i}((x_{0},x_{1},x_{2},\dots))=(i,x_{0},x_{1},x_{2},\dots).$$ Obviously $X=\tilde{X}$ and the characteristic functions of the $\tilde{X}_{w}$’s separate the points of $\tilde{X}$. So $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))=\mathcal{O}_{n}$. As a consequence of this theorem, we are able to describe the C*-algebra $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ as a groupoid C*-algebra. Following [1] or [10], we denote by $\mathrm{C}^{*}(\tilde{X},\tau)$ the groupoid C*-algebra associated to the local homeomorphism $\tau$. The route to the proof is via Cuntz–Pimsner algebras of the associated C*-correspondences, which are shown to be isomorphic. Corollary 3.6. Let $(X,\sigma)$ be a surjective multivariable dynamical system. Then $$\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))\simeq\mathrm{C}^{*}(\tilde{X},\tau).$$ Proof. Deaconu, Kumjian and Muhly [11] prove that $\mathrm{C}^{*}(\tilde{X},\tau)$ is the Cuntz–Pimsner C*-algebra associated to the C*-correspondence $E=C_{0}(\tilde{X})$ endowed with the $C_{0}(\tilde{X})$-valued inner product $$\langle\xi,\eta\rangle(x)=\sum_{\tau(y)=x}\overline{\xi(y)}\eta(y)=\sum_{i=1}^{n}\overline{\xi(\tilde{\sigma}_{i}(x))}\eta(\tilde{\sigma}_{i}(x))$$ for $\eta,\xi\in E$ and $x\in\tilde{X}$. The left and right actions of ${\mathrm{C}}_{0}(\tilde{X})$ are given by $$f\cdot\xi(x)=f(x)\xi(x)\quad\text{and}\quad\xi\cdot f(x)=\xi(x)f(\tau(x))$$ for $\xi\in E$ and $f\in C_{0}(\tilde{X})$.
On the other hand, in [9] it is shown that $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(\tilde{X},\tilde{\sigma}))$ is the Cuntz–Pimsner algebra associated to the C*-correspondence $F=C_{0}(\tilde{X}\times n)$ over $C_{0}(\tilde{X})$ in the following way: the $C_{0}(\tilde{X})$-valued inner product is $$\langle\xi,\eta\rangle(x)=\sum_{i=1}^{n}\overline{\xi(x,i)}\eta(x,i)$$ and the left and right actions of ${\mathrm{C}}_{0}(\tilde{X})$ are $$f\cdot\xi(x,i)=f(\tilde{\sigma}_{i}(x))\xi(x,i)\quad\text{and}\quad\xi\cdot f(x,i)=\xi(x,i)f(x)$$ for $\eta,\xi\in F$, $f\in C_{0}(\tilde{X})$ and $x\in\tilde{X}$. To prove that these two Cuntz–Pimsner algebras are $*$-isomorphic, we will show that the C*-correspondences $E$ and $F$ are unitarily equivalent, i.e., that there is a $C_{0}(\tilde{X})$-bimodule map from $E$ onto $F$ which preserves the inner products. Define $h:\tilde{X}\times n\to\tilde{X}$ by $h((x,i))=\tilde{\sigma}_{i}(x)$. Then consider the map $U$ from ${\mathrm{C}}_{0}(\tilde{X})$ to ${\mathrm{C}}_{0}(\tilde{X}\times n)$ given by $U\xi=\xi\circ h$. It is easy to verify that $U$ is a $C_{0}(\tilde{X})$-bimodule map from $E$ onto $F$ which preserves the inner products. ∎

Example 3.7. To illustrate this corollary, let’s have another look at Example 3.5. In this example, the local homeomorphism $\tau:\tilde{X}\to\tilde{X}$ is just the left shift $$\tau((x_{0},x_{1},x_{2},\dots))=(x_{1},x_{2},\dots).$$ By [10, Example 1], the associated groupoid C*-algebra $\mathrm{C}^{*}(\tilde{X},\tau)$ is $\mathcal{O}_{n}$.

The next step is to describe the C*-envelope of ${\mathcal{A}}(X,\sigma)$ as the crossed product ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$ of a C*-algebra ${\mathfrak{B}}$ by a single endomorphism $\alpha$. This construction was introduced by Cuntz [7] when he described his algebras $\mathcal{O}_{n}$ as crossed products of UHF algebras by endomorphisms. The construction applies more generally (see [8], and [19] for the non-unital case).
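For completeness, here is the inner-product computation left to the reader in the proof of Corollary 3.6 (the subscripts indicate which of the two inner products is meant). Writing $(U\xi)(x,i)=\xi(\tilde{\sigma}_{i}(x))$, one checks
$$\langle U\xi,U\eta\rangle_{F}(x)=\sum_{i=1}^{n}\overline{\xi(\tilde{\sigma}_{i}(x))}\,\eta(\tilde{\sigma}_{i}(x))=\langle\xi,\eta\rangle_{E}(x),$$
and the bimodule properties follow in the same way from the identity $\tau\circ\tilde{\sigma}_{i}=\operatorname{id}_{\tilde{X}}$.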
We recall how the crossed product by an endomorphism is defined. Let ${\mathfrak{B}}$ be a C*-algebra and let $\alpha$ be an injective $*$-homomorphism of ${\mathfrak{B}}$ into itself. In the unital case, there exists a unique C*-algebra ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$ generated by ${\mathfrak{B}}$ and an isometry $S$ such that $$SbS^{*}=\alpha(b)\quad\text{for all}\quad b\in{\mathfrak{B}}$$ and satisfying the universal property: for any $*$-homomorphism $\pi$ of ${\mathfrak{B}}$ into ${\mathcal{B}}({\mathcal{H}})$ and any isometry $T\in{\mathcal{B}}({\mathcal{H}})$ such that $T\pi(b)T^{*}=\pi(\alpha(b))$, there is a $*$-homomorphism $\tilde{\pi}:{\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}\to{\mathcal{B}}({\mathcal{H}})$ extending $\pi$ such that $\tilde{\pi}(S)=T$. In the non-unital case, the isometry lives in the multiplier algebra: ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$ is defined as the universal algebra generated by ${\mathfrak{B}}$ and $\{Sb:b\in{\mathfrak{B}}\}$, and the universal property is that the map $\pi$ above extends to $\tilde{\pi}$ satisfying $\tilde{\pi}(Sb)=T\pi(b)$ for all $b\in{\mathfrak{B}}$. As usual, this crossed product has a family of gauge automorphisms $\gamma_{z}$ for $z\in{\mathbb{T}}$ determined by $$\gamma_{z}(b)=b\text{ for }b\in{\mathfrak{B}}\quad\text{and}\quad\gamma_{z}(S)=zS.$$ A standard argument shows that integration over ${\mathbb{T}}$ yields a faithful conditional expectation $\Gamma(A)=\frac{1}{2\pi}\int_{0}^{2\pi}\gamma_{e^{i\theta}}(A)\,d\theta$ from ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$ onto ${\mathfrak{B}}$. We turn to the definition of ${\mathfrak{B}}$ in our setting.
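To see concretely how the expectation $\Gamma$ just defined acts, note that a monomial $S^{m}b$ with $m\geq 1$ and $b\in{\mathfrak{B}}$ is scaled by $z^{m}$ under $\gamma_{z}$, so
$$\Gamma(S^{m}b)=\Big(\frac{1}{2\pi}\int_{0}^{2\pi}e^{im\theta}\,d\theta\Big)S^{m}b=0,$$
while $\Gamma(b)=b$ for $b\in{\mathfrak{B}}$. Thus $\Gamma$ annihilates every monomial of non-zero degree in $S$ and fixes ${\mathfrak{B}}$, which is exactly what makes it a conditional expectation onto ${\mathfrak{B}}$.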
We know from Theorem 3.3 and the Cuntz and covariance relations of ${\mathfrak{A}}:=\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ that this algebra is the closed span of words of the form $$\{{\mathfrak{t}}_{v}f{\mathfrak{t}}_{w}^{*}:v,w\in\mathbb{F}_{n}^{+},\ f\in{\mathrm{C}}_{0}(\tilde{X})\}.$$ The universal property of the C*-envelope also guarantees that for each $z\in{\mathbb{T}}$, there is a $*$-automorphism $\psi_{z}$ of ${\mathfrak{A}}$ determined by $\psi_{z}(f)=f$ for $f\in{\mathrm{C}}_{0}(\tilde{X})$ and $\psi_{z}({\mathfrak{t}}_{i})=z{\mathfrak{t}}_{i}$ for $1\leq i\leq n$. We define an expectation $\Psi$ of ${\mathfrak{A}}$ into itself by integration: $\Psi(A)=\frac{1}{2\pi}\int_{0}^{2\pi}\psi_{e^{i\theta}}(A)\,d\theta$. It is easy to see that $$\Psi({\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*})=\begin{cases}{\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}&\quad\text{if}\quad|u|=|v|,\\ 0&\quad\text{otherwise.}\end{cases}$$ Define $${\mathfrak{B}}=\operatorname{Ran}(\Psi)=\overline{\operatorname{span}\{{\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}:u,v\in\mathbb{F}_{n}^{+},\ |u|=|v|,\ f\in{\mathrm{C}}_{0}(\tilde{X})\}}.$$ For $k\geq 0$, define $${\mathfrak{B}}_{k}=\overline{\operatorname{span}\{{\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}:u,v\in\mathbb{F}_{n}^{+},\ |u|=|v|=k,\ f\in{\mathrm{C}}_{0}(\tilde{X})\}}.$$ Since ${\mathfrak{t}}_{v}^{*}{\mathfrak{t}}_{u}=\delta_{u,v}$ when $|u|=|v|$, it is evident that ${\mathfrak{B}}_{k}$ is a C*-subalgebra of $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ which is $*$-isomorphic to ${\mathfrak{M}}_{n^{k}}(C_{0}(\tilde{X}))$ via the map which sends ${\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}$ to $f\otimes E_{u,v}$, where $\{E_{u,v}:|u|=|v|=k\}$ denote the matrix units of ${\mathfrak{M}}_{n^{k}}$.
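The identification of ${\mathfrak{B}}_{k}$ with ${\mathfrak{M}}_{n^{k}}(C_{0}(\tilde{X}))$ is compatible with multiplication: for $|u|=|v|=|v'|=|w|=k$, the relation ${\mathfrak{t}}_{v}^{*}{\mathfrak{t}}_{v'}=\delta_{v,v'}$ gives
$$({\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*})({\mathfrak{t}}_{v'}g{\mathfrak{t}}_{w}^{*})=\delta_{v,v'}\,{\mathfrak{t}}_{u}fg{\mathfrak{t}}_{w}^{*},$$
which matches the matrix-unit rule $(f\otimes E_{u,v})(g\otimes E_{v',w})=\delta_{v,v'}\,fg\otimes E_{u,w}$.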
Moreover, ${\mathfrak{B}}_{k}$ is contained in ${\mathfrak{B}}_{k+1}$ because if $|u|=|v|=k$, then $${\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}={\mathfrak{t}}_{u}f\sum_{i=1}^{n}{\mathfrak{t}}_{i}{\mathfrak{t}}_{i}^{*}{\mathfrak{t}}_{v}^{*}=\sum_{i=1}^{n}{\mathfrak{t}}_{ui}(f\circ\tilde{\sigma}_{i}){\mathfrak{t}}_{vi}^{*}.$$ (Observe that this is not imbedded in the usual manner of UHF algebras because ${\mathfrak{B}}_{0}={\mathrm{C}}_{0}(\tilde{X})$ is imbedded into ${\mathfrak{B}}_{k}$ by sending $f$ to the diagonal operator $\operatorname{diag}(f\circ\tilde{\sigma}_{w})$. Since the maps $\tilde{\sigma}_{w}$, for $|w|=k$, are homeomorphisms onto pairwise disjoint clopen subsets of $\tilde{X}$, this carries ${\mathfrak{B}}_{0}$ onto the full diagonal of ${\mathfrak{B}}_{k}$.) It follows that $${\mathfrak{B}}=\overline{\bigcup_{k\geq 0}{\mathfrak{B}}_{k}}.$$ In particular, ${\mathfrak{B}}$ is a C*-subalgebra of ${\mathfrak{A}}$ which is the inductive limit of homogeneous C*-algebras.

Define a proper isometry $V=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}{\mathfrak{t}}_{i}$ in ${\mathfrak{A}}$. Observe that $$\alpha(b)=VbV^{*}\quad\text{for}\quad b\in{\mathfrak{B}}$$ determines a $*$-endomorphism. Thus we can define the crossed product ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$.

Theorem 3.8. Let $(X,\sigma)$ be a multivariable dynamical system with $n\geq 2$. Then, with the above notation, $$\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))\simeq{\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}.$$

Proof. First we show that $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is generated by ${\mathfrak{B}}$ and $V$. This is straightforward. If ${\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}$ is given, with $|v|\leq|u|$, let $k=|u|-|v|$. Then $$(n^{k/2}{\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}{\mathfrak{t}}_{1}^{*k})V^{k}={\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}.$$ So ${\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}$ belongs to $\mathrm{C}^{*}({\mathfrak{B}},V)$.
But these elements together with their adjoints span $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$. By the universal property of ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$, there is a $*$-homomorphism $$\pi:{\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}\to\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$$ such that $\pi|_{\mathfrak{B}}={\operatorname{id}}$ and $\pi(S)=V$. This map is surjective, and it is easy to see that $\Psi\pi=\pi\Gamma$. The gauge-invariant uniqueness theorem shows that $\pi$ is an isomorphism. ∎

Remark 3.9. When $n=1$, ${\mathfrak{B}}$ is just ${\mathrm{C}}_{0}(\tilde{X})$ and the isometry $V$ is actually a unitary. Thus this result recovers Peters’ result [17] describing the C*-envelope as a C*-crossed product by $\mathbb{Z}$.

4. The non-surjective case: adding a tail

The previous section only applies when the union of the ranges $\bigcup_{i=1}^{n}\sigma_{i}(X)$ is all of $X$. When the system is not surjective, there is a technique called “adding a tail” which comes from the construction of graphs without sources from ones that have them. This is now a standard procedure.

Given a dynamical system $(X,\sigma)$, let $U=X\setminus\bigcup_{i=1}^{n}\sigma_{i}(X)$. Define $T=\{(u,k):u\in\overline{U},\ k<0\}$ and $X^{T}=X\cup T$. For each $1\leq i\leq n$, we extend $\sigma_{i}$ to a map $\sigma_{i}^{T}:X^{T}\to X^{T}$ by $$\sigma_{i}^{T}(u,k)=(u,k+1)\text{ for }k<-1,\quad\text{and}\quad\sigma_{i}^{T}(u,-1)=u.$$ We can consider the new multivariable dynamical system $(X^{T},\sigma^{T})$.

Theorem 4.1. Let $(X,\sigma)$ be a non-surjective multivariable dynamical system, and let $(X^{T},\sigma^{T})$ be the system with an added tail. Then,

(i) ${\mathcal{A}}(X,\sigma)$ can be embedded in ${\mathcal{A}}(X^{T},\sigma^{T})$ via a completely isometric homomorphism, and its image is completely contractively complemented in ${\mathcal{A}}(X^{T},\sigma^{T})$.
(ii) $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is a full corner of $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))$.

Proof. Let ${\mathfrak{t}}_{1},\dots,{\mathfrak{t}}_{n}$ denote the generators of ${\mathcal{A}}(X^{T},\sigma^{T})$. Extend each $f\in C_{0}(X)$ to a function $f^{T}\in C_{0}(X^{T})$ by setting it to be $0$ on $T$. Then we can embed ${\mathcal{A}}(X,\sigma)$ into ${\mathcal{A}}(X^{T},\sigma^{T})$ by $$j(f)=f^{T}\quad\text{and}\quad j({\mathfrak{s}}_{i}f)={\mathfrak{t}}_{i}f^{T}.$$ Clearly this is an algebra homomorphism. Note that if $X$ is compact, then $j({\mathfrak{s}}_{i})=j({\mathfrak{s}}_{i}1)={\mathfrak{t}}_{i}1^{T}={\mathfrak{t}}_{i}\chi_{X}$, where $\chi_{X}$ is the characteristic function of $X$ in ${\mathrm{C}}(X^{T})$. Indeed, since $X$ is invariant for $\sigma^{T}$, we have $$j({\mathfrak{s}}_{w}f)={\mathfrak{t}}_{w}f^{T}=\chi_{X}{\mathfrak{t}}_{w}f^{T}\chi_{X}.$$ To see that this embedding is completely isometric, it suffices to consider the two full Fock representations, which are the direct sums of all orbit representations. As $X$ is invariant under the maps $\sigma_{i}^{T}$, it is evident that the orbit representations $\pi_{x}$ and $\pi_{x}^{T}$ for the two systems coincide for all $x\in X$. If $(u,-k)$ belongs to the tail $T$ for some $k\geq 1$, then consider the representation $\pi_{(u,-k)}^{T}j$, the restriction of the orbit representation $\pi_{(u,-k)}^{T}$ to ${\mathcal{A}}(X,\sigma)$. We claim that this is unitarily equivalent to $0^{(\alpha)}\oplus\pi_{u}^{(\beta)}$, where $0$ is the 1-dimensional zero representation, $\alpha=\sum_{s=0}^{k-1}n^{s}$ and $\beta=n^{k}$. Indeed, on the basis vectors $\xi_{w}$ for $|w|<k$, $f^{T}(\sigma_{w}^{T}(u,-k))=f^{T}(u,-k+|w|)=0$. Hence $\pi_{(u,-k)}^{T}(j(A))\xi_{w}=0$ for all $A\in{\mathcal{A}}(X,\sigma)$ and all $|w|<k$.
Observe that for each word $w$ with $|w|=k$, the restriction of $\pi_{(u,-k)}^{T}$ to $\operatorname{span}\{\xi_{vw}:v\in\mathbb{F}_{n}^{+}\}$ is unitarily equivalent to $\pi_{u}^{T}$. Hence the claim follows. Combining these two observations, one sees that $\Pi_{X^{T}}j$ is completely isometric to $\Pi_{X}$, and indeed they are the direct sum of the same representations with different non-zero multiplicities. The last assertion of (i) will be established in (ii) below.

To prove (ii), we use Lemma 2.1 to note that the representations $\lambda_{X}$ and $\lambda_{X^{T}}$ are completely isometric maximal representations of ${\mathcal{A}}(X,\sigma)$ and ${\mathcal{A}}(X^{T},\sigma^{T})$, respectively. Note that any infinite tail $({\mathbf{i}},{\mathbf{x}})$ of $(X,\sigma)$ is also an infinite tail of $(X^{T},\sigma^{T})$. So the sum of all of these representations, $\lambda_{X,2}$, is a direct summand of $\lambda_{X^{T}}|_{{\mathcal{A}}(X,\sigma)}$. The other summands of $\lambda_{X}$ are the orbit representations $\lambda_{u}$ for $u\in U$. In $(X^{T},\sigma^{T})$, these can be dilated to infinite tail representations by setting ${\mathbf{x}}=(u,(u,-1),(u,-2),\dots)$ and taking an arbitrary infinite sequence ${\mathbf{i}}$. It is easy to see that these two options exhaust all of the summands of $\lambda_{X^{T}}$. These latter infinite tail representations restrict to ${\mathcal{A}}(X,\sigma)$ to yield the representation $0^{(\infty)}\oplus\lambda_{u}^{(\infty)}$. The argument is essentially the same as the analysis above of the representations $\pi_{(u,-k)}^{T}$, except that the multiplicities are now countably infinite. It follows that $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is isomorphic to the C*-subalgebra of $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))$ generated by the image of ${\mathcal{A}}(X,\sigma)$.
We can summarize what we’ve proved so far in the following commutative diagram: $$\xymatrix@C+5pc{{\mathcal{A}}(X^{T},\sigma^{T})\ar@{{}^{(}->}[r]^{\lambda_{X^{T}}}&\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))\\ {\mathcal{A}}(X,\sigma)\ar@{{}^{(}->}[r]^{\lambda_{X}}\ar@{{}^{(}->}[u]^{j}&\ar@{{}^{(}-->}[u]_{\tilde{j}}\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))}$$ More concretely, $$\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))=\overline{\operatorname{span}\{{\mathfrak{t}}_{u}f_{u,v}{\mathfrak{t}}_{v}^{*}:f_{u,v}\in C_{0}(X^{T}),\ u,v\in\mathbb{F}_{n}^{+}\}}$$ and $$\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))=\overline{\operatorname{span}\{\chi_{X}{\mathfrak{t}}_{u}f_{u,v}^{T}{\mathfrak{t}}_{v}^{*}\chi_{X}:f_{u,v}\in C_{0}(X),\ u,v\in\mathbb{F}_{n}^{+}\}}.$$ Thus it is evident that $$\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))=\chi_{X}\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))\chi_{X}.$$ So this is a corner of $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))$. To see that this is a full corner, observe that if $|w|=k$, then $$({\mathfrak{t}}_{u}f{\mathfrak{t}}_{w}^{*})\chi_{X}({\mathfrak{t}}_{w}g{\mathfrak{t}}_{v}^{*})={\mathfrak{t}}_{u}(fg\chi_{\tau^{k}(X)}){\mathfrak{t}}_{v}^{*}.$$ Since the sets $\tau^{k}(X)$ are an increasing sequence of clopen sets with union $X^{T}$, $\chi_{\tau^{k}(X)}$ is an approximate unit for ${\mathrm{C}}_{0}(X^{T})$. It follows that ${\mathfrak{t}}_{u}fg{\mathfrak{t}}_{v}^{*}$ lies in $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))\chi_{X}\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))$. Thus it is a full corner.
Moreover, we see that the map taking $A\in{\mathcal{A}}(X^{T},\sigma^{T})$ to $A\chi_{X}$ is a completely contractive idempotent projection onto ${\mathcal{A}}(X,\sigma)$, with complementary map sending $A$ to $A\chi_{T}$, which is also a complete contraction. Thus ${\mathcal{A}}(X,\sigma)$ is completely contractively complemented in ${\mathcal{A}}(X^{T},\sigma^{T})$. ∎

5. Simplicity

In this last section, we consider when the C*-envelope of ${\mathcal{A}}(X,\sigma)$ is simple. We will soon restrict our attention to the compact case.

Definition 5.1. A subset $A\subset X$ is invariant for $(X,\sigma)$ if $\sigma_{i}(A)\subset A$ for $1\leq i\leq n$. Say that $A$ is bi-invariant if, in addition, $\sigma_{i}^{-1}(A)\subset A$ for $1\leq i\leq n$. If $(X,\sigma)$ is a dynamical system, let the orbit of a point $x$ be ${\mathcal{O}}^{+}(x)=\{\sigma_{w}(x):w\in\mathbb{F}_{n}^{+}\}$, and let the full orbit of $x$ be the smallest bi-invariant set ${\mathcal{O}}(x)$ containing $x$. If $(X,\sigma)$ is a compact dynamical system, we say $(X,\sigma)$ is minimal if and only if every orbit is dense in $X$, or equivalently, there are no proper closed invariant sets.

Example 5.2. Consider the space $X=\{0,1,2\}$. For $i=1,2$, define a map $$\sigma_{i}(j)=\begin{cases}i&\quad\text{if}\quad j=i,\\ 0&\quad\text{otherwise.}\end{cases}$$ Then $\{0\}$ is a proper closed invariant set. However, one can easily check that $X$ has no proper bi-invariant sets.
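To spell out the last claim of Example 5.2, note that
$$\sigma_{1}^{-1}(\{0\})=\{0,2\},\qquad\sigma_{2}^{-1}(\{0\})=\{0,1\},\qquad\sigma_{2}(1)=\sigma_{1}(2)=0.$$
So if $A$ is non-empty and bi-invariant, then $0\in A$ (apply $\sigma_{2}$ or $\sigma_{1}$ if $A$ contains $1$ or $2$), and then backward invariance forces $\{0,1,2\}\subset A$; hence $A=X$. In particular, $\{0\}$ itself is invariant but not bi-invariant, since $\sigma_{1}^{-1}(\{0\})\not\subset\{0\}$.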
Observe that $\tilde{X}$ consists of the points $$(\boldsymbol{1},\boldsymbol{1}),\quad(\boldsymbol{2},\boldsymbol{2}),\quad({\mathbf{i}},\boldsymbol{0})\quad\text{for}\quad{\mathbf{i}}\in{\mathbf{n}}^{\mathbb{N}},\quad\text{and}$$ $$\big((i_{1}\dots i_{k-1}2\boldsymbol{1}),\,0^{k}\boldsymbol{1}\big),\quad\big((i_{1}\dots i_{k-1}1\boldsymbol{2}),\,0^{k}\boldsymbol{2}\big)\quad\text{for}\quad k\geq 1,$$ where $\boldsymbol{0}$, $\boldsymbol{1}$ and $\boldsymbol{2}$ represent an infinite string of the digit 0, 1 or 2, respectively, and $0^{k}$ represents a string of $k$ zeros. Now $$p^{-1}(0)=\tilde{X}\setminus\{(\boldsymbol{1},\boldsymbol{1}),(\boldsymbol{2},\boldsymbol{2})\}$$ is invariant, but not bi-invariant. It contains the subset $\{({\mathbf{i}},\boldsymbol{0}):{\mathbf{i}}\in{\mathbf{n}}^{\mathbb{N}}\}$, which is a proper closed bi-invariant set. The difference in the two situations results from the fact that the inverse map $\tau$ on $\tilde{X}$ takes a point to its unique preimage under the maps $\tilde{\sigma}_{i}$. In $X$, the point $0$ has multiple preimages.

Example 5.3. Consider the space $X={\mathbb{N}}$ with the map $\sigma(n)=n+1$. Then $X$ contains many proper closed invariant sets, but has no proper bi-invariant set. In this case, $X=\tilde{X}$, so no improvement is obtained. This system is not surjective. Adding a tail yields the analogous system on ${\mathbb{Z}}$, which also has many invariant sets, but no proper bi-invariant sets. The algebra $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is the compact operators, which is simple.

Proposition 5.4. Let $(X,\sigma)$ be a compact dynamical system. Minimality implies that $(X,\sigma)$ is surjective; and the following are equivalent:

$($1$)$ $(X,\sigma)$ is minimal.

$($2$)$ $(\tilde{X},\tilde{\sigma})$ is minimal.

$($3$)$ $(\tilde{X},\tilde{\sigma})$ has no proper closed bi-invariant subset.

Proof.
If $(X,\sigma)$ is not surjective, then $A=\bigcup_{i=1}^{n}\sigma_{i}(X)$ is a proper closed invariant subset. So minimality implies surjectivity. If $A\subset X$ is a proper closed invariant set, then $\tilde{A}=p^{-1}(A)$ is a proper closed invariant subset of $(\tilde{X},\tilde{\sigma})$. So (2) implies (1).

Suppose that $B\subset\tilde{X}$ is closed and invariant. Define a sequence of subsets $$B_{0}=B\quad\text{and}\quad B_{k+1}=\bigcup_{i=1}^{n}\tilde{\sigma}_{i}(B_{k})\quad\text{for}\quad k\geq 0.$$ Then this is a decreasing sequence of non-empty compact invariant sets. Hence $B_{\infty}=\bigcap_{k\geq 0}B_{k}$ is a non-empty closed invariant set. We claim that $B_{\infty}$ is bi-invariant. Indeed, if $x\in B_{\infty}$, then $x=\tilde{\sigma}_{i}(y)$ for a unique choice of $i$ and $y$, namely $y=\tau(x)$ and $i$ determined by membership in the pairwise disjoint sets $\tilde{X}_{i}$. Since $x\in B_{k+1}$, it follows that $y\in B_{k}$. This holds for all $k\geq 0$, and thus $y\in B_{\infty}$. So (3) implies (2). Clearly, (2) implies (3).

Finally, suppose that $(X,\sigma)$ is minimal, and fix a point $({\mathbf{i}},{\mathbf{x}})\in\tilde{X}$. We will show that the orbit ${\mathcal{O}}^{+}(({\mathbf{i}},{\mathbf{x}}))$ is dense in $\tilde{X}$. To this end, suppose that $({\mathbf{j}},{\mathbf{y}})$ is an arbitrary point in $\tilde{X}$. A basic open neighbourhood of $({\mathbf{j}},{\mathbf{y}})$ is given by an integer $p$ and open neighbourhoods $U_{k}$ of $y_{k}$ for $0\leq k\leq p$: $$U=\{({\mathbf{i}},{\mathbf{z}}):i_{k}=j_{k}\text{ for }0\leq k<p\text{ and }z_{k}\in U_{k}\text{ for }0\leq k\leq p\}.$$ Moreover, we may suppose that $\sigma_{j_{k}}(U_{k+1})\subset U_{k}$: using the continuity of $\sigma_{j_{0}}$, replace $U_{1}$ by a smaller open neighbourhood of $y_{1}$ which is mapped by $\sigma_{j_{0}}$ into $U_{0}$; then replace $U_{2}$ by a smaller open set, etc. Since $X$ is minimal, ${\mathcal{O}}^{+}(x_{0})$ is dense in $X$.
Select a word $w$ so that $\sigma_{w}(x_{0})=z_{p}\in U_{p}$. Define $z_{k}=\sigma_{j_{k}}(z_{k+1})$ for $k=p-1,\dots,0$. Consider the point $$\tilde{\sigma}_{j_{0}j_{1}\dots j_{p-1}w}(({\mathbf{i}},{\mathbf{x}}))=\big((j_{0}j_{1}\dots j_{p-1}w{\mathbf{i}}),(z_{0},z_{1},\dots,z_{p},\dots)\big).$$ This evidently belongs to $U$. Hence ${\mathcal{O}}^{+}(({\mathbf{i}},{\mathbf{x}}))$ is dense in $\tilde{X}$. So $(\tilde{X},\tilde{\sigma})$ is minimal. ∎

Recall that in the surjective case, we have expressed $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ as the crossed product of a C*-algebra ${\mathfrak{B}}$ by an endomorphism, ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$, where $\alpha(b)=\frac{1}{n}\sum_{i,j=1}^{n}{\mathfrak{t}}_{i}b{\mathfrak{t}}_{j}^{*}$ and ${\mathfrak{B}}$ is the closed span of all elements ${\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}$ for $|u|=|v|$ and $f\in{\mathrm{C}}_{0}(\tilde{X})$. An ideal of ${\mathfrak{B}}$ will intersect ${\mathrm{C}}_{0}(\tilde{X})$ in an ideal, and so this intersection will have the form $I_{F}=\{f\in{\mathrm{C}}_{0}(\tilde{X}):f|_{F}=0\}$, where $F$ is a closed subset of $\tilde{X}$.

Definition 5.5. Call a subset $F$ of $(\tilde{X},\tilde{\sigma})$ robust if $F$ contains $\tilde{\sigma}_{w}\tau^{|w|}(x)$ for every $x\in F$ and $w\in\mathbb{F}_{n}^{+}$.

The robust closed subsets are not necessarily bi-invariant, but they do have the property that if $|v|=|w|$ and $y\in\tilde{X}$ is such that $\tilde{\sigma}_{v}(y)\in F$, then $\tilde{\sigma}_{w}(y)\in F$ as well. This is the key concept in the following result.

Lemma 5.6. Let $(X,\sigma)$ be a surjective multivariable dynamical system. Then the $\alpha$-invariant ideals of ${\mathfrak{B}}$ are in bijective correspondence with the closed robust $\tau$-invariant subsets of $\tilde{X}$ via the map taking ${\mathfrak{J}}$ to $F$, where ${\mathfrak{J}}\cap{\mathrm{C}}_{0}(\tilde{X})=I_{F}$.

Proof. Let ${\mathfrak{J}}$ be an $\alpha$-invariant ideal of ${\mathfrak{B}}$.
Now ${\mathfrak{B}}$ is the inductive limit of the subalgebras ${\mathfrak{B}}_{k}\simeq{\mathfrak{M}}_{n^{k}}({\mathrm{C}}_{0}(\tilde{X}))$. Hence ${\mathfrak{J}}$ is the closed union of the ideals ${\mathfrak{J}}_{k}:={\mathfrak{J}}\cap{\mathfrak{B}}_{k}$ for $k\geq 0$. These ideals have the form ${\mathfrak{J}}_{k}\simeq{\mathfrak{M}}_{n^{k}}(I_{F_{k}})$ for some closed subsets $F_{k}$ of $\tilde{X}$. The injection of ${\mathfrak{J}}_{k}$ into ${\mathfrak{J}}_{k+1}$ sends ${\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}$, where $|u|=|v|=k$, to $\sum_{i=1}^{n}{\mathfrak{t}}_{ui}(f\circ\tilde{\sigma}_{i}){\mathfrak{t}}_{vi}^{*}$. This shows that if $f\in I_{F_{k}}$, then $f\circ\tilde{\sigma}_{i}$ belongs to $I_{F_{k+1}}$. This implies that $$\bigcup_{i=1}^{n}\tilde{\sigma}_{i}(F_{k+1})\subset F_{k}.$$ In particular, this implies that $F_{k+1}\subset\tau(F_{k})$. On the other hand, if $f\in I_{F_{k+1}}$ and $|u|=|v|=k$, then ${\mathfrak{J}}_{k+1}$ contains ${\mathfrak{t}}_{u}({\mathfrak{t}}_{i}f{\mathfrak{t}}_{i}^{*}){\mathfrak{t}}_{v}^{*}={\mathfrak{t}}_{u}\chi_{i}(f\circ\tau){\mathfrak{t}}_{v}^{*}$ by Lemma 3.2. Hence $\chi_{i}(f\circ\tau)$ belongs to $I_{F_{k}}$, and thus vanishes on $F_{k}$. This implies that $$\tau(F_{k})=\bigcup_{i=1}^{n}\tilde{\sigma}_{i}^{-1}(F_{k})\subset F_{k+1}.$$ Together these relations show that $$F_{k+1}=\tau(F_{k})\quad\text{and}\quad F_{k}=\bigcup_{i=1}^{n}\tilde{\sigma}_{i}(F_{k+1}).$$ Therefore $F_{k}=\tau^{k}(F_{0})$ and $F_{0}=\bigcup_{|w|=k}\tilde{\sigma}_{w}(F_{k})$. Hence $F_{0}$ is robust. Since $\alpha({\mathfrak{J}})\subset{\mathfrak{J}}$, we see that if $f\in I_{F_{k}}$ and $|u|=|v|=k$, then $$\alpha({\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*})=\frac{1}{n}\sum_{i,j=1}^{n}{\mathfrak{t}}_{i}{\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}{\mathfrak{t}}_{j}^{*}\in{\mathfrak{J}}_{k+1}.$$ Thus $f\in I_{F_{k+1}}$. Therefore $I_{F_{k}}\subset I_{F_{k+1}}$, whence $F_{k+1}\subset F_{k}$.
It follows that $\tau(F_{k})\subset F_{k}$, so each $F_{k}$ is $\tau$-invariant. In particular, $F_{0}$ is robust and $\tau$-invariant.

Conversely, suppose that $F_{0}$ is robust and $\tau$-invariant. Define $F_{k}=\tau^{k}(F_{0})$ for $k\geq 1$. It follows from the robustness of $F_{0}$ that each $F_{k}$ is also robust. In particular, $F_{k}=\bigcup_{i=1}^{n}\tilde{\sigma}_{i}(F_{k+1})$. Let $${\mathfrak{J}}_{k}=\operatorname{span}\{{\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}:|u|=|v|=k\text{ and }f\in I_{F_{k}}\}\quad\text{and}\quad{\mathfrak{J}}=\overline{\bigcup_{k\geq 0}{\mathfrak{J}}_{k}}.$$ Reversing the arguments above shows that ${\mathfrak{J}}_{k}\subset{\mathfrak{J}}_{k+1}$ and $\alpha({\mathfrak{J}}_{k})\subset{\mathfrak{J}}_{k+1}$ for all $k\geq 0$. It follows that the union ${\mathfrak{J}}$ is an $\alpha$-invariant ideal of ${\mathfrak{B}}$. An element $f\in{\mathfrak{J}}_{k}\cap{\mathrm{C}}_{0}(\tilde{X})$ is represented in ${\mathfrak{B}}_{k}$ as $\sum_{|u|=k}{\mathfrak{t}}_{u}(f\circ\tilde{\sigma}_{u}){\mathfrak{t}}_{u}^{*}$. This requires $f\circ\tilde{\sigma}_{u}\in I_{F_{k}}$ for all $|u|=k$. Arguing as above, this means that $f\in I_{F_{0}}$. Therefore ${\mathfrak{J}}_{k}\cap{\mathrm{C}}_{0}(\tilde{X})=I_{F_{0}}$ for all $k\geq 0$, and thus the same holds for ${\mathfrak{J}}$. This shows that the map taking ${\mathfrak{J}}$ to $F_{0}$ is a surjection onto the collection of closed robust $\tau$-invariant sets. Also, the details of the structure show that $F_{0}$ determines the sets $F_{k}$ and hence the ideals ${\mathfrak{J}}_{k}$. So this map is injective. ∎

Remark 5.7. There are $\alpha$-invariant ideals determined by non-constant sequences of sets. An easy example starts with $X=[0,1]\times 2^{\mathbb{N}}$ and the two maps $\sigma_{i}(x,{\mathbf{i}})=(x^{2},i{\mathbf{i}})$. Since $\sigma_{1}$ and $\sigma_{2}$ are injective maps with complementary ranges, one sees that $\tilde{X}=X$. Consider $F_{0}=[r,1]\times 2^{\mathbb{N}}$ for any $0<r<1$.
Then the sets $F_{k}=[r^{2^{-k}},1]\times 2^{\mathbb{N}}$ satisfy the relations, and therefore determine a proper $\alpha$-invariant ideal of ${\mathfrak{B}}$. The only proper closed bi-invariant sets are $\{0\}\times 2^{\mathbb{N}}$ and $\{1\}\times 2^{\mathbb{N}}$.

We will be interested in $\alpha$-invariant ideals of ${\mathfrak{B}}$ which are obtained from ideals of ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$ by intersection with ${\mathfrak{B}}$. This puts an additional constraint on ${\mathfrak{J}}$, namely that ${\mathfrak{t}}_{i}^{*}{\mathfrak{J}}{\mathfrak{t}}_{j}\subset{\mathfrak{J}}$. Reasoning as above, one sees that if $f\in I_{F_{k+1}}$, then $f\in I_{F_{k}}$. So we deduce that $F_{k}=F_{0}$ for all $k$, and $F_{0}$ is a bi-invariant set. So the ideals associated to bi-invariant sets play a more important role for us. An apparently weaker but more intrinsic condition leads to the same result.

Definition 5.8. An ideal ${\mathfrak{J}}$ of ${\mathfrak{B}}$ is $\alpha$-bi-invariant if $\alpha({\mathfrak{J}})\subset{\mathfrak{J}}$ and whenever $\alpha(b)\in{\mathfrak{J}}$, then $b\in{\mathfrak{J}}$.

Corollary 5.9. Let $(X,\sigma)$ be a surjective multivariable dynamical system. Then the $\alpha$-bi-invariant ideals of ${\mathfrak{B}}$ are in bijective correspondence with the closed $\tilde{\sigma}$-bi-invariant subsets of $\tilde{X}$ via the map sending ${\mathfrak{J}}$ to ${\mathfrak{J}}\cap{\mathrm{C}}_{0}(\tilde{X})=I_{F}$.

Proof. An $\alpha$-bi-invariant ideal ${\mathfrak{J}}$ is in particular $\alpha$-invariant, so we adopt the notation of the previous proof to describe ${\mathfrak{J}}$. Suppose that $f\in I_{F_{k}}$. Then $\alpha^{k}(f)=n^{-k}\sum_{|u|=|v|=k}{\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}$ belongs to ${\mathfrak{J}}_{k}$. By the bi-invariance, we have $f\in{\mathfrak{J}}$. Therefore $I_{F_{0}}=I_{F_{k}}$. Thus $$F_{0}=\tau(F_{0})=\bigcup_{i=1}^{n}\tilde{\sigma}_{i}(F_{0}).$$ In other words, $F_{0}$ is bi-invariant.
Conversely, if $F_{0}$ is bi-invariant, then following the construction of Lemma 5.6, we have $F_{k}=F_{0}$ and $F_{k}=\bigcup_{i=1}^{n}\tilde{\sigma}_{i}(F_{k+1})$ for all $k\geq 0$. It is easy to see that if $b\in{\mathfrak{J}}_{k+1}$, then ${\mathfrak{t}}_{i}^{*}b{\mathfrak{t}}_{j}$ belongs to ${\mathfrak{J}}_{k}$ for all $i,j$. In particular, if $\alpha(b)\in{\mathfrak{J}}_{k+1}$, then $b\in{\mathfrak{J}}_{k}$. So ${\mathfrak{J}}$ is $\alpha$-bi-invariant. ∎

These results are more transparent in the unital case. We have the following variant on Proposition 5.4.

Corollary 5.10. Let $(X,\sigma)$ be a surjective, compact, multivariable dynamical system with $n\geq 2$. Then the following are equivalent:

$($1$)$ $X$ is minimal.

$($2$)$ ${\mathfrak{B}}$ has no proper $\alpha$-invariant ideals.

$($3$)$ ${\mathfrak{B}}$ has no proper $\alpha$-bi-invariant ideals.

Proof. By Proposition 5.4, minimality of $X$ is equivalent to having no proper closed bi-invariant subsets in $(\tilde{X},\tilde{\sigma})$. So by Corollary 5.9, this is equivalent to having no proper $\alpha$-bi-invariant ideals in ${\mathfrak{B}}$. So (1) and (3) are equivalent. Clearly, (2) implies (3). Conversely, if (2) fails, then Lemma 5.6 yields a proper closed robust $\tau$-invariant subset $F$, and $\bigcap_{k\geq 0}\tau^{k}(F)$ is then a proper closed bi-invariant set. So (1) fails, and since (1) and (3) are equivalent, (3) fails as well. Thus (3) implies (2). ∎

We can now prove the main result of this section.

Theorem 5.11. Let $(X,\sigma)$ be a compact multivariable dynamical system ($n\geq 2$). Then $\mathrm{C}^{*}_{\text{e}}(\mathcal{A}(X,\sigma))$ is simple if and only if $(X,\sigma)$ is minimal.

Proof. First suppose that $(X,\sigma)$ is surjective but not minimal. By Proposition 5.4, this is equivalent to the existence of a proper closed bi-invariant subset $F$ of $\tilde{X}$. Corollary 5.9 provides an $\alpha$-bi-invariant ideal of ${\mathfrak{B}}$ determined by $F$.
Define $${\mathfrak{I}}=\overline{\operatorname{span}\{{\mathfrak{t}}_{u}f{\mathfrak{t}}_{v}^{*}:f\in I_{F}\text{ and }u,v\in\mathbb{F}_{n}^{+}\}}.$$ Since $F$ is $\tilde{\sigma}$-invariant, this is seen to be an ideal of $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$. We use the $\tau$-invariance of $F$ to see that ${\mathfrak{I}}\cap{\mathrm{C}}_{0}(\tilde{X})=I_{F}$. This follows from Lemma 3.2, since this intersection will contain ${\mathfrak{t}}_{w}f{\mathfrak{t}}_{w}^{*}=\chi_{w}(f\circ\tau^{|w|})$. It follows that $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is not simple.

When $(X,\sigma)$ is not surjective, it is definitely not minimal. Moreover, it is clear that the surjective system $(X^{T},\sigma^{T})$ is not minimal either. Thus $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))$ is not simple. Now $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is a full corner of $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X^{T},\sigma^{T}))$, so they are Morita equivalent. In particular, there is a bijective correspondence between their ideals. So $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))$ is not simple either.

Now suppose that $X$ is minimal. Then in particular $(X,\sigma)$ is surjective. Corollary 5.10 shows that ${\mathfrak{B}}$ has no proper $\alpha$-invariant ideals. We will apply a result of Paschke [16] to see that $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))\simeq{\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$ is simple. The C*-algebra ${\mathfrak{B}}$ is the inductive limit of matrix algebras over ${\mathrm{C}}(\tilde{X})$. Since $X$ is compact, these algebras are unital and the imbeddings are unital. It follows from Johnson [14] that ${\mathfrak{B}}$ is strongly amenable. Since ${\mathfrak{B}}$ has no proper $\alpha$-invariant ideals, Paschke’s result shows that ${\mathfrak{B}}\rtimes_{\alpha}{\mathbb{N}}$ is simple. ∎

Remark 5.12. When $n=1$, the previous theorem is false.
For example, when $X$ is just one point and $\sigma$ is the identity mapping, then ${\mathcal{A}}(X,\sigma)$ is the disc algebra. This system is obviously minimal, but $\mathrm{C}^{*}_{\text{e}}({\mathcal{A}}(X,\sigma))={\mathrm{C}}({\mathbb{T}})$ is not simple. Since $\tilde{X}=X$ is a point, this follows, for example, from Peters’ identification [17] $$\mathrm{C}^{*}_{\text{e}}(\mathcal{A}(X,\sigma))={\mathrm{C}}(\tilde{X})\times_{\alpha}{\mathbb{Z}}={\mathrm{C}}({\mathbb{T}}).$$ It is well known that one must require $\tilde{X}$ to be infinite to apply the previous simplicity criteria. But when $n\geq 2$, $\tilde{X}$ is necessarily infinite. The results of Schweizer [18] show that unital Cuntz–Pimsner algebras are simple if and only if the analogue of the C*-subalgebra ${\mathfrak{B}}$ has no invariant ideals. This result could be applied here, but is technically more difficult. Naturally one wants a nice condition that is equivalent to simplicity in the non-compact case as well. We suspect that this should hold precisely when $\tilde{X}$ has no proper bi-invariant closed subsets. Example 5.3 shows that this is not equivalent to the corresponding condition on invariant sets in $X$. Example 5.2 shows that it is not equivalent to the condition on bi-invariant sets in $X$ even when $X$ is compact. So we have no good idea about a dynamical condition on $X$ which determines this property. References [1] C. Anantharaman-Delaroche, Purely infinite C*-algebras arising from dynamical systems, Bull. Soc. Math. France 125 (1997), 199–225. [2] W.B. Arveson, Analyticity in operator algebras, Amer. J. Math. 89 (1967), 578–642. [3] W. Arveson, Subalgebras of C*-algebras, Acta Math. 123 (1969), 141–224. [4] W. Arveson, The noncommutative Choquet boundary, J. Amer. Math. Soc. 21 (2008), 1065–1084. [5] W. Arveson and K. Josephson, Operator algebras and measure preserving automorphisms II, J. Funct. Anal. 4 (1969), 100–134. [6] D.P. Blecher and C.
Le Merdy, Operator algebras and their modules—an operator space approach, Oxford University Press, Oxford, 2004. [7] J. Cuntz, Simple C*-algebras generated by isometries, Commun. Math. Phys. 57 (1977), 173–185. [8] J. Cuntz, The internal structure of simple C*-algebras, Operator Algebras and Applications, R.V. Kadison (Ed.), Proc. Symposia Pure Math. 38 Part I, American Mathematical Society, 1982, pp. 85–115. [9] K.R. Davidson and E.G. Katsoulis, Operator algebras for multivariable dynamics, preprint 2007. [10] V. Deaconu, Groupoid associated with endomorphisms, Trans. Amer. Math. Soc. 347 (1995), 1779–1786. [11] V. Deaconu, A. Kumjian and P. Muhly, Cohomology of topological graphs and Cuntz–Pimsner algebras, [12] M. Dritschel and S. McCullough, Boundary representations for families of representations of operator algebras and spaces, J. Operator Theory 53 (2005), 159–167. [13] M. Hamana, Injective envelopes of operator systems, Publ. Res. Inst. Math. Sci. 15 (1979), 773–785. [14] B.E. Johnson, Cohomology in Banach algebras, Mem. Amer. Math. Soc. 127, American Mathematical Society, Providence, R.I., 1972. [15] E. Katsoulis and D. Kribs, Tensor algebras of C*-correspondences and their C*-envelopes, J. Funct. Anal. 234 (2006), 226–233. [16] W.L. Paschke, The crossed product of a C*-algebra by an endomorphism, Proc. Amer. Math. Soc. 80 (1980), 113–118. [17] J. Peters, The C*-envelope of a semicrossed product and nest representations, preprint, 2006. [18] J. Schweizer, Dilations of C*-correspondences and the simplicity of Cuntz–Pimsner algebras, J. Funct. Anal. 180 (2001), 404–425. [19] P.J. Stacey, Crossed products of C*-algebras by endomorphisms, J. Austral. Math. Soc. Series A 54 (1993), 204–212.
Incompleteness of the bond market with Lévy noise under the physical measure Michał Barski Faculty of Mathematics and Computer Science, University of Leipzig, Germany Faculty of Mathematics, Cardinal Stefan Wyszyński University in Warsaw, Poland [email protected] Abstract The problem of completeness of the forward rate based bond market model driven by a Lévy process under the physical measure is examined. The incompleteness of the market in the case when the Lévy measure has a density function is shown. The required elements of the theory of stochastic integration over the compensated jump measure under a martingale measure are presented, and the corresponding integral representation of local martingales is proven. Key words: bond market, completeness, representation of local martingales, model under physical measure. AMS Subject Classification: 91B28, 91B70. 1 Introduction A bond with maturity $T\geq 0$ is a financial contract paying to its owner $1$ at the date $T$. The price of the bond $P(t,T),t\in[0,T]$, is a stochastic process satisfying $P(T,T)=1$, and the family $P(\cdot,T);\ T\in[0,T^{\ast}]$ forms a bond market with a finite time horizon $T^{\ast}<+\infty$. One possible approach to constructing the bond market model is based on the random field $f(t,T);\ t,T\in[0,T^{\ast}]$, called the forward rate. The prices are then defined by the exponential formula $$\displaystyle P(t,T)=e^{-\int_{t}^{T}f(t,u)du},\quad t\in[0,T],\ T\in[0,T^{\ast}].$$ (1.1) Random behaviour in the model is driven by a Lévy process $Z$ defined on a probability space $(\Omega,\mathcal{F},P)$ with filtration $(\mathcal{F}_{t}),t\in[0,T^{\ast}]$. For any $T\in[0,T^{\ast}]$ the forward rate process $f(\cdot,T)$ is defined by dynamics of the form $$\displaystyle df(t,T)=\alpha(t,T)dt+\sigma(t,T)dZ(t),\quad t\in[0,T].$$ (1.2) The measure $P$ under which the model is constructed will be called the physical measure.
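As a quick numerical illustration of the exponential formula (1.1), the following sketch (our own toy example; the helper name `bond_price` and the flat curve are illustrative assumptions, not from the paper) computes bond prices from a forward curve. With $f(t,u)\equiv r$ the formula reduces to $P(t,T)=e^{-r(T-t)}$, and $P(T,T)=1$ holds automatically.

```python
import numpy as np

def bond_price(forward, t, T, n=10_000):
    """P(t,T) = exp(-int_t^T forward(t,u) du), with the integral
    approximated by the trapezoidal rule on an n-point grid."""
    u = np.linspace(t, T, n)
    f = forward(t, u)
    integral = np.sum((f[:-1] + f[1:]) * np.diff(u)) / 2.0
    return float(np.exp(-integral))

# Flat forward curve f(t,u) = r gives P(t,T) = exp(-r*(T - t))
r = 0.03
P = bond_price(lambda t, u: r * np.ones_like(u), t=0.0, T=2.0)
```

For a flat curve the trapezoidal rule is exact, so `P` matches $e^{-0.03\cdot 2}$ up to floating-point rounding.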
If $Z$ is a Wiener process then (1.1)-(1.2) provide the earliest form of the model, introduced by Heath, Jarrow and Morton in [6], which was afterwards extensively studied in the literature. The modification involving a Lévy process better reflects the real behaviour of bond prices, for instance their heavy-tailed distributions. On the other hand, it also leads to new problems concerned with the definition of bond portfolios, option pricing and hedging, which were absent in the no-jump setting. Let $X$ be an $\mathcal{F}_{T^{\ast}}$-measurable random variable which represents the payoff at time $T^{\ast}$ of a financial contract. A bond portfolio $\varphi$, which is to be precisely defined, replicates $X$ if the corresponding wealth process $X^{\varphi}$ satisfies $$\displaystyle X^{\varphi}_{T^{\ast}}=X,\quad P-a.s.$$ (1.3) If each bounded payoff can be replicated then the market is called complete, and incomplete in the opposite case. The analysis of the problem (1.3), that is, the issue of existence of $\varphi$, requires the passage to the risk-neutral setting governed by the family of so called martingale measures. Recall that $Q$ is a martingale measure if it is equivalent to $P$ and the discounted bond prices are $Q$-local martingales. Application of the Girsanov theorem, see [9], yields the dynamics of the forward rate under $Q$, namely $$\displaystyle df(t,T)=\tilde{\alpha}(t,T)dt+\sigma(t,T)d\tilde{Z}(t),\quad t,T\in[0,T^{\ast}],$$ (1.4) where $\tilde{\alpha}(\cdot,\cdot)$ is a modified drift and $\tilde{Z}$ stands for the transformation of $Z$ under $Q$. If $Z=W$ is a Wiener process under $P$, then so is $\tilde{Z}=\tilde{W}$ under $Q$, and the martingale representation theorem provides the integral decomposition $$\displaystyle M_{t}=M_{0}+\int_{0}^{t}\phi_{M}(s)d\tilde{W}(s),\quad t\in[0,T^{\ast}],$$ of the martingale $M_{t}=E^{Q}[X\mid\mathcal{F}_{t}],t\in[0,T^{\ast}]$.
Above, $\phi_{M}$ is a certain process, and it enables one to determine $\varphi$ which solves (1.3). If $Z$ is a general Lévy process then the arguments above fail for two reasons. The first is that Lévy processes are not stable under a measure change, that is, $\tilde{Z}$ is no longer a Lévy process under $Q$. Its increments may be neither stationary nor independent. Consequently, the forward rate dynamics (1.4) has a non-Lévy structure. The second reason, which in fact arises from the first one, is that we need a relevant version of the martingale representation theorem under $Q$. A model framework which is commonly used in the literature and allows one to overcome these two difficulties is to assume that $P$ is simultaneously a martingale measure. Then $Z=\tilde{Z}$ and any local martingale can be represented in the form $$\displaystyle M_{t}=M_{0}+\int_{0}^{t}\phi_{M}(s)dW(s)+\int_{0}^{t}\int_{\mathbb{R}}\psi_{M}(s,y)\tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}],$$ (1.5) with some $\phi_{M},\psi_{M}$, see [9]. Above, $\tilde{\pi}$ stands for the compensated jump measure of $Z$ under $P$. As was shown in [2], the existence of $\phi_{M},\psi_{M}$ for $M_{t}:=E[X\mid\mathcal{F}_{t}]$ does not imply the existence of $\varphi$ solving (1.3); that is, there exists a financial contract $X$ which cannot be replicated. The problem (1.3), in the case when the physical measure $P$ is not a martingale measure, has not been examined in the literature. In this paper we investigate the problem (1.3) without the assumption that $P$ is a martingale measure.
We provide a systematic treatment of the passage from the physical measure $P$ to a martingale measure $Q$ and prove a required version of the martingale representation theorem which allows us to write any $Q$-local martingale $M$ in the form $$\displaystyle M_{t}=M_{0}+\int_{0}^{t}\phi_{M}(s)d\tilde{W}(s)+\int_{0}^{t}\int_{\mathbb{R}}\psi_{M}(s,y)\tilde{\pi}_{Q}(ds,dy),\quad t\in[0,T^{\ast}],$$ (1.6) where $\tilde{\pi}_{Q}$ is the compensated jump measure of $Z$ under $Q$. In particular, we present in detail the construction of the second integral in (1.6). Our main result is Theorem 4.4 in Section 4.3, showing that there exists a bounded random variable $X$ for which (1.3) has no solution, provided that the Lévy measure of $Z$ has a density function. This means that the bond market model is then incomplete, no matter whether the martingale measure is unique or not. The result implies that in the bond market the classical relation known from stock markets between the uniqueness of the martingale measure and completeness breaks down. The paper consists of three parts. In Section 2 we discuss properties of a Lévy process which are needed to formulate the martingale decomposition formula (1.5) and further to describe equivalent measures. The construction of a stochastic integral over the compensated jump measure under an equivalent measure and the related martingale representation formula (1.6) are presented in Section 3. The incompleteness of the bond market is treated in Section 4, where we precisely introduce the bond market model and the concept of a bond portfolio, and finally prove Theorem 4.4. 2 Lévy process and related martingale representation We start by summarizing properties of Lévy processes which are needed in the paper. Their proofs can be found, for instance, in [1]. Let $Z$ be a real valued Lévy process on a probability space $(\Omega,\mathcal{F},P)$ with filtration $(\mathcal{F}_{t}),t\in[0,T^{\ast}]$, such that $\mathcal{F}_{T^{\ast}}=\mathcal{F}$.
It is known that $Z$ has a modification with càdlàg trajectories, and only this modification will be considered in the sequel. For any $\varepsilon>0$ the number of jumps on $[0,T^{\ast}]$ such that $\mid\triangle Z_{s}\mid:=\mid Z_{s}-Z_{s-}\mid>\varepsilon$ is finite almost surely. Consequently, for any $A\subseteq\mathbb{R}$ which is separated from zero, that is $0\notin\bar{A}$, where $\bar{A}$ stands for the closure of $A$, the random variable $$\pi(t,A):=\sharp\{s\in[0,t]:\triangle Z_{s}\in A\},\quad t\in[0,T^{\ast}],$$ is well defined. It counts the number of jumps of $Z$ on the interval $[0,t]$ which lie in the set $A$. The function $\pi(\cdot,\cdot)$ can be treated as a $\sigma$-finite measure on $[0,T^{\ast}]\times\mathbb{R}$, called the jump measure of $Z$. From the independence and stationarity of the increments of $Z$, two important properties of the jump measure follow: for any $A,B$ separated from zero, $$\displaystyle\pi(t,A),t\in[0,T^{\ast}]\ \text{is a Poisson process with intensity}\ \lambda_{A}:=E[\pi(1,A)],$$ (2.1) $$\displaystyle\text{for any}\ t\in[0,T^{\ast}]\ \text{the random variables}\ \pi(t,A),\pi(t,B)\ \text{are independent if}\ A\cap B=\emptyset.$$ (2.2) The $\sigma$-finite measure $\nu$ on $\mathbb{R}$ defined by $$\displaystyle\nu(A):=E[\pi(1,A)],\quad 0\notin\bar{A},$$ (2.3) is called the intensity measure or the Lévy measure of $Z$. It satisfies the integrability condition $$\displaystyle\int_{\mathbb{R}}(\mid y\mid^{2}\wedge\ 1)\nu(dy)<+\infty.$$ (2.4) Because of (2.1)-(2.2) and (2.3), the measure $\pi$ is called a Poisson random measure with intensity measure $\nu$. Conversely, any measure satisfying (2.4) is the intensity measure of some Poisson random measure. Further, it follows that, for a set $A$ separated from zero, the process $$\tilde{\pi}(t,A):=\pi(t,A)-t\nu(A),\quad t\in[0,T^{\ast}],$$ is a martingale, which means that $dt\nu(dy)$ is a compensating measure for $\pi(dt,dy)$.
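Property (2.1) can be checked numerically for a compound Poisson process, whose jump measure is easy to simulate. The sketch below is our own illustration (the rate, the jump law and the set $A$ are arbitrary choices, not from the paper): jumps are standard normal, so $\nu(A)=\lambda P(Y\in A)$, and the jump counts over $A$ should be Poisson with mean $t\nu(A)$, in particular with variance equal to the mean.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def jump_counts(lam, jump_sampler, t, A, n_paths):
    """Simulate n_paths compound-Poisson jump configurations with rate lam
    and jump law jump_sampler, and count the jumps landing in A = (a, b]."""
    a, b = A
    counts = np.empty(n_paths)
    for i in range(n_paths):
        n_jumps = rng.poisson(lam * t)   # total number of jumps on [0, t]
        jumps = jump_sampler(n_jumps)    # iid jump sizes, independent of the count
        counts[i] = np.sum((jumps > a) & (jumps <= b))
    return counts

# Jumps ~ N(0,1), so nu(A) = lam * P(Y in A); A = (1, inf) is separated from zero
lam, t = 2.0, 3.0
p_A = 0.5 * (1.0 - erf(1.0 / sqrt(2.0)))   # P(N(0,1) > 1)
counts = jump_counts(lam, rng.standard_normal, t, (1.0, np.inf), 20_000)
mean_count = counts.mean()                 # should be close to t * lam * p_A
```

Thinning a Poisson stream by an independent mark is again Poisson, which is exactly what the mean and the mean-equals-variance check reflect.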
The measure $\tilde{\pi}(dt,dy)$ is called the compensated jump measure of $Z$. For $f:\mathbb{R}\longrightarrow\mathbb{R}$, a set $A$ separated from zero and any $t\in[0,T^{\ast}]$, the random variable $$\int_{0}^{t}\int_{A}f(y)\pi(ds,dy)=\sum_{s\in[0,t]}f(\triangle Z_{s})\mathbf{1}_{A}(\triangle Z_{s}),$$ is integrable with expectation $$E\Big{(}\int_{0}^{t}\int_{A}f(y)\pi(ds,dy)\Big{)}=t\int_{A}f(y)\nu(dy).$$ Further, the process $\int_{0}^{t}\int_{A}f(y)\tilde{\pi}(ds,dy)$ is a square integrable martingale and $$\displaystyle E\Big{(}\Big{(}\int_{0}^{t}\int_{A}f(y)\tilde{\pi}(ds,dy)\Big{)}^{2}\Big{)}=t\int_{A}f^{2}(y)\nu(dy),\quad t\in[0,T^{\ast}].$$ (2.5) For $f(y)=y$ and the sequence of sets $A_{n}:=\{\frac{1}{n}<\mid y\mid\leq 1\}$ one can prove, using (2.5) and (2.4), that the sequence $$\int_{0}^{t}\int_{A_{n}}y\ \tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}],n=1,2,...,$$ converges almost surely, uniformly on $[0,T^{\ast}]$. The limit is denoted by $$\int_{0}^{t}\int_{\{\mid y\mid\leq 1\}}y\ \tilde{\pi}(ds,dy):=\lim_{n\rightarrow+\infty}\int_{0}^{t}\int_{A_{n}}y\ \tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}].$$ Now we are ready to formulate the Lévy-Itô decomposition. It states that any Lévy process $Z$ admits the representation $$\displaystyle Z_{t}=at+W(t)+\int_{0}^{t}\int_{\{\mid y\mid\leq 1\}}y\ \tilde{\pi}(ds,dy)+\int_{0}^{t}\int_{\{\mid y\mid>1\}}y\ \pi(ds,dy),\quad t\in[0,T^{\ast}],$$ (2.6) where $a\in\mathbb{R}$ and $W$ is a Wiener process with variance $q>0$, that is, $Var(W_{t})=qt$. Moreover, all the ingredients in (2.6) are independent. The Lévy-Itô decomposition is an important tool in the analysis of Lévy processes. One of its consequences is that it makes it possible to define the stochastic integral $$\displaystyle\int_{0}^{t}f(s)dZ(s),\quad t\in[0,T^{\ast}],$$ (2.7) for integrands $f\in\Phi$, where $\Phi$ is a family of predictable and square integrable processes, i.e.
such that $$\int_{0}^{T^{\ast}}\mid f(s)\mid^{2}ds<+\infty.$$ The definition of the integral over the class $\Phi$ is commonly known when $Z$ is a Wiener process. The passage to the general case is based on the fact that the first integral on the right side of (2.6) is a square integrable martingale. In the case when $Z$ is a martingale, that is, when $$\int_{\{\mid y\mid>1\}}\mid y\mid\nu(dy)<+\infty,\quad\text{and}\quad a=-\int_{\{\mid y\mid>1\}}y\ \nu(dy),$$ the form of $Z$ is $$Z_{t}=W(t)+\int_{0}^{t}\int_{\mathbb{R}}y\ \tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}],$$ and then the integral (2.7) is a local martingale. It turns out that the class $\Phi$ is too narrow to represent every local martingale as a stochastic integral (2.7) with some $f\in\Phi$. However, the integral representation of local martingales is possible in the class of integrals $$\int_{0}^{t}f(s)dW(s),\quad\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}(ds,dy).$$ In the following section we present the construction of the second integral above. Afterwards, in Section 2.2, we formulate the representation theorem for local martingales. 2.1 Integration over the compensated jump measure Here we present the construction of the integral $$\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\ \tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}],$$ where $g:[0,T^{\ast}]\times\mathbb{R}\rightarrow\mathbb{R}$ and $\tilde{\pi}$ stands for the compensated jump measure of the Lévy process $Z$. We start from an intuitive definition of the integral for simple integrands and then extend it to integrands satisfying certain integrability conditions. The procedure provides a class of integrands which allows one to obtain the integral representation of any local martingale, which is discussed in Section 2.2. The construction presented below has been sketched in [9]. Our presentation contains more details since it will serve as a point of reference for the extension of the concept of integration under an equivalent measure in Section 3.1.
The process $g=g(t,y)$ is simple if it has the form $$\displaystyle g(s,y)=g(0,y)\mathbf{1}_{\{s=0\}}+\sum_{i=0}^{n-1}\left(\sum_{j=1}^{m_{i}}g_{ij}\mathbf{1}_{(t_{i},t_{i+1}]}(s)\mathbf{1}_{A_{ij}}(y)\right),\qquad s\in[0,T^{\ast}],\ y\in\mathbb{R},$$ (2.8) where $0=t_{0}<t_{1}<...<t_{n}=T^{\ast}$ is a partition of $[0,T^{\ast}]$ and $A_{ij}$ is a family of subsets of $\mathbb{R}$ which are separated from zero, i.e. $$0\notin\bar{A}_{ij}.$$ For a given subinterval $(t_{i},t_{i+1}]$ the process $g$ is a linear combination of the terms $g_{ij}\mathbf{1}_{(t_{i},t_{i+1}]}(s)\mathbf{1}_{A_{ij}}(y)$, where $g_{ij}$ are bounded $\mathcal{F}_{t_{i}}$-measurable random variables and $A_{ij},j=1,2,...,m_{i}$, are disjoint. Notice that we do not assume that the sets $A_{ij}$ and $A_{kl}$ are disjoint for $i\neq k$. Denote the class of all simple processes by $\mathcal{S}$. For $g\in\mathcal{S}$ the stochastic integral $I(g)$ is defined by $$I(g)_{t}=\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}(ds,dy):=\sum_{i=0}^{n-1}\sum_{j=1}^{m_{i}}g_{ij}\tilde{\pi}((t_{i}\wedge t,t_{i+1}\wedge t]\times A_{ij}),\qquad t\in[0,T^{\ast}].$$ We will show that $I(g)$ is a square integrable martingale and find its second moment. It follows from (2.1)-(2.2) that the processes $$\tilde{\pi}(t,A_{ij})=\pi(t,A_{ij})-t\nu(A_{ij}),\quad t\in[0,T^{\ast}],$$ are square integrable martingales with independent increments and that $\tilde{\pi}(t,A_{ij})$, $\tilde{\pi}(t,A_{kl})$ are independent if $A_{ij}\cap A_{kl}=\emptyset$. As a direct consequence we obtain the following result.
Proposition 2.1 For sets $A,B\subseteq\mathbb{R}$ separated from zero and $s<t$, $s,t\in[0,T^{\ast}]$, the following hold: $$\displaystyle E[\tilde{\pi}^{2}\left((s,t]\times A\right)\mid\mathcal{F}_{s}]=(t-s)\nu(A),$$ $$\displaystyle E[\tilde{\pi}\left((s,t]\times A\right)\cdot\tilde{\pi}\left((s,t]\times B\right)\mid\mathcal{F}_{s}]=0,\quad\text{if}\ A\cap B=\emptyset,$$ $$\displaystyle E[\tilde{\pi}\left((s,t]\times A\right)\cdot\tilde{\pi}\left((u,v]\times B\right)\mid\mathcal{F}_{u}]=0,\quad\text{for}\ t\leq u<v\leq T^{\ast}.$$ Proposition 2.1 is a key tool for proving the isometric formula below. Proposition 2.2 For $g\in\mathcal{S}$ the integral $I(g)$ is a square integrable martingale and $$\displaystyle E\left[\mid I(g)_{t}\mid^{2}\right]=E\left[\int_{0}^{t}\int_{\mathbb{R}}\mid g(s,y)\mid^{2}ds\nu(dy)\right],\quad t\in[0,T^{\ast}].$$ (2.9) Proof: It is clear that $I(g)$ is a martingale. For the sake of simplicity we prove (2.9) for $t=T^{\ast}$ only. From the definition of the integral it follows that $$E\Big{[}\mid I(g)_{T^{\ast}}\mid^{2}\Big{]}=E\Big{[}\sum_{i=0}^{n-1}\sum_{j=1}^{m_{i}}\sum_{k=0}^{n-1}\sum_{l=1}^{m_{k}}g_{ij}g_{kl}\cdot\tilde{\pi}((t_{i},t_{i+1}]\times A_{ij})\cdot\tilde{\pi}((t_{k},t_{k+1}]\times A_{kl})\Big{]}.$$ Using Proposition 2.1, let us calculate the expectations of the terms appearing in the above sum.
We need to consider the following three cases: a) if $i=k$ and $j=l$ then $$\displaystyle E[g_{ij}g_{ij}\cdot\tilde{\pi}^{2}((t_{i},t_{i+1}]\times A_{ij})]=E\Big{[}\mid g_{ij}\mid^{2}E[\tilde{\pi}^{2}((t_{i},t_{i+1}]\times A_{ij})\mid\mathcal{F}_{t_{i}}]\Big{]}=E\big{[}\mid g_{ij}\mid^{2}\ (t_{i+1}-t_{i})\nu(A_{ij})\big{]},$$ b) if $i=k$ and $j\neq l$ then $$\displaystyle E[g_{ij}g_{il}\cdot\tilde{\pi}((t_{i},t_{i+1}]\times A_{ij})\cdot\tilde{\pi}((t_{i},t_{i+1}]\times A_{il})]=E\Big{[}g_{ij}g_{il}\cdot E[\tilde{\pi}((t_{i},t_{i+1}]\times A_{ij})\tilde{\pi}((t_{i},t_{i+1}]\times A_{il})\mid\mathcal{F}_{t_{i}}]\Big{]}=0,$$ c) if $i\neq k$ then $$\displaystyle E[g_{ij}g_{kl}\cdot\tilde{\pi}((t_{i},t_{i+1}]\times A_{ij})\cdot\tilde{\pi}((t_{k},t_{k+1}]\times A_{kl})]=E\Big{[}g_{ij}g_{kl}\cdot E[\tilde{\pi}((t_{i},t_{i+1}]\times A_{ij})\tilde{\pi}((t_{k},t_{k+1}]\times A_{kl})\mid\mathcal{F}_{t_{k}\vee t_{i}}]\Big{]}=0.$$ From the above it follows that $$\displaystyle E\Big{[}\big{|}\int_{0}^{T^{\ast}}\int_{\mathbb{R}}g(s,y)\tilde{\pi}(ds,dy)\big{|}^{2}\Big{]}=\sum_{i=0}^{n-1}\sum_{j=1}^{m_{i}}E[\mid g_{ij}\mid^{2}\ (t_{i+1}-t_{i})\nu(A_{ij})]=E\Big{[}\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid^{2}ds\nu(dy)\Big{]},$$ which is (2.9).
$\square$ The definition of the integral can be extended to the class of all predictable processes satisfying $$E\Big{[}\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid^{2}ds\nu(dy)\Big{]}<+\infty.$$ If this is the case then there exists a sequence $g_{n}\in\mathcal{S},n=1,2,...$, such that $$E\Big{[}\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)-g_{n}(s,y)\mid^{2}ds\nu(dy)\Big{]}\longrightarrow 0,$$ which implies that $$E[\mid I(g_{n})_{T^{\ast}}-I(g_{m})_{T^{\ast}}\mid^{2}]\underset{n,m}{\longrightarrow}0.$$ The condition above tells us that $\{I(g_{n})\}$ is a Cauchy sequence in the space of square integrable martingales, which is complete. Thus there exists a limit $I(g)$, and it defines the integral for the integrand $g$, that is $$\int_{0}^{t}g(s,y)\tilde{\pi}(ds,dy)=I(g)_{t}:=\lim_{n\rightarrow+\infty}I(g_{n})_{t},$$ and the isometric formula (2.9) still holds. Let us introduce the class $\Psi_{2}$ of all predictable processes satisfying $$\Psi_{2}:\qquad\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid^{2}ds\nu(dy)<+\infty,\quad P-a.s.$$ By using localizing arguments one can show that for $g\in\Psi_{2}$ the integral $$\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}],$$ is a well defined locally square integrable martingale. The second class of $\tilde{\pi}$-integrable processes consists of all predictable ones such that $$\displaystyle E\left[\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid ds\nu(dy)\right]<+\infty.$$ (2.10) Then it follows from the definition of the compensating measure that $$\displaystyle\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}(ds,dy):=\int_{0}^{t}\int_{\mathbb{R}}g(s,y){\pi}(ds,dy)-\int_{0}^{t}\int_{\mathbb{R}}g(s,y)ds\nu(dy),\qquad t\in[0,T^{\ast}],$$ (2.11) is a martingale.
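The isometric formula (2.9) lends itself to a quick numerical sanity check. The sketch below is our own toy example (all parameters illustrative, not from the paper): for the one-term simple integrand $g(s,y)=c\,\mathbf{1}_{(0,T]}(s)\mathbf{1}_{A}(y)$ we have $I(g)_{T}=c(\pi(T,A)-T\nu(A))$, and by (2.1) the count $\pi(T,A)$ is Poisson with mean $T\nu(A)$, so $I(g)_{T}$ can be sampled directly.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# Toy check of the isometry (2.9): jumps ~ N(0,1) arriving at rate lam,
# and A = (1, inf) is separated from zero, so nu(A) = lam * P(N(0,1) > 1).
lam, T, c = 2.0, 1.0, 0.7
nu_A = lam * 0.5 * (1.0 - erf(1.0 / sqrt(2.0)))

# By (2.1), pi(T, A) is Poisson with mean T*nu(A); sample it directly
counts = rng.poisson(nu_A * T, size=200_000)
I = c * (counts - T * nu_A)              # samples of I(g)_T
second_moment = float(np.mean(I**2))     # should approximate c^2 * T * nu(A)
```

For this integrand the right side of (2.9) is $c^{2}T\nu(A)$, and the zero sample mean of `I` reflects the martingale property of $I(g)$.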
In the class $\Psi_{1}$ of predictable processes satisfying $$\Psi_{1}:\qquad\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid ds\nu(dy)<+\infty,\qquad P-a.s.,$$ the condition (2.10) holds locally, and thus $$\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}],$$ is a local martingale. The class of processes which plays the crucial role in representing local martingales is $\Psi_{1,2}$, defined below: $$g\in\Psi_{1,2}\qquad\Longleftrightarrow\qquad g\mathbf{1}_{\{\mid g\mid\leq 1\}}\in\Psi_{2}\quad\text{and}\quad g\mathbf{1}_{\{\mid g\mid>1\}}\in\Psi_{1}.$$ For each $g\in\Psi_{1,2}$ the integral is defined by the decomposition $$\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}(ds,dy):=\int_{0}^{t}\int_{\mathbb{R}}g\mathbf{1}_{\{\mid g\mid\leq 1\}}\tilde{\pi}(ds,dy)+\int_{0}^{t}\int_{\mathbb{R}}g\mathbf{1}_{\{\mid g\mid>1\}}\tilde{\pi}(ds,dy),$$ and is a local martingale. The class $\Psi_{1,2}$ can be described alternatively by the condition $$\Psi_{1,2}:\qquad\int_{0}^{T^{\ast}}\int_{\mathbb{R}}(\mid g(s,y)\mid^{2}\wedge\mid g(s,y)\mid)ds\nu(dy)<+\infty,\quad P-a.s.$$ 2.2 Martingale representation and characterisation of equivalent measures Let $\Phi$ stand for the class of processes integrable with respect to the Wiener process, that is, $\phi\in\Phi$ if $\phi$ is predictable and satisfies $$\int_{0}^{T^{\ast}}\mid\phi(s)\mid^{2}ds<+\infty,\quad P-a.s.$$ For any $\phi\in\Phi$ the integral $$\int_{0}^{t}\phi(s)dW(s),\quad t\in[0,T^{\ast}],$$ is a locally square integrable local martingale. The classes of integrands $\Phi$ and $\Psi_{1,2}$ introduced above are sufficiently large to represent local martingales as stochastic integrals. The result below has been proven in [9]. Theorem 2.3 Let $M$ be an $\mathbb{R}$-valued $P$-local martingale on $[0,T^{\ast}]$.
Then there exist $\phi_{M}\in\Phi$ and $\psi_{M}\in\Psi_{1,2}$ satisfying $$\displaystyle M_{t}=M_{0}+\int_{0}^{t}\phi_{M}(s)dW(s)+\int_{0}^{t}\int_{\mathbb{R}}\psi_{M}(s,y)\tilde{\pi}(ds,dy),\qquad t\in[0,T^{\ast}].$$ (2.12) Moreover, the pair $(\phi_{M},\psi_{M})$ is unique, i.e., if $(\phi_{M}^{\prime},\psi_{M}^{\prime})$ satisfies (2.12) then $$\displaystyle\phi_{M}=\phi_{M}^{\prime},\quad dP\times dt-\ \text{a.s.}\quad\text{and}\quad\psi_{M}=\psi_{M}^{\prime},\quad dP\times dt\times d\nu-\ \text{a.s.}$$ Let us consider a measure $Q$ on $(\Omega,\mathcal{F})$ which is equivalent to $P$. The equivalence implies the existence of a positive density process, which can be written in the form $$\displaystyle\rho_{t}:=\frac{dQ}{dP}\Big{\arrowvert}_{\mathcal{F}_{t}}=e^{Y_{t}},\qquad t\in[0,T^{\ast}],$$ (2.13) with $Y$ such that $\rho$ is a martingale under $P$. The following generalization of the classical Girsanov theorem provides an explicit form of the integral representation of $\rho$ and characterizes the process $Z$ under the measure $Q$; for the proof see [9]. Theorem 2.4 (Girsanov) Let $Q\sim P$ and let $Z$ be a Lévy process under $P$ with characteristic triplet $(a,q,\nu)$. a) There exists a pair of processes $(\phi,\psi)$ with $\phi\in\Phi$ and $e^{\psi}-1\in\Psi_{1,2}$ such that the density process (2.13) has the form $$\displaystyle d\rho(t)=\rho(t-)\left[\phi(t)dW(t)+\int_{\mathbb{R}}(e^{\psi(t,y)}-1)\tilde{\pi}(dt,dy)\right],\quad\rho(0)=1,\quad t\in[0,T^{\ast}],$$ (2.14) with $E[\rho_{t}]=1,t\in[0,T^{\ast}]$. b) Under the measure $Q$ the process $$\displaystyle\tilde{W}(t):=W(t)-\int_{0}^{t}\phi(s)ds,\quad t\in[0,T^{\ast}],$$ (2.15) is a Wiener process with variance $q$, and the random measure $$\displaystyle\nu_{Q}(dt,dy):=e^{\psi(t,y)}dt\nu(dy),\quad t\in[0,T^{\ast}],y\in\mathbb{R},$$ (2.16) is a compensating measure for the jump measure $\pi(dt,dy)$ of $Z$.
c) Under the measure $Q$ the process $Z$ admits the representation $$\displaystyle Z(t)=\tilde{a}_{t}+\tilde{W}(t)+\int_{0}^{t}\int_{\{\mid y\mid\leq 1\}}y\ \tilde{\pi}_{Q}(ds,dy)+\int_{0}^{t}\int_{\{\mid y\mid>1\}}y\ \pi(ds,dy),$$ (2.17) with $$\tilde{a}_{t}:=at+\int_{0}^{t}\phi(s)ds+\int_{0}^{t}\int_{\{\mid y\mid\leq 1\}}y(e^{\psi(s,y)}-1)ds\nu(dy).$$ A pair $(\phi,\psi)$ appearing in the theorem will be called a generating pair of the measure $Q$. The Doléans-Dade equation (2.14) can be solved explicitly to see that the density process $\rho$ is indeed of the form (2.13) with $$\displaystyle Y(t)=\int_{0}^{t}\phi(s)dW(s)-\frac{1}{2}\int_{0}^{t}\mid\sqrt{q}\phi(s)\mid^{2}ds+\int_{0}^{t}\int_{\mathbb{R}}(e^{\psi(s,y)}-1)\tilde{\pi}(ds,dy)-\int_{0}^{t}\int_{\mathbb{R}}(e^{\psi(s,y)}-1-\psi(s,y))\pi(ds,dy).$$ (2.18) To comment on the theorem, let us introduce two sets $\underline{A}=\underline{A}(t)$ and $\hat{A}=\hat{A}(t)$ by $$\underline{A}:=\{(s,y)\in[0,t]\times\mathbb{R}:\mid e^{\psi(s,y)}-1\mid\leq 1\},\quad\hat{A}:=\{(s,y)\in[0,t]\times\mathbb{R}:\mid e^{\psi(s,y)}-1\mid>1\}.$$ First, let us explain why (2.17) is well defined. Indeed, from the definition of the class $\Psi_{1,2}$ it follows that $$\displaystyle\int_{0}^{t}\int_{B}\mid y(e^{\psi(s,y)}-1)\mid ds\nu(dy)=\int_{[0,t]\times B\cap\underline{A}}\mid y(e^{\psi(s,y)}-1)\mid ds\nu(dy)+\int_{[0,t]\times B\cap\hat{A}}\mid y(e^{\psi(s,y)}-1)\mid ds\nu(dy)$$ $$\displaystyle\leq\left(\int_{0}^{t}\int_{B}\mid y\mid^{2}ds\nu(dy)\right)^{\frac{1}{2}}\left(\int_{\underline{A}}\mid e^{\psi(s,y)}-1\mid^{2}ds\nu(dy)\right)^{\frac{1}{2}}+\int_{\hat{A}}\mid e^{\psi(s,y)}-1\mid ds\nu(dy)<+\infty,$$ where $B:=\{y:\mid y\mid\leq 1\}$, and thus $\tilde{a}_{t}$ is indeed well defined.
The compensating measure $\nu_{Q}(ds,dy)$ satisfies $$\displaystyle\int_{0}^{t}\int_{\mathbb{R}}(\mid y\mid^{2}\wedge\ 1)\nu_{Q}(ds,dy)<+\infty,$$ (2.19) because $$\displaystyle\int_{0}^{t}\int_{\mathbb{R}}(\mid y\mid^{2}\wedge\ 1)e^{\psi(s,y)}\ ds\nu(dy)<+\infty\quad\Longleftrightarrow\quad\int_{0}^{t}\int_{\mathbb{R}}(\mid y\mid^{2}\wedge\ 1)\mid e^{\psi(s,y)}-1\mid\ ds\nu(dy)<+\infty,$$ and using a similar decomposition with the sets $\underline{A}$ and $\hat{A}$ as above we obtain $$\displaystyle\int_{0}^{t}\int_{\mathbb{R}}(\mid y\mid^{2}\wedge\ 1)\mid e^{\psi(s,y)}-1\mid ds\nu(dy)\leq\int_{0}^{t}\int_{\mathbb{R}}(\mid y\mid^{2}\wedge\ 1)ds\nu(dy)+\int_{\hat{A}}\mid e^{\psi(s,y)}-1\mid ds\nu(dy)<+\infty,$$ hence all the terms in (2.17) are well defined. Actually, (2.17) follows immediately from the Lévy-Itô decomposition (2.6) by adding and subtracting the terms $\int\phi\,ds$ and $\int\int_{B}ye^{\psi}ds\nu(dy)$. It follows that under $Q$ the process $Z$ is a Lévy process only if $\phi$ is a deterministic constant and $\psi$ is a deterministic function independent of time, that is, $\phi(\omega,t)=\phi$, $\psi(\omega,t,y)=\psi(y)$. This is a very particular situation, and it follows that, in general, $Z$ is no longer a Lévy process under $Q$. In particular, the measure $Q$ changes the stochastic properties of the jumps of $Z$ because the new compensating measure $\nu_{Q}$ is random and time dependent. Hence $\pi$ is no longer a Poisson random measure under $Q$. It is clear that under $Q$ the small jumps are square summable and there is only a finite number of big jumps on $[0,T^{\ast}]$ because $Q\sim P$. However, the condition (2.19) does not imply that the corresponding expectations are finite, as was the case under the physical measure $P$.
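In the special case of a generating pair with $\phi=0$ and a constant $\psi$, the tilted compensator (2.16) reduces to $e^{\psi}dt\nu(dy)$ and $Z$ stays a Lévy process under $Q$, with jump intensity scaled by $e^{\psi}$. The following sketch is our own toy setup (unit jumps, all parameters illustrative, not from the paper) checking this numerically by weighting with the density $\rho_{T^{\ast}}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pure-jump toy example: Z is a Poisson process with unit jumps at rate lam,
# phi = 0 and psi constant, so the density (2.13)-(2.14) reduces to
# rho_T = exp(psi*N_T - lam*T*(e^psi - 1)).
lam, T, psi = 1.5, 2.0, 0.4
N = rng.poisson(lam * T, size=200_000)   # N_T under the physical measure P
rho = np.exp(psi * N - lam * T * (np.exp(psi) - 1.0))

# (2.16) with constant psi tilts the jump intensity to exp(psi)*lam, so the
# Q-expectation E^Q[N_T] = E^P[rho_T * N_T] should be close to exp(psi)*lam*T.
mean_rho = float(np.mean(rho))           # should be close to 1 (martingale density)
mean_Q = float(np.mean(rho * N))
```

The first check confirms $E^{P}[\rho_{t}]=1$ from part a) of the theorem; the second exhibits the intensity tilt of part b) in its simplest deterministic form.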
Since the integral in (2.19) is continuous in $t$, it follows from (2.19) that there exists a localizing sequence of stopping times $\{\tau_{n},n=1,2,...\}$ such that $$E^{Q}\Big{[}\int_{0}^{\tau_{n}}\int_{\mathbb{R}}(\mid y\mid^{2}\wedge\ 1)\nu_{Q}(ds,dy)\Big{]}<+\infty,\quad n=1,2,...,$$ which implies that $$\displaystyle E^{Q}\Big{[}\sum_{s\in[0,\tau_{n}]}\mid\triangle Z_{s}\mid^{2}\mathbf{1}_{\{\mid\triangle Z_{s}\mid\leq 1\}}\Big{]}<+\infty,\quad E^{Q}\Big{[}\sum_{s\in[0,\tau_{n}]}\mathbf{1}_{\{\mid\triangle Z_{s}\mid>1\}}\Big{]}<+\infty,\quad n=1,2,...\ .$$ Moreover, for any set $A\subseteq\mathbb{R}$ with $0\notin\bar{A}$ we have $$\displaystyle\nu_{Q}([0,t],A)=\int_{0}^{t}\int_{A}e^{\psi(s,y)}ds\nu(dy)<+\infty,$$ (2.20) and using similar arguments as above one can show that the process $$\tilde{\pi}_{Q}(t,A),\qquad t\geq 0,$$ is a $Q$-local martingale, but in general not a $Q$-martingale. The property (2.20) follows from the estimate $$\int_{0}^{t}\int_{A}\mid e^{\psi(s,y)}-1\mid ds\nu(dy)\leq t\nu(A)+\int_{\hat{A}}\mid e^{\psi(s,y)}-1\mid ds\nu(dy)<+\infty.$$ 3 Martingale representation under equivalent measures As explained in Section 2.2, the process $Z$, which is a Lévy process under $P$, is no longer a Lévy process under an equivalent measure $Q$. Its jump measure is not a Poisson measure, and hence Theorem 2.3 cannot be applied to $Q$-local martingales. Our aim is to formulate a result analogous to Theorem 2.3 and, to this end, also to construct the integral over the compensated jump measure of $Z$ under $Q$. A comprehensive exposition of this part of the theory is missing in the literature. 3.1 Integration over the compensated jump measure under $Q$ Let $(\phi,\psi)$ be a generating pair of a measure $Q\sim P$. In view of Theorem 2.4, the jump measure $\pi(dt,dy)$ of $Z$ has a new compensating measure under $Q$ of the form $\nu_{Q}(dt,dy)=e^{\psi(t,y)}dt\nu(dy)$.
Consequently, $\tilde{\pi}_{Q}(ds,dy)=\pi(ds,dy)-e^{\psi(s,y)}ds\nu(dy)$ is a compensated jump measure of $Z$ under $Q$. Our aim now is to construct the stochastic integral $$\displaystyle\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}_{Q}(ds,dy),\qquad t\in[0,T^{\ast}],$$ (3.1) for $g:[0,T^{\ast}]\times\mathbb{R}\longrightarrow\mathbb{R}$. We will start from simple processes and then extend the construction to a wider class of integrands. First notice that $$E^{Q}[\pi(t,A)]=E^{Q}[\nu_{Q}(t,A)]=E^{Q}\Big{[}\int_{0}^{t}\int_{A}e^{\psi(s,y)}ds\nu(dy)\Big{]},\quad t\geq 0,$$ may be infinite even if the set $A$ is separated from zero. For that reason we introduce an additional restriction on the subsets of $\mathbb{R}$ separated from zero, namely $$\displaystyle E^{Q}[\nu_{Q}([0,T^{\ast}]\times A)]<+\infty.$$ (3.2) If (3.2) holds then the process $\tilde{\pi}_{Q}(t,A)=\pi(t,A)-\nu_{Q}(t,A)$ is a $Q$-martingale, although its increments are not independent. In fact $\tilde{\pi}_{Q}(t,A)$ is then a $Q$-square integrable martingale and its properties are formulated below. Lemma 3.1 Let $A,B\subseteq\mathbb{R}$ be such that (3.2) holds for both. Then the processes $\tilde{\pi}_{Q}(t,A)$, $\tilde{\pi}_{Q}(t,B)$ are square integrable martingales under $Q$ on $[0,T^{\ast}]$ and their quadratic covariation is given by $$\displaystyle\langle\tilde{\pi}_{Q}(t,A),\tilde{\pi}_{Q}(t,B)\rangle=\nu_{Q}([0,t]\times A\cap B),\quad t\in[0,T^{\ast}].$$ (3.3) In particular, the processes $$\displaystyle(\tilde{\pi}_{Q}(t,A))^{2}-\nu_{Q}([0,t]\times A);\qquad\tilde{\pi}_{Q}(t,A)\cdot\tilde{\pi}_{Q}(t,B)-\nu_{Q}([0,t],A\cap B),\quad t\in[0,T^{\ast}]$$ are $Q$-martingales and if $A\cap B=\emptyset$ then $$E^{Q}[\tilde{\pi}_{Q}(t,A)\cdot\tilde{\pi}_{Q}(t,B)]=0,\quad t\in[0,T^{\ast}].$$ Proof: First let us notice that $\tilde{\pi}_{Q}(t,A)$, $\tilde{\pi}_{Q}(t,B)$ are $Q$-locally square integrable martingales.
Indeed, the process $\pi(t,A)$ has jumps of size $1$ and $\nu_{Q}([0,t]\times A)$ is continuous, so both are locally bounded; thus $\tilde{\pi}_{Q}(t,A)$ is locally bounded, hence $Q$-locally square integrable. It follows that the process $\langle\tilde{\pi}_{Q}(t,A),\tilde{\pi}_{Q}(t,B)\rangle$ is well defined. Application of the Itô product formula, see Theorem 4.4.13 in [1], yields $$\displaystyle\tilde{\pi}_{Q}(t,A)\cdot\tilde{\pi}_{Q}(t,B)=\int_{0}^{t}\tilde{\pi}_{Q}(s-,A)d\tilde{\pi}_{Q}(s,B)+\int_{0}^{t}\tilde{\pi}_{Q}(s-,B)d\tilde{\pi}_{Q}(s,A)+\sum_{s\in[0,t]}\triangle\tilde{\pi}_{Q}(s,A)\cdot\triangle\tilde{\pi}_{Q}(s,B),\qquad t\in[0,T^{\ast}].$$ The first two integrals on the right-hand side are $Q$-local martingales as stochastic integrals of locally bounded processes with respect to martingales. Since both processes $\tilde{\pi}_{Q}(t,A),\tilde{\pi}_{Q}(t,B)$ have jumps of size $1$ we have $$\sum_{s\in[0,t]}\triangle\tilde{\pi}_{Q}(s,A)\cdot\triangle\tilde{\pi}_{Q}(s,B)=\pi([0,t]\times A\cap B),\quad t\in[0,T^{\ast}].$$ Compensating the last term we obtain $$\sum_{s\in[0,t]}\triangle\tilde{\pi}_{Q}(s,A)\cdot\triangle\tilde{\pi}_{Q}(s,B)=\tilde{\pi}_{Q}([0,t]\times A\cap B)+\nu_{Q}([0,t]\times A\cap B),\quad t\in[0,T^{\ast}].$$ Finally, the process $$\tilde{\pi}_{Q}(t,A)\cdot\tilde{\pi}_{Q}(t,B)-\nu_{Q}([0,t]\times A\cap B),\qquad t\in[0,T^{\ast}],$$ is a $Q$-local martingale, which gives (3.3). Further it follows from the equality $$E^{Q}[\tilde{\pi}_{Q}(T^{\ast},A)^{2}]=E^{Q}[\nu_{Q}([0,T^{\ast}]\times A)]<+\infty,$$ that $\tilde{\pi}_{Q}(t,A)$ is in fact a $Q$-square integrable martingale.
$\square$ In the first step one constructs the integral (3.1) for a simple process having the representation $$\displaystyle g(s,y)=g(0,y)\mathbf{1}_{\{s=0\}}+\sum_{i=0}^{n-1}\left(\sum_{j=1}^{m_{i}}g_{ij}\mathbf{1}_{(t_{i},t_{i+1}]}(s)\mathbf{1}_{A_{ij}}\right),\qquad s\in[0,T^{\ast}],\ y\in U,$$ (3.4) where $0=t_{0}<t_{1}<...<t_{n}=T^{\ast}$ is a partition of $[0,T^{\ast}]$ and $\{A_{ij}\}$ is a family of subsets of $\mathbb{R}$ separated from zero such that $$\displaystyle E^{Q}[\nu_{Q}([0,T^{\ast}]\times A_{ij})]<+\infty.$$ (3.5) The set of simple processes, denoted by $\mathcal{S}^{Q}$, is analogous to the class $\mathcal{S}$ used for integration under $P$. The difference lies in the condition (3.5) imposed on the sets $A_{ij}$, and this requirement is related to the different form of the compensating measure under $Q$. Actually, under $P$, the analogue of (3.5) holds automatically if only $\{A_{ij}\}$ are separated from zero. It turns out that for $g\in\mathcal{S}^{Q}$ the stochastic integral $$I^{Q}(g)_{t}=\int_{0}^{t}\int_{U}g(s,y)\tilde{\pi}_{Q}(ds,dy):=\sum_{i=0}^{n-1}\sum_{j=1}^{m_{i}}g_{ij}\tilde{\pi}_{Q}((t_{i}\wedge t,t_{i+1}\wedge t]\times A_{ij}),\qquad t\in[0,T^{\ast}],$$ is a $Q$-square integrable martingale. This can be proved with Proposition 3.2 below in hand, which is a counterpart of Proposition 2.1 and describes properties of the $Q$-compensated jump measure. Its proof is directly based on Lemma 3.1 and is left to the reader.
Proposition 3.2 For sets $A,B\subseteq U$ satisfying (3.2) and $0\leq s<t\leq T^{\ast}$ the following hold $$\displaystyle E^{Q}[\tilde{\pi}_{Q}^{2}\left((s,t]\times A\right)\mid\mathcal{F}_{s}]=E^{Q}[\nu_{Q}\left((s,t]\times A\right)\mid\mathcal{F}_{s}],$$ $$\displaystyle E^{Q}[\tilde{\pi}_{Q}\left((s,t]\times A\right)\cdot\tilde{\pi}_{Q}\left((s,t]\times B\right)\mid\mathcal{F}_{s}]=0,\quad\text{if}\ A\cap B=\emptyset,$$ $$\displaystyle E^{Q}[\tilde{\pi}_{Q}\left((s,t]\times A\right)\cdot\tilde{\pi}_{Q}\left((u,v]\times B\right)\mid\mathcal{F}_{u}]=0,\quad\text{for}\ t\leq u<v\leq T^{\ast}.$$ Then, mimicking the proof of Proposition 2.2, one can prove the following. Proposition 3.3 For a process $g\in\mathcal{S}^{Q}$ the integral $I^{Q}(g)$ is a $Q$-square integrable martingale and $$\displaystyle E^{Q}\left[\big{|}\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}_{Q}(ds,dy)\big{|}^{2}\right]=E^{Q}\left[\int_{0}^{t}\int_{\mathbb{R}}\mid g(s,y)\mid^{2}\nu_{Q}(ds,dy)\right],\quad t\in[0,T^{\ast}].$$ (3.6) To extend the definition of the integral to a larger class of integrands we use the same arguments as under the measure $P$. If a predictable process $g$ satisfies $$E^{Q}\Big{[}\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid^{2}\nu_{Q}(ds,dy)\Big{]}<+\infty,$$ then $I^{Q}(g)$ is defined by approximation. It is a $Q$-square integrable martingale and (3.6) holds. By $\Psi^{Q}_{2}$ we denote the class of all predictable processes satisfying $$\Psi^{Q}_{2}:\qquad\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid^{2}\nu_{Q}(ds,dy)<+\infty,\qquad Q-a.s.$$ For $g\in\Psi^{Q}_{2}$ the integral $I^{Q}(g)$ is a $Q$-locally square integrable martingale.
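The isometry (3.6) can be sanity-checked by Monte Carlo in its simplest instance. For $g=\mathbf{1}_{A}$ and a deterministic tilt, the integral $I^{Q}(g)_{t}$ is the compensated count $\pi(t,A)-\nu_{Q}([0,t]\times A)$; if jumps in $A$ arrive as a Poisson process, the isometry reduces to $\mathrm{Var}(N_{t}-\lambda t)=\lambda t$. The intensity $\lambda$ and sample sizes below are illustrative assumptions, not quantities from the text.

```python
import numpy as np

# Sketch of the simplest case of the isometry (3.6): g = 1_A, jumps in A
# arriving as a Poisson process with (assumed) intensity lam under Q.
rng = np.random.default_rng(0)
lam, t, n_paths = 2.0, 1.0, 200_000
counts = rng.poisson(lam * t, size=n_paths)   # pi(t, A) on each path
compensated = counts - lam * t                # the martingale pi~_Q(t, A)
lhs = np.mean(compensated ** 2)               # estimates E[ I(g)_t^2 ]
rhs = lam * t                                 # E[ int |g|^2 dnu_Q ] = nu([0,t] x A)
print(lhs, rhs)
```

With 200,000 paths the sampling error of `lhs` is well below the tolerance of a rough check, so the two sides agree to a few percent.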
If $g$ satisfies the condition $$\displaystyle E^{Q}\left[\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid\nu_{Q}(ds,dy)\right]<+\infty,$$ (3.7) then one defines $$\displaystyle\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}_{Q}(ds,dy):=\int_{0}^{t}\int_{\mathbb{R}}g(s,y){\pi}(ds,dy)-\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\nu_{Q}(ds,dy),\qquad t\in[0,T^{\ast}],$$ (3.8) which is a $Q$-martingale. If $g\in\Psi^{Q}_{1}$, where $$\Psi^{Q}_{1}:\qquad\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mid g(s,y)\mid\nu_{Q}(ds,dy)<+\infty,\qquad Q-a.s.,$$ then (3.7) holds locally and thus the process $$\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}_{Q}(ds,dy),\quad t\in[0,T^{\ast}],$$ is a $Q$-local martingale. Finally, for the representation of $Q$-local martingales we need a class $\Psi^{Q}_{1,2}$ defined by $$g\in\Psi^{Q}_{1,2}\qquad\Longleftrightarrow\qquad g\mathbf{1}_{\{\mid g\mid\leq 1\}}\in\Psi^{Q}_{2}\quad\text{and}\quad g\mathbf{1}_{\{\mid g\mid>1\}}\in\Psi^{Q}_{1}.$$ For each $g\in\Psi^{Q}_{1,2}$ the integral is defined by the decomposition $$\int_{0}^{t}\int_{\mathbb{R}}g(s,y)\tilde{\pi}_{Q}(ds,dy):=\int_{0}^{t}\int_{\mathbb{R}}g\mathbf{1}_{\{\mid g\mid\leq 1\}}\tilde{\pi}_{Q}(ds,dy)+\int_{0}^{t}\int_{\mathbb{R}}g\mathbf{1}_{\{\mid g\mid>1\}}\tilde{\pi}_{Q}(ds,dy),$$ and is a $Q$-local martingale. It is clear that $g\in\Psi^{Q}_{1,2}$ if and only if $$\int_{0}^{T^{\ast}}\int_{U}(\mid g(s,y)\mid^{2}\wedge\mid g(s,y)\mid)\nu_{Q}(ds,dy)<+\infty,\quad Q-a.s.$$ 3.2 Martingale representation under $Q$ Using the class of integrands described in Section 3.1 we can decompose any $Q$-local martingale into integral form.
Theorem 3.4 Let $Q$ be a measure equivalent to $P$ with a generating pair $(\phi,\psi)$, $\phi\in\Phi$ and $$\displaystyle e^{\psi}-1\in\Psi_{1}.$$ (3.9) Any $Q$-local martingale $M_{t},t\in[0,T^{\ast}],$ admits a representation of the form $$\displaystyle M_{t}=M_{0}+\int_{0}^{t}\tilde{\phi}_{M}(s)d\tilde{W}(s)+\int_{0}^{t}\int_{\mathbb{R}}\tilde{\psi}_{M}(s,y)\tilde{\pi}_{Q}(ds,dy),\quad t\in[0,T^{\ast}]$$ (3.10) with $\tilde{\phi}_{M}\in\Phi$ and $\tilde{\psi}_{M}\in\Psi^{Q}_{1,2}$. Moreover, the pair $(\tilde{\phi}_{M},\tilde{\psi}_{M})$ is unique, i.e., if $(\tilde{\phi}_{M}^{\prime},\tilde{\psi}_{M}^{\prime})$ satisfies (3.10) then $$\displaystyle\tilde{\phi}_{M}=\tilde{\phi}_{M}^{\prime}\quad dQ\times dt-\ \text{a.s.}\quad\text{and}\quad\tilde{\psi}_{M}=\tilde{\psi}_{M}^{\prime}\quad dQ\times d\nu_{Q}-\ \text{a.s.}$$ In the proof we will use the following classical result; for its proof see, for instance, Proposition 3.8 in [7]. Lemma 3.5 Let $Q$ be equivalent to $P$ and have the density process $\rho_{t}:=\frac{dQ}{dP}\mid_{\mathcal{F}_{t}},t\in[0,T^{\ast}]$. Then the process $M(t)$ is a $Q$-local martingale if and only if $M(t)\rho(t)$ is a $P$-local martingale. Proof: [of Theorem 3.4] We consider the case with no Wiener part, that is, the density process has the form $$\displaystyle d\rho(t)=\rho(t-)\int_{\mathbb{R}}(e^{\psi(t,y)}-1)\tilde{\pi}(dt,dy),\quad\rho(0)=1,\quad t\in[0,T^{\ast}].$$ (3.11) The passage to the general case does not cause serious difficulties. In view of Lemma 3.5 the process $\rho_{t}M_{t},t\in[0,T^{\ast}]$ is a $P$-local martingale and by Theorem 2.3 admits the representation $$\displaystyle\rho_{t}M_{t}=\rho_{0}M_{0}+\int_{0}^{t}\int_{\mathbb{R}}\psi_{M}(s,y)\tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}],$$ (3.12) for some $\psi_{M}\in\Psi_{1,2}$.
From the Itô formula and (3.11) it follows that $$\displaystyle\frac{1}{\rho_{t}}=1-\int_{0}^{t}\frac{1}{\rho_{s-}^{2}}d\rho_{s}+\sum_{s\in[0,t]}\left\{\frac{1}{\rho_{s}}-\frac{1}{\rho_{s-}}+\frac{1}{\rho^{2}_{s-}}\triangle\rho_{s}\right\}$$ $$\displaystyle=1-\int_{0}^{t}\int_{\mathbb{R}}\frac{1}{\rho_{s-}}(e^{\psi(s,y)}-1)\tilde{\pi}(ds,dy)+\int_{0}^{t}\int_{\mathbb{R}}\frac{1}{\rho_{s-}}\left(e^{-\psi(s,y)}+e^{\psi(s,y)}-2\right)\pi(ds,dy),\quad t\in[0,T^{\ast}].$$ (3.13) Application of the Itô product formula together with (3.12) and (3.13) yields $$\displaystyle M_{t}=(M_{t}\rho_{t})\cdot\frac{1}{\rho_{t}}=M_{0}+\int_{0}^{t}(M\rho)_{s-}d\big{(}\frac{1}{\rho_{s}}\big{)}+\int_{0}^{t}\frac{1}{\rho_{s-}}d(M\rho)_{s}+\big{[}M\rho,\frac{1}{\rho}\big{]}_{t}$$ $$\displaystyle=M_{0}+\int_{0}^{t}\int_{\mathbb{R}}M_{s-}\Big{(}(e^{-\psi}+e^{\psi}-2)\pi(ds,dy)-(e^{\psi}-1)\tilde{\pi}(ds,dy)\Big{)}+\int_{0}^{t}\int_{\mathbb{R}}\frac{1}{\rho_{s-}}\Big{(}\psi_{M}(e^{-\psi}-1)\pi(ds,dy)+\psi_{M}\tilde{\pi}(ds,dy)\Big{)},\quad t\in[0,T^{\ast}].$$ Now we use the fact that $e^{\psi(t,y)}dt\nu(dy)$ is a compensating measure of $\pi(dt,dy)$ under $Q$ and rearrange the terms above. This gives $$M_{t}=M_{0}+\int_{0}^{t}\int_{\mathbb{R}}M_{s-}e^{-\psi(s,y)}(1-e^{\psi(s,y)})\tilde{\pi}_{Q}(ds,dy)+\int_{0}^{t}\int_{\mathbb{R}}\frac{1}{\rho_{s-}}\psi_{M}(s,y)e^{-\psi(s,y)}\tilde{\pi}_{Q}(ds,dy).$$ The proof is completed by showing that the integrals above are actually well defined, that is, that the process $$\tilde{\psi}_{M}(s,y):=M_{s-}e^{-\psi(s,y)}(1-e^{\psi(s,y)})+\frac{1}{\rho_{s-}}e^{-\psi(s,y)}\psi_{M}(s,y),\quad s\in[0,T^{\ast}],y\in U,$$ belongs to $\Psi_{1,2}^{Q}$.
Since the processes $M$ and $\frac{1}{\rho}$ are locally bounded, it suffices to prove that $$\displaystyle e^{-\psi}(1-e^{\psi})\in\Psi_{1,2}^{Q},$$ (3.14) and $$\displaystyle e^{-\psi}\psi_{M}\in\Psi_{1,2}^{Q}.$$ (3.15) The condition (3.14) follows from the estimation $$\Big{(}\mid e^{-\psi}(1-e^{\psi})\mid^{2}\wedge\mid e^{-\psi}(1-e^{\psi})\mid\Big{)}e^{\psi}=e^{-\psi}\mid e^{\psi}-1\mid^{2}\wedge\mid e^{\psi}-1\mid\leq\mid e^{\psi}-1\mid$$ and (3.9). The condition (3.15) has the form $$\displaystyle\int_{0}^{T^{\ast}}\int_{\mathbb{R}}H(s,y)ds\nu(dy)<+\infty,\quad\text{with}\ H(s,y):=\mid\psi_{M}(s,y)\mid^{2}e^{-\psi(s,y)}\wedge\mid\psi_{M}(s,y)\mid,$$ (3.16) and in view of (3.9) we need to prove (3.16) in the case when $\psi\leq 0$ only. Let us consider the following subsets of $[0,T^{\ast}]\times\mathbb{R}$ $$\displaystyle A:=\{\psi\leq 0\},\quad B:=\{\mid\psi_{M}\mid^{2}e^{-\psi}\leq\mid\psi_{M}\mid\},\quad C:=\{e^{\psi}\leq\frac{1}{2}\}.$$ From (3.9) it follows that $$\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap C}(1-e^{\psi(s,y)})ds\nu(dy)<+\infty,$$ and, since $1-e^{\psi}\geq\frac{1}{2}$ on $C$, the set $A\cap C$ is of finite $dt\nu(dy)$ measure.
The four estimations below a) $$\displaystyle\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B\cap C}H(s,y)ds\nu(dy)=\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B\cap C}\mid\psi_{M}(s,y)\mid^{2}e^{-\psi(s,y)}ds\nu(dy)$$ $$\displaystyle\leq\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B\cap C}\mid\psi_{M}(s,y)\mid ds\nu(dy)\leq\frac{1}{2}\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B\cap C}ds\nu(dy)<+\infty,$$ because $A\cap B\cap C$ is of finite measure, b) $$\displaystyle\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B\cap C^{c}}H(s,y)ds\nu(dy)\leq\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B\cap C^{c}}\Big{(}2\mid\psi_{M}(s,y)\mid^{2}\wedge\mid\psi_{M}(s,y)\mid\Big{)}ds\nu(dy)$$ $$\displaystyle\leq 2\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B\cap C^{c}}\Big{(}\mid\psi_{M}(s,y)\mid^{2}\wedge\mid\psi_{M}(s,y)\mid\Big{)}ds\nu(dy)<+\infty,$$ because $\psi_{M}\in\Psi_{1,2}$, c) $$\displaystyle\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B^{c}\cap C^{c}}H(s,y)ds\nu(dy)\leq\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B^{c}\cap C^{c}}\Big{(}2\mid\psi_{M}(s,y)\mid^{2}\wedge\mid\psi_{M}(s,y)\mid\Big{)}ds\nu(dy)$$ $$\displaystyle\leq 2\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B^{c}\cap C^{c}}\Big{(}\mid\psi_{M}(s,y)\mid^{2}\wedge\mid\psi_{M}(s,y)\mid\Big{)}ds\nu(dy)<+\infty,$$ d) $$\displaystyle\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B^{c}\cap C}H(s,y)ds\nu(dy)=\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B^{c}\cap C\cap\{\mid\psi_{M}\mid\leq 1\}}\mid\psi_{M}(s,y)\mid ds\nu(dy)+\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B^{c}\cap C\cap\{\mid\psi_{M}\mid>1\}}\mid\psi_{M}(s,y)\mid ds\nu(dy)$$ $$\displaystyle\leq\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B^{c}\cap C\cap\{\mid\psi_{M}\mid\leq 1\}}ds\nu(dy)$$
$$\displaystyle+\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\mathbf{1}_{A\cap B^{c}\cap C\cap\{\mid\psi_{M}\mid>1\}}\Big{(}\mid\psi_{M}(s,y)\mid^{2}\wedge\mid\psi_{M}(s,y)\mid\Big{)}ds\nu(dy)<+\infty,$$ imply (3.16). The uniqueness is equivalent to the implication $$0=\int_{0}^{t}\int_{\mathbb{R}}\tilde{\psi}_{M}(s,y)\tilde{\pi}_{Q}(ds,dy),\quad t\in[0,T^{\ast}],\quad\tilde{\psi}_{M}\in\Psi_{1,2}^{Q}\quad\Longrightarrow\quad\tilde{\psi}_{M}\equiv 0.$$ From the Itô formula it follows that $$\displaystyle 0=M_{t}\rho_{t}=\int_{0}^{t}\int_{\mathbb{R}}M_{s-}\rho_{s-}(e^{\psi(s,y)}-1)\tilde{\pi}(ds,dy)+\int_{0}^{t}\int_{U}\rho_{s-}\tilde{\psi}_{M}(s,y)\tilde{\pi}_{Q}(ds,dy)+\int_{0}^{t}\int_{\mathbb{R}}\rho_{s-}\tilde{\psi}_{M}(s,y)(e^{\psi(s,y)}-1)\pi(ds,dy)$$ $$\displaystyle=\int_{0}^{t}\int_{\mathbb{R}}\rho_{s-}\tilde{\psi}_{M}(s,y)e^{\psi(s,y)}\tilde{\pi}(ds,dy),\quad t\in[0,T^{\ast}],$$ and the uniqueness of the integral representation under $P$ implies that $\tilde{\psi}_{M}\equiv 0$. $\square$ The decomposition (3.10) has already been formulated, without the uniqueness property, in [9], see Theorem 2.3 there. The proof in [9] is based on different arguments and is only sketched. In particular, it is not clear in [9] which processes can be integrated over the compensated jump measure under $Q$. 4 Incompleteness of the bond market We will examine the problem of completeness of the bond market. Our main result is Theorem 4.4 showing the market incompleteness in the case when the Lévy measure has a density function. This result generalizes Theorem 4.12 in [2] where the model was specified under the martingale measure, that is, $P$ was a martingale measure. 4.1 The model The market under consideration consists of bonds with maturities forming a set $[0,T^{\ast}]$ with $T^{\ast}<+\infty$.
For any $T\in[0,T^{\ast}]$ the price of the $T$-bond is defined by $$\displaystyle P(t,T):=e^{-\int_{t}^{T}f(t,u)du},\quad t\in[0,T^{\ast}],$$ (4.1) where $f(\cdot,\cdot)$ stands for a forward rate. The time evolution of the forward rate is defined for each $T\in[0,T^{\ast}]$ separately by $$\displaystyle f(t,T)=f(0,T)+\int_{0}^{t}\alpha(s,T)ds+\int_{0}^{t}\sigma(s,T)dZ(s),\quad t\in[0,T^{\ast}],$$ (4.2) where $Z$ is a Lévy process on $(\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in[0,T^{\ast}]},P)$. We adopt the model assumptions from [5], that is, the coefficients in (4.2) satisfy $$\displaystyle\alpha(t,T)=0,\ \sigma(t,T)=0\quad\text{for}\ t\in[T,T^{\ast}],$$ (4.3) $$\displaystyle(\omega,t,T)\longrightarrow\alpha(\omega,t,T),\sigma(\omega,t,T)\quad\text{are}\ \mathcal{P}\otimes\mathcal{B}([0,T^{\ast}])-\text{measurable},$$ (4.4) $$\displaystyle\sup_{t,T\in[0,T^{\ast}]}\mid\alpha(t,T)\mid<+\infty,\quad\sup_{t,T\in[0,T^{\ast}]}\mid\sigma(t,T)\mid<+\infty.$$ (4.5) In (4.4) $\mathcal{P}$ stands for the predictable $\sigma$-field on $\Omega\times[0,T^{\ast}]$ and $\mathcal{B}([0,T^{\ast}])$ for the Borel $\sigma$-field on $[0,T^{\ast}]$. In view of (4.5) the maps $(t,T)\longrightarrow\alpha(t,T),\sigma(t,T)$ are assumed to be bounded, but the bound may depend on $\omega$. Under (4.5) both integrals in (4.2) are well defined. Finally, (4.3) makes it possible to define the bond prices for time points exceeding their maturities. To see this, define the short rate process by $r(t):=f(t,t),t\in[0,T^{\ast}]$. Then, by (4.3) and (4.2), we have $$f(t,T)=f(0,T)+\int_{0}^{T}\alpha(s,T)ds+\int_{0}^{T}\sigma(s,T)dZ(s)=f(T,T),\quad t\in[T,T^{\ast}],$$ and consequently $$\displaystyle P(t,T)=e^{-\int_{t}^{T}f(t,s)ds}=e^{-\int_{t}^{T}f(s,s)ds}=e^{\int_{T}^{t}r(s)ds},\qquad t\in[T,T^{\ast}].$$ The latter condition means that the nominal value of the bond is automatically transferred at maturity to the savings account and stays there till $T^{\ast}$.
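The bond-price definition (4.1) can be illustrated at $t=0$ with a deterministic forward curve. The affine curve $f(0,u)=r_{0}+bu$ and its parameters are hypothetical, chosen only because $P(0,T)=e^{-\int_{0}^{T}f(0,u)du}$ then has a closed form to compare against numerical quadrature.

```python
import numpy as np

# Sketch of (4.1) at t = 0 for an assumed affine initial forward curve
# f(0,u) = r0 + b*u.  Closed form: P(0,T) = exp(-(r0*T + b*T**2/2)).
r0, b, T = 0.02, 0.005, 5.0
u = np.linspace(0.0, T, 1_000_001)
f = r0 + b * u
# trapezoid rule for int_0^T f(0,u) du (exact for a linear integrand)
integral = np.sum((f[1:] + f[:-1]) / 2) * (u[1] - u[0])
price_numeric = np.exp(-integral)
price_closed = np.exp(-(r0 * T + b * T ** 2 / 2))
print(price_numeric, price_closed)
```

Because the integrand is linear, the trapezoid rule reproduces the closed form up to floating-point error.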
Further, it follows from (4.3) that the discounted bond prices $$\hat{P}(t,T):=e^{-\int_{0}^{t}r(s)ds}P(t,T),\quad t,T\in[0,T^{\ast}],$$ can be represented in the form $$\displaystyle\hat{P}(t,T):=e^{-\int_{0}^{T}f(t,s)ds},\quad t,T\in[0,T^{\ast}].$$ (4.6) The issue of prime importance is the absence of arbitrage in the model defined by (4.1)-(4.2). The concept of arbitrage, in the sense of [3] and [4], amounts to the existence of a measure $Q$ which is equivalent to $P$ and such that the discounted bond prices are $Q$-local martingales. Each such measure is called a martingale measure. The following result, which is a consequence of Theorem 3.1 in [5] and Theorem 3.1 in [8], specifies the relation between the generating pair $(\phi,\psi)$ of the martingale measure and the model coefficients. Theorem 4.1 Assume that (4.3)-(4.5) are satisfied. Let $Q\sim P$ be a measure with a generating pair $(\phi,\psi)$ such that $\phi\in\Phi$, $e^{\psi}-1\in\Psi_{1,2}$. Denote $$A(t,T):=\int_{t}^{T}\alpha(t,v)dv,\qquad\Sigma(t,T):=\int_{t}^{T}\sigma(t,v)dv,\qquad t,T\in[0,T^{\ast}].$$ a) If the processes $\hat{P}(\cdot,T),T\in[0,T^{\ast}]$ given by (4.6) are $Q$-local martingales then $$\displaystyle\int_{0}^{T^{\ast}}\int_{\{\mid y\mid\leq 1\}}\mid e^{\psi(s,y)}-1\mid ds\ \nu(dy)<+\infty,\quad a.s.,$$ (4.7) and $$\displaystyle\int_{0}^{T^{\ast}}\int_{\{\mid y\mid>1\}}e^{-\Sigma(s,T)y}\cdot e^{\psi(s,y)}ds\ \nu(dy)<+\infty,$$ (4.8) for each $T$ almost $\omega$-surely. b) If (4.7) and (4.8) are satisfied then $\hat{P}(\cdot,T),T\in[0,T^{\ast}]$ are $Q$-local martingales if and only if $$\displaystyle A(s,T)=-\Sigma(s,T)a+\frac{1}{2}q\Sigma(s,T)^{2}-q\phi(s)\Sigma(s,T)+\int_{\mathbb{R}}\big{(}e^{\psi(s,y)}(e^{-\Sigma(s,T)y}-1)+\mathbf{1}_{\{\mid y\mid\leq 1\}}\Sigma(s,T)y\big{)}\nu(dy),$$ (4.9) for each $T\in[0,T^{\ast}]$, for almost all $s$, almost $\omega$-surely. Let us comment on the theorem above.
The conditions (4.7) and (4.8) narrow the class of generating pairs of martingale measures. Actually it follows from (4.7) and the fact that $e^{\psi}-1\in\Psi_{1,2}$ that $$\displaystyle e^{\psi}-1\in\Psi_{1}.$$ (4.10) The condition (4.8) is a generalization of the exponential moment conditions obtained in [8] for the case when $\phi=0,\psi=0$, that is, when the model is specified directly under the martingale measure. Notice that the right-hand side of (4.9) involves the volatility of the forward rate, the characteristics of $Z$ and the generating pair of $Q$, while the left-hand side depends on the drift of the forward rate only. Differentiation of (4.9) with respect to $T$ gives a direct formula for $\alpha(t,T)$, which generalizes the famous Heath-Jarrow-Morton drift condition $$\alpha(t,T)=\sigma(t,T)\int_{t}^{T}\sigma(t,v)dv,\quad t,T\in[0,T^{\ast}],$$ introduced in [6] in the case when $Z$ was a Wiener process. Let $\mathcal{Q}$ be the set of all martingale measures and $Q\in\mathcal{Q}$ have a generating pair $(\phi,\psi)$. Recall that under $Q$ the process $\tilde{W}$ given by (2.15) is a Wiener process, $\nu_{Q}(dt,dy)$ given by (2.16) is a compensating measure of $\pi(dt,dy)$ and $\tilde{\pi}_{Q}(dt,dy)=\pi(dt,dy)-\nu_{Q}(dt,dy)$ is a compensated jump measure of $Z$. The use of (4.6), (4.2) together with the Itô formula provides the dynamics of $\hat{P}(\cdot,T),T\in[0,T^{\ast}]$ under $Q$. It has the form $$\displaystyle\hat{P}(t,T)=\hat{P}(0,T)-\int_{0}^{t}\hat{P}(s-,T)\Sigma(s,T)d\tilde{W}(s)+\int_{0}^{t}\int_{\mathbb{R}}\hat{P}(s-,T)\left[e^{-\Sigma(s,T)y}-1\right]\tilde{\pi}_{Q}(ds,dy),\quad t,T\in[0,T^{\ast}].$$ (4.11) 4.2 Portfolios The concept of a bond portfolio generalizes the finite-dimensional setting involved in the stock market description.
From (4.1) it follows that for any $t\in[0,T^{\ast}]$ the function $$T\rightarrow P(t,T),\quad T\in[0,T^{\ast}],$$ is continuous and hence $P_{t}:=P(t,\cdot)$ can be treated as an element of a Banach space $B$ of bounded functions on $[0,T^{\ast}]$ with norm $$\ |h|_{B}:=\sup_{z\in[0,T^{\ast}]}\mid h(z)\mid.$$ A trading strategy $\varphi$ will be a $B^{\ast}$-valued process, where $B^{\ast}$ stands for the dual of $B$. The corresponding wealth process is defined by $$X^{\varphi}_{t}=\langle\varphi_{t},P_{t}\rangle_{B},\qquad t\in[0,T^{\ast}],$$ where $\langle\varphi,P\rangle_{B}$ is the value of the functional $\varphi\in B^{\ast}$ on the element $P\in B$. Consequently, the discounted wealth process is given by $$\hat{X}^{\varphi}_{t}=\langle\varphi_{t},\hat{P}_{t}\rangle_{B},\qquad t\in[0,T^{\ast}].$$ In the class of self-financing strategies the changes of the portfolio value arise from the fluctuations of the bond price process, which means that $\hat{X}^{\varphi}$ admits the representation $$\displaystyle\hat{X}^{\varphi}_{t}=\hat{X}^{\varphi}_{0}+\int_{0}^{t}\langle\varphi_{s},d\hat{P}_{s}\rangle_{B},\quad t\in[0,T^{\ast}],$$ (4.12) where the latter integral is to be precisely defined. Taking into account (4.11) we have $$\displaystyle\hat{X}^{\varphi}_{t}=\hat{X}^{\varphi}_{0}-\int_{0}^{t}\langle\varphi_{s},\hat{P}_{s-}\Sigma_{s}\rangle_{B}d\tilde{W}_{s}+\int_{0}^{t}\int_{\mathbb{R}}\langle\varphi_{s},\hat{P}_{s-}(e^{-\Sigma_{s}y}-1)\rangle_{B}\tilde{\pi}_{Q}(ds,dy),\quad t\in[0,T^{\ast}],$$ (4.13) where $\Sigma_{s}:=\Sigma(s,\cdot)\in B$. This leads to the following definition of an admissible strategy. Let $Q\in\mathcal{Q}$ be a martingale measure with a generating pair $(\phi,\psi)$, $\phi\in\Phi$, $e^{\psi}-1\in\Psi_{1}$.
A $B^{\ast}$-valued strategy $\varphi$ is admissible if a) $\langle\varphi_{s},\hat{P}_{s-}\Sigma_{s}\rangle_{B}\in\Phi$ and $\langle\varphi_{s},\hat{P}_{s-}(e^{-\Sigma_{s}y}-1)\rangle_{B}\in\Psi^{Q}_{1,2}$, that is $$\displaystyle\int_{0}^{T^{\ast}}\mid\langle\varphi_{s},\hat{P}_{s-}\Sigma_{s}\rangle_{B}\mid^{2}ds<+\infty,$$ (4.14) and $$\displaystyle\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\Big{(}\mid\langle\varphi_{s},\hat{P}_{s-}(e^{-\Sigma_{s}y}-1)\rangle_{B}\mid^{2}\wedge\mid\langle\varphi_{s},\hat{P}_{s-}(e^{-\Sigma_{s}y}-1)\rangle_{B}\mid\Big{)}e^{\psi(s,y)}ds\nu(dy)<+\infty,$$ (4.15) b) the wealth process, which is given by (4.13), is a $Q$-martingale. The class of all admissible trading strategies will be denoted by $\mathcal{A}(Q)$. The definition of admissible strategies depends on the choice of martingale measure. This feature of the model is caused by the presence of jumps. Indeed, although (4.14) is not measure dependent, (4.15) is. Since the arbitrage free model may admit many martingale measures, the definition above may seem confusing because it involves only one fixed measure. The relevance of the definition can, however, be justified by economic arguments admitting the model framework where the prices of financial contracts are given by expectations under a so-called pricing measure, which is chosen from the set of all martingale measures. The methods of choosing the pricing measure will not be discussed here; we mention only that the model framework with a unique pricing measure is often used in practice and also in purely theoretical considerations.
Alternatively, we could also take into account trading strategies from the set $$\mathcal{A}:=\bigcap_{Q\in\mathcal{Q}}\mathcal{A}(Q).$$ The problem is, however, that $\mathcal{A}$ can be significantly smaller than $\mathcal{A}(Q)$ and consequently the set of investing possibilities would become too poor. Also for that reason we fix only one martingale measure and consider admissible strategies related to that measure. 4.3 Incompleteness Let us start with the definition of market completeness. Definition 4.2 The bond market defined by (4.1)-(4.2) is complete if for each bounded random variable $X$ there exists $\varphi\in\mathcal{A}(Q)$ such that $$\displaystyle X=\hat{X}^{\varphi}_{T^{\ast}}.$$ (4.16) A strategy satisfying (4.16) is called a replicating strategy for $X$. The market is incomplete if it is not complete. Let $X$ be a bounded random variable. In view of Theorem 3.4 the associated martingale $M_{t}:=E^{Q}[X\mid\mathcal{F}_{t}],t\in[0,T^{\ast}]$ admits the integral representation $$X=E^{Q}[X]+\int_{0}^{T^{\ast}}f_{X}(s)d\tilde{W}(s)+\int_{0}^{T^{\ast}}\int_{\mathbb{R}}g_{X}(s,y)\tilde{\pi}_{Q}(ds,dy),$$ for some $f_{X}\in\Phi$ and $g_{X}\in\Psi_{1,2}^{Q}$.
By (4.13) the discounted wealth process of an admissible strategy at $T^{\ast}$ is given by $$\displaystyle\hat{X}^{\varphi}_{T^{\ast}}=\hat{X}^{\varphi}_{0}-\int_{0}^{T^{\ast}}\langle\varphi_{s},\hat{P}_{s-}\Sigma_{s}\rangle_{B}d\tilde{W}_{s}+\int_{0}^{T^{\ast}}\int_{\mathbb{R}}\langle\varphi_{s},\hat{P}_{s-}(e^{-\Sigma_{s}y}-1)\rangle_{B}\tilde{\pi}_{Q}(ds,dy).$$ Since $\hat{X}^{\varphi}$ is a $Q$-martingale, the strategy $\varphi$ replicates $X$ if and only if the following conditions are satisfied $$\displaystyle\hat{X}^{\varphi}_{0}=E^{Q}[X],$$ (4.17) $$\displaystyle-\langle\varphi_{s},\hat{P}_{s-}\Sigma_{s}\rangle_{B}=f_{X}(s),\quad dt-a.s.,$$ (4.18) $$\displaystyle\langle\varphi_{s},\hat{P}_{s-}(e^{-\Sigma_{s}y}-1)\rangle_{B}=g_{X}(s,y),\quad dQ\times d\nu_{Q}-a.s.$$ (4.19) If $\varphi$ replicates $X$ then (4.17) defines the replication cost. The solution $\varphi$ to (4.18) can be sought in the class of functionals which are point evaluations, that is, for any maturity $T\in[0,T^{\ast}]$ consider $$\langle\delta_{T},h\rangle_{B}:=h(T),\quad h\in B.$$ Under trivial non-degeneracy conditions there exists a solution $c=c(s)$ of the equation $$-c(s)\hat{P}(s-,T)\Sigma(s,T)=f_{X}(s),\quad s\in[0,T^{\ast}],$$ and hence $\varphi(s):=c(s)\delta_{T}$ solves (4.18). What requires a deeper analysis is the condition (4.19), which involves the jumps of the driving process $Z$. Although (4.19) must hold $dQ\times d\nu_{Q}-a.s.$, we will use in the sequel the concept of a concentration point of the original Lévy measure $\nu$ of $Z$. The precise definition, which has been introduced in [2], is as follows. Definition 4.3 A point $y_{0}\in\mathbb{R}$ is a concentration point of the Lévy measure $\nu$ if there exists a sequence $\{\varepsilon_{n}\}_{n=1}^{\infty}$ s.t.
$\varepsilon_{n}\searrow 0$ satisfying $$\displaystyle\nu\Big{\{}B(y_{0},\varepsilon_{n})\backslash B(y_{0},\varepsilon_{n+1})\Big{\}}>0\quad\forall\ n=1,2,...,$$ (4.20) where $B(y_{0},\varepsilon)=\{y\in\mathbb{R}:|y-y_{0}|\leq\varepsilon\}$. The definition above captures a great majority of Lévy processes used in financial modelling. Indeed, each Lévy process whose Lévy measure has a density function also has a concentration point. Our aim now is to prove that if the Lévy measure has a concentration point then (4.19) has no solution for some $g_{X}$ and consequently the bond market model is incomplete. Theorem 4.4 Consider the bond market model (4.1)-(4.2) with coefficients satisfying (4.3)-(4.5). If the Lévy measure $\nu$ of $Z$ has a concentration point $y_{0}\neq 0$ then the market is incomplete. In the proof we will use two auxiliary results formulated below. The first is an extension of the solution of the moment problem; see Theorem 2 in Section 5 of Yosida [10] or Lemma 4.5 in [2]. Lemma 4.5 Let $E$ be a normed linear space and $A$ an arbitrary set. Let $g:A\longrightarrow\mathbb{R}$ and $h:A\longrightarrow E$. Then there exists $e^{\ast}\in E^{\ast}$ such that $$\displaystyle g(a)=\langle e^{\ast},h(a)\rangle_{E},\quad\forall a\in A,$$ (4.21) if and only if $$\displaystyle\exists\ \gamma>0\quad\forall\ n\in\mathbb{N}\quad\forall\ \{\beta_{i}\}_{i=1}^{n},\ \beta_{i}\in\mathbb{R}\quad\forall\ \{a_{i}\}_{i=1}^{n},\ a_{i}\in A\quad\text{the following holds}$$ $$\displaystyle\Big{|}\sum_{i=1}^{n}\beta_{i}g(a_{i})\Big{|}\leq\gamma\ \Big{|}\sum_{i=1}^{n}\beta_{i}h(a_{i}){\Big{|}}_{E}.$$ (4.22) The second result follows from the Fubini theorem and will simplify the examination of condition (4.19). Proposition 4.6 Let $(E_{1},\mathcal{E}_{1},\mu_{1})$, $(E_{2},\mathcal{E}_{2},\mu_{2})$ be measurable spaces with sigma-finite measures $\mu_{1},\mu_{2}$ and $(E_{1}\times E_{2},\mathcal{E}_{1}\times\mathcal{E}_{2},\mu_{1}\times\mu_{2})$ be their product space.
If two measurable functions $f_{1}:E_{1}\times E_{2}\longrightarrow\mathbb{R}$, $f_{2}:E_{1}\times E_{2}\longrightarrow\mathbb{R}$ satisfy the condition $$\displaystyle f_{1}=f_{2},\qquad d\mu_{1}\times d\mu_{2}-\text{a.s.},$$ (4.23) then there exists a set $\hat{E}_{1}\in\mathcal{E}_{1}$ such that $$\displaystyle\hat{E}_{1}\quad\text{is of full}\ \mu_{1}\ \text{measure},$$ (4.24) $$\displaystyle\forall x\in\hat{E}_{1}\quad\text{the set}\quad\{y:f_{1}(x,y)=f_{2}(x,y)\}\quad\text{is of full}\ \mu_{2}\ \text{measure}.$$ (4.25) Proof: [of Theorem 4.4] We will construct a bounded random variable $X$ such that there is no admissible strategy solving (4.19). Let $\{\varepsilon_{n}\}_{n=1}^{\infty}$ be a sequence satisfying (4.20) and define an auxiliary deterministic function $g$ by the formula $$g(y)=\begin{cases}|y|\wedge\ 1&\text{for}\quad y\in\{B(y_{0},\varepsilon_{2k+1})\backslash B(y_{0},\varepsilon_{2k+2})\}\quad k=0,1,...,\\ -(|y|\wedge\ 1)&\text{for}\quad y\in\{B(y_{0},\varepsilon_{2k})\backslash B(y_{0},\varepsilon_{2k+1})\}\quad k=1,2,...,\\ |y|\wedge\ 1&\text{for}\quad y\in(-\infty,y_{0}-\varepsilon_{1})\cup(y_{0}+\varepsilon_{1},+\infty)\cup\{y_{0}\}.\end{cases}$$ Assume that for some $\varphi\in\mathcal{A}(Q)$ the equality $$\displaystyle\langle\varphi_{t},\hat{P}_{t-}(e^{-\Sigma_{t}y}-1)\rangle_{B}=g(y)$$ (4.26) holds $dQ\times d\nu_{Q}-a.s.$ Since the measures $dQ\times\nu_{Q}(dt,dy)$ and $dP\times dt\times d\nu$ are equivalent, the equality (4.26) holds $dP\times dt\times d\nu$-a.s. Fix $(\omega,t)\in\Omega\times[0,T^{\ast}]$ and assume that (4.26) holds $\nu$-a.s. From Proposition 4.6 it follows that then there exists a set $A_{\nu}(\omega,t)$ of full $\nu$-measure s.t. (4.26) is satisfied for each $y\in A_{\nu}(\omega,t)$.
Due to Lemma 4.5 there exists $\gamma=\gamma(\omega,t)>0$ such that $$\forall\ n\in\mathbb{N}\quad\forall\ \{\beta_{i}\}_{i=1}^{n},\ \beta_{i}\in\mathbb{R}\quad\forall\ \{y_{i}\}_{i=1}^{n},\ y_{i}\in A_{\nu}(\omega,t):\quad\Big|\sum_{i=1}^{n}\beta_{i}g(y_{i})\Big|\leq\gamma\Big|\sum_{i=1}^{n}\beta_{i}\hat{P}_{t-}(e^{-\Sigma_{t}y_{i}}-1)\Big|_{B}.$$ (4.27) Let us notice that due to (4.20) we have $$\nu\Big\{A_{\nu}(\omega,t)\cap\big\{B(y_{0},\varepsilon_{n})\backslash B(y_{0},\varepsilon_{n+1})\big\}\Big\}>0,$$ so we can choose a sequence $\{a_{k}\}_{k=1}^{\infty}$ such that $$a_{k}\in A_{\nu}(\omega,t)\cap\big\{B(y_{0},\varepsilon_{k})\backslash B(y_{0},\varepsilon_{k+1})\big\}\quad\forall\ k=1,2,\dots.$$ Let us examine condition (4.27) with $n=2$, $\beta_{1}=1,\beta_{2}=-1$ and $y_{1}=a_{2k+1}$, $y_{2}=a_{2k+2}$ for $k=0,1,\dots$. Then the left side of (4.27), divided by $\gamma$, takes the form $$\frac{1}{\gamma}\Big|\beta_{1}g(a_{2k+1})+\beta_{2}g(a_{2k+2})\Big|=\frac{1}{\gamma}\Big((|a_{2k+1}|\wedge 1)+(|a_{2k+2}|\wedge 1)\Big)$$ and thus satisfies $$\lim_{k\longrightarrow\infty}\frac{1}{\gamma}\Big|\beta_{1}g(a_{2k+1})+\beta_{2}g(a_{2k+2})\Big|=\frac{2(|y_{0}|\wedge 1)}{\gamma}\neq 0.$$ Now let us estimate the right side of (4.27): $$\Big|\hat{P}_{t-}(e^{-\Sigma_{t}a_{2k+1}}-1)-\hat{P}_{t-}(e^{-\Sigma_{t}a_{2k+2}}-1)\Big|_{B}=\sup_{T\in[0,T^{\ast}]}\Big|\hat{P}(t-,T)(e^{-\Sigma(t,T)a_{2k+1}}-1)-\hat{P}(t-,T)(e^{-\Sigma(t,T)a_{2k+2}}-1)\Big|\leq\sup_{T\in[0,T^{\ast}]}|\hat{P}(t-,T)|\ \sup_{T\in[0,T^{\ast}]}\Big|e^{-\Sigma(t,T)a_{2k+1}}-e^{-\Sigma(t,T)a_{2k+2}}\Big|.$$ The first supremum is clearly finite.
To deal with the second supremum, let us notice that for sufficiently large $k$ the points $a_{2k+1},a_{2k+2}$ lie in $B(y_{0},\delta)$ for some $\delta>0$, and thus we have $$\sup_{T\in[0,T^{\ast}]}\Big|e^{-\Sigma(t,T)a_{2k+1}}-e^{-\Sigma(t,T)a_{2k+2}}\Big|\leq\sup_{T\in[0,T^{\ast}]}\sup_{y\in B(y_{0},\delta)}\Big|De^{-\Sigma(t,T)y}\Big|\cdot|a_{2k+1}-a_{2k+2}|\leq\sup_{T\in[0,T^{\ast}]}\sup_{y\in B(y_{0},\delta)}\left\{e^{|\Sigma(t,T)|\cdot|y|}\cdot|\Sigma(t,T)|\right\}\cdot|a_{2k+1}-a_{2k+2}|,$$ (4.28) which clearly tends to zero. Thus condition (4.27) fails for every $(\omega,t)\in\Omega\times[0,T^{\ast}]$, and hence (4.26) does not hold $\nu$-a.s. for any $(\omega,t)\in\Omega\times[0,T^{\ast}]$. As a consequence of Proposition 4.6, the equation (4.26) does not have a solution. Now, with the use of the function $g$, we construct a bounded random variable $X$ which cannot be replicated. First let us notice that for a martingale measure $Q$ with a generating pair $(\phi,\psi)$ we have $$\int_{0}^{T^{\ast}}\int_{\mathbb{R}}(|g(y)|^{2}\wedge|g(y)|)e^{\psi(s,y)}\nu(dy)ds\leq\int_{0}^{T^{\ast}}\int_{\mathbb{R}}(|y|^{2}\wedge 1)e^{\psi(s,y)}\nu(dy)ds<+\infty,$$ see (2.19), which means that $g\in\Psi^{Q}_{1,2}$. Let $\tau_{k}$ be the stopping time defined by $$\tau_{k}=\inf\Big\{t:\Big|\int_{0}^{t}\int_{\mathbb{R}}g(y)\tilde{\pi}_{Q}(ds,dy)\Big|\geq k\Big\}\wedge T^{\ast},$$ and choose a number $k_{0}$ such that the set $\{(\omega,\tau_{k_{0}}(\omega));\omega\in\Omega\}\subseteq\Omega\times[0,T^{\ast}]$ is of positive $dP\times dt$-measure. Then the process $g_{X}(s,y):=g(y)\mathbf{1}_{(0,\tau_{k_{0}}]}(s)$ is predictable, bounded and belongs to $\Psi^{Q}_{1,2}$.
The process $\int_{0}^{\cdot}\int_{\mathbb{R}}g_{X}(s,y)\tilde{\pi}_{Q}(ds,dy)$ is bounded, because $|\Delta\int_{0}^{\cdot}\int_{\mathbb{R}}g_{X}(s,y)\tilde{\pi}_{Q}(ds,dy)|\leq 1$ holds, and hence, as a bounded $Q$-local martingale, it is a $Q$-martingale. Finally let us define $$X:=\int_{0}^{T^{\ast}}\int_{\mathbb{R}}g_{X}(s,y)\tilde{\pi}_{Q}(ds,dy).$$ (4.29) Then (4.19) has no solution in the class of admissible strategies, and hence $X$ cannot be replicated. $\square$ Let us notice that the issue of uniqueness of the martingale measure does not affect the conclusion of Theorem 4.4; that is, the market is incomplete even if the martingale measure $Q$ is unique. This marks a significant difference from a stock market with a finite number of assets, where the equivalence between uniqueness of the martingale measure and market completeness is one of the basic properties. References [1] D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge University Press, (2009). [2] M. Barski, J. Zabczyk, Completeness of bond market driven by Lévy processes, International Journal of Theoretical and Applied Finance, (2010), 13, 635–656. [3] F. Delbaen, W. Schachermayer, A general version of the fundamental theorem of asset pricing, Mathematische Annalen, (1994), 300, 463–520. [4] F. Delbaen, W. Schachermayer, The fundamental theorem of asset pricing for unbounded stochastic processes, Mathematische Annalen, (1998), 312, 215–250. [5] E. Eberlein, J. Jacod, S. Raible, Lévy term structure models: No-arbitrage and completeness, Finance and Stochastics, (2005), 9, 67–88. [6] D. Heath, R. Jarrow, A. Morton, Bond pricing and the term structure of interest rates: a new methodology for contingent claim valuation, Econometrica, (1992), 60, 77–105. [7] J. Jacod, A.N. Shiryaev, Limit Theorems for Stochastic Processes, Springer, (2002). [8] J. Jakubowski, J. Zabczyk, Exponential moments for HJM models with jumps, Finance and Stochastics, (2007), 11, 429–445.
[9] H. Kunita, Representation of martingales with jumps and applications to mathematical finance, Advanced Studies in Pure Mathematics, (2004), 41, Stochastic Analysis and Related Topics, 209–232. [10] K. Yosida, Functional Analysis, Springer, (1995).
Growth mechanism and origin of high $sp^{3}$ content in tetrahedral amorphous carbon Miguel A. Caro [email protected] Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland Department of Applied Physics, Aalto University, Espoo, Finland    Volker L. Deringer Engineering Laboratory, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, United Kingdom Department of Chemistry, University of Cambridge, Lensfield Road, Cambridge CB2 1EW, United Kingdom    Jari Koskinen Department of Chemistry and Materials Science, Aalto University, Espoo, Finland    Tomi Laurila Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland    Gábor Csányi Engineering Laboratory, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, United Kingdom (November 26, 2020) Abstract We study the deposition of tetrahedral amorphous carbon (ta-C) films from molecular dynamics simulations based on a machine-learned interatomic potential trained from density-functional theory data. For the first time, the high $sp^{3}$ fractions in excess of 85% observed experimentally have been reproduced by means of computational simulation, and the dependence of the films' characteristics on deposition energy is also accurately described. High confidence in the potential and direct access to the atomic interactions allow us to infer the microscopic growth mechanism in this material. While the widespread view is that ta-C grows by “subplantation”, we show that the so-called “peening” model is actually the dominant mechanism responsible for the high $sp^{3}$ content. We show that pressure waves lead to bond rearrangement away from the impact site of the incident ion, and high $sp^{3}$ fractions arise from a delicate balance of transitions between 3- and 4-fold coordinated carbon atoms. These results open the door for a microscopic understanding of carbon nanostructure formation with an unprecedented level of predictive power.
Amorphous carbons (a-C) are a class of materials with important applications as coatings. Of special interest are high-density forms of a-C which exhibit a high fraction of $sp^{3}$-bonded carbon atoms, known as tetrahedral a-C (ta-C) or diamond-like carbon (DLC) because their mechanical properties are similar to those of diamond. Emerging applications of a-C are as precursors in the synthesis of other forms of nanostructured carbons Sainio et al. (2016); Suarez-Martinez and Marks (2012) and as a substrate platform for biocompatible electrochemical devices Laurila et al. (2017). Significant efforts are being made to develop carbon-based devices designed for biological sensing, which could be implantable in the human body, and will be at the heart of the next technological revolution, where seamless integration between human tissue and microelectronics will enable real-time health monitoring and countless other applications Tiwari et al. (2015); Arriaga et al. (2016); Laurila et al. (2017). Together with its widespread technological and industrial use, a-C has also been the subject of significant academic interest, in particular by the computational modeling community. The high degree of bonding flexibility exhibited by carbon, which can exist in $sp^{3}$, $sp^{2}$ and $sp$ environments or “hybridizations”, is behind its ability to form numerous compounds which make the sheer complexity of life possible. This flexibility is also responsible for the large degree of microscopic variability found in a-C, where diverse and disordered atomic motifs can coexist, each in its own metastable configuration. This makes simulations of a-C a long-standing challenge for any computational model based on interatomic potentials. Early molecular dynamics (MD) studies focused on optimizing and parameterizing simple classical potentials for a-C Tersoff (1988), but seminal ab initio MD (AIMD) simulations of a-C were also conducted when the field was still in its infancy Galli et al.
(1989); Kaukonen and Nieminen (1992). A constant struggle for computational models, since early on and until today, has been to recreate and understand the formation process which leads to the high $sp^{3}$ fractions observed for ta-C, which can be in excess of 85%. Experimentally, ta-C is commonly grown by deposition of energetic ions onto a substrate. The fraction of $sp^{3}$ carbon increases monotonically with the beam energy up to approximately 60 eV–100 eV (depending on the method) Robertson (2002), where it peaks at around 90%. At higher energies, the amount of $sp^{3}$ atoms starts to diminish. Unfortunately, this is an extremely challenging process to study using highly accurate methods, such as AIMD based on density-functional theory (DFT), due to their computational cost. Instead, simulated deposition has been carried out in the past with “classical” interatomic potentials such as Tersoff Tersoff (1988) and C-EDIP Marks (2000). However, classical potentials have systematically failed at reproducing experimentally observed $sp^{3}$ fractions Marks (2005). DFT-based generation of a-C has been carried out with varying degrees of success using alternative routes Marks et al. (1996); McCulloch et al. (2000); Marks et al. (2002). See Ref. Laurila et al. (2017) for a review of the performance of different generation methods and potentials. Thus, there is a gap between what would be a close representation of reality and what can be simulated in practice. This gap separates the realistic processes one would like to model (large numbers of atoms, long time scales) from what can currently be done with accurate, yet computationally expensive, methods such as DFT-based MD. Recent advances in computational techniques have given rise to a trend in the physics, chemistry and materials science communities to apply machine-learning (ML) and data-driven approaches to materials modeling Khaliullin et al. (2011); Sosso et al. (2013).
In the specific realm of interatomic potentials, a family of general and highly flexible potentials referred to as “Gaussian approximation potentials” (GAPs) has been introduced, which promises to bridge the gap we were referring to earlier Bartók et al. (2010). In this Letter, we use the GAP ML interatomic potential Deringer and Csányi (2017) to study the hitherto unresolved a-C growth mechanism and the physical reasons for the high $sp^{3}$ concentration in ta-C films with an unprecedented level of accuracy. To study the atomistic details of the growth of an a-C film, we explicitly simulated the deposition of C atoms onto a carbon substrate one atom at a time, using MD. A large [111]-oriented diamond substrate, terminated by the stable $2\times 1$ surface reconstruction, was used, containing 3240 atoms in periodic boundary conditions. This corresponds to initial dimensions of $38~\text{\AA}\times 38~\text{\AA}$ in plane and 16 Å of thickness. The effect of the substrate on the results of the simulation is discussed in the Supplemental Information (SI). First, 2500 monoenergetic C atoms with a kinetic energy of 60 eV were dropped from the top of the simulation box onto the diamond substrate, to create an initial a-C template. After this, an additional 5500 atoms, each with a kinetic energy corresponding to the different deposition regimes studied (20 eV, 60 eV and 100 eV), were subsequently deposited, for a total of 8000 impact events per energy. The equations of motion were integrated using a time step dynamically adapted to correctly describe the atomic trajectories while maximizing computing efficiency, ensuring that the largest atomic displacements do not exceed 0.1 Å per time step. Our main results are obtained with the GAP ML potential trained from local density approximation (LDA) DFT data Deringer and Csányi (2017). All MD simulations were carried out with LAMMPS Plimpton (1995). The impact of the incident ions per se lasts for just a few fs.
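The displacement-capped time-step criterion described above can be sketched as follows; the cap DT_MAX and the sample speeds are illustrative assumptions, not values taken from the actual simulations.

```python
# Sketch of a displacement-capped MD time step: the step is chosen so that the
# fastest atom moves at most D_MAX per step. Units are Angstrom and fs; the
# 0.1 Angstrom cap follows the text, while DT_MAX is an assumed upper bound.

D_MAX = 0.1   # maximum allowed displacement per step (Angstrom), as in the text
DT_MAX = 1.0  # assumed cap on the step for quiescent dynamics (fs)

def adaptive_dt(speeds):
    """Time step guaranteeing max(|v|) * dt <= D_MAX."""
    v_max = max(speeds)
    if v_max <= 0.0:
        return DT_MAX
    return min(DT_MAX, D_MAX / v_max)

# A 60 eV carbon atom moves at roughly 0.3 Angstrom/fs, so during an impact
# the step shrinks well below the cap; far from impacts it relaxes to DT_MAX.
dt_impact = adaptive_dt([0.01, 0.05, 0.3])
dt_quiet = adaptive_dt([0.01, 0.02])
```

This kind of criterion resolves the few-fs impact events finely while allowing much larger steps during the subsequent equilibration.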
However, the kinetic energy of the impacting atom is transferred to the substrate, increasing its temperature. To ensure that the experimental conditions are met as closely as possible, this extra kinetic energy needs to be removed using a thermostat, bringing the system back to equilibrium before the next deposition takes place. Equilibrating the system back to the nominal substrate temperature, 300 K, takes up to 1 ps, depending on the energy of the incident ion. Equilibration is therefore by far the most computationally expensive part of the simulation. A more detailed discussion of the dependence on deposition energy (including the low-energy regime), an in-depth study of elasticity and comparison with Tersoff and C-EDIP results will be published later in a more technical paper Caro et al. (2018). Video animations of the growth process can be accessed online from the Zenodo repository Caro (2017) and the SI. In Fig. 1 we show the main structural features of the deposited a-C films. The figure shows the in-plane averaged mass density profile of the films grown at different deposition energies. Very high densities and $sp^{3}$ fractions are obtained in the interior of the film. The simulated deposition at 60 eV, which is the ion energy at which $sp^{3}$ content is expected to peak based on experimental observations Robertson (2011), shows $sp^{3}$ fractions of up to 90%. Previous simulations McCulloch et al. (2000); Marks (2005); Caro et al. (2014); Laurila et al. (2017), either based on deposition or alternative methods such as liquid quenching, have systematically failed to reproduce these high numbers. The previously reported computational results with the highest $sp^{3}$ fractions (shy of 85%) were based on DFT geometry optimization followed by pressure correction Caro et al. (2014); Laurila et al. (2017). 
Explicit deposition simulations (based on the widely used empirical C-EDIP potential) had not been able to produce a-C structures with $sp^{3}$ fractions exceeding $\sim$60% Marks (2005). The 20 eV, 60 eV and 100 eV films from Fig. 1 reach mass densities around 3.5 g/cm${}^{3}$, very close to diamond. Although these densities exceed typical experimental values for ta-C by a few percent, it is indeed possible to grow “superhard” ta-C close to the density of diamond under ideal conditions, such as the absence of hydrogen Schultrich et al. (1998). Lifshitz et al. showed that ta-C films as dense as 3.5 g/cm${}^{3}$ can be grown consistently over a wide range of deposition energies Lifshitz et al. (1995), although we must note that such extremely high-density samples are lacking from most of the literature, where quoted values are typically below the 3.3 g/cm${}^{3}$ mark. One also needs to take into consideration that these ta-C films are under typical compressive stresses equivalent to $\sim 2$% change in volume (Table 1). The comparison with experimental fingerprints for short and medium range order (Fig. 2) again reveals excellent agreement and further indicates that GAP provides a correct description of the deposition physics. The elastic properties of the films, including stresses built-in during deposition, are summarized in Table 1. We note that GAP has previously been tested to give reliable elastic properties for quenched a-C Deringer and Csányi (2017). For the present study, we computed the elastic properties of the films in the bulk-like region, that is, the portion of the film where the $sp^{3}$ fraction remains constant. Details will be given in a separate paper, which also presents more detailed information on the elastic properties of the films and their energy dependence Caro et al. (2018). The data in Table 1 indeed confirm that ta-C films are under large compressive stresses, of the order of 10 GPa.
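As a back-of-the-envelope consistency check (not part of the original analysis), a linear-elasticity estimate with an assumed bulk modulus relates the quoted $\sim 2$% volume change to stresses of exactly this magnitude.

```python
# Back-of-the-envelope check (assumption-laden, for illustration only):
# in linear elasticity, compressive stress ~ K * (dV/V). K = 450 GPa is an
# assumed representative bulk modulus for dense ta-C (diamond is ~440 GPa);
# dV/V = 0.02 is the ~2% volume change quoted in the text.

K = 450.0          # assumed bulk modulus (GPa)
dV_over_V = 0.02   # fractional volume change (~2%, Table 1)

stress = K * dV_over_V  # resulting stress estimate (GPa)
```

The result, about 9 GPa, is indeed of the order of the $\sim$10 GPa built-in stresses reported in Table 1.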
Under such compression, this superhard ta-C film is less compressible than diamond at equilibrium, for which the bulk modulus is $\sim 440$ GPa. The elastic moduli should be significantly reduced once the strain in the film is released. We observed plastic deformation (bond rearrangement) when attempting film relaxation. Based on this and on abundant experimental evidence Robertson (2002), it is unlikely that highly $sp^{3}$-rich ta-C can be generated in the absence of these large compressive stresses. What is more difficult to ascertain is whether compressive stress is required for ta-C growth or just a consequence of how growth occurs. With regards to surface morphology, Fig. 1 already clearly hints toward different features as the deposition energy is varied. As the ion energy increases, the spatial extent of the $sp^{2}$-rich region increases too. This can be observed in more detail in Fig. 3, where we show the final deposited film structure for 60 eV and its topographic surface map. The microscopic surface roughness for this film is $\sim 1$ Å. We observe that surface roughness is minimal for the 20 eV film ($\sim 0.7$ Å), and increases for both lower and higher deposition energies (e.g., $\sim 1.5$ Å and $\sim 1.9$ Å at 5 eV and 100 eV, respectively) Caro et al. (2018). These results are in qualitative agreement with the detailed experimental study on the morphology of ta-C surfaces by Davis et al. Davis et al. (1998), who measured $\sim 4$ Å and $\sim 10$ Å thick $sp^{2}$-rich regions for 35 eV and 100 eV films, respectively. Although Davis’ data for surface thickness have large error bars, and the definition of a “surface region” is to some degree arbitrary, we can infer that surface thickness increases experimentally between 0.1 Å/eV and 0.2 Å/eV within the energy regime relevant to ta-C growth Davis et al. (1998). In this context, our estimates of surface thickness (Fig. 1) also show reasonable quantitative agreement with experiment. 
The general conclusion is that the thickness of the surface region grows with deposition energy, due to the increasing strength of the local thermal spike at the impact site. Impacting atoms induce generation of $sp^{2}$-bonded carbon, including local transition from $sp^{3}$ to $sp^{2}$ coordination. We now turn our attention to the microscopic growth mechanism responsible for these high $sp^{3}$ fractions. The consensus in the literature is that the “subplantation” mechanism is behind this phenomenon Robertson (2011). This mechanism is illustrated in Fig. 4 and relates the increase in bonding coordination to the packing of atoms in too small a volume, as newly arrived atoms are being deposited. The relaxation of the surrounding matrix then explains film growth. However, this view is in contradiction with the results of our simulations. While the subplantation mechanism was already challenged by Marks from C-EDIP simulations Marks (2005), one of the reasons why the alternative model already proposed on the basis of C-EDIP has not been accepted is its lack of quantitative agreement with experiment, i.e., the $sp^{3}$ fractions predicted by C-EDIP are too low. In Fig. 4 (c) we show the local mass density difference between the structure before and after impact: $$\Delta\rho(r,h)=2\pi r\left(g_{\text{after}}(r,h)-g_{\text{before}}(r,h)\right),$$ (1) where $g(r,h)$ is the pair correlation function on the surface of a cylinder of radius $r$ and height $h$ with origin at the impact site. $\Delta\rho(r,h)$ therefore gives the difference in total atom density integrated on a circumference of radius $r$ around the impact site at height $h$. We furthermore resolve this according to $sp^{2}$ and $sp^{3}$ components, which are computed with Eq. (1) using only the partial local mass densities corresponding to atoms with 3- and 4-fold coordination, respectively.
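A minimal sketch of the cylindrical binning behind Eq. (1) is given below. The bin widths and toy atom positions are assumptions for illustration, and the $2\pi r$ normalisation and the $sp^{2}$/$sp^{3}$ decomposition are omitted; only the before/after differencing pattern follows the text.

```python
import math

# Minimal sketch of the cylindrical binning behind Eq. (1): atoms are counted
# in (r, h) bins centred on the impact site, and the after-minus-before count
# difference per bin approximates Delta-rho up to the 2*pi*r normalisation.
# Bin widths and the toy positions below are assumptions for illustration.

DR, DH = 0.5, 0.5  # assumed radial and vertical bin widths (Angstrom)

def cyl_histogram(positions, impact, nr=8, nh=8):
    """Count atoms in (r, h) bins around the impact site (h >= 0 assumed here)."""
    counts = [[0] * nh for _ in range(nr)]
    x0, y0, z0 = impact
    for x, y, z in positions:
        r = math.hypot(x - x0, y - y0)
        ir, ih = int(r / DR), int((z - z0) / DH)
        if 0 <= ir < nr and 0 <= ih < nh:
            counts[ir][ih] += 1
    return counts

def density_difference(before, after, impact):
    """Positive entries: accumulation after impact; negative: depletion."""
    cb = cyl_histogram(before, impact)
    ca = cyl_histogram(after, impact)
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(ca, cb)]

# Toy impact: one atom is pushed outwards from the axis; the map shows
# depletion at small r and accumulation further out, as in Fig. 4 (c).
impact = (0.0, 0.0, 0.0)
before = [(0.2, 0.0, 0.2), (2.2, 0.0, 0.2)]
after = [(1.2, 0.0, 0.2), (2.2, 0.0, 0.2)]
diff = density_difference(before, after, impact)
```

In the actual analysis such maps are averaged over thousands of impacts, which is what turns single-event noise into the smooth depletion/accumulation pattern discussed next.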
This quantity allows us to visualize where atoms are being removed and deposited and where the transition from $sp^{2}$ to $sp^{3}$ is taking place. Orange regions in the color maps indicate an increase in local density after impact, whereas blue regions denote a decrease in local density. The origin of the plot, (0,0), corresponds to the impact site, and the maps have been averaged over the last 4000 impacts. Our results challenge the belief that subplantation explains the high $sp^{3}$ fractions. The blue region around and below the impact site on the “Total” and “$sp^{3}$” panels shows that atoms are being displaced by the incoming ion. The orange region circling the impact site in the “$sp^{2}$” panel shows that these atoms, including the incoming ion, are subsequently deposited preferentially as $sp^{2}$ atoms. To further quantify this effect, Fig. 5 shows the average changes in atomic coordination within different regions around the impact site. As mentioned, the impacting atom is preferentially deposited with 3-fold coordination and there is a net annihilation of 4-fold ($sp^{3}$) sites in the immediate vicinity of the impact site. This is incompatible with the subplantation mechanism, which would require a majority of impacting atoms to be deposited with 4-fold coordination (see SI for more quantitative information). Our data show that each single impact induces coordination changes for roughly 80 atoms, and that $sp^{3}$ motifs locally diminish at and around the impact site. However, the dynamical balance between $sp^{3}$ creation and annihilation builds up laterally and away from the impact region to yield net generation of $sp^{3}$ carbon as a result. Figure 4 (b) shows schematically how the atoms are locally depleted around the impact site and deposited nearby as $sp^{2}$ carbon. 
This displacement induces a transformation of the surrounding carbons from $sp^{2}$ to $sp^{3}$, and also the film’s growth via vertical displacement of the uppermost layer of C atoms, which are always predominantly $sp^{2}$-bonded (and occasionally $sp$). Therefore, our results indicate that the pressure wave generated by the impacting energetic ions and knock-on atoms is responsible for the generation of $sp^{3}$-rich a-C films. This process is beneficial at the studied 20 eV, 60 eV and 100 eV deposition energies, but it does not occur at lower energies Caro et al. (2018). As the deposition energy increases, the incoming ions carry enough kinetic energy to start damaging the surface, which leads to the creation of a thicker and more disordered $sp^{2}$ surface region (Figs. 1 and 3), in agreement with experiment Davis et al. (1998). To summarize, this is the first computational study to report deposited a-C structures with a degree of $sp^{3}$ hybridization in quantitative agreement with experiment. Most importantly, the excellent agreement that we obtain with relevant experiments gives us confidence that our simulation is reproducing the microscopic physical processes correctly. In turn, this gives us confidence that we provide a fully atomistic account of the growth mechanism and high $sp^{3}$ contents in ta-C. The growth mechanism clearly supported by our results is peening; the previously proposed subplantation mechanism cannot be substantiated in view of our data. The use of a machine-learned interatomic potential trained from ab initio data has allowed us to achieve a level of description for this complex problem that has previously been out of reach. We believe these results also highlight the role that machine learning will play in the field of materials modeling and molecular dynamics in the years to come. This research was financially supported by the Academy of Finland through grants 310574 and 285526. 
Computational resources were provided by CSC – IT Center for Science, Finland, through projects 2000634 and 2000300. V. L. D. gratefully acknowledges a fellowship from the Alexander von Humboldt Foundation, a Leverhulme Early Career Fellowship, and support from the Isaac Newton Trust. References Sainio et al. (2016) S. Sainio, H. Jiang, M. A. Caro, J. Koehne, O. Lopez-Acevedo, J. Koskinen, M. Meyyappan, and T. Laurila, “Structural morphology of carbon nanofibers grown on different substrates,” Carbon 98, 343 (2016). Suarez-Martinez and Marks (2012) I. Suarez-Martinez and N. A. Marks, “Amorphous carbon nanorods as a precursor for carbon nanotubes,” Carbon 50, 5441 (2012). Laurila et al. (2017) T. Laurila, S. Sainio, and M. A. Caro, “Hybrid carbon based nanomaterials for electrochemical detection of biomolecules,” Prog. Mater. Sci. 88, 499 (2017). Tiwari et al. (2015) J. N. Tiwari, V. Vij, K. C. Kemp, and K. S. Kim, “Engineered carbon-nanomaterial-based electrochemical sensors for biomolecules,” ACS Nano 10, 46 (2015). Arriaga et al. (2016) R. I. Arriaga, M. Findlay, P. J. Hesketh, and J. R. Stetter, “Ubiquitous wearable electrochemical sensors,” Electrochem. Soc. Interface 25, 69 (2016). Tersoff (1988) J. Tersoff, “Empirical interatomic potential for carbon, with applications to amorphous carbon,” Phys. Rev. Lett. 61, 2879 (1988). Galli et al. (1989) G. Galli, R. M. Martin, R. Car, and M. Parrinello, “Structural and electronic properties of amorphous carbon,” Phys. Rev. Lett. 62, 555 (1989). Kaukonen and Nieminen (1992) H.-P. Kaukonen and R. M. Nieminen, “Molecular-dynamics simulation of the growth of diamondlike films by energetic carbon-atom beams,” Phys. Rev. Lett. 68, 620 (1992). Robertson (2002) J. Robertson, “Diamond-like amorphous carbon,” Mat. Sci. Eng. R 37, 129 (2002). Marks (2000) N. A. Marks, “Generalizing the environment-dependent interaction potential for carbon,” Phys. Rev. B 63, 035401 (2000).
Marks (2005) N. A. Marks, “Thin film deposition of tetrahedral amorphous carbon: a molecular dynamics study,” Diam. Relat. Mater. 14, 1223 (2005). Marks et al. (1996) N. A. Marks, D. R. McKenzie, B. A. Pailthorpe, M. Bernasconi, and M. Parrinello, “Ab initio simulations of tetrahedral amorphous carbon,” Phys. Rev. B 54, 9703 (1996). McCulloch et al. (2000) D. G. McCulloch, D. R. McKenzie, and C. M. Goringe, “Ab initio simulations of the structure of amorphous carbon,” Phys. Rev. B 61, 2349 (2000). Marks et al. (2002) N. A. Marks, N. C. Cooper, D. R. McKenzie, D. G. McCulloch, P. Bath, and S. P. Russo, “Comparison of density-functional, tight-binding, and empirical methods for the simulation of amorphous carbon,” Phys. Rev. B 65, 075411 (2002). Caro et al. (2014) M. A. Caro, R. Zoubkoff, O. Lopez-Acevedo, and T. Laurila, “Atomic and electronic structure of tetrahedral amorphous carbon surfaces from density functional theory: Properties and simulation strategies,” Carbon 77, 1168 (2014). Khaliullin et al. (2011) R. Z. Khaliullin, H. Eshet, T. D. Kühne, J. Behler, and M. Parrinello, “Nucleation mechanism for the direct graphite-to-diamond phase transition,” Nature Materials 10, 693 (2011). Sosso et al. (2013) G. C. Sosso, G. Miceli, S. Caravati, F. Giberti, J. Behler, and M. Bernasconi, “Fast crystallization of the phase change compound GeTe by large-scale molecular dynamics simulations,” J. Phys. Chem. Lett. 4, 4241 (2013). Bartók et al. (2010) A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi, “Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons,” Phys. Rev. Lett. 104, 136403 (2010). Deringer and Csányi (2017) V. L. Deringer and G. Csányi, “Machine learning based interatomic potential for amorphous carbon,” Phys. Rev. B 95, 094203 (2017). Plimpton (1995) S. Plimpton, “Fast parallel algorithms for short-range molecular dynamics,” J. Comput. Phys. 117, 1 (1995). (21) http://lammps.sandia.gov. Caro et al. (2018) M. A.
Caro, V. L. Deringer, J. Koskinen, T. Laurila,  and G. Csányi, “Machine-learning-based simulated deposition of $sp^{3}$-rich amorphous carbon films,” in preparation  (2018). Caro (2017) M. A. Caro, “Deposition of amorphous carbon at different energies modeled with GAP,” Zenodo  (2017), DOI:10.5281/zenodo.1133425. Robertson (2011) J. Robertson, “Plasma deposition of diamond-like carbon,” Jpn. J. Appl. Phys. 50, 01AF01 (2011). Schultrich et al. (1998) B. Schultrich, H.-J. Scheibe, D. Drescher,  and H. Ziegele, “Deposition of superhard amorphous carbon films by pulsed vacuum arc deposition,” Surf. Coat. Tech. 98, 1097 (1998). Lifshitz et al. (1995) Y. Lifshitz, G. D. Lempert, E. Grossman, I. Avigal, C. Uzan-Saguy, R. Kalish, J. Kulik, D. Marton,  and J. W. Rabalais, “Growth mechanisms of DLC films from C${}^{+}$ ions: experimental studies,” Diam. Relat. Mater. 4, 318 (1995). Gilkes et al. (1995) K. W. R. Gilkes, P. H. Gaskell,  and J. Robertson, “Comparison of neutron-scattering data for tetrahedral amorphous carbon with structural models,” Phys. Rev. B 51, 12303 (1995). Ferrari et al. (1999) A. C. Ferrari, J. Robertson, M. G. Beghi, C. E. Bottani, R. Ferulano,  and R. Pastorelli, “Elastic constants of tetrahedral amorphous carbon films by surface Brillouin scattering,” Appl. Phys. Lett. 75, 1893 (1999). Davis et al. (1998) C. A. Davis, G. A. J. Amaratunga,  and K. M. Knowles, “Growth mechanism and cross-sectional structure of tetrahedral amorphous carbon thin films,” Phys. Rev. Lett. 80, 3280 (1998).
Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers Masoumeh Soflaei,1 Hongyu Guo,2 Ali Al-Bashabsheh,3 Yongyi Mao,1 Richong Zhang3 1 University of Ottawa, Ottawa, Canada, 2 National Research Council Canada, 3 Beijing Advanced Institution on Big Data and Brain Computing, Beihang University, Beijing, China [email protected], [email protected], [email protected], [email protected], [email protected] Abstract We consider the problem of learning a neural network classifier. Under the information bottleneck (IB) principle, we associate with this classification problem a representation learning problem, which we call “IB learning”. We show that IB learning is, in fact, equivalent to a special class of the quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a “vector quantization” approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted by variational techniques, results in a novel learning framework, “Aggregated Learning”, for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks. Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Introduction The revival of neural networks in the paradigm of deep learning (?) has stimulated intense interest in understanding the workings of deep neural networks, e.g., (?; ?). Among various efforts, an information-theoretic approach, the information bottleneck (IB) (?), stands out as a fundamental tool for theorizing the learning of deep neural networks (?; ?; ?). Under the IB principle, the core of learning a neural network classifier is to find a representation $T$ of the input example $X$ that contains as much as possible of the information about the label $Y$ and as little as possible of the remaining information about $X$. The conflict between these two requirements can be formulated as a constrained optimization problem in which one requirement is implemented as the objective function and the other as the constraint (?; ?; ?). In this paper, we call this problem IB learning. A key observation that has inspired this work is that the optimization formulation of IB learning greatly resembles the rate-distortion function in rate-distortion theory, i.e., the theory of quantizing signals (?). A careful investigation along this direction indeed reveals that, conceptually, there is an unconventional quantization problem closely related to IB learning. To that end, we formulate this problem, which we refer to as IB quantization. We prove that the objective of IB quantization, namely, designing quantizers that achieve the rate-distortion limit, is equivalent to the objective of IB learning. This result establishes an equivalence between the two problems. In rate-distortion theory, it is well known that scalar quantizers, which quantize signals one at a time, are in general inferior to vector quantizers, which quantize multiple signals at once. The discovered equivalence between IB learning and IB quantization then suggests that IB learning may benefit from a “vector quantization” approach, in which the representations of multiple inputs are learned jointly. 
Exploiting variational techniques and the recently proposed mutual information neural estimation (MINE) method (?), we show that such a vector quantization approach to IB learning naturally results in a novel framework for learning neural network classifiers. We call this framework Aggregated Learning (AgrLearn). Briefly, in AgrLearn, $n$ random training objects are aggregated into a single amalgamated object and passed to the model; the model predicts the soft labels for all $n$ examples jointly. The training of an AgrLearn model is carried out by solving a min-max optimization problem, derived from a variational relaxation of the IB learning problem and a MINE approximation of mutual information. We conducted extensive experiments, applying AgrLearn to current state-of-the-art deep learning architectures for image and text classification. Our experimental results suggest that AgrLearn brings significant gains in classification accuracy. In practice, AgrLearn can be easily integrated into existing neural network architectures 111Our implementation of AgrLearn is available at https://github.com/SITE5039/AgrLearn. The proofs of some theoretical results are omitted due to length constraints. They will be included in an extended version of this paper. Information Bottleneck Learning The overall context of this work is a classification setting, where we let $\mathcal{X}$ denote the space of objects to be classified and $\mathcal{Y}$ denote the space of class labels. Assume that the objects and labels are distributed according to a distribution $p_{XY}$ on $\mathcal{X}\times\mathcal{Y}$; $p_{XY}$ is unknown, and we are instead given a set $\mathcal{D}:=\{(X_{1},Y_{1}),\dots,(X_{N},Y_{N})\}$ of i.i.d. samples from $p_{XY}$. The objective of learning here is to find a classifier from ${\cal D}$ that classifies $X$ into its label $Y$. 
Central to this classification problem is arguably the following representation learning problem: find a representation of $X$ that contains only the information about $X$ relevant to its class label $Y$. Such a problem can be naturally formulated using the information bottleneck principle (?) and will be referred to as the Information Bottleneck (IB) learning problem. In IB learning, one is interested in learning a representation $T$ of $X$ in some space $\mathcal{T}$ such that the mutual information $I(X;T)$ between $X$ and $T$ is as small as possible whereas the mutual information $I(Y;T)$ between $T$ and the class label $Y$ is as large as possible. Such a representation is sensible since it aims at squeezing away all information in $X$ that is irrelevant to the classification task while keeping the relevant information intact. Intuitively, minimizing $I(X;T)$ forces the model not to over-fit to the irrelevant features of $X$, whereas maximizing $I(Y;T)$ extracts all features useful for the classification task. The two optimization objectives are in conflict with each other. A natural formulation of the IB learning problem is to take one objective as the optimization objective and the other as a constraint. This gives rise to the following constrained optimization problem: subject to the Markov chain $Y$—$X$—$T$, find $$\widehat{p}_{T|X}=\arg\min_{p_{T|X}:I(X;T)\leq A}-I(Y;T),$$ (1) for a nonnegative value $A$, or equivalently, $$\widehat{p}_{T|X}=\arg\min_{p_{T|X}:I(Y;T)\geq A^{\prime}}I(X;T),$$ (2) for a nonnegative value $A^{\prime}$. The Markov chain assumption ensures that any information in feature $T$ about label $Y$ is obtained from $X$ only. 
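As a toy illustration of the two competing mutual informations in (1)-(2), the sketch below computes $I(X;T)$ and $I(Y;T)$ exactly for a stochastic encoder $p_{T|X}$ under the Markov chain $Y$—$X$—$T$. This example is not from the paper: the binary source, the label-flip rate, and the encoder-noise parameter `eps` are all illustrative assumptions. As the encoder gets noisier, both quantities decrease together, and the data-processing inequality $I(Y;T)\leq I(X;T)$ always holds.

```python
import math

def mutual_information(p_joint):
    """I(A;B) in nats for a joint distribution given as {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in p_joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log(p / (pa[a] * pb[b]))
               for (a, b), p in p_joint.items() if p > 0)

def joints(eps, flip=0.1):
    """Joint p(X,T) and p(Y,T) for: X uniform binary, Y = X flipped w.p. flip,
    T = X flipped w.p. eps; Markov chain Y—X—T."""
    pxt, pyt = {}, {}
    for x in (0, 1):
        for t in (0, 1):
            p_t_given_x = 1 - eps if t == x else eps
            pxt[(x, t)] = 0.5 * p_t_given_x
            for y in (0, 1):
                p_y_given_x = 1 - flip if y == x else flip
                # p(y,t) = sum_x p(x) p(y|x) p(t|x)
                pyt[(y, t)] = pyt.get((y, t), 0.0) + 0.5 * p_y_given_x * p_t_given_x
    return pxt, pyt

for eps in (0.0, 0.2, 0.5):
    pxt, pyt = joints(eps)
    print(eps, mutual_information(pxt), mutual_information(pyt))
```

At `eps = 0.5` the encoder destroys all information and both mutual informations vanish; at `eps = 0` the representation keeps everything, illustrating why a constraint is needed to trade the two objectives off.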
For later use, we denote the minimum mutual information in (2) as $R_{\rm IBL}(A^{\prime})$, i.e., $$R_{\rm IBL}(A^{\prime})=\min_{p_{T|X}:I(Y;T)\geq A^{\prime}}I(X;T).$$ (3) We note that solving this IB learning problem, i.e., obtaining the optimal $\widehat{p}_{T|X}$ and its corresponding bottleneck representation $T$, does not automatically solve the classification problem. It is still required to build a classifier that predicts the class label $Y$ based on the representation $T$ of $X$. Nonetheless, later in this paper we will show that solving a variational approximation of the IB learning problem may, in fact, provide a direct solution to the classification problem of interest. Information Bottleneck Quantization We now formulate the Information Bottleneck (IB) quantization problem. Our objective in this section is to show that the IB quantization and IB learning problems are equivalent. Let $(X_{1},Y_{1}),(X_{2},Y_{2}),\dots,(X_{n},Y_{n})$ be drawn i.i.d. from $p_{XY}$. The sequences $(X_{1},X_{2},\cdots,X_{n})$ and $(Y_{1},Y_{2},\cdots,Y_{n})$ are denoted by $X^{n}$ and $Y^{n}$, respectively. An $(n,2^{nR})$ IB-quantization code is a pair $(f_{n},g_{n})$ in which $f_{n}$ maps each sequence $X^{n}$ to an integer in $\{1,2,\cdots,2^{nR}\}$ and $g_{n}$ maps an integer in $\{1,2,\cdots,2^{nR}\}$ to a sequence $T^{n}:=(T_{1},T_{2},\cdots,T_{n})\in{\mathcal{T}}^{n}$. Using the standard nomenclature in quantization, the quantity $R$ is referred to as the rate of the code and $n$ as the length of the code. Using this code, $f_{n}$ encodes the sequence $X^{n}$ as the integer $f_{n}(X^{n})$ and $g_{n}$ reconstructs $X^{n}$ as a representation $T^{n}:=g_{n}(f_{n}(X^{n}))$. Unlike standard quantization problems, the IB quantization problem uses a distortion measure that may depend on the code. 
To that end, for any $x\in\mathcal{X}$, $t\in\mathcal{T}$ and any two conditional distributions $q_{Y|X}$ and $q_{Y|T}$, define $$d_{\rm IB}(x,t;q_{Y|X},q_{Y|T}):=\text{KL}(q_{Y|X}(.|x)\|q_{Y|T}(.|t)),$$ (4) where $\text{KL}(.\|.)$ is the Kullback–Leibler (KL) divergence. Note that the code $(f_{n},g_{n})$, together with $p_{XY}$, induces a joint distribution over the Markov chain $Y^{n}$—$X^{n}$—$T^{n}$. Under this joint distribution the conditional distributions $p_{Y_{i}|X_{i}}$ and $p_{Y_{i}|T_{i}}$ are well defined for each $i=1,2,...,n$. Hence, given the code $(f_{n},g_{n})$ and for any two sequences $x^{n}\in\mathcal{X}^{n}$ and $t^{n}\in\mathcal{T}^{n}$, their IB distortion is defined as $$\overline{d}_{\rm IB}(x^{n},t^{n}):=\frac{1}{n}\sum_{i=1}^{n}d_{\rm IB}(x_{i},t_{i};p_{Y_{i}|X_{i}},p_{Y_{i}|T_{i}}).$$ (5) We note that the quantity $\overline{d}_{\rm IB}(x^{n},t^{n})$ measures a “loss of information about $Y$” when the code $(f_{n},g_{n})$ is used to represent $x^{n}$ as $t^{n}$. Specifically, consider the source coding problem of compressing $Y^{n}$ based on observing $X^{n}=x^{n}$. If the conditional distribution $p_{Y_{i}|X_{i}}(\cdot|x_{i})$ for each $i$ is mistaken as $p_{Y_{i}|T_{i}}(\cdot|t_{i})$ in the design of the source code, the average additional coding overhead per $Y$-symbol is precisely $\overline{d}_{\rm IB}(x^{n},t^{n})$. Using this distortion measure, the IB quantization problem is to find a code $(f_{n},g_{n})$ having the smallest rate $R$ subject to the constraint $\mathbb{E}\overline{d}_{\rm IB}(X^{n},T^{n})\leq D$, where $\mathbb{E}$ denotes expectation. For given $p_{XY}$ and $\mathcal{T}$, a rate-distortion pair $(R,D)$ is called achievable if $\mathbb{E}\overline{d}_{\rm IB}(X^{n},T^{n})\leq D$ for some sequence of $(n,2^{nR})$ codes $(f_{n},g_{n})$. As usual, the rate-distortion function for the IB quantization problem, which we denote by $R_{\rm IBQ}(D)$, is defined as the smallest rate $R$ such that $(R,D)$ is achievable. 
Theorem 1 Given $p_{XY}$ and $\mathcal{T}$, the rate-distortion function for the IB quantization problem can be written as $$R_{\rm IBQ}(D)=\min_{p_{T|X}:\mathbb{E}d_{\rm IB}(X,T)\leq D}I(X;T)$$ (6) where the expectation is defined as $$\mathbb{E}d_{\rm IB}(X,T):=\sum_{x,t}d_{\rm IB}(x,t;p_{Y|X},p_{Y|T})p_{XT}(x,t).$$ This theorem provides a limit on the achievable rates of the IB quantization problem. We note that this result was first shown in (?). However in (?), the result relies on the assumption that $|\mathcal{T}|\geq|\mathcal{X}|+2$, whereas in this theorem the condition is removed. The form of the rate-distortion function $R_{\rm IBQ}$ for the IB quantization problem given in Theorem 1 resembles greatly the optimal objective of IB learning $R_{\rm IBL}$ in (3). More precisely, we have Theorem 2 $R_{\rm IBL}(A^{\prime})=R_{\rm IBQ}(I(X;Y)-A^{\prime})$ Proof: We have $$\displaystyle\mathbb{E}d_{\rm IB}(X,T)$$ $$\displaystyle:=$$ $$\displaystyle\sum_{x,t}d_{\rm IB}(x,t;p_{Y|X},p_{Y|T})p_{XT}(x,t)$$ $$\displaystyle=$$ $$\displaystyle I(X;Y)-I(Y;T)$$ where the second equality is by the definition of $d_{\rm IB}$ and the Markov chain $Y$—$X$—$T$ assumption. Hence, we may rewrite (6) in Theorem 1 as $$\displaystyle R_{\rm IBQ}(D)$$ $$\displaystyle=$$ $$\displaystyle\min_{p_{T|X}:I(X;Y)-I(Y;T)\leq D}I(X;T)$$ $$\displaystyle=$$ $$\displaystyle\min_{p_{T|X}:I(Y;T)\geq I(X;Y)-D}I(X;T)$$ $$\displaystyle=$$ $$\displaystyle R_{\rm IBL}(I(X;Y)-D)$$ The theorem follows by substituting $A^{\prime}:=I(X;Y)-D$. $\Box$ This theorem relates the IB learning and IB quantization problems, where we note that $I(X;Y)$ is a constant that only depends on $p_{XY}$. By this theorem, solving the IB learning problem where the information about $Y$ contained in $T$ needs to be no less than $A^{\prime}$ is equivalent to solving the IB quantization problem so that the distortion is no more than $I(X;Y)-A^{\prime}$. 
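The key identity in this proof, $\mathbb{E}d_{\rm IB}(X,T)=I(X;Y)-I(Y;T)$, can be checked numerically. The sketch below builds an assumed random toy distribution (not from the paper) satisfying the Markov chain $Y$—$X$—$T$, computes the expected KL distortion of (4) exactly, and verifies it equals the difference of the two mutual informations.

```python
import math, random

random.seed(0)

X, Y, T = range(3), range(2), range(2)

def rand_dist(n):
    """A random probability vector of length n."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

px = rand_dist(3)
py_x = {x: rand_dist(2) for x in X}   # p(y|x)
pt_x = {x: rand_dist(2) for x in X}   # p(t|x), the "encoder"

# Joints implied by the Markov chain Y—X—T.
pxy = {(x, y): px[x] * py_x[x][y] for x in X for y in Y}
pxt = {(x, t): px[x] * pt_x[x][t] for x in X for t in T}
pyt = {}
for x in X:
    for y in Y:
        for t in T:
            pyt[(y, t)] = pyt.get((y, t), 0.0) + px[x] * py_x[x][y] * pt_x[x][t]
pt = {t: sum(pxt[(x, t)] for x in X) for t in T}
py = {y: sum(pxy[(x, y)] for x in X) for y in Y}
py_t = {(y, t): pyt[(y, t)] / pt[t] for y in Y for t in T}

def mi(joint, pa, pb):
    return sum(p * math.log(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

I_XY = mi(pxy, {x: px[x] for x in X}, py)
I_YT = mi(pyt, py, pt)

# E d_IB(X,T) = sum_{x,t} p(x,t) KL(p(Y|x) || p(Y|t)), as in Eq. (4).
E_d = sum(pxt[(x, t)] * sum(py_x[x][y] * math.log(py_x[x][y] / py_t[(y, t)])
                            for y in Y)
          for x in X for t in T)

print(E_d, I_XY - I_YT)
```

The two printed numbers agree to floating-point precision, confirming that low IB distortion is exactly high retained label information.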
Variational Approach to IB Learning Having established the equivalence between IB learning and IB quantization, we now turn to solving the IB learning problem. The objective of this section is to develop a variational approach to this problem which not only provides a bottleneck representation $T$ for $X$ but also leads to a classifier for the classification problem at hand. We note that the results presented in this section also underlie the “variational information bottleneck” approach of (?). We first establish the following result. Theorem 3 Under any distribution $p_{YXT}$ that satisfies the Markov chain $Y$—$X$—$T$, we have $$I(Y;T)\geq\mathbb{E}_{(x,y)\sim p_{XY},\,t\sim p_{T|X}(\cdot|x)}\log q_{Y|T}(y|t)+H(Y)$$ (7) for any conditional distribution $q_{Y|T}$ of a random variable on ${\cal Y}$ conditioned on $T$. In addition, the above inequality holds with equality if and only if $q_{Y|T}$ is equal to $p_{Y|T}$. As a consequence of this theorem, the mutual information $I(Y;T)$ can be written as $$I(Y;T)=\max_{q_{Y|T}}\mathbb{E}_{(x,y)\sim p_{XY},\,t\sim p_{T|X}(\cdot|x)}\log q_{Y|T}(y|t)+H(Y).$$ Substituting this in the IB learning problem as formulated in (1), we have $$\begin{aligned}\widehat{p}_{T|X}&=\arg\min_{p_{T|X}:I(X;T)\leq A}-I(Y;T)\\ &=\arg\min_{p_{T|X}:I(X;T)\leq A}\left\{-\max_{q_{Y|T}}\mathbb{E}_{(x,y)\sim p_{XY},\,t\sim p_{T|X}(\cdot|x)}\log q_{Y|T}(y|t)\right\}\\ &=\arg\min_{p_{T|X}:I(X;T)\leq A}\min_{q_{Y|T}}\left\{-\mathbb{E}_{(x,y)\sim p_{XY},\,t\sim p_{T|X}(\cdot|x)}\log q_{Y|T}(y|t)\right\}\end{aligned}$$ Now suppose we have a neural network representing the mapping $p_{T|X}$ and that we represent $q_{Y|T}$ using another network. Then we may construct an overall network by concatenating the two networks. 
Specifically, each object $x$ will first be passed to the network $p_{T|X}$, and the output $T$ of that network is passed to the network $q_{Y|T}$. If the true class label $y$ is modeled as being generated from this concatenated network, it is easy to see that the cross-entropy loss $\ell_{\rm CE}$ of the network is the expectation above, i.e., $$\ell_{\rm CE}=-\mathbb{E}_{(x,y)\sim p_{XY},\,t\sim p_{T|X}(\cdot|x)}\log q_{Y|T}(y|t).$$ (8) In other words, the IB learning problem can be formulated as solving the following optimization problem: $$\min_{p_{T|X},q_{Y|T}}\ell_{\rm CE}\left(p_{T|X},q_{Y|T}\right)\quad{\rm subject\ to\ }I(X;T)\leq A$$ (9) Hence, introducing a Lagrange multiplier, we will subsequently focus on the following unconstrained problem $$\min_{p_{T|X},q_{Y|T}}\ell_{\rm CE}\left(p_{T|X},q_{Y|T}\right)+\alpha I(X;T)$$ (10) for nonnegative $\alpha$. An apparent advantage of this approach to IB learning is that when the optimization problem (10) is solved, not only is the bottleneck representation $T$ found, but the entire classification network is also obtained. It is worth noting that the variational formulation (10) of IB learning can be viewed as a generalization of learning with standard neural networks under the cross-entropy loss. Specifically, learning with standard neural networks is the special case of (10) in which the term $\alpha I(X;T)$ is absent, or equivalently in which $\alpha=0$. The generalization of learning with standard neural networks to the formulation of IB learning in (10) is arguably beneficial in two respects: 1. The $\alpha I(X;T)$ regularization term in (10) serves to control the model complexity so as to reduce the generalization gap. 2. 
Generalizing the deterministic map from $X$ to $T$ in standard neural networks to a stochastic one in (10) minimizes the cross-entropy loss $\ell_{\rm CE}$ over a larger space; this potentially allows further decrease of $\ell_{\rm CE}$, thereby achieving better classification accuracy. We note that the “Deep Variational Information Bottleneck” (DVIB) approach of (?), not necessarily motivated by the same reason, uses the same variational bound of $I(Y;T)$ and arrives at the same formulation as (10). In the remainder of this paper, we present a new strategy, termed “Aggregated Learning”, to implement the IB learning formulation (10). Aggregated Learning (AgrLearn) We now introduce the Aggregated Learning (AgrLearn) framework for learning with neural networks. We will stay with the IB learning formulation of (10) while keeping in mind that it results from a variational approximation of the formulation in (1). Recall from Theorem 1 that the IB learning problem is equivalent to the IB quantization problem. In the classical rate-distortion theory (?), it is well known that in order to achieve the rate-distortion limit of quantization, in general, one must consider the use of vector quantizers. In the context of IB quantization, a vector quantizer is an IB-quantization code $(f_{n},g_{n})$ with $n>1$ whereas a scalar quantizer is an IB-quantization code $(f_{n},g_{n})$ with $n=1$. From rate-distortion theory, better quantizers result from using quantization codes with larger length $n$. In particular, in order to achieve the rate-distortion function, it is in general required that the length $n$ of the rate-distortion code be made asymptotically large. 
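As a concrete toy illustration of why a vector quantizer can beat a scalar quantizer at the same rate, the sketch below compares the two on a correlated pair source. Everything here is an illustrative assumption rather than a construction from the paper: the source takes values in $\{0,1,2,3\}^{2}$ with the second symbol copying the first with probability 0.9, and both codebooks are chosen by hand. At one bit per symbol, the scalar quantizer pays a fixed mean-squared distortion, while a length-2 vector quantizer with four diagonal codewords exploits the correlation between the two symbols.

```python
from itertools import product

ALPHABET = range(4)

def pair_prob(a, b):
    """p(X1=a, X2=b): X1 uniform; X2 = X1 w.p. 0.9, else redrawn uniformly."""
    p = 0.25 * (0.1 / 4)
    if a == b:
        p += 0.25 * 0.9
    return p

def scalar_mse():
    """1 bit/symbol scalar quantizer: codewords {0.5, 2.5} per coordinate."""
    cw = (0.5, 2.5)
    return sum(0.25 * min((x - c) ** 2 for c in cw) for x in ALPHABET)

def vector_mse():
    """Same rate (4 codewords per pair = 1 bit/symbol): diagonal codewords (k, k)."""
    total = 0.0
    for a, b in product(ALPHABET, repeat=2):
        d = min((a - k) ** 2 + (b - k) ** 2 for k in ALPHABET)
        total += pair_prob(a, b) * d
    return total / 2  # per-symbol distortion

print(scalar_mse(), vector_mse())
```

The scalar quantizer achieves 0.25 per symbol while the diagonal vector quantizer achieves 0.075, a threefold reduction at the same rate, purely by quantizing the two symbols jointly.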
Note that a scalar IB-quantization code $(f_{1},g_{1})$ maps $X$ to $T$ by $$T=g_{1}(f_{1}(X)):=(g_{1}\circ f_{1})(X).$$ Under the equivalence between IB quantization and IB learning, the mapping $g_{1}\circ f_{1}$ induced by the scalar quantizer $(f_{1},g_{1})$ essentially defines a conditional distribution $p_{T|X}$ in IB learning, which simply reduces to the deterministic function $g_{1}\circ f_{1}$. On the other hand, in learning with a standard neural network, the deterministic mapping, say $h$, from the input space ${\cal X}$ to the bottleneck space ${\cal T}$ (which could refer to the space of feature representations at any intermediate layer of the network) can be regarded as implementing a scalar IB-quantization code $(f_{1},g_{1})$ with $$g_{1}\circ f_{1}=h.$$ The superiority of vector quantizers to scalar quantizers then motivates us to develop a vector-quantization approach to IB learning, which we call Aggregated Learning, or AgrLearn for short. Like a vector quantizer, which quantizes $n$ signals simultaneously, AgrLearn classifies $n$ input objects jointly, the details of which are given below. The framework of AgrLearn consists of two networks, which we refer to as the “main network” and the “regularizing network” respectively. The Main Network The main network takes as its input the concatenation of $n$ objects $(X_{1},X_{2},\ldots,X_{n}):=X^{n}$. Such a concatenated input will be referred to as an “$n$-fold aggregated input”. The main network consists of two parts, as seen in Figure 1. 
The first part, or the “pre-bottleneck” part, implements a deterministic mapping $h:{\cal X}^{n}\rightarrow{\cal T}^{n}$ that maps an aggregated input $X^{n}$ to an “aggregated bottleneck” $T^{n}$ via $$T^{n}:=(T_{1},T_{2},\ldots,T_{n}):=h(X^{n}).$$ (11) The second part, or the “post-bottleneck” part, implements a stochastic mapping $q_{Y^{n}|T^{n}}$ from ${\cal T}^{n}$ to ${\cal Y}^{n}$ that factorizes according to $$q_{Y^{n}|T^{n}}(y^{n}|t^{n}):=\prod_{i=1}^{n}q_{Y_{i}|T^{n}}(y_{i}|t^{n})$$ (12) Overall, the main network expresses a stochastic mapping from ${\cal X}^{n}$ to ${\cal Y}^{n}$, which can be written as $$q_{Y^{n}|X^{n}}(y^{n}|x^{n}):=\prod_{i=1}^{n}q_{Y_{i}|T^{n}}(y_{i}|h(x^{n}))$$ (13) On the main network as specified by (13), define $$\ell_{\rm CE}^{(n)}:=-\mathbb{E}_{x^{n}y^{n}\sim p_{XY}^{\otimes n}}\log q_{Y^{n}|X^{n}}(y^{n}|x^{n})$$ (14) where $p_{XY}^{\otimes n}$ is the distribution on $\left({\cal X}\times{\cal Y}\right)^{n}$ induced by drawing $n$ samples i.i.d. from $p_{XY}$. Clearly $\ell_{\rm CE}^{(n)}$ is nothing more than the cross-entropy loss of the network’s predictive distribution $q_{Y^{n}|X^{n}}$ for the aggregated input $X^{n}$ with respect to the labels $Y^{n}$. As we will be minimizing this cross-entropy loss function, we next discuss its properties. Following Theorem 3, $$\ell^{(n)}_{\rm CE}\geq nH(Y)-I(Y^{n};T^{n}),$$ (15) and if the post-bottleneck network component $q_{Y^{n}|T^{n}}$ has sufficient capacity, then $$\min_{q_{Y^{n}|T^{n}}}\ell_{\rm CE}^{(n)}=nH(Y)-I(Y^{n};T^{n})$$ That is, if the post-bottleneck component has sufficient capacity, then minimizing $\ell_{\rm CE}^{(n)}$ over the entire main network also maximizes $I(Y^{n};T^{n})$. The Regularizing Network The regularizing network is essentially a mutual information neural estimator (MINE) network (?), which serves to estimate $I(X;T)$ and penalize it during the training of the main network. 
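A minimal sketch may help make the main network's factorization (12)-(13) concrete. All sizes, the linear "networks", and the random initialization below are illustrative assumptions (the paper's pre-bottleneck is a deep convolutional or recurrent network, not a linear map): a shared pre-bottleneck $h$ maps the $n$-fold aggregated input to the aggregated bottleneck, and $n$ parallel softmax heads each read the whole bottleneck.

```python
import math, random

random.seed(3)

n, d_in, d_t, n_classes = 2, 4, 5, 3  # illustrative sizes

def linear(w, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Pre-bottleneck h: one shared map over the concatenated input (Eq. 11).
W_h = [[random.gauss(0, 0.1) for _ in range(n * d_in)] for _ in range(n * d_t)]
# Post-bottleneck: n separate heads q_{Y_i|T^n}, each reading all of t^n (Eq. 12).
heads = [[[random.gauss(0, 0.1) for _ in range(n * d_t)] for _ in range(n_classes)]
         for _ in range(n)]

def forward(x_agg):
    t_agg = linear(W_h, x_agg)                         # t^n = h(x^n)
    return [softmax(linear(W, t_agg)) for W in heads]  # n label distributions

def cross_entropy(x_agg, labels):
    # -log q_{Y^n|X^n}(y^n|x^n) = sum_i -log q_{Y_i|T^n}(y_i|t^n), Eqs. (13)-(14)
    return sum(-math.log(dist[y]) for dist, y in zip(forward(x_agg), labels))

x_agg = [random.random() for _ in range(n * d_in)]
print(cross_entropy(x_agg, [0, 2]))
```

Note that each head conditions on the entire aggregated bottleneck $t^{n}$, not just its own slice, which is what allows joint features across the aggregated objects to be used.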
For a careful development of MINE, the reader is referred to (?). Here we only give a brief description. MINE in a Nutshell Suppose that ${\cal U}$ and ${\cal V}$ are two spaces and that there is a joint distribution $p_{UV}$ on ${\cal U}\times{\cal V}$ defining a pair $(U,V)$ of random variables. Suppose that we can perform i.i.d. sampling of $p_{UV}$ and we wish to estimate the mutual information $I(U;V)$ from the samples. In the framework of MINE, a family $\Gamma$ of functions is constructed as a neural network, where each $\gamma\in\Gamma$ is a function mapping ${\cal U}\times{\cal V}$ to the set ${\mathbb{R}}$ of real numbers. Then, owing to the dual representation of the KL divergence (?), the mutual information $I(U;V)$ can be estimated as $$\widehat{I}(U;V):=\max_{\gamma\in\Gamma}\left\{{\mathbb{E}}_{(u,v)\sim p_{UV}}\gamma(u,v)-\log{\mathbb{E}}_{(u,v)\sim p_{U}\otimes p_{V}}\exp\left(\gamma(u,v)\right)\right\}$$ (16) We will denote the term that gets maximized in (16) by $J(U,V;\gamma)$, namely, $$J(U,V;\gamma):={\mathbb{E}}_{(u,v)\sim p_{UV}}\gamma(u,v)-\log{\mathbb{E}}_{(u,v)\sim p_{U}\otimes p_{V}}\exp\left(\gamma(u,v)\right)$$ (17) and re-express $\widehat{I}(U;V)$ as $$\widehat{I}(U;V)=\max_{\gamma\in\Gamma}J(U,V;\gamma)$$ As usual, practical computation of $J(U,V;\gamma)$ exploits Monte-Carlo approximation based on samples drawn from $p_{UV}$. A natural way to apply MINE to the estimation of $I(X;T)$ in AgrLearn is to take ${\cal U}:={\cal X}^{n}$, ${\cal V}:={\cal T}^{n}$, $U=X^{n}$, $V=T^{n}$. This allows us to estimate $I(X^{n};T^{n})$ by $$\widehat{I}(X^{n};T^{n})=\max_{\gamma\in\Gamma}J(X^{n},T^{n};\gamma)$$ (18) where $T^{n}$ is computed by the pre-bottleneck component of the main network with $X^{n}$ as its input. We may then take $\widehat{I}(X^{n};T^{n})$ as an approximation of $nI(X;T)$. 
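The Donsker–Varadhan bound underlying (16)-(17) can be verified exactly on a small discrete joint distribution: $J(U,V;\gamma)\leq I(U;V)$ for every test function $\gamma$, with equality at $\gamma^{*}(u,v)=\log\frac{p_{UV}(u,v)}{p_{U}(u)p_{V}(v)}$. The toy joint below is an assumption for illustration only; MINE itself parameterizes $\gamma$ with a neural network and replaces the exact expectations with Monte-Carlo estimates over mini-batches.

```python
import math, random

random.seed(2)

U, V = range(3), range(3)
w = [[random.random() for _ in V] for _ in U]
z = sum(map(sum, w))
p = {(u, v): w[u][v] / z for u in U for v in V}          # toy joint p(u,v)
pu = {u: sum(p[(u, v)] for v in V) for u in U}
pv = {v: sum(p[(u, v)] for u in U) for v in V}

def J(gamma):
    """Exact Donsker–Varadhan objective of Eq. (17) for test function gamma."""
    e_joint = sum(p[(u, v)] * gamma(u, v) for u in U for v in V)
    e_prod = sum(pu[u] * pv[v] * math.exp(gamma(u, v)) for u in U for v in V)
    return e_joint - math.log(e_prod)

I_UV = sum(p[(u, v)] * math.log(p[(u, v)] / (pu[u] * pv[v]))
           for u in U for v in V)

gamma_star = lambda u, v: math.log(p[(u, v)] / (pu[u] * pv[v]))  # optimal
gamma_bad = lambda u, v: 0.3 * u - 0.1 * v  # an arbitrary suboptimal choice

print(I_UV, J(gamma_star), J(gamma_bad))
```

Maximizing $J$ over a rich enough family $\Gamma$ therefore recovers the mutual information from below, which is why the regularizing network is trained by an inner maximization in (20).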
The network implementing the computation of $J(X^{n},T^{n};\gamma)$ is referred to as the regularizing network. Training and Prediction With this development, we may define an overall objective function $\Omega(h,q_{Y^{n}|T^{n}},\gamma)$ as $$\Omega(h,q_{Y^{n}|T^{n}},\gamma):=\ell_{\rm CE}^{(n)}+\alpha J(X^{n},T^{n};\gamma)$$ (19) where we note that the term $\alpha J(X^{n},T^{n};\gamma)$ also depends on $h$ implicitly. The above development then suggests that solving the IB learning problem in the form of (10) can be approximated by solving the following min-max problem: $$\min_{h,q_{Y^{n}|T^{n}}}\max_{\gamma}\Omega(h,q_{Y^{n}|T^{n}},\gamma)$$ (20) In the training of AgrLearn, mini-batched SGD can be used to solve the above min-max problem. The training algorithm is given in Algorithm 1. In the prediction phase, the “Replicated Classification” protocol is used222Two additional protocols were also investigated. Contextual Classification: For each object $X$, $n-1$ random examples are drawn from the training set $\mathcal{D_{\mathcal{X}}}$ and concatenated with $X$ to form the input; the predictive distribution for $X$ generated by the model is then retrieved. This process is repeated $k$ times, and the average of the $k$ predictive distributions is taken as the label predictive distribution for $X$. Batched Classification: Let $\mathcal{D^{\text{test}}_{\mathcal{X}}}$ denote the set of all objects to be classified. In Batched Classification, $\mathcal{D^{\text{test}}_{\mathcal{X}}}$ are classified jointly through drawing $k$ random batches of $n$ objects from $\mathcal{D^{\text{test}}_{\mathcal{X}}}$. The objects in the $i^{th}$ batch $B_{i}$ are concatenated to form the input and passed to the model. The final label predictive distribution for each object $X$ in $\mathcal{D^{\text{test}}_{\mathcal{X}}}$ is taken as the average of the predictive distributions of $X$ output by the model for all batches $B_{i}$’s containing $X$. 
Since we observe that all three protocols result in comparable performance, all results reported in the paper are obtained using the Replicated Classification protocol.. Each object $X$ is replicated $n$ times and concatenated to form the input. The average of the $n$ predictive distributions generated by the model is taken as the label predictive distribution for $X$. Experimental Studies We evaluate AgrLearn with deep network architectures such as ResNet for classification tasks in both image and natural language domains. Standard benchmarking datasets are used. We use mini-batched backprop for 400 epochs333Here an epoch refers to going over $N$ aggregated training examples, where $N=|\mathcal{D}_{\mathcal{X}}|$. with exactly the same hyper-parameter settings without dropout. Specifically, the weight decay is $10^{-4}$, and each mini-batch contains 64 aggregated training examples. The learning rate for the main network is set to 0.1 initially and decays by a factor of $10$ after $100$, $150$, and $250$ epochs. Each reported performance value (error rate or accuracy) is the median over the final 10 epochs of the value obtained by averaging over 7 runs of the same setting. Image Recognition Experiments are conducted on the CIFAR-10 and CIFAR-100 datasets with two widely used deep network architectures, namely ResNet (?) and WideResNet (?). The CIFAR-10 dataset has 50,000 training images, 10,000 test images, and 10 image classes, and the CIFAR-100 dataset is similar to CIFAR-10 but with 100 classes. We apply AgrLearn to the 18-layer and 34-layer Pre-activation ResNet (ResNet-18 and ResNet-34) (?) as implemented in (?), and the 22-layer WideResNet (WideResNet-22-10) (?) as implemented in (?). 
The resulting AgrLearn model differs from the original ResNet and WideResNet in its $n$ parallel soft-max layers in the post-bottleneck part (as opposed to the single soft-max layer in ResNet and WideResNet) and in the number of filters in the last layer of the pre-bottleneck part, which is expanded by a factor of $n$. This expansion by a factor of $n$ is required because the input dimension in AgrLearn increases significantly, and the model is required to extract joint features across the individual objects in the amalgamated example. Note that fold number $1$ (fold-1) denotes the standard neural network, in which just one object is passed to the network, and a fold number greater than $1$ denotes an AgrLearn framework wherein multiple objects are aggregated and passed to the network. The quantity $\alpha$ is the coefficient of the second term in (19), where $\alpha=0$ corresponds to considering only the cross-entropy loss, and $\alpha>0$ corresponds to adding the regularizing network to the main network. Predictive Performance The prediction error rates of AgrLearn for different numbers of folds are shown in Tables 1, 2, and 3. It can be seen that AgrLearn significantly boosts the performance of ResNet-18, ResNet-34 and WideResNet-22-10. For example, with respect to ResNet-18, the relative error reductions achieved by fold-2 with $\alpha=0$ are $3.74$% and $2.83$% on CIFAR-10 and CIFAR-100, and with $\alpha>0$ the reductions are $3.86$% and $3.21$% on CIFAR-10 and CIFAR-100, respectively. Similarly significant improvement upon ResNet-34 and WideResNet is also observed. For example, with respect to WideResNet-22-10, the relative error reductions achieved by fold-2 with $\alpha=0$ are $2.56$% and $3.93$% on CIFAR-10 and CIFAR-100, and with $\alpha>0$ the reductions are $1.18$% and $3.89$% on CIFAR-10 and CIFAR-100, respectively. 
The relative error reductions with respect to ResNet-34, achieved by fold-2, are $5.26$% and $5.16$% on CIFAR-10 and CIFAR-100 for $\alpha=0$, and $5.3$% and $6.59$% on CIFAR-10 and CIFAR-100 for $\alpha>0$, respectively. Model Behavior During Training The typical behavior of ResNet-18 for fold-1 and fold-4 (in terms of test error rate) across training epochs is shown in Figure 2. It is seen that in the “stable phase” of training, the test error of fold-4 (black curve) continues to decrease whereas the test performance of fold-1 (red curve) fails to further improve. This can be explained by the training loss curve of fold-1 (blue curve), which drops to zero quickly in this phase and provides no training signal for further tuning the network parameters. In contrast, the training loss curve of fold-4 (purple curve) maintains a relatively high level, allowing the model to keep tuning itself. The relatively higher training loss of fold-4 is due to the much larger space of amalgamated examples. Even in the stable phase, one expects that the model is still seeing new combinations of images. In other words, we argue that aggregating several examples into a single input can be seen as an implicit form of regularization, preventing the model from over-fitting to the limited number of individual examples. Sensitivity to Model Complexity With fold-$n$ AgrLearn, the output label space becomes $\mathcal{Y}^{n}$. This significantly larger label space seems to suggest that AgrLearn favors a more complex model. In this study, we start with ResNet-18 for fold-2 and investigate the behavior of the model when it becomes more complex. The options we investigate include increasing the model width (by doubling the number of filters per layer) and increasing the model depth (from 18 layers to 34 layers). The performances of these models are given in Table 4. 
Table 4 shows that increasing the model width, with respect to both ResNet-18 and ResNet-34, improves the performance of AgrLearn on both CIFAR-10 and CIFAR-100. For example, doubling the number of filters in ResNet-18 reduces the error rate for fold-2 with $\alpha=0.3$ from $4.73$% to $4.3$% on CIFAR-10 and from $22.94$% to $21.78$% on CIFAR-100, respectively. It also shows that increasing the model width of ResNet-34 by a factor of 2 reduces the error rate from $4.65$% to $4.45$% on CIFAR-10, and from $22.25$% to $21.68$% on CIFAR-100. We hypothesize that with AgrLearn, the width of a model plays a critical role. This is because the input dimension in AgrLearn increases significantly and the model is required to extract joint features across the individual objects in the amalgamated example. Moreover, increasing the model depth improves performance. For example, the relative error reductions from ResNet-18 to ResNet-34 with $\alpha=0.3$ are $1.7$% and $3$% on CIFAR-10 and CIFAR-100, respectively. Behavior with Respect to Fold Number We also conduct experiments investigating the performance of ResNet-18 with varying fold number $n$. Table 5 suggests that the performance of ResNet-18 is significantly boosted by increasing the number of folds $n$. For example, the relative error reductions achieved by fold-4 with $\alpha=0$ are $4.72$% and $5.11$% on CIFAR-10 and CIFAR-100, while the relative error reductions achieved by fold-2 are $3.74$% and $2.83$% on CIFAR-10 and CIFAR-100. This shows that increasing the number of folds improves the performance of AgrLearn on both CIFAR-10 and CIFAR-100. Moreover, the relative error reductions achieved by fold-4 with $\alpha>0$ are $4.7$% and $5.8$% on CIFAR-10 and CIFAR-100, respectively. Text Classification We test AgrLearn with two widely adopted NLP deep-learning architectures, CNN and LSTM (?), using two benchmark sentence-classification datasets, Movie Review (?) 
and Subjectivity (?). Movie Review and Subjectivity contain respectively 10,662 and 10,000 sentences, with binary labels. We use 10% of the examples in each dataset, chosen at random, for testing and the rest for training, as explained in (?). For CNN, we adopt CNN-sentence (?) and implement it exactly as in (?). For LSTM, we simply replace the convolution and pooling components in CNN-sentence with standard LSTM units as implemented in (?). The final feature map of CNN and the final state of LSTM are passed to a logistic regression classifier for label prediction. Each sentence enters the models via a learnable, randomly initialized word-embedding dictionary. For CNN, all sentences are zero-padded to the same length. The fold-2 AgrLearn models corresponding to the CNN and LSTM models are constructed with $\alpha$ equal to $0$. In CNN with fold-2, the aggregation of two sentences in each input simply involves concatenating the two zero-padded sentences. In LSTM with fold-2, when two sentences are concatenated in tandem, an EOS word is inserted after the first sentence. We train and test the CNN, LSTM and their respective AgrLearn models on the two datasets, and report their performance in Table 6. Clearly, the AgrLearn models improve upon their corresponding CNN or LSTM counterparts. In particular, the relative performance gain brought by AgrLearn on the CNN model appears more significant, amounting to $4.2$% on Movie Review and $3.8$% on Subjectivity. Conclusion Aggregated Learning, or AgrLearn, is a simple and effective neural network modeling framework, justified information-theoretically. It builds on an equivalence between IB learning and IB quantization and exploits the power of vector quantization, which is well known in information theory. We have demonstrated its effectiveness through the significant performance gain it brings to the current art of deep network models. 
We believe that the proposal and successful application of AgrLearn in this paper signal the beginning of a promising and rich theme of research. Many interesting questions deserve further investigation. For example, how can we characterize the interaction between model complexity, fold number and sample size in AgrLearn? Additionally, the aggregation of inputs provides additional freedom in the architectural design of the network; how can such freedom be better exploited?

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (No. 61772059, 61421003) and by the Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC).

References

[Abadi et al. 2016] Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; Kudlur, M.; Levenberg, J.; Monga, R.; Moore, S.; Murray, D. G.; Steiner, B.; Tucker, P.; Vasudevan, V.; Warden, P.; Wicke, M.; Yu, Y.; and Zheng, X. 2016. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI’16, 265–283.
[Alemi et al. 2016] Alemi, A. A.; Fischer, I.; Dillon, J. V.; and Murphy, K. 2016. Deep variational information bottleneck. CoRR abs/1612.00410.
[Belghazi et al. 2018] Belghazi, M. I.; Baratin, A.; Rajeswar, S.; Ozair, S.; Bengio, Y.; Courville, A.; and Hjelm, R. D. 2018. MINE: Mutual information neural estimation. arXiv preprint arXiv:1801.04062.
[Dai et al. 2018] Dai, B.; Zhu, C.; Guo, B.; and Wipf, D. P. 2018. Compressing neural networks using the variational information bottleneck. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, 1143–1152.
[Donsker and Varadhan 1983] Donsker, M. D., and Varadhan, S. S. 1983. Asymptotic evaluation of certain Markov process expectations for large time. IV.
Communications on Pure and Applied Mathematics 36(2):183–212.
[He et al. 2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Identity mappings in deep residual networks. In European Conference on Computer Vision, 630–645. Springer.
[Hochreiter and Schmidhuber 1997] Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural Comput. 9(8):1735–1780.
[Kim 2014a] Kim, Y. 2014a. Convolutional neural networks for sentence classification. In EMNLP, 1746–1751.
[Kim 2014b] Kim, Y. 2014b. https://github.com/yoonkim/cnn_sentence.
[LeCun, Bengio, and Hinton 2015] LeCun, Y.; Bengio, Y.; and Hinton, G. E. 2015. Deep learning. Nature 521(7553):436–444.
[Liu 2017] Liu, K. 2017. https://github.com/kuangliu/pytorch-cifar.
[Gilad-Bachrach, Navot, and Tishby 2003] Gilad-Bachrach, R.; Navot, A.; and Tishby, N. 2003. An information theoretic tradeoff between complexity and accuracy. In COLT.
[Pang and Lee 2004] Pang, B., and Lee, L. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL, 271–278.
[Pang and Lee 2005] Pang, B., and Lee, L. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, 115–124. Association for Computational Linguistics.
[Saxe et al. 2018] Saxe, A. M.; Bansal, Y.; Dapello, J.; Advani, M.; Kolchinsky, A.; Tracey, B. D.; and Cox, D. D. 2018. On the information bottleneck theory of deep learning. In ICLR.
[Shamir, Sabato, and Tishby 2010] Shamir, O.; Sabato, S.; and Tishby, N. 2010. Learning and generalization with the information bottleneck. Theor. Comput. Sci. 411(29-30):2696–2711.
[Shannon 1959] Shannon, C. E. 1959. Coding theorems for a discrete source with a fidelity criterion. IRE National Convention Record 7.
[Shwartz-Ziv and Tishby 2017] Shwartz-Ziv, R., and Tishby, N. 2017. Opening the black box of deep neural networks via information. CoRR abs/1703.00810.
[Tishby, Pereira, and Bialek 1999] Tishby, N.; Pereira, F. C.; and Bialek, W. 1999. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, 368–377.
[Zagoruyko and Komodakis 2016a] Zagoruyko, S., and Komodakis, N. 2016a. https://github.com/szagoruyko/wide-residual-networks.
[Zagoruyko and Komodakis 2016b] Zagoruyko, S., and Komodakis, N. 2016b. Wide residual networks. arXiv preprint arXiv:1605.07146.
[Zhang et al. 2017] Zhang, C.; Bengio, S.; Hardt, M.; Recht, B.; and Vinyals, O. 2017. Understanding deep learning requires rethinking generalization. In ICLR.
Demonstration and Comparison of Operation of Photomultiplier Tubes at Liquid Argon Temperature

R. Acciarri (Università dell’Aquila e INFN, L’Aquila, Italy), M. Antonello (INFN - Laboratori Nazionali del Gran Sasso, Assergi, Italy), F. Boffelli (Università di Pavia e INFN, Pavia, Italy), M. Cambiaghi (Università di Pavia e INFN, Pavia, Italy), N. Canci, F. Cavanna (Università dell’Aquila e INFN, L’Aquila, Italy), A.G. Cocco (INFN - Sezione di Napoli, Napoli, Italy), N. Deniskina, F. Di Pompeo, G. Fiorillo (Università di Napoli e INFN, Napoli, Italy), C. Galbiati (Princeton University - Princeton, New Jersey, USA), L. Grandi, P. Kryczynski (IFJ PAN, Krakow, Poland), G. Meng (INFN - Sezione di Padova, Padova, Italy), C. Montanari (INFN - Sezione di Pavia, Pavia, Italy), O. Palamara (INFN - Laboratori Nazionali del Gran Sasso, Assergi, Italy), L. Pandola (INFN - Laboratori Nazionali del Gran Sasso, Assergi, Italy), F. Perfetto (Università di Napoli e INFN, Napoli, Italy), G.B. Piano Mortari (Università dell’Aquila e INFN, L’Aquila, Italy), F. Pietropaolo (INFN - Sezione di Padova, Padova, Italy), G.L. Raselli (INFN - Sezione di Pavia, Pavia, Italy), M. Rossella (INFN - Sezione di Pavia, Pavia, Italy), C. Rubbia (INFN - Laboratori Nazionali del Gran Sasso, Assergi, Italy), E. Segreto, A.M. Szelc, A. Triossi (INFN - Laboratori Nazionali di Legnaro, Legnaro, Italy), S. Ventura (INFN - Sezione di Padova, Padova, Italy), C. Vignoli (INFN - Laboratori Nazionali del Gran Sasso, Assergi, Italy), A. Zani (Università di Pavia e INFN, Pavia, Italy)

Abstract

Liquefied noble gases are widely used as a target in direct Dark Matter searches. Signals from scintillation in the liquid, following energy deposition from the recoil nuclei scattered by Dark Matter particles (e.g. WIMPs), should be recorded down to very low energies by photosensors suitably designed to operate at cryogenic temperatures. Liquid Argon based detectors for Dark Matter searches currently implement photomultiplier tubes for signal read-out.
In the last few years, PMTs with photocathodes operating down to liquid Argon temperature (87 K) have been specially developed with increasing Quantum Efficiency characteristics. The most recent of these, Hamamatsu Photonics Mod. R11065 with peak QE up to about 35%, has been extensively tested within the R&D program of the WArP Collaboration. During these tests the Hamamatsu PMTs showed superb performance and allowed us to obtain a light yield of around 7 phel/keV${}_{ee}$ in a Liquid Argon detector with a photocathodic coverage in the 12% range, sufficient for the detection of events down to a few keV${}_{ee}$ of energy deposition. This shows that this new type of PMT is suited for experimental applications, in particular for new direct Dark Matter searches with LAr-based experiments.

Footnotes:
1. Corresponding authors: [email protected], [email protected], [email protected]
2. Currently at Yale University - New Haven, Connecticut, USA.
3. Currently at Princeton University - Princeton, New Jersey, USA.
4. Currently at IASS - Potsdam, Germany.
5. Currently at ITAB - Chieti, Italy.

1 Introduction

A new-generation, high Quantum Efficiency, 3” photomultiplier tube (PMT) for cryogenic applications at liquid Argon temperature (LAr, T = 87 K) has been recently developed by Hamamatsu Photonics (Mod. R11065). This is of interest to experiments that adopt liquefied Argon as a target and read out the scintillation light signals from interactions in the medium, in particular direct Dark Matter searches. Using these new PMTs could lead to improvements in detector sensitivity down to low recoil-energy thresholds thanks to their enhanced quantum efficiency. Within the on-going R&D activity of the WArP Collaboration, a first set of R11065 PMTs has been subjected to a series of tests aiming at their characterization in reference working conditions, i.e. immersed in liquid argon and optically coupled to LAr cells of various sizes.
Scintillation light signals from interactions in the cell were detected by the PMTs and read out by fast waveform digitizers. A comparison of the R11065 Hamamatsu PMT with a former generation of cryogenic PMT produced by Electron Tubes Limited - Mod. ETL D750 (currently used in the WArP -100 detector) has been performed by simultaneously operating the two PMTs viewing a common LAr volume. In these tests the Hamamatsu PMT has shown superb performance, demonstrating that it is suited for experimental applications, in particular for new direct Dark Matter searches using LAr-based detectors.

2 The Hamamatsu PMT

The Hamamatsu R11065 [1] is a Box&Linear-focused 12-stage PMT, with a Synthetic Silica 3” window (opaque to wavelengths below 160 nm) and a special Bialkali photo-cathode developed to operate efficiently down to LAr temperature in the spectral range from UV to VIS. Its prime features include fast time response, good time resolution and pulse linearity. However, the most noteworthy parameter of this model is its excellent Quantum Efficiency (QE): Hamamatsu quotes a peak value of around 35% at 400 nm at room temperature, guaranteed to be stable at LAr temperature. The main characteristics of the new high-QE R11065 Hamamatsu PMT are reported in Fig.1. The QE as a function of the incident wavelength, as reported by Hamamatsu for one of the PMTs used in the series of tests reported in this work, is shown in Fig.2. A high Collection Efficiency of photoelectrons at the first dynode (CE, above 95%) is obtained for a cathode-to-first-dynode voltage above 300 V. The voltage divider for the 12-stage dynode chain is custom made on a G10 printed circuit according to a Hamamatsu reference electrical scheme (AC coupling, 50 $\Omega$ anode termination to ground). All passive components were selected for operation at cryogenic temperature.
3 Light Detection in Liquid Argon

Luminescence in liquid Argon is due to the radiative decay of low excited molecular states (Ar${}^{*}_{2}$) formed in ionization events [2, 3]. Photons are emitted with wavelength in the VUV range (around 127 nm) and are exponentially distributed in time with two main time constants ($\tau_{S}\simeq$ 5 ns for the fast component and $\tau_{T}\simeq$ 1.3 $\mu$s for the slow component, corresponding to the decay of the excimers in Singlet and Triplet states respectively, as reported for example in [4] from a recent dedicated measurement with time-resolved techniques). The transmittance of the PMT window (Synthetic Silica glass) for VUV photons below 160 nm is null; therefore, the LAr VUV light has to be shifted to longer wavelengths. This is commonly accomplished by using efficient wavelength-shifter (wls) materials such as Tetraphenyl-Butadiene (TPB) [5]. The emission spectrum of TPB is peaked around the blue line at 440 nm and extends from 390 to 520 nm, where the transmittance of the glass window and the photocathode quantum efficiency (QE) of the PMTs are sufficiently high. In Fig.2 the TPB emission spectrum is shown superimposed on the R11065 quantum efficiency as a function of wavelength. To provide LAr-based detectors with a high light-detection capability and to make the detector response homogeneous, all the internal surfaces delimiting the LAr sensitive volume have to be covered with a TPB layer. In the tests reported here, the boundary surfaces of the detectors (side and bottom end) were completely surrounded with a highly reflecting layer coated with a thin TPB film (about 300 $\mu$g/cm${}^{2}$) obtained by deposition with a vacuum-evaporation technique. The reflector layer (3M-VIKUITI ESR) was a polymeric, totally dielectric, multi-layer plastic mirror with very high specular reflectivity (about 99%).
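As a numerical aside, the two-component emission-time distribution quoted above can be written as a normalized density. The sketch below uses the time constants from the text; the singlet fraction `f_singlet` depends on the ionizing particle and is our placeholder assumption, not a value from the text:

```python
import math

TAU_S = 5e-9    # fast (singlet) time constant from the text, in s
TAU_T = 1.3e-6  # slow (triplet) time constant from the text, in s

def scintillation_pdf(t, f_singlet=0.3):
    """Normalized two-component exponential emission-time density.

    f_singlet is the fraction of light in the fast component
    (illustrative value; in practice it depends on the ionizing particle).
    """
    fast = f_singlet / TAU_S * math.exp(-t / TAU_S)
    slow = (1.0 - f_singlet) / TAU_T * math.exp(-t / TAU_T)
    return fast + slow
```

Each term integrates to its component fraction, so the density integrates to 1 over $t \in [0, \infty)$; the fast term dominates near $t=0$ because of the large $1/\tau_{S}$ prefactor.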
This configuration has been chosen to simultaneously optimize the down-conversion efficiency for the impinging VUV photons and the reflection efficiency for the blue-shifted photons. In this way, scintillation VUV photons from energy deposition in the LAr volume propagate inside the LAr volume and are wavelength-shifted into visible photons when hitting the TPB film on the surface boundaries. The TPB film + reflector underlayer has a high reflectivity for visible photons (around 95$\%$), meaning that down-converted visible photons can be reflected (several times) from the boundary surfaces, up to collection on the photocathode. (In other LAr detectors a thin TPB layer, embedded in a polystyrene matrix, is usually deposited onto the PMT window. In the tests reported here, however, the PMT window was left naked; this allows the highest visible-photon collection at the PMT photocathode, at the expense of losing the VUV light fraction directly impinging onto the PMT window.) Effects of residual impurities in LAr on the scintillation light output may significantly degrade the detector performance [6, 7]. Quenching (i.e. non-radiative) processes in two-body collisions of impurity molecules with Ar${}^{*}_{2}$ excimer states (otherwise radiatively decaying with scintillation light emission) and absorption of the emitted VUV photons by photo-sensitive impurities can take place, depending on the type of impurity and its concentration level. Light collection becomes affected by the presence of O${}_{2}$ and H${}_{2}$O molecules diluted at $\geq$ 100 ppb (parts per billion) and nitrogen diluted at ppm (parts per million) levels of concentration. The reduction of the O${}_{2}$ and H${}_{2}$O content by appropriate purification systems (molecular sieves for water, Oxygen reactants) is definitely needed in Dark Matter detectors based on the collection of LAr scintillation light.
4 Single PMT test

The device for the first test with a single PMT was composed of a LAr cell (in PTFE, cylinder shaped with internal dimensions h = 9.0 cm and $\phi$ = 8.4 cm) with the 3” R11065 PMT mounted face-down on the top side, viewing the 0.5 lt LAr volume inside the cell. An isometric layout and a picture of the set-up are shown in Fig.3. The PTFE cell was housed in a long stainless steel cylindrical vessel. Its internal volume is about 5 lt and it contains, after filling, a total amount of at least 3.5 lt of LAr in order to have the PMT and its base fully immersed. The LAr active volume of the detector cell (about 0.5 lt) was optically independent but not partitioned from the rest of the LAr volume inside the vessel. The vessel was deployed in a LAr bath in an open dewar, to keep the LAr internal volume at stable temperature. The experimental set-up was assembled and operated at the WArP Cryogenic Facility (LNGS - External Laboratory). This layout is very similar to that used for a former set of experimental measurements reported in [6, 7], to which we refer for more details on the detector set-up. The first test of the Hamamatsu PMT was carried out in fall ’09 and repeated in early 2010 for verification of the results obtained. Each test run lasted about 10 days after detector activation. The activation procedure consisted of a vacuum pumping phase of the vessel down to a few $10^{-5}$ mbar. The residual gas composition measured with a mass spectrometer indicated H${}_{2}$O as the main component, due to outgassing from detector components. After immersion of the vessel in the LAr bath, water outgassing was quickly halted as the temperature dropped below the freezing point, as indicated by a drop of the residual pressure inside the chamber by more than one order of magnitude.
At this stage the LAr filling procedure through an in-line set of filtering cartridges (Oxygen reactant and molecular sieve, Oxisorb and Zeolite) was started and smoothly completed in about one hour. After a period left for thermalization of the PMT at LAr temperature (about one day), the bias voltage on the PMT was slowly raised up to working conditions. The DAQ system was structured with the PMT anode current output directly transmitted through a LEMO cable (50 $\Omega$) to a fast Waveform Recorder (Acqiris DP235 Dual-Channel PCI Digitizer Card, up to 1 GS/s, 8-bit dynamic range). At each trigger the signal waveform is recorded with a sampling time of 1 ns over a full record length of 15 $\mu$s.

4.1 Data Analysis and Results

During each source run (one per hour), single photo-electron (SER) pulses have been selected from out-of-trigger parts of the recorded waveforms (isolated peaks in the waveform tail, Fig.4 [Left]), in order to obtain the photo-electron data needed for calibration. Typically, the SER pulse is a narrow signal ($FWHM~{}\simeq~{}5$ ns) quickly returning to baseline (about 20 ns from the onset). An averaged single photo-electron pulse is shown in Fig.4 [Right]. The area under the selected peak (in ADC$\cdot$ns units, proportional to the SER charge) is evaluated by integration of the single photo-electron pulse after local baseline subtraction. (The local baseline is evaluated in a 50 ns window starting 25 ns after the peak, when no other peaks are observed inside the baseline window. If a second peak is found, it is merged with the previous one, the baseline window is set 25 ns after the second peak, and the check is performed again. This algorithm results in a charge spectrum that corresponds to the superposition of single and multiple photo-electron distributions.) SER charge spectra were obtained for each source run, all throughout the test period.
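The local-baseline integration procedure just described can be sketched as follows; this is a minimal single-peak version assuming 1 ns sampling (so sample indices are nanoseconds), and the 10-sample pre-peak margin of the integration region is our assumption:

```python
def ser_charge(waveform, peak, base_gap=25, base_win=50, pre=10):
    """Integrate an isolated single photo-electron (SER) peak after
    local baseline subtraction.

    The baseline is the mean of a `base_win`-sample window starting
    `base_gap` samples after the peak (as in the text); the peak is then
    integrated from `pre` samples before it up to the baseline window.
    Returns the area in ADC*ns (1 ns sampling assumed).
    """
    base = waveform[peak + base_gap : peak + base_gap + base_win]
    baseline = sum(base) / len(base)
    region = waveform[max(peak - pre, 0) : peak + base_gap]
    return sum(s - baseline for s in region)

# Synthetic check: flat baseline at 100 ADC with a 10-ADC spike.
wf = [100.0] * 200
wf[50] = 110.0
print(ser_charge(wf, 50))  # 10.0 ADC*ns
```

The second-peak merging and baseline re-checking logic of the footnoted algorithm is omitted here for brevity.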
A typical spectrum is reported in Fig.5 (PMT bias voltage +1400 V): the Gaussian distribution around the first peak is well separated from the thermionic dark-count distribution and corresponds to the genuine single photo-electron mean amplitude and spread. The other, broader peaks at higher charge values are associated with multiple (2 or 3) photo-electron distributions. The position of the first peak allows the gain setting to be monitored and gives the calibration constant per single photo-electron, useful for the corresponding gamma-source spectrum analysis. (The gain $G$ of the PMT multiplication system corresponds to the output charge collected for a unitary input charge, i.e. a single photo-electron. The output charge is estimated from the first peak of the SER spectrum after conversion from [ADC$\cdot$ns] units into charge units [pC] through $k=50/2^{8}$ [mV/ADC] $\cdot$ (1/R${}_{1}$+1/R${}_{2}$), with R${}_{1}$=R${}_{2}$=50 $\Omega$, the termination to ground of the voltage divider and of the transmission line to the DAQ, respectively, taken in parallel.) The gain dependence on the High Voltage applied to the PMT has been measured by changing the HV setting and measuring the corresponding position of the SER peak in the charge spectrum. The result is shown in Fig.6. The nominal gain $G~{}=~{}5\times 10^{6}$ (Hamamatsu data sheet) is achieved at a voltage supply of +1500 V. The SER Peak-to-Valley ratio, obtained by comparing the single photo-electron peak with the point where the single photo-electron distribution meets the exponentially falling distribution attributed to the dark counts, is usually taken as a figure of merit of the PMT performance. In Fig.7 [Left] the $P/V$ ratio determined at different gain values is shown: at the nominal gain it reaches the maximum value of $P/V\simeq~{}3.7$. The resolution of the SER peak, defined as the ratio $R=({\sigma}/{\mu})_{SER}$, is another important PMT characteristic to be measured at LAr temperature.
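Given the conversion constant $k$ quoted above, the gain follows directly from the SER peak area; here is a minimal sketch (the example area of roughly 102 ADC$\cdot$ns is back-calculated from the nominal gain for illustration, not a measured value):

```python
E_CHARGE = 1.602e-19  # electron charge, C

def gain_from_ser_peak(area_adc_ns, r1=50.0, r2=50.0,
                       fullscale_mv=50.0, bits=8):
    """PMT gain from the SER peak area, via k = 50/2^8 [mV/ADC] * (1/R1 + 1/R2).

    Since mV*ns/Ohm = 1e-12 C, multiplying the area [ADC*ns] by k gives
    the SER charge in pC; dividing by the electron charge gives the gain.
    """
    k = (fullscale_mv / 2**bits) * (1.0 / r1 + 1.0 / r2)  # pC per ADC*ns
    return area_adc_ns * k * 1e-12 / E_CHARGE

# An SER peak near ~102 ADC*ns corresponds to the nominal gain of 5e6:
print('%.2e' % gain_from_ser_peak(102.5))  # about 5.00e+06
```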
The measured SER resolution allows one to infer the Excess Noise Factor (ENF) of the PMT, due to the fluctuations of the multiplication process in the dynode chain (ENF $=1+R^{2}$). The PMT resolution at different gain values has been measured (Fig.7 [Right]). At the nominal gain of $5\times 10^{6}$ the resolution of the R11065 PMT is found to be $R~{}\simeq~{}28$% and the Excess Noise Factor is ENF$~{}\simeq 1.08$. During the subsequent period of tests of the Hamamatsu tube, a gain setting ($G~{}=~{}3.1\times 10^{6}$) lower than the nominal one has been adopted (HV = +1400 V). The stability of the gain, via the SER peak value, has been monitored throughout the test period (about three days of data acquisition). The SER peak showed an almost stable behavior in time, as indicated by Fig.8, with a slightly decreasing exponential trend ($\tau\simeq 35$ h from the fit) attributed to residual effects of thermalization of the PMT at LAr temperature. The main objective of this run was the measurement of the detector light yield (LY) attainable with the use of this new PMT. The LY, commonly defined as the number of photo-electrons (phel) collected per unit of deposited energy in units of phel/keV, depends primarily on the PMT Quantum Efficiency but also on several other factors, including the detector geometry, the photo-cathodic coverage and the actual operating conditions of the experimental set-up. Among the latter, the LAr purity and the TPB wavelength-shifting efficiency may play a significant role. Therefore, the experimentally determined LY needs to be compared with an a priori calculated reference expectation value (e.g. by means of MonteCarlo simulations). The LY measurement was performed by exposing the LAr cell viewed by the Hamamatsu PMT to a ${}^{241}$Am monochromatic $\gamma$-source with emission at 59.54 keV to obtain a reference energy deposit in LAr. The source was located inside a collimator holder positioned outside the cell in a fixed position.
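Numerically, the quoted ENF follows in one line from the measured SER resolution:

```python
def excess_noise_factor(ser_resolution):
    """ENF = 1 + R^2, with R = (sigma/mu) of the SER charge peak."""
    return 1.0 + ser_resolution ** 2

print(round(excess_noise_factor(0.28), 2))  # 1.08, as quoted above
```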
The trigger rate has been monitored, giving a stable value of $\sim 180$ Hz during the source runs. Data acquisition runs with the source have been alternated with blank runs (background from ambient radiation). Gamma rays from the ${}^{241}$Am source induce photo-electric interactions in the LAr cell active volume, with electron emission in the mip range. (Considering the LAr attenuation coefficient at this energy, $\simeq$ 0.5 cm${}^{-1}$, a good fraction of the LAr volume of the cell is exposed to photo-electric interactions without position-dependent bias.) Scintillation light following the electron energy deposition, after down-conversion and reflections at the active-volume boundaries, is collected at the PMT photo-cathode and the signal waveform is recorded. By waveform integration, after local baseline evaluation and subtraction, the event signal amplitude $S1$ was obtained in ADC units. It was then normalized to the SER peak value obtained by fitting the SER spectrum of each run, giving its value in phel units. The pulse amplitude is proportional to the electron energy deposited in the LAr cell. Standard cuts have been applied to remove low-energy events (E${}_{min}=20~{}phel$), high-energy ADC-saturated events, pile-up and out-of-time events. Pulse amplitude spectra have thus been obtained for each source run. A ${}^{241}$Am spectrum from one of the collected source runs is shown in Fig.9. As reported in the figure, the fit of the full absorption peak is found at 416 phel.
Therefore, assuming full deposition of the 59.54 keV, the Light Yield of the detector can be evaluated as:
$$LY = 7.0~\frac{\rm phel}{\rm keV} \pm 5\%$$ (1)
The statistical error from the fit is negligible; the systematic error associated with the LY measurement is evaluated from the dispersion of the photo-peak values in the collected ${}^{241}$Am spectra (4 to 5%, background subtracted), from uncertainties associated with the calibration ($\simeq$ 2%, SER peak determination) and from the systematics of the off-line analysis due to the choice of internal parameters (evaluated by varying them within 2 to 3%). An overall systematic error of around 5% is estimated. This remarkable result is fully compatible with the MonteCarlo expectation value based on standard assumptions on the optical properties of the reflector+TPB coating and on the LAr purity level estimated during the detector operations. (As a term of reference, a similar experimental set-up with a 0.5 lt cell equipped with another type of PMT, the 3” Electron Tubes Mod. ETL D750 with nominal QE = 20% at 420 nm [9], yielded a maximum measured LY of around 2.4 phel/keV.) It confirms the very good performance of the R11065 Hamamatsu PMT at LAr temperature, in line with expectations from its nominal Quantum Efficiency. The stability in time of the LY has been monitored by applying the analysis to a large set of runs collected during the test period. The results are shown in Fig.10. All LY values lie within $\pm 1.5\%$ around a mean value of 6.9 phel/keV. At the end of the test run the chamber was smoothly emptied and left in a GAr atmosphere. After a few weeks the detector was refilled with a new batch of LAr, without any other change of the set-up (i.e. same PMTs, same TPB-coated reflector surfaces). The LY measurements from this additional test fully confirmed the result reported above.
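The value in Eq. (1) is simply the full-absorption-peak position divided by the gamma-line energy; as a quick check:

```python
AM241_LINE_KEV = 59.54  # 241Am gamma line energy

def light_yield(full_peak_phel, e_kev=AM241_LINE_KEV):
    """Light yield (phel/keV) from the full-absorption peak position."""
    return full_peak_phel / e_kev

print(round(light_yield(416.0), 1))  # 7.0 phel/keV, cf. Eq. (1)
```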
5 Direct Comparison of the R11065 with an ETL PMT

The results reported above showed the excellent performance of the new Hamamatsu R11065, which appeared superior to the PMTs previously used by the WArP collaboration [8, 11]. However, the improvement in performance could, in principle, be an effect of differences in detector geometry and operating conditions. In order to fully decouple from these effects, a second dedicated test was set up (mid 2010) for a direct comparison of two types of PMTs: one 3” HQE Hamamatsu R11065 (the one used in the single PMT test) and one 3” ETL D750 (pre-production series of the PMT type [9] adopted in the WArP -100 experiment [8]). A picture of the detector chamber used in this test is shown in Fig.11. It is made of a PTFE cell of about 0.4 lt internal volume (h = 8.0 cm and $\phi$ = 7.6 cm) for LAr, lined with a TPB-coated reflector layer on the lateral wall, analogous to the single PMT test chamber. The PMTs were installed at both ends of the cell - the HQE Hamamatsu face down on top and the ETL face up at the bottom - both with naked windows not covered in wavelength shifter. The read-out, data treatment and off-line analysis codes were the same as in the single PMT test (Sec.4). Similarly, the single photo-electron pulses were identified and selected for each of the two channels, leading to the single photo-electron averaged pulse for each PMT, Fig.12. The two PMT pulse shapes are quite similar to each other. The return to baseline is, however, faster for the Hamamatsu SER pulse, without the wiggles and oscillations exhibited by the ETL PMT. SER spectra were obtained from each run, and an example of typical SER spectra from the two PMTs is shown in Fig.13. The positions of the peaks were later used for gain calculation and calibration of the respective acquired waveforms. After the LAr filling of the chamber, one day was left for PMT thermalization.
Subsequently, the gain stability in time has been monitored over about one week of running, Fig.14. The initial gain of the ETL PMT was set at a higher value, in expectation of a large decrease over the first days of operation (as observed in previous tests). Indeed, the gain of the ETL PMT showed a quite steep decreasing trend over the period of the measurements. The gain of the Hamamatsu PMT instead exhibited a slight decrease over the first day after activation and then stabilized at a constant value. Unfortunately, during the second day of operation an unexpected power outage occurred at the experimental site (WArP cryogenic facility - LNGS). The PMT HV power supplies were powered through a UPS, but the effect of the power outage is visible on the R11065 PMT as a sudden drop of gain, which was slowly recovered over the following days of running. The cause of the much steeper gain loss and much longer stabilization time of the ETL tube is not clear. A gain of 3.7$\times 10^{6}$ has been set for both PMTs for the subsequent set of source runs. At this gain value the characteristic features of the two PMTs (Peak-to-Valley ratio and SER resolution) are reported in Tab.1 for comparison. The main objective of the two-PMT direct comparison test was the determination of the individual light yield of each PMT when operating under strictly identical conditions, i.e. viewing the same LAr volume. This allowed decoupling from such variables as the purity of the liquid and the wavelength-shifting efficiency of the TPB film deposited onto the boundary reflective surface of the detector. The light yield of the two PMTs has been determined by exposure to the ${}^{241}$Am gamma-source. As in the single PMT test, data acquisition runs with the source have been alternated with blank runs (background from ambient radiation). The signal amplitude for each event was obtained by integration and normalization to the respective SER position.
Pulse amplitude spectra for both PMTs have thus been obtained for each source run and used, by performing a Gaussian fit, to determine the full absorption peak (typical mean values of $\simeq$ 172 phel of light output for the Hamamatsu PMT and $\simeq$ 52 phel for the ETL PMT, Fig.15). In Fig.16 [Left] the mean light-output values for the recorded runs, over a two-week period of operation, are shown. These values were then normalized to the source energy to obtain the Light Yield of each PMT. A direct figure of comparison between the two PMTs is the Hamamatsu-to-ETL LY ratio, which was found to be in the 3:1 range (stable during the operation period), as shown in Fig.16 [Right]. Due to the configuration of the test set-up in use, the LY ratio depends only on the Global Efficiency ratio of the two PMTs, defined through $GE~{}=~{}QE\times CE$, where QE is the photocathode Quantum Efficiency and CE the photoelectron Collection Efficiency at the first dynode. The relative QE of the two PMTs has been measured in a facility at CERN; its average value over the emission spectrum of the TPB is found to be of the order of 2.7. Taking into account the CE, around $90-95\%$ for the Hamamatsu PMT and around $75-80\%$ for the ETL, the GE ratio is found to be in the region of $3.0-3.4$, well compatible with the measured LY ratio.

6 Four-PMT test

The scaling-up of detector mass and complexity (e.g. increased number of PMTs) without loss of detector performance (lower LY) is in general not a trivial task, and this is especially true for noble-liquid detectors. A third experimental test has thus been performed to check whether the obtained Light Yields could be reproduced in a detector about ten times bigger in volume than the one used in the first test reported above (Sec.4). Monitoring the stability of the system over an extended run period was the other main goal of this test.
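Returning briefly to the efficiency argument of Sec. 5, the quoted GE-ratio bounds follow directly from the relative QE (about 2.7) and the two CE ranges; a quick check:

```python
def global_efficiency_ratio(qe_ratio, ce_ham, ce_etl):
    """GE_Ham / GE_ETL, with GE = QE x CE (values as quoted in the text)."""
    return qe_ratio * ce_ham / ce_etl

# Bounds from the quoted CE ranges: 90-95% (Hamamatsu) vs 75-80% (ETL).
print(round(global_efficiency_ratio(2.7, 0.90, 0.80), 1))  # lower bound, ~3.0
print(round(global_efficiency_ratio(2.7, 0.95, 0.75), 1))  # upper bound, ~3.4
```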
The PTFE mechanical structure of the detector was taken from the WArP -2.3 l prototype developed for a former set of experimental measurements [11] (to which we refer for more details on the detector set-up). The chamber was equipped with four R11065 HQE Hamamatsu PMTs. The internal volume of the chamber is made of a cylindrical section ($\phi$ = 18.4 cm and h = 9.5 cm) on top of a slightly conical section ($\phi_{top}$ = 17.6 cm, $\phi_{bot}$ = 16.0 cm and h = 7.5 cm), corresponding to about 4.3 lt of total volume when completely filled with LAr. The four PMTs were mounted face-down on the top side of the volume. The boundary surfaces (lateral and bottom) were prepared as in the previous tests and, similarly, the PMT glass windows were left naked. The photo-cathodic surface was about 12% of the total boundary surface ($\sim$ equivalent to the coverage of the ”single PMT” test set-up). The detector was housed in a low-radioactivity stainless steel vessel, filled with purified LAr and immersed in a LAr bath in an open cryostat. In Fig.17 a picture of the detector set-up is shown. The 4 PMT anode signals were directly digitized by two Acqiris boards (Mod. U 1080 A, 2 channels each, with 8-bit dynamic range and 1 GS/s) at 1 ns sampling time over a 15 $\mu$s time interval. This corresponds to the read-out chain currently implemented in the WArP -100 experiment. DAQ and off-line codes were the same as in the previous tests reported above. Before detector assembly, all the PTFE mechanical components of the chamber were baked in a vacuum oven (at around 80 ${}^{\circ}$C) for four weeks. After assembly and mounting inside the vessel, the experimental set-up went through a first vacuum pumping phase (4 days) down to $10^{-5}$ mbar, followed by a warm GAr flushing phase (24 h) and again by a vacuum pumping phase (2 days, back to $10^{-5}$ mbar).
A few hours after the immersion of the vessel in a LAr bath, the filling procedure through an in-line set of filtering cartridges (Oxygen reactant and molecular sieve) was started and completed in about three hours. The vessel was completely filled, up to full immersion of the PMTs and their bases in LAr. Therefore, the results reported below refer to measurements in single (liquid) phase at null electric field. Unlike in the previous tests, the PMTs were operated in DC coupling and biased at negative voltage. (The presence of the voltage-bias decoupling capacitor needed for the AC coupling of the PMT introduces an unavoidable, though small, undershoot in the collected waveform shape around 10 $\mu$s after the signal onset, leading to an estimated loss of $\leq$1% of the $S1$ signal due to late phel's falling below the detection threshold. Switching to DC coupling was indeed motivated by the goal of full photo-electron detection, though at the expense of a slightly higher dark-current noise, visible in the SER spectra and resulting in a lower peak-to-valley ratio.) After a period left for thermalization of the PMTs at LAr temperature (several hours), the bias voltage on the PMT cathodes was slowly raised up to the working point HV = $-$1400 V, corresponding to a gain of about 3$\times 10^{6}$. 6.1 Data Analysis and Results The detector was mainly exposed to the ${}^{241}$Am source. Exposures to ${}^{133}$Ba, ${}^{57}$Co and ${}^{137}$Cs sources (with $\gamma$ emission lines at higher energies) were also performed during the run period. The source was located inside a collimator holder positioned outside the vessel in a fixed position. Data-acquisition runs with the sources were alternated with blank runs (background from ambient radiation). The four signal waveforms were individually recorded for scintillation events in which the pulse from at least three of the four PMTs was above a threshold corresponding to 1.5 phel.
During each source (or blank) run, single photo-electron pulses were selected in order to provide the SER data needed for calibration. It is worth noting that the detector geometry is a scaled-down version of the WArP 100 detector (100 lt of active volume, 37 PMTs, $\sim$12% of photo-cathodic coverage); therefore the light yield from this detector test can be assumed to be somewhat predictive of the LY of the WArP 100 Inner Detector, when operated under equivalent conditions. Particular attention was given to the quality of the TPB coating on the VIKUITI ESR reflector. Before insertion into the detector, the wavelength-shifting efficiency of the TPB-coated reflector sheets was measured using a dedicated set-up. Only after the expected maximum light output was confirmed were the sheets lined onto the internal surfaces of the chamber. The analysis was performed in a way analogous to the two-PMT test, except that this time the event signal amplitude was obtained by summing the single-channel signal amplitudes after their normalization to the respective SER peak positions ($S1=\sum_{i=1,..,4}s1_{i}$, in phel units). An example of a Pulse Amplitude spectrum ($S1$ distribution) from the ${}^{241}$Am (higher intensity) source is shown in Fig. 18 (background subtracted), with the full absorption peak at 378 phel as determined by a Gaussian fit of the spectrum. Therefore, the Light Yield of the detector can be evaluated as: $$LY=6.35~\frac{\rm phel}{\rm keV}~\pm~5\%$$ (2) The LY value determined with Compton spectra obtained from exposures to the other sources at higher $\gamma$-energies was less precise, but in good agreement with the above LY value. The LY stability in time was monitored over several time intervals during the test period. As an example, results from a ten-day stability test are shown in Fig. 19. The LY (sum of all four PMTs) is stable within 2%.
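The LY in Eq. (2) follows from the fitted peak position and the principal ${}^{241}$Am gamma line (59.5 keV, a standard value assumed here, not quoted in the text):

```python
peak_phel = 378.0        # full absorption peak in phel (Gaussian fit, Fig. 18)
e_gamma_kev = 59.5       # principal gamma line of Am-241 (assumed standard value)

ly = peak_phel / e_gamma_kev
print(f"LY = {ly:.2f} phel/keV")   # ~6.35, matching Eq. (2)
```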
The LY from the individual PMTs was also checked by fitting the $s1_{i}$ distributions, and the results (relative LY contributions) are shown in Fig. 19. Three of the four PMTs behave in a very similar manner, while the fourth shows a systematically slightly lower value, not compatible with the difference in Quantum Efficiency. The reasons for this effect have yet to be understood. This is the only sign of malfunctioning experienced during the extensive use of the new Hamamatsu R11065 PMTs (it is worth noting that with this PMT working at its nominal performance, i.e. like the other three PMTs, a LY $\simeq$ 6.6 phel/keV could be achieved). The purity of the liquid Argon during the test was inferred from the measurement of the long decay-time constant ($\tau_{T}$) of the scintillation light. The waveform of the sum of the four PMT signals, averaged over a large number of scintillation events, is shown in Fig. 20. The decay time of the slow component from the fit was $\tau_{T}$ = 1130 ns. This value - compared to an expected value of around 1300 ns for asymptotically pure liquid Argon [4, 6] - indicates that a residual concentration of impurities was still present in the liquid. A direct measurement by mass spectroscopy on an Ar sample extracted from the chamber was performed and showed the presence of Nitrogen in the liquid at the ppm level. Nitrogen impurities are not filtered out by the implemented set of filters (dedicated to O${}_{2}$ and H${}_{2}$O removal). Commercial Ar (research grade, routinely used for filling) is usually provided with a lower content of N${}_{2}$ impurities; however, a dirtier batch of Argon, delivered on the occasion of the present test and used for the filling, cannot be excluded. The reduction of the long decay-time constant (via the quenching effect on the Ar${}_{2}^{*}$ excimers in the triplet state) results in a $\sim$10% loss of light [6] (and correspondingly in the LY value).
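A rough estimate of the quoted $\sim$10% light loss can be sketched as follows; the assumptions (not from the text) are that quenching only shortens the triplet lifetime, so that the collected slow light scales as $\tau_{meas}/\tau_{pure}$, and that the slow component carries roughly 75% of the total light for $\gamma$ events:

```python
tau_meas = 1130.0   # ns, measured slow-component decay time
tau_pure = 1300.0   # ns, expected for asymptotically pure LAr [4, 6]
f_slow   = 0.75     # assumed fraction of total light in the slow component (gamma events)

# Quenching shortens the triplet lifetime; collected slow light ~ tau_meas/tau_pure
loss = (1.0 - tau_meas / tau_pure) * f_slow
print(f"estimated light loss ~ {100 * loss:.0f}%")   # ~10%, as quoted
```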
Therefore, the result with this 4.3 lt chamber (four HQE Hamamatsu PMTs) agrees to a good approximation with the result obtained with the 0.5 lt detector equipped with one HQE PMT (LY $\simeq$ 7 phel/keV) and characterized by an equivalent photo-cathodic coverage ($\sim$12% in both chambers). The difference in the measured values can be attributed in full to the higher N${}_{2}$ concentration in the second measurement. 7 Conclusions A new PMT type, with an enhanced-Quantum-Efficiency photocathode and operating at LAr temperature, has recently been developed by Hamamatsu Photonics (Mod. R11065), with a peak QE up to about 35%. This PMT is very interesting from the point of view of Liquid Argon based Dark Matter detectors, which currently implement photo-multiplier tubes for signal read-out. The new PMTs have been extensively tested in the course of the R&D program of the WArP Collaboration. The main working parameters of this PMT were measured at LAr temperature and its optimal performance has been demonstrated. It has also been shown experimentally that Liquid Argon detectors with HQE photo-cathodic coverage in the 12% range can achieve a light yield of around 7 phel/keV (at null electric field), sufficient for the detection of events down to a few keV of energy deposition in the liquid. However, since the scaling-up of detector mass and complexity (e.g. an increased number of PMTs) without loss of detector performance (namely LY) is definitely not a trivial task, a dedicated test with four PMTs viewing signals from a 4.3 liter LAr cell has been performed. The key variables to keep strictly under control in order to obtain a high Light Yield have been identified: (1) PMTs at the highest possible level of performance (Quantum Efficiency of the photo-cathode at LAr temperature, Collection Efficiency at the first dynode, overall stability of the PMT response), (2) wavelength-shifting efficiency of the TPB coating, (3) purity of the liquid Argon.
All three elements need to be simultaneously controlled and maintained at their best possible level to guarantee optimal detector performance in line with expectations. References [1] Hamamatsu Photonics, R11065 data sheets (2009). [2] S. Kubota et al., Recombination luminescence in liquid Ar and Xe, Phys. Rev. B 17 (1978), 2762. [3] T. Doke, Fundamental properties of liquid Argon, Krypton and Xenon as radiation detector media, Port. Phys. 12 (1981), 9. [4] T. Heindl et al., The scintillation of liquid Argon, EPL 91 (2010), 62002. [5] W. M. Burton and B. A. Powell, Fluorescence of Tetraphenyl-Butadiene in the Vacuum Ultraviolet, Applied Optics 12 (1973), 87; D. N. McKinsey et al., Fluorescence efficiencies of thin scintillating films in the extreme UV spectral region, Nucl. Instrum. Meth. B 132 (1997), 351. [6] WArP Collaboration, Effects of Nitrogen contamination in liquid Argon, JINST 5 (2010), P06003. [7] WArP Collaboration, Oxygen contamination in liquid Argon: combined effects on ionization electron charge and scintillation light, JINST 5 (2010), P05003. [8] WArP Collaboration, The WArP experiment, Journal of Physics: Conference Series 203 (2010), 012006. [9] Electron Tube, ETL-D750 data sheets (2006). [10] E. Nichelatti et al., Optical characterization of organic light-emitting thin films in the UltraViolet and Visible spectral ranges, ENEA Tech. Report RT/2010/31/ENEA (2010). [11] WArP Collaboration, First results from a dark matter search with liquid argon at 87 K in the Gran Sasso underground laboratory, Astropart. Phys. 28 (2008), 495.
What happens to the quantum Hall effect when a magnetic-field-induced spin-density wave moves Victor M. Yakovenko[a] and Hsi-Sheng Goan[b] Department of Physics and Center for Superconductivity Research, University of Maryland, College Park, MD 20742, USA (E-print cond-mat/9505016, May 3, 1995) Abstract The influence of the motion of a magnetic-field-induced spin-density wave (FISDW) on the quantum Hall effect in a quasi-one-dimensional conductor is studied theoretically. In the ideal case of a free FISDW, it is found that the counterflow of the FISDW precisely cancels the quantum Hall current, so the resultant Hall conductivity is zero. In real systems, the Hall conductivity should vanish at high frequencies, where the pinning and the damping can be neglected and the dynamics of the FISDW is dominated by inertia. To be published by World Scientific Publishing Co. in the Proceedings of the Physical Phenomena at High Magnetic Fields–II Conference, Tallahassee, May 6–9, 1995. It is known experimentally[3] and understood theoretically[4, 5] that the magnetic-field-induced spin-density-wave (FISDW) state, observed in the (TMTSF)${}_{2}$X organic quasi-one-dimensional conductors, exhibits the quantum Hall effect. In the theoretical explanation of this effect, it is assumed that the FISDW is pinned and acts on electrons as a static potential. The Hall conductivity, calculated in the presence of this potential at zero temperature, is quantized. On the other hand, under certain conditions, the density wave in a quasi-one-dimensional conductor can move[6]. It is interesting to study how this motion would influence the quantum Hall effect. Since the density-wave condensate can move only along the chains, at first sight this purely one-dimensional motion cannot contribute to the Hall effect, which is essentially a two-dimensional effect.
Nevertheless, it is shown below that in the case of the FISDW, unlike in the case of a regular charge- or spin-density wave (CDW/SDW), a nonstationary motion of the condensate does produce a non-trivial contribution to the Hall conductivity. This effect is found at zero temperature in the absence of normal carriers and has the same origin as the quantum Hall effect. In the ideal system, where the FISDW is not pinned or damped, the contribution due to the FISDW motion precisely cancels the bare quantum Hall term, so that the resultant Hall conductivity is zero. In real systems, this effect should manifest itself at high enough frequencies, where the dynamics of the FISDW is dominated by inertia, and the pinning and the damping can be neglected. On the other hand, the effect cannot be observed in DC measurements, where the FISDW can be depinned by a strong electric field. In this paper, we present only a heuristic, semiphenomenological outline, whereas a systematic derivation will be given elsewhere. Some of these results were also briefly reported in other conference proceedings[7]. Let us consider a two-dimensional system where electrons are confined to chains parallel to the $x$-axis, and the spacing between the chains along the $y$-axis is equal to $b$. A magnetic field $H$ is applied along the $z$-axis, perpendicular to the $(x,y)$-plane. The system is in the FISDW state at zero temperature. Let us consider first the case where the electric field $E_{y}$ is applied perpendicular to the chains. The electron Hamiltonian can be written as $${\cal H}=\frac{\hbar^{2}k_{x}^{2}}{2m}+\Delta_{0}\cos(Q_{x}x+\Theta)+2t_{b}\cos(k_{y}b-q_{x}x+\omega_{y}t).$$ (1) Here $\hbar=h/2\pi$ is the Planck constant, $m$ is the electron mass, $k_{x}$ and $k_{y}$ are the electron wave vectors along and perpendicular to the chains, and $t_{b}$ is the amplitude of tunneling between the chains.
In the gauge ${A_{y}}=Hx-c{E_{y}}t$ and $A_{x}=A_{z}=0$, the magnetic and the transverse electric fields appear in the Hamiltonian (1) through the Peierls–Onsager substitution $k_{y}\rightarrow k_{y}-eA_{y}/c\hbar$, so $q_{x}=ebH/\hbar c$ and ${\omega_{y}}=eb{E_{y}}/{\hbar}$ ($e$ is the electron charge and $c$ is the velocity of light). The FISDW potential is represented by the second term in Eq. (1) with the amplitude $\Delta_{0}$ and the phase $\Theta$. It is well known[4, 5] that the longitudinal wave vector of the FISDW is equal to $Q_{x}=2{k_{F}}-Nq_{x}$, where $k_{F}$ is the Fermi wave vector and $N$ is an integer. For simplicity, we set the transverse wave vector of the FISDW to zero. We see that, in the presence of the magnetic field $H$, the hopping term in Eq. (1) acts as a potential, periodic along the chains with the wave vector $q_{x}$ proportional to $H$. In the presence of the transverse electric field $E_{y}$, this potential moves along the chains with the velocity $\omega_{y}/q_{x}=cE_{y}/H$ proportional to $E_{y}$. This velocity is nothing but the drift velocity in crossed electric and magnetic fields. The FISDW potential may also move along the chains[6], in which case its phase $\Theta$ depends on time $t$, and the velocity of the motion is proportional to the time derivative $\dot{\Theta}$. Since we are interested only in a spatially homogeneous motion of the FISDW, we assume that $\Theta$ depends only on $t$ and not on the coordinates $x$ and $y$. We also assume that both potentials move very slowly, adiabatically, which is the case when the electric field is sufficiently weak. Now, we are going to calculate the current along the chains produced by the motion of the potentials. Since there is an energy gap at the Fermi level, following the arguments of Laughlin[8] we can say that an integer number of electrons $N_{1}$ is transferred from one end of a chain to another when the FISDW potential shifts by its period $L_{1}=2\pi/Q_{x}$. 
The same is true for the motion of the hopping potential with an integer $N_{2}$ and the period $L_{2}=2\pi/q_{x}$. Suppose that the first potential shifts by an infinitesimal displacement $dx_{1}$ and the second by $dx_{2}$. The total transferred charge $dq$ would be the sum of the prorated amounts of $N_{1}$ and $N_{2}$: $$dq=eN_{1}\frac{dx_{1}}{L_{1}}+eN_{2}\frac{dx_{2}}{L_{2}}.$$ (2) Now, suppose that both potentials are shifted by the same displacement $dx=dx_{1}=dx_{2}$. In this case, we can also write that $$dq=e\rho\,dx,$$ (3) where $\rho=4k_{F}/2\pi$ is the concentration of electrons. Equating (2) and (3) and substituting the expressions for $\rho$, $L_{1}$, and $L_{2}$, we find the following Diophantine-type equation[9]: $$4k_{F}=N_{1}(2k_{F}-Nq_{x})+N_{2}q_{x}.$$ (4) Since $k_{F}/q_{x}$ is, in general, an irrational number, the only solution of Eq. (4) for integers $N_{1}$ and $N_{2}$ is $N_{1}=2$ and $N_{2}=N_{1}N=2N$. Dividing Eq. (2) by the time increment $dt$ and the distance between the chains $b$, we find the density of current along the chains, $j_{x}$. Taking into account that according to Eq. (1) the displacements of the potentials are related to their phases, $dx_{1}=-d\Theta/Q_{x}$ and $dx_{2}=\omega_{y}dt/q_{x}$, we find the final expression for $j_{x}$: $$j_{x}=-\frac{e}{\pi b}\dot{\Theta}+\frac{2Ne^{2}}{h}E_{y}.$$ (5) The first term in Eq. (5) represents the contribution of the FISDW motion, the so-called Fröhlich conductivity[6]. The second term describes the quantum Hall effect[4, 5]. The integer number $N$ in the quantized Hall conductivity $\sigma_{xy}=2Ne^{2}/h$ is the same as that in the FISDW wave vector $Q_{x}=2k_{F}-Nq_{x}$. To complete the solution of the problem, it is necessary to find how $\dot{\Theta}$ depends on $E_{y}$. For this purpose, we need the equation of motion of $\Theta$, which can be derived once we know the Lagrangian of the system, $L$.
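The uniqueness of the solution of the Diophantine-type equation (4) can be checked by brute force; the sketch below uses illustrative values (assumed): $k_{F}=\sqrt{2}$, $q_{x}=1$, $N=3$, for which $N_{2}=2N=6$ is expected:

```python
import math

# Brute-force check of Eq. (4):  4 k_F = N1 (2 k_F - N q_x) + N2 q_x.
# With k_F/q_x irrational, the k_F and q_x parts must balance separately,
# forcing N1 = 2 and N2 = 2N.
k_F, q_x, N = math.sqrt(2.0), 1.0, 3

solutions = [
    (n1, n2)
    for n1 in range(-10, 11)
    for n2 in range(-10, 11)
    if abs(4 * k_F - n1 * (2 * k_F - N * q_x) - n2 * q_x) < 1e-9
]
print(solutions)   # [(2, 6)] -> N1 = 2, N2 = 2N
```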
Two terms in $L$ can be readily recovered by taking into account that the current density $j_{x}$, given by Eq. (5), is the variational derivative of the Lagrangian with respect to the electromagnetic vector-potential $A_{x}$: $j_{x}=c\,\delta L/\delta A_{x}$. Written in the gauge-invariant form, the recovered part of the Lagrangian is equal to $$L_{1}=-\sum_{i,j,k}\frac{Ne^{2}}{2\pi\hbar c}\varepsilon_{ijk}A_{i}F_{jk}-\frac{e}{\pi b}\Theta E_{x},$$ (6) where $\varepsilon_{ijk}$ is the antisymmetric tensor with the indices $i,j,k=t,x,y$; $A_{i}$ and $F_{jk}$ are the vector-potential and the tensor of the electromagnetic field, and $E_{x}\equiv F_{tx}$ is the electric field along the chains. The first term in Eq. (6) is the so-called Chern–Simons term responsible for the quantum Hall effect[5]. The second term describes the interaction of the density-wave condensate with the electric field along the chains[6]. Lagrangian (6) should be supplemented with the kinetic energy of the FISDW condensate, $K$. The FISDW potential itself has no inertia because it is produced by the instantaneous Coulomb interaction between electrons, so $K$ originates completely from the kinetic energy of the electrons which are confined under the FISDW gap. The latter energy is proportional to the square of their average velocity, which, in turn, is proportional to the electric current along the chains: $$K=\frac{\pi\hbar b}{4v_{F}e^{2}}\,j_{x}^{2},$$ (7) where $v_{F}$ is the Fermi velocity. Substituting Eq. (5) into Eq. (7), expanding, and omitting an unimportant term proportional to $E_{y}^{2}$, we obtain the second part of the Lagrangian of the system: $$L_{2}=\frac{\hbar}{4\pi bv_{F}}\dot{\Theta}^{2}-\frac{eN}{2\pi v_{F}}\dot{\Theta}E_{y}.$$ (8) The first term in Eq. (8) is the same as the kinetic energy of a purely one-dimensional density wave[6] and is not specific to the FISDW.
The most important is the second term, which describes the interaction of the FISDW motion and the electric field perpendicular to the chains. This term is allowed by symmetry in the considered system and has the structure of a mixed vector–scalar product: $${\bf v}\,[{\bf E}\times{\bf H}].$$ (9) Here, ${\bf v}$ is the velocity of the FISDW, which is proportional to $\dot{\Theta}$ and is directed along the chains, that is, along the $x$-axis. The magnetic field ${\bf H}$ is directed along the $z$-axis, thus allowing the electric field ${\bf E}$ to enter only through the component $E_{y}$. Comparing formula (9) with the second term in Eq. (8), one should take into account that the magnetic field enters the second term implicitly, through the integer $N$, which depends on $H$ and changes sign when $H$ changes sign. Varying the total Lagrangian $L=L_{1}+L_{2}$, given by Eqs. (6) and (8), with respect to $\Theta$, we find the equation of motion of $\Theta$: $$\ddot{\Theta}=-\frac{2ev_{F}}{\hbar}E_{x}+\frac{eNb}{\hbar}\dot{E}_{y}.$$ (10) In Eq. (10), the first two terms constitute the standard one-dimensional equation of motion of the density wave[6], whereas the last term, proportional to the time derivative of $E_{y}$ and originating from the second term in Eq. (8), describes the influence of the electric field across the chains on the motion of the FISDW. Taking into account that $E_{x}=0$ and integrating Eq. (10), we find that $\dot{\Theta}=eNbE_{y}/\hbar$; thus, the first term in Eq. (5) (the Fröhlich conductivity of the FISDW) precisely cancels the second term (the quantum Hall current), so the resulting Hall current is equal to zero. This result could have been obtained without calculations by noting that the time dependence $\Theta(t)$ is determined by the principle of minimal action. The relevant part of the action is given, in this case, by Eq. (7), which attains its minimal value when $j_{x}=0$.
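The cancellation can be verified by direct substitution of $\dot{\Theta}=eNbE_{y}/\hbar$ into Eq. (5); the sketch below uses arbitrary nonzero values for the constants, since the identity is algebraic and unit-independent:

```python
import math

# Substituting Theta_dot = e*N*b*E_y/hbar (the solution of Eq. (10) at E_x = 0)
# into Eq. (5) should make the Hall current j_x vanish identically.
e, N, b, E_y, hbar = 1.7, 3, 0.4, 2.2, 0.9    # arbitrary illustrative values
h = 2 * math.pi * hbar                         # h = 2*pi*hbar

theta_dot = e * N * b * E_y / hbar             # from Eq. (10), E_x = 0
j_x = -e / (math.pi * b) * theta_dot + 2 * N * e**2 / h * E_y   # Eq. (5)
print(abs(j_x) < 1e-9)   # True: the Froehlich counterflow cancels the Hall term
```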
It is instructive to see how the nullification of the Hall conductivity takes place in the case where the electric field is directed along the chains. Varying $L$ (Eqs. (6) and (8)) with respect to $A_{y}$, we find the density of current perpendicular to the chains: $$j_{y}=-\frac{2Ne^{2}}{h}E_{x}-\frac{eN}{2\pi v_{F}}\ddot{\Theta}.$$ (11) In Eq. (11), the first term describes the quantum Hall current, whereas the second term, proportional to the acceleration of the FISDW condensate, comes from the second term in Eq. (8) and reflects the contribution of the FISDW motion along the chains to the electric current across the chains. According to the equation of motion (10), the electric field along the chains accelerates the density wave: $\ddot{\Theta}=-2ev_{F}E_{x}/\hbar$; thus, the Hall current (11) vanishes. It is clear, however, that in stationary, DC measurements the acceleration of the FISDW, discussed in the previous paragraph, cannot last forever. Any friction or dissipation will inevitably stabilize the motion of the density wave to a steady flow with zero acceleration. In this steady state, the second term in Eq. (11) vanishes, and the current $j_{y}$ recovers its quantum Hall value. The same is true in the case where the electric field is perpendicular to the chains. In that case, the dissipation eventually stops the motion of the FISDW along the chains and restores $j_{x}$, given by Eq. (5), to the quantum Hall value. The conclusion is that the contribution of the moving FISDW condensate to the Hall conductivity is essentially nonstationary and cannot be observed in DC measurements. On the other hand, the effect can be seen in AC experiments. To be realistic, let us add damping and pinning[6] to the equation of motion of the FISDW (10): $$\ddot{\Theta}+\frac{1}{\tau}\dot{\Theta}+\omega_{0}^{2}\Theta=-\frac{2ev_{F}}{\hbar}E_{x}+\frac{eNb}{\hbar}\dot{E}_{y},$$ (12) where $\tau$ is the relaxation time and $\omega_{0}$ is the pinning frequency. Solving Eq.
(12) via the Fourier transformation from the time $t$ to the frequency $\omega$ and substituting the result into Eqs. (5) and (11), we find the Hall conductivity as a function of the frequency: $$\sigma_{xy}(\omega)=\frac{2Ne^{2}}{h}\,\frac{\omega_{0}^{2}-i\omega/\tau}{\omega_{0}^{2}-\omega^{2}-i\omega/\tau}.$$ (13) The absolute value of the Hall conductivity, $|\sigma_{xy}|$, computed from Eq. (13), is plotted in Fig. 1 as a function of $\omega/\omega_{0}$ for $\omega_{0}\tau=2$. As we can see in the Figure, the Hall conductivity is quantized at zero frequency and has a resonance at the pinning frequency. At higher frequencies, where the pinning and the damping can be neglected and the system effectively behaves as the ideal, purely inertial system considered above, the Hall conductivity does decrease toward zero. The frequency dependence of the Hall conductivity in regular semiconductor quantum Hall systems was measured using the technique of crossed wave guides[10, 11]. Unfortunately, no such measurements were performed in the FISDW systems. These measurements would be extremely interesting. To give a crude estimate of the required frequency range, we quote the value of the pinning frequency $\omega_{0}\sim$ 3 GHz $\sim$ 0.1 K $\sim$ 10 cm for a regular (not magnetic-field-induced) SDW in (TMTSF)${}_{2}$PF${}_{6}$[12]. This work was partially supported by the NSF under Grant DMR–9417451 and by the A. P. Sloan Foundation. References [a] E-mail: [email protected]. [b] E-mail: [email protected]. [3] J. R. Cooper et al., Phys. Rev. Lett. 63 (1988) 1984; S. T. Hannahs et al., ibid., p. 1988. [4] D. Poilblanc et al., Phys. Rev. Lett. 58 (1987) 270. [5] V. M. Yakovenko, Phys. Rev. B 43 (1991) 11353. [6] G. Grüner, Rev. Mod. Phys. 60 (1988) 1129. [7] V. M. Yakovenko, J. Phys. (France) IV, Colloque C2, Suppl. I, 3 (1993) 307; J. Supercond. 7 (1994) 683. [8] R. B. Laughlin, Phys. Rev. B 23 (1981) 5632. [9] I. Dana, Y. Avron, and J. Zak, J. Phys. C 18 (1985) L679. [10] F.
Kuchar et al., Phys. Rev. B 33 (1986) 2965. [11] L. A. Galchenkov et al., JETP Lett. 46 (1987) 542. [12] D. Quinlivan et al., Phys. Rev. Lett. 65 (1990) 1816.
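The limiting behavior of the Hall conductivity in Eq. (13) of the preceding paper can be cross-checked numerically; the sketch below works in units of $2Ne^{2}/h$, with $\omega$ in units of $\omega_{0}$ and $\omega_{0}\tau=2$ as in Fig. 1:

```python
def sigma_xy_norm(w, w0=1.0, tau=2.0):
    """|sigma_xy(omega)| from Eq. (13), in units of 2Ne^2/h (here omega_0*tau = 2)."""
    num = w0**2 - 1j * w / tau
    den = w0**2 - w**2 - 1j * w / tau
    return abs(num / den)

print(sigma_xy_norm(0.0))    # 1.0   -> quantized DC value
print(sigma_xy_norm(1.0))    # ~2.24 -> resonance at the pinning frequency
print(sigma_xy_norm(10.0))   # ~0.05 -> decreases toward zero at high frequency
```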
Fundamental limits to signal integrity in nonlinear parametric optical circulators Ian A. D. Williamson [email protected]    Zheng Wang [email protected] Microelectronics Research Center, The University of Texas at Austin, Austin, TX 78758 USA Abstract We characterize the response of a parametric nonlinear optical circulator to realistic signals that have finite bandwidths. Our results show that intermodulation distortion (IMD), rather than pump depletion or compression, limits the maximal operating signal power and the dynamic range of nonlinear parametric circulators. This limitation holds even in the undepleted pump regime, where nonlinear circulators are not constrained by dynamic reciprocity. With a realistic pump power, noise floor, and nonlinear waveguide, our numerical modeling demonstrates a maximally achievable spur-free dynamic range (SFDR) of 81 dB. Circulators play the key role of separating high-power outgoing signals from low-power received signals in optical transceivers and interferometers for communication, signal-processing, ranging, and imaging applications in optics and microwave photonics. Because these applications involve broadband signals, the linearity of the circulator transfer function is crucial for maintaining a large signal-to-noise ratio (SNR), especially for systems requiring high spectral efficiency or digital modulation schemes with high peak-to-average power ratios. High linearity of the circulator transfer function in the forward direction minimizes signal distortion and the associated spurious signals that cause inter-channel and intra-channel interference Marpaung et al. (2013). Conventionally, in order to break reciprocity, optical circulators rely on the gyromagnetic effect, a linear magneto-optical (MO) effect Bi et al. (2011); Shoji et al. (2016).
However, MO materials are challenging for on-chip integration due to material incompatibility as well as the requirement of large interaction lengths to overcome the weak magnetic effects at optical frequencies. More recently, time-reversal symmetry breaking has been achieved via second-order Rangelov and Longhi (2017); Wang et al. (2017) and third-order optical nonlinearities Fan et al. (2012); Peng et al. (2014); Hua et al. (2016), as well as stimulated Brillouin scattering (SBS) Huang and Fan (2011). These approaches show great promise for realizing CMOS-compatible circulators with smaller device sizes and improved integration. However, dynamic reciprocity limits a nonlinear circulator’s ability to isolate signals in neighboring frequency bands Shi et al. (2015). This makes it challenging to use such circulator designs in broadband systems where signals at multiple ports must be simultaneously routed. Moreover, breaking time-reversal symmetry with nonlinearity imposes a fundamental lower limit on the operating signal power level, below which the system scatters light reciprocally. A recently proposed class of nonlinear circulator and isolator designs operates in the undepleted pump regime of parametric three-wave mixing. In these systems the interaction of signal and idler waves is effectively linear, overcoming the constraint of dynamic reciprocity while, in principle, eliminating the lower limit on operational signal power. However, the overall design space for parametric nonlinear circulators has not been fully explored, and the range of allowable signal powers has not been quantified. Moreover, the response of these devices to finite-bandwidth signals has not been considered. It is therefore critical to evaluate the suitability of parametric nonlinear processes in realistic circulator designs, which could be constrained by practical limits on pump power and noise performance.
In this letter we first review the operating principles of a parametric nonlinear optical circulator. We then numerically characterize the broadband performance of the circulator by modeling its response to a two-tone signal, allowing us to characterize the intermodulation distortion, the dynamic range, and the allowable signal power levels of the circulator relative to the pump wave. Our results reveal that, despite not being constrained by dynamic reciprocity, parametric nonlinear circulators suffer significant signal distortion and a reduced dynamic range even in the undepleted pump regime. This could potentially inhibit their use in next-generation optical communications and signal processing systems requiring very high dynamic range. Parametric nonlinear circulators are implemented in resonant Hua et al. (2016) or waveguiding geometries Yu and Fan (2009), where photonic modes are coupled via symmetry and momentum matching of the modulating radio-frequency signal or the optical pump. Here we focus only on a waveguide implementation that leverages a nonreciprocal phase shift (NRPS). When inserted into one arm of a dual-input dual-output Mach-Zehnder interferometer (DIDO-MZI), as shown in Fig. 1a, the NRPS is converted into a circulator response in terms of scattered power, with transmission occurring from port 1 $\rightarrow$ port 2 $\rightarrow$ port 4 $\rightarrow$ port 3 $\rightarrow$ port 1. The NRPS in the parametric circulator is achieved through a $\chi^{(2)}$ Rabi oscillation between signal and idler waves within a waveguide Wang et al. (2017).
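How a $\pi$ NRPS in one MZI arm yields direction-dependent routing can be illustrated with a minimal transfer-matrix sketch, assuming ideal lossless 50/50 couplers and a reference arm with zero phase (both assumptions for illustration):

```python
import numpy as np

B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # ideal 50/50 directional coupler

def mzi(phase):
    """2x2 transfer matrix: coupler -> arm phases (NRPS arm, reference arm) -> coupler."""
    return B @ np.diag([np.exp(1j * phase), 1.0]) @ B

fwd = mzi(np.pi)   # forward propagation sees the pi NRPS
bwd = mzi(0.0)     # backward propagation sees no extra phase

# Forward: bar transmission (diagonal); backward: cross transmission (off-diagonal).
# Routing light to different output ports in the two directions is the
# circulator behavior described in the text.
print(np.round(np.abs(fwd), 6))
print(np.round(np.abs(bwd), 6))
```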
In the undepleted pump limit, where the pump amplitude is much higher than the signal and idler amplitudes, $A_{p}\gg A_{s(i)}$, and with perfect phase matching, the amplitudes of the signal and idler waves have a sinusoidal spatial dependence Boyd (2008), $$A_{s}\left(z\right)=A_{s}\left(0\right)\cos\left(\kappa z\right)$$ (1) $$A_{i}\left(z\right)=A_{s}\left(0\right)j\sqrt{\frac{n_{s}\omega_{i}}{n_{i}\omega_{s}}}e^{j\phi_{p}}\sin\left(\kappa z\right).$$ (2) In the more general case of the signal wave amplitude approaching the pump amplitude, Jacobi elliptic functions describe the evolution of the wave amplitudes Armstrong et al. (1962). The nonlinear coupling coefficient, $$\kappa=2\frac{\omega_{s}\omega_{i}d_{\text{eff}}}{\sqrt{k_{s}k_{i}}c^{2}}|A_{p}|$$ (3) defines the characteristic interaction length of the parametric process. From Eqn. 1 it is clear that undergoing one full Rabi cycle, $\kappa z=\pi$, results in a $\pi$ phase shift for the signal wave. Moreover, this phase is nonreciprocal, or directional, via the momentum-matching condition of the parametric process, $k_{s}+k_{p}=k_{i}$, through the engineered dispersion of an on-chip waveguide, and can also be viewed as an indirect mode transition induced by the pump Yu and Fan (2009). Although useful for highlighting the operating principles of the nonlinear waveguide, the transfer function for the signal described by Eqn. 1 does not apply to the case of a finite-bandwidth input signal. In general, the broadband response is not simply the superposition of the responses at individual frequencies, due to cascaded mixing among the different tones that make up the signal and idler waves Barbour et al. (2016). Parasitic multi-tone mixing drains energy from the desired signal $\rightarrow$ idler $\rightarrow$ signal process and generates spurious tones that interfere with the signals being routed by the circulator.
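The Rabi cycle of Eqns. 1-2 can be checked numerically; for simplicity the sketch takes the Manley-Rowe prefactor $\sqrt{n_{s}\omega_{i}/(n_{i}\omega_{s})}=1$ and $\phi_{p}=0$ (both assumptions), so that photon flux rather than energy is conserved exactly:

```python
import numpy as np

kappa = 0.8                      # illustrative nonlinear coupling (1/length units)
z = np.linspace(0.0, np.pi / kappa, 201)

A_s = np.cos(kappa * z)          # signal amplitude, Eqn. 1, with A_s(0) = 1
A_i = 1j * np.sin(kappa * z)     # idler amplitude, Eqn. 2 (unit prefactor assumed)

# Flux is conserved along the waveguide ...
assert np.allclose(np.abs(A_s)**2 + np.abs(A_i)**2, 1.0)
# ... full conversion to the idler occurs at kappa*z = pi/2 ...
assert abs(A_s[100]) < 1e-12 and abs(abs(A_i[100]) - 1.0) < 1e-12
# ... and after one full Rabi cycle (kappa*z = pi) the signal revives with a pi phase shift.
print(A_s[-1])   # -1.0: same magnitude as A_s(0), phase shifted by pi
```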
In microwave photonics the linearity of an optical link, in terms of a fundamental input and output RF signal, is characterized with a two-tone test Urick et al. (2015). This probes the system's response to a broadband excitation, where the two tones represent an arbitrary pair of frequencies within the continuous input spectrum. A test setup for exciting two optical tones is the single-sideband (SSB) transmitter architecture and direct detection receiver (Fig. 1b). The two fundamental microwave tones are $\Omega_{1}$ and $\Omega_{2}$, which are upconverted to $\omega_{c}+\Omega_{1(2)}$, where $\omega_{c}$ is the optical carrier (Fig. 2a,b). In this architecture the optical carrier at $\omega_{c}$ is filtered at the output of the modulator. Intermodulation distortion (IMD) products arise in the output microwave spectrum at combinations of the two fundamental input tones (Fig. 2a): second order intermodulation distortion (IMD2) spurs occur at frequencies $\Omega_{2}\pm\Omega_{1}$ and third order intermodulation distortion (IMD3) spurs occur at frequencies $2\Omega_{1(2)}-\Omega_{2(1)}$. In the nonlinear parametric phase shifter, the third order spur (IMD3) dominates as the result of a three-stage cascaded nonlinear process within the waveguide. First, the two input signal tones are upconverted to tones on the idler wave at $\omega_{c}+\omega_{p}+\Omega_{1(2)}$. In the second stage, each idler tone at $\omega_{c}+\omega_{p}+\Omega_{1(2)}$ mixes with its opposite signal tone at $\omega_{c}+\Omega_{2(1)}$ to generate spur tones around the pump wave at $\omega_{p}+\Omega_{1(2)}-\Omega_{2(1)}$. Finally, the spurs around the pump wave mix with the signal (idler) tones in a SFG (DFG) process to generate the spurs around the idler (signal) tones. These spurs around the signal tones ultimately show up as IMD3 tones in the output microwave spectrum after demodulation from the optical carrier. 
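The placement of the IMD products can be illustrated with a memoryless cubic nonlinearity, a toy stand-in for the cascaded $\chi^{(2)}$ process described above (the spur frequencies, unlike their amplitudes, are generic to any third-order distortion). The tone values 9 and 11 mirror the $\Omega_1$, $\Omega_2$ used later in the Methods, but the sampling parameters are arbitrary.

```python
import numpy as np

# Two-tone test of a memoryless third-order nonlinearity y = x + a*x^3.
# IMD3 products land at 2*f1 - f2 and 2*f2 - f1, adjacent to the tones.
f1, f2 = 9, 11            # fundamental tones (arbitrary frequency units)
fs, N = 256, 256          # one-unit window -> FFT bin index == frequency
t = np.arange(N) / fs
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
y = x + 0.1 * x**3        # weak cubic distortion

spectrum = np.abs(np.fft.rfft(y))

# IMD3 spurs at 2*9 - 11 = 7 and 2*11 - 9 = 13 sit right next to the
# fundamentals at 9 and 11, so they cannot be filtered away.
print(spectrum[7] > 1, spectrum[13] > 1)   # True True
# A pure odd-order (cubic) term generates no IMD2 at f2 - f1 = 2:
print(spectrum[2] < 1e-6)                  # True
```

In the parametric waveguide the IMD3 amplitude follows from the three-stage cascade rather than a simple $x^3$ term, but the spectral placement is the same.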
The proximity of these IMD3 spurs to the signal tones inherently limits the signal fidelity because they cannot be optically filtered. Moreover, because the incident signals at any of the MZI ports pass through the $\chi^{(2)}$ waveguide (Fig. 1a), the resulting IMD3 spurs have the potential to interfere not only within their own pathway, but also with other pathways in the circulator. For example, the IMD3 spurs generated from the signal incident at port 1 will show up in the output of both port 2 and port 3. Similar to the limitation imposed by dynamic reciprocity Shi et al. (2015), this interference presents a fundamental challenge for full-duplex communications, where a high-power transmit signal must be routed simultaneously with a much lower power received signal. The IMD3 spurs resulting from the high-power transmit signal would completely overwhelm the much lower power received signals. The output power of the IMD3 spur ultimately determines the operating signal power levels of the circulator. As a concrete example, a pump power of 20 dBm with an input signal power of -10 dBm (in both tones) generates an output IMD3 spur of less than -120 dBm at the waveguide length for complete signal revival, $z\kappa=\pi$ (Fig. 2c, left side). However, when the input power of the two signal tones is increased to 10 dBm, the IMD3 spur power increases to approximately -40 dBm for the same waveguide length (Fig. 2c, right side). This is due to a nonlinear shift in the spatial null of the IMD3 spur, which arises from the cascaded nonlinear process described above. These results were computed by numerically solving the nonlinear coupled amplitude equations Gallo et al. (1997); Chen and Xu (2004), accounting for all frequency components shown in Fig. 2b. 
The template coupled amplitude equation for a wave with index $l$ is $$\begin{split}\frac{dA_{l}}{dz}=\sum_{m,n}\frac{j}{2}\frac{d_{\text{eff}}\omega_{l}^{2}}{k_{l}c^{2}}A_{m}A_{n}e^{j\left(k_{l}-k_{n}-k_{m}\right)z}\delta\left(\omega_{l}-\omega_{n}-\omega_{m}\right)\\ +\sum_{m,n}j\frac{d_{\text{eff}}\omega_{l}^{2}}{k_{l}c^{2}}A_{m}A_{n}^{*}e^{j\left(k_{l}+k_{n}-k_{m}\right)z}\delta\left(\omega_{l}+\omega_{n}-\omega_{m}\right).\end{split}$$ (4) In Eqn. 4, the first term captures all possible sum frequency generation (SFG) processes that couple to the wave at $\omega_{l}$, and the second term captures all possible difference frequency generation (DFG) processes that couple to the wave at $\omega_{l}$. The linearity of the signal transfer function holds for power levels well below the pump power, where a waveguide length satisfying $z\kappa=\pi$ completely recovers both signal tones with no insertion loss (Fig. 3a). Here both signal tones have identical input powers and essentially the same spatial dependence along the nonlinear waveguide. For signal powers that are within 8 dB of the pump, compression of the output signal tones occurs. At such high signal power, the pump wave can no longer provide enough energy to sustain the Rabi oscillation between the idler and signal waves, which is critical for achieving a linear response in the NRPS. At first glance, compression may be seen as the upper limit on allowable signal power, but in fact the IMD3 spur limits the system response before the onset of pump depletion. A noise floor is used to quantify the relative strength of the intermodulation spurs. In the optical domain the photodiode noise equivalent power (NEP) is defined as the optical intensity that results in a signal-to-noise ratio (SNR) of unity in the output, $P_{\text{NEP}}=S\sqrt{B}/\mathcal{R}$ = -83 dBm. 
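Expanding the template equation over a fixed frequency list can be sketched directly: the delta functions become frequency-match tests that select the SFG and DFG pairs coupling to each wave. The three-wave example, its unit-free amplitudes, and the dispersionless phase matching below are illustrative assumptions, not the 11-frequency system actually solved in this letter.

```python
import numpy as np

# Sketch of expanding the template equation over a frequency list: the
# SFG sum couples pairs with w_l = w_m + w_n, the DFG sum pairs with
# w_l = w_m - w_n.  Units and amplitudes below are illustrative.
def rhs(A, w, k, z, deff=1.0, c=1.0):
    dA = np.zeros_like(A, dtype=complex)
    for l, wl in enumerate(w):
        g = deff * wl**2 / (k[l] * c**2)
        for m, wm in enumerate(w):
            for n, wn in enumerate(w):
                if np.isclose(wl, wm + wn):   # delta(w_l - w_n - w_m): SFG
                    dA[l] += 0.5j * g * A[m] * A[n] * np.exp(1j * (k[l] - k[n] - k[m]) * z)
                if np.isclose(wl, wm - wn):   # delta(w_l + w_n - w_m): DFG
                    dA[l] += 1j * g * A[m] * np.conj(A[n]) * np.exp(1j * (k[l] + k[n] - k[m]) * z)
    return dA

# Three phase-matched waves: signal, pump, idler with w_i = w_s + w_p.
w = np.array([1.0, 3.0, 4.0])            # (w_s, w_p, w_i)
k = w.copy()                             # toy dispersionless phase matching
A = np.array([1.0 + 0j, 2.0 + 0j, 0j])   # idler initially empty
dA = rhs(A, w, k, z=0.0)

# Only the idler is driven at z = 0, by the SFG product of signal and
# pump; the signal and pump DFG terms vanish while the idler is empty.
print(dA)
```

Stepping this right-hand side in $z$ (e.g. with the Crank-Nicolson scheme named in the Methods) reproduces the cascaded-mixing dynamics for however many tones are included.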
Here, we assume a photodiode output noise spectral density $S$ = 0.6 A/$\sqrt{\text{Hz}}$, a photodiode responsivity $\mathcal{R}$ = 0.8 A/W, and a detector bandwidth $B$ = 1 Hz. The input intercept point (IIP) and output intercept point (OIP) define the input and output signal power, respectively, at which the system produces equal signal and spur powers. These points are used to extrapolate back to the noise floor and calculate a spur-free dynamic range (SFDR) as $$\text{SFDR}_{n}=\frac{n-1}{n}\left(\text{IIP}_{n}-P_{\text{noise}}\right),$$ (5) where $n=\left\{2,3,\ldots\right\}$ is the order of the dominant intermodulation spur. At the length for complete revival of the signal wave, $z\kappa=\pi$, the SFDR reaches its maximal value of 81 dB due to the spatial null in the IMD3 spur at low signal powers (Fig. 3a). For a perturbed length (or pump power) where $z\kappa\approx 3.712\approx 1.18\pi$, the SFDR drops to 69 dB due to operation away from the spatial null of the IMD3 spur (Fig. 3b). This indicates that obtaining the largest possible SFDR requires careful balancing of the system fabrication constraints as well as the pump power that is ultimately coupled into the on-chip nonlinear waveguide. At the waveguide length and pump power with maximal SFDR $\left(z\kappa=\pi\right)$, the IMD3 slope is 6 in log-log scaling, rather than 3, even though the spur is third order. The much steeper dependence on input signal power at this length is what opens up a wider spur-free range of operation. The maximum SFDR of 81 dB is far below the SFDR of modern microwave optical links, which can be on the order of 110 dB or greater Marpaung et al. (2013). This indicates that this particular parametric circulator architecture is a bottleneck for signal fidelity. 
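Eqn. 5 is a one-line calculation once the intercept point is known. The IIP3 value of 38.5 dBm below is back-computed by me from the quoted 81 dB SFDR and the -83 dBm noise floor; it is not a number stated in the text.

```python
def sfdr(n, iip_dbm, noise_dbm):
    """Spur-free dynamic range in dB for a dominant spur of order n (Eqn 5)."""
    return (n - 1) / n * (iip_dbm - noise_dbm)

# With the -83 dBm noise floor, an 81 dB third-order SFDR corresponds to
# IIP3 = -83 + 1.5 * 81 = 38.5 dBm (back-computed, assumed value).
print(sfdr(3, 38.5, -83))   # ~81 dB
# For comparison, an IMD2-limited link with the same intercept point:
print(sfdr(2, 38.5, -83))   # ~60.75 dB
```

The $(n-1)/n$ prefactor encodes the slope difference between the fundamental (slope 1) and an $n$-th order spur (slope $n$) on a log-log plot; the anomalous slope-6 behavior at $z\kappa=\pi$ noted above is what pushes the effective intercept point, and hence the SFDR, upward at that length.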
The computed design chart showing the signal power level as a function of both waveguide length and input power reveals the fundamental challenge of simultaneously maintaining low insertion loss, avoiding signal compression, and retaining a high dynamic range (Fig. 4a). Only a narrow region of the parameter space can be used for optimal device operation, where the dotted red contour indicates complete signal revival. Its curvature at large signal powers indicates that for a fixed waveguide length and pump power, negligible insertion loss cannot be achieved for all possible input signal powers. Complete revival can occur for a single waveguide length, but only up to powers of approximately 5 dBm, which is 15 dB below the pump. The null in the IMD3 spur closely tracks the signal revival contour but diverges slightly once the input signal power exceeds approximately 5 dBm (Fig. 4b). The corresponding null is where the maximum SFDR is obtained. On the other hand, operating at a waveguide length and pump power where $z\kappa\neq\pi$ does not result in a significant increase in the link's insertion loss. However, a much more significant penalty is observed in the SFDR, as is also confirmed by Fig. 3a. Our numerical modeling indicates that despite not being constrained by dynamic reciprocity, nonlinear parametric optical circulators have a reduced dynamic range that may be too low for next generation all-optical signal processing systems. In order to improve the dynamic range from the 81 dB predicted by our simulations, either the noise floor must be reduced or the pump power must be increased. For a lower noise floor, the increase in dynamic range will be accompanied by a reduced maximum signal power, corresponding to the intersection of the dashed horizontal line and the red spur line of Fig. 3a,b. 
Importantly, the intermodulation distortion resulting from the NRPS will carry over to adjacent ports (e.g., port 3, when transmitting from port 1 to port 2) because the spurs from the $\chi^{(2)}$ process cannot be canceled at the 50/50 coupler. In practice, this will result in significant in-band interference between simultaneous excitations of the circulator ports, such as in a full-duplex application. Although we have only considered a nonlinear waveguide geometry in this work, we believe that similar conclusions will apply to resonant nonlinear processes. The linewidth of an optical cavity cannot filter IMD3 spurs generated from arbitrarily small signal tones. Methods In this work the effective nonlinear coefficient was $d_{\text{eff}}=2.5\times 10^{-12}$ m/V, the effective mode area was $S=1\times 1$ $\mu$m${}^{2}$, the waveguide length was 50.638 mm for $\kappa z=4$, and the pump power was 20 dBm. The RF signal tones were $\Omega_{1}$ = 9 MHz and $\Omega_{2}$ = 11 MHz. The optical carrier was $\omega_{c}$ = 200.1 THz and the pump frequency was $\omega_{p}$ = 201.2 THz. All results in this work were computed by expanding the template Eqn. 4 into a set of equations accounting for all 11 frequencies shown in Fig. 2b, with the sum- and difference-frequency coupling accounted for by the Dirac delta functions. The resulting set of coupled equations was solved with Newton's method and a Crank-Nicolson finite differencing scheme Crank and Nicolson (1996). Acknowledgments This work was supported by the Packard Fellowships for Science and Engineering, the National Science Foundation (NSF) (EFMA-1641069), and the US Office of Naval Research (ONR) (N00014-16-1-2687). References
Marpaung et al. (2013) D. Marpaung, C. Roeloffzen, R. Heideman, A. Leinse, S. Sales, and J. Capmany, Laser & Photonics Reviews 7, 506 (2013).
Bi et al. (2011) L. Bi, J. Hu, P. Jiang, D. H. Kim, G. F. Dionne, L. C. Kimerling, and C. A. Ross, Nature Photonics 5, 758 (2011).
Shoji et al. (2016) Y. Shoji, K. 
Miura, and T. Mizumoto, Journal of Optics 18, 013001 (2016).
Rangelov and Longhi (2017) A. Rangelov and S. Longhi, Applied Optics 56, 2991 (2017).
Wang et al. (2017) K. Wang, Y. Shi, A. S. Solntsev, S. Fan, A. A. Sukhorukov, and D. N. Neshev, Optics Letters 42, 1990 (2017).
Fan et al. (2012) L. Fan, J. Wang, L. T. Varghese, H. Shen, B. Niu, Y. Xuan, A. M. Weiner, and M. Qi, Science 335, 447 (2012).
Peng et al. (2014) B. Peng, Ş. K. Özdemir, F. Lei, F. Monifi, M. Gianfreda, G. L. Long, S. Fan, F. Nori, C. M. Bender, and L. Yang, Nature Physics 10, 394 (2014).
Hua et al. (2016) S. Hua, J. Wen, X. Jiang, Q. Hua, L. Jiang, and M. Xiao, Nature Communications 7, 13657 (2016).
Huang and Fan (2011) X. Huang and S. Fan, Journal of Lightwave Technology 29, 2267 (2011).
Shi et al. (2015) Y. Shi, Z. Yu, and S. Fan, Nature Photonics 9, 388 (2015).
Yu and Fan (2009) Z. Yu and S. Fan, Nature Photonics 3, 91 (2009).
Boyd (2008) R. W. Boyd, Nonlinear Optics (Academic Press, 2008).
Armstrong et al. (1962) J. A. Armstrong, N. Bloembergen, J. Ducuing, and P. S. Pershan, Physical Review 127, 1918 (1962).
Barbour et al. (2016) R. J. Barbour, T. Brewer, and Z. W. Barber, Optics Letters 41, 3639 (2016).
Urick et al. (2015) V. J. Urick, J. D. McKinney, and K. J. Williams, Fundamentals of Microwave Photonics, Wiley Series in Microwave and Optical Engineering (Wiley, Hoboken, New Jersey, 2015).
Gallo et al. (1997) K. Gallo, G. Assanto, and G. I. Stegeman, Applied Physics Letters 71, 1020 (1997).
Chen and Xu (2004) B. Chen and C.-Q. Xu, IEEE Journal of Quantum Electronics 40, 256 (2004).
Crank and Nicolson (1996) J. Crank and P. Nicolson, Advances in Computational Mathematics 6, 207 (1996).
Algebraic and Analytic Properties of the One-Dimensional Hubbard Model Frank Göhmann${}^{\dagger}$ (e-mail: [email protected]) and Shuichi Murakami${}^{\ddagger}$ (e-mail: [email protected]) ${}^{\dagger}$Department of Physics, Faculty of Science, University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113, Japan (address from Oct. 1996: Physikalisches Institut der Universität Bayreuth, TP1, 95440 Bayreuth, Germany) ${}^{\ddagger}$Department of Applied Physics, Faculty of Engineering, University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113, Japan Abstract We reconsider the quantum inverse scattering approach to the one-dimensional Hubbard model and work out some of its basic features so far omitted in the literature. It is our aim to show that the $R$-matrix and monodromy matrix of the Hubbard model, which have been known for ten years now, have good elementary properties. We provide a meromorphic parametrization of the transfer matrix in terms of elliptic functions. We identify the momentum operator for lattice fermions in the expansion of the transfer matrix with respect to the spectral parameter and thereby show the locality and translational invariance of all higher conserved quantities. We work out the transformation properties of the monodromy matrix under the su(2) Lie algebra of rotations and under the $\eta$-pairing su(2) Lie algebra. Our results imply su(2)$\oplus$su(2) invariance of the transfer matrix for the model on a chain with an even number of sites. 1 Introduction The one-dimensional Hubbard model is one of the most thoroughly studied integrable quantum systems with applications in solid state physics. Starting with the seminal article [1] of Lieb and Wu, many of its physical properties have been worked out exactly [2]. For the case of a half-filled band, in particular, a complete picture of its elementary excitations is available by now [3, 4]. 
All excited states are scattering states of only four quasiparticles, two of which carry spin but no charge, whereas the other two carry charge but no spin. The $S$-matrix of these quasiparticles has been calculated exactly. These achievements give a precise meaning to the notion of spin-charge separation in one-dimensional solids. All exact results on physical properties of the one-dimensional Hubbard model obtained so far rely on the extensive use of the coordinate Bethe Ansatz. Since Bethe wave functions, however, are difficult to handle, any exact calculation of local quantities going beyond the long-distance asymptotics of correlation functions [5] seems to demand an algebraic treatment. Moreover, an algebraic treatment is likely to facilitate the calculation of the thermodynamical properties of the model [6]. At present we know two algebraic structures related to the Hubbard model, a graded Yang-Baxter algebra, developed in the works of Shastry [7, 8, 9] and Olmedilla et al. [10, 11, 12], and a representation of the Y(su(2)) Yangian quantum group commuting with the Hubbard Hamiltonian, which was discovered by Uglov and Korepin [13]. The relation of these two notions was recently exposed by the authors [14]. Although the $R$-matrix and $L$-matrix of the Hubbard model have long been known, it took nearly ten years before it was shown that the $R$-matrix satisfies the Yang-Baxter equation [15]. An algebraic Bethe Ansatz was performed only recently in a remarkable preprint by Ramos and Martins [16]. Progress in the development of an algebraic approach has so far been hindered by the complexity of the $R$-matrix and monodromy matrix and by several unusual features of these basic tools of the quantum inverse scattering method. The monodromy matrix is $4\times 4$ rather than $3\times 3$, as one might have guessed naively from the fact that there are two levels of Bethe Ansatz equations. 
It further seems to be impossible to find a parametrization of the $R$-matrix such that it becomes a function of the difference of the spectral parameters. Amazingly, not even the most elementary properties of the $R$-matrix and monodromy matrix have been worked out so far. For that reason we look again at the construction of the graded Yang-Baxter algebra. We begin with a description of the spin model [7, 8, 9] that is related to the Hubbard model by means of a Jordan-Wigner transformation in section 2. Our account is based on the Yang-Baxter equation. In section 3 we show that there exists a meromorphic parametrization of the transfer matrix in terms of elliptic functions. Section 4 is devoted to the rederivation of the graded Yang-Baxter algebra [11]. We show how to treat general twisted boundary conditions. As a byproduct we find a simple method to obtain higher conserved quantities of the Hubbard model from their counterparts for the spin model. These conserved quantities are generated by the graded trace of the fermionic monodromy matrix. We identify the momentum operator for fermions in the zeroth order of the expansion of this generating function with respect to the spectral parameter. Thus all higher conserved quantities are local and translationally invariant. In section 5 we derive the properties of the monodromy matrix under a combined particle-hole and gauge transformation, which is characteristic of the Hubbard Hamiltonian. It turns out that the graded trace of the monodromy matrix is invariant under this transformation if the model is considered on a chain with an even number of sites. In section 6 we investigate the behaviour of the monodromy matrix under su(2) transformations. Our results are complementary to the work of Ramos and Martins [16] and should provide a means to discuss the symmetry properties of quasiparticles within the algebraic approach. 
Like the Hamiltonian, the graded trace of the monodromy matrix turns out to be invariant under rotations of the spins and, if we consider an even number of sites, also under the $\eta$-pairing su(2) Lie algebra [17, 18, 19, 20]. In order to make our presentation self-contained, we expose the $R$-matrix along with a list of relations among its elements in appendix A. Appendix B provides a detailed discussion of the momentum operator on a lattice. In appendix C it is shown how to obtain the momentum operator from a monodromy matrix for free fermions. 2 The spin model Like all one-dimensional fermionic models, the Hubbard model is related to a certain spin chain by a Jordan-Wigner transformation. The Yang-Baxter algebra corresponding to this spin chain is easier to formulate and closer to intuition than the graded Yang-Baxter algebra of the Hubbard model. In fact, Shastry in his seminal articles [7, 8, 9] was using the language of spins rather than the language of electrons. The graded form of the Yang-Baxter algebra, which has the advantage of being defined directly in terms of fermi operators, was derived later by Olmedilla et al. [10, 11, 12]. Let us follow the historical route here and start with a description of the spin model. Its Hamiltonian is $$H=\sum_{j=1}^{L}\left(\sigma_{j\tau}^{+}\sigma_{j+1\,\tau}^{-}+\sigma_{j\tau}^{-}\sigma_{j+1\,\tau}^{+}+{\textstyle\frac{U}{4}}\,\sigma_{j\uparrow}^{z}\sigma_{j\downarrow}^{z}\right)\quad.$$ (1) Here and in the following we are using implicit summation over doubly occurring indices. $H$ describes a periodic spin chain of $L$ sites ($\sigma_{L+1\,\tau}^{\pm}:=\sigma_{1\tau}^{\pm}$) with two species of spins, labeled $\uparrow$ and $\downarrow$, at each site. $U$ is the strength of an on-site Ising coupling between the species. The interaction between nearest neighbours is of XX-type for both species independently. Thus, in the limit $U\rightarrow 0$ the model decouples into a pair of non-interacting XX-chains. 
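The structure of the Hamiltonian (1), including the $U\rightarrow 0$ decoupling, can be checked by brute-force construction on a short chain. The chain length and coupling below are illustrative choices; each site carries a local space $\mathbb{C}^2\otimes\mathbb{C}^2$ for the two spin species.

```python
import numpy as np

# Exact-diagonalization sketch of the spin Hamiltonian (1) on a short
# periodic chain.  L and U are illustrative, not values from the paper.
L, U = 3, 4.0
I2 = np.eye(2, dtype=complex)
sz = np.diag([1.0 + 0j, -1.0])
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                 # sigma^-

def site_op(j, op_up, op_dn):
    """Operator acting as op_up (x) op_dn on site j, identity elsewhere."""
    out = np.eye(1, dtype=complex)
    for i in range(L):
        out = np.kron(out, np.kron(op_up, op_dn) if i == j else np.eye(4))
    return out

def hopping(species):
    """XX nearest-neighbour term for one spin species (periodic chain)."""
    pair = (lambda o: (o, I2)) if species == 'up' else (lambda o: (I2, o))
    H = np.zeros((4**L, 4**L), dtype=complex)
    for j in range(L):
        k = (j + 1) % L
        H += site_op(j, *pair(sp)) @ site_op(k, *pair(sm))
        H += site_op(j, *pair(sm)) @ site_op(k, *pair(sp))
    return H

H_up, H_dn = hopping('up'), hopping('dn')
H_int = sum(site_op(j, sz, sz) for j in range(L))
H = H_up + H_dn + (U / 4) * H_int

# H is Hermitian, and the two XX chains commute: at U = 0 the model is
# literally a pair of non-interacting XX chains, as stated above.
print(np.allclose(H, H.conj().T))              # True
print(np.allclose(H_up @ H_dn, H_dn @ H_up))   # True
```

The on-site $\sigma^z_{j\uparrow}\sigma^z_{j\downarrow}$ term is what couples the two species for $U\neq 0$.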
As was shown by Shastry, $H$ is the logarithmic derivative of a certain transfer matrix, which can be obtained by appropriately coupling together two copies of the Yang-Baxter algebra of the XX-chain. The $R$-matrix of the XX-chain is $$r={\textstyle\frac{1}{2}}\left(a+b+(a-b)\sigma^{z}\otimes\sigma^{z}+\sigma^{x}\otimes\sigma^{x}+\sigma^{y}\otimes\sigma^{y}\right)\quad,$$ (2) where $a$ and $b$ have to satisfy the free fermion condition $$a^{2}+b^{2}=1\quad.$$ (3) If we introduce the parametrization $a=\cos(\lambda)$, $b=\sin(\lambda)$, then $r=r(\lambda)$ satisfies the Yang-Baxter equation in the form $$r_{12}(\lambda-\mu)r_{13}(\lambda)r_{23}(\mu)=r_{23}(\mu)r_{13}(\lambda)r_{12}(\lambda-\mu)\quad,$$ (4) which is sometimes called the difference form of the Yang-Baxter equation. $r(\lambda)$ is regular, i.e. $r(0)=P$, the permutation of the two factors of $\mathbb{C}^{2}\otimes\mathbb{C}^{2}$. As usual, the indices in (4) refer to the canonical embeddings of $r(\lambda)$ into $\mathbb{C}^{2}\otimes\mathbb{C}^{2}\otimes\mathbb{C}^{2}$. The $L$-matrix $l(\lambda)$ of the XX-chain is the fundamental representation $l(\lambda)=r(\lambda)$ of the Yang-Baxter algebra, $$\check{r}(\lambda-\mu)(l(\lambda)\otimes l(\mu))=(l(\mu)\otimes l(\lambda))\check{r}(\lambda-\mu)\quad,$$ (5) generated by $\check{r}(\lambda):=Pr(\lambda)$. As usual, $l(\lambda)$ in eq. (5) has to be understood as a matrix in auxiliary space with entries acting on a quantum space. For the construction of $R$- and $L$-matrices corresponding to the spin Hamiltonian (1) we have to duplicate the above construction by attaching a label referring to the spin species to each $\sigma$-matrix. 
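The difference-form Yang-Baxter equation (4) and the regularity $r(0)=P$ can be verified numerically. The explicit $4\times 4$ form used below follows from expanding (2) in the basis $|00\rangle,|01\rangle,|10\rangle,|11\rangle$; the spectral parameter values are arbitrary test points.

```python
import numpy as np

# Numerical check of the difference-form Yang-Baxter equation (4) for
# the XX R-matrix (2) with a = cos(lambda), b = sin(lambda).
def r(lam):
    a, b = np.cos(lam), np.sin(lam)
    return np.array([[a, 0, 0, 0],
                     [0, b, 1, 0],
                     [0, 1, b, 0],
                     [0, 0, 0, a]], dtype=complex)

I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]          # permutation of C^2 (x) C^2
P23 = np.kron(I2, P)                 # swap of factors 2 and 3 in (C^2)^3

def embed(m, spaces):
    """Embed a 4x4 matrix into spaces (1,2), (2,3) or (1,3) of (C^2)^3."""
    if spaces == (1, 2):
        return np.kron(m, I2)
    if spaces == (2, 3):
        return np.kron(I2, m)
    return P23 @ np.kron(m, I2) @ P23   # spaces (1, 3)

lam, mu = 0.37, 1.21                 # arbitrary spectral parameters
lhs = embed(r(lam - mu), (1, 2)) @ embed(r(lam), (1, 3)) @ embed(r(mu), (2, 3))
rhs = embed(r(mu), (2, 3)) @ embed(r(lam), (1, 3)) @ embed(r(lam - mu), (1, 2))

print(np.allclose(lhs, rhs))   # True: YBE (4) holds
print(np.allclose(r(0), P))    # True: regularity r(0) = P
```

The same embedding machinery, doubled for the two spin species, is what the construction below eq. (5) builds on.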
The matrix $r(\lambda)$ is replaced by $r_{\tau}(\lambda)$ ($\tau=\uparrow,\downarrow$), and we may redefine $r(\lambda)$ as $$r(\lambda):=r_{\uparrow}(\lambda)r_{\downarrow}(\lambda)\quad.$$ (6) This new $R$-matrix obviously satisfies (4) and is again regular in the sense that $r(0)=P_{\uparrow}P_{\downarrow}=:P$ is a permutation operator. Whenever an explicit matrix representation is required we will use the convention $\sigma_{\uparrow}^{\alpha}=\sigma^{\alpha}\otimes I_{2}$, $\sigma_{\downarrow}^{\alpha}=I_{2}\otimes\sigma^{\alpha}$, where $I_{2}$ denotes the $2\times 2$ unit matrix. Correspondingly, the $4\times 4$ unit matrix, which will be needed below, is denoted by $I_{4}$. Now Shastry's $R$-matrix associated to the Hamiltonian (1) reads $$R(\lambda,\mu|h,l):=\cosh(h-l)\,\frac{r(\lambda-\mu)}{\cos(\lambda-\mu)}+\sinh(h-l)\,\frac{r(\lambda+\mu)}{\cos(\lambda+\mu)}\,\sigma^{z}\otimes\sigma^{z}\otimes I_{4}\quad.$$ (7) This is a four-parameter family of $16\times 16$ matrices. It was shown by Shiroishi and Wadati [15] that it satisfies the Yang-Baxter equation in the form $$R_{12}(\lambda,\mu|h,l)R_{13}(\lambda,\nu|h,m)R_{23}(\mu,\nu|l,m)=R_{23}(\mu,\nu|l,m)R_{13}(\lambda,\nu|h,m)R_{12}(\lambda,\mu|h,l)\quad,$$ (8) provided that the parameters are constrained by the equations $$\frac{U}{4}=\frac{\sinh(2h)}{\sin(2\lambda)}=\frac{\sinh(2l)}{\sin(2\mu)}=\frac{\sinh(2m)}{\sin(2\nu)}\quad.$$ (9) This constraint can be satisfied for arbitrarily small $m$ and $\nu$. Hence, $$L_{jk}(\lambda|h):=\cos(\lambda)R_{jk}(\lambda,0|h,0)=r_{jk}(\lambda)e^{h\sigma_{j\uparrow}^{z}\sigma_{j\downarrow}^{z}}$$ (10) is a representation of the Yang-Baxter algebra generated by the $R$-matrix (7). The index $j$ in (10) refers to the auxiliary space, the index $k$ to the quantum space. Note that $L_{jk}(\lambda|h)\neq L_{kj}(\lambda|h)$. $L_{jk}(\lambda|h)$ is a tensor product of two XX-chain $L$-matrices coupled in auxiliary space. 
If we solve the first equation in (9) for $h$ and choose the branch of the solution properly, then $h(\lambda=0)=0$, and $L_{jk}$ according to (10) is again regular. For $U=0$ the constraint (9) is satisfied by $h=0$ for all $\lambda$. We get back the free model, as it should be by construction, and the $R$-matrix (7) becomes a solution of the Yang-Baxter equation in difference form (4). On the other extreme, we can carry out the limit $U\rightarrow\infty$ after properly rescaling $\lambda\rightarrow\lambda/U$. Then the constraint (9) becomes $\lambda=2\sinh(2h)$. However, $\lambda$ disappears from the definitions of $L_{jk}$ and $R_{jk}$, since $\cos(\lambda/U)\rightarrow 1$ and $\sin(\lambda/U)\rightarrow 0$ for every $\lambda$. Thus $r_{jk}((\lambda-\mu)/U)\rightarrow P_{jk}$, and $$R_{jk}(\lambda,\mu|h,l)\rightarrow L_{jk}(0|h-l)=P_{jk}e^{(h-l)\sigma_{j\uparrow}^{z}\sigma_{j\downarrow}^{z}}\quad.$$ (11) It is easily verified that the expression on the right-hand side of (11) satisfies the Yang-Baxter equation in difference form (4) and that the corresponding Hamiltonian is the on-site part of (1) with $U=1$. Hence $\lambda$ is the natural spectral parameter of the model at zero coupling, whereas $h$ is the natural spectral parameter in the strong coupling limit. The algebra (7), (8), (9) interpolates between these limits. For the remainder of this article we will suppress the arguments $h$ and $l$ of the $R$- and $L$-matrices, assuming that they are given as functions of $\lambda$ and $\mu$ by the constraint (9). To finish the description of the spin model let us introduce its monodromy matrix, $$T_{L}(\lambda):=L_{aL}(\lambda)\dots L_{a1}(\lambda)\quad.$$ (12) The index $a$ in this definition refers to the auxiliary space. The transfer matrix of the spin model, $t(\lambda)$, is the trace of $T_{L}(\lambda)$ over the auxiliary space. 
It follows from the regularity of $L_{jk}$ that $\tilde{U}:=t(0)$ is the shift operator for spins, $$\tilde{U}\sigma_{j\tau}^{\alpha}=\sigma_{j+1\,\tau}^{\alpha}\tilde{U}\quad,\quad j=1,\dots,L\quad,\quad\tau=\uparrow,\downarrow\quad.$$ (13) A brief calculation yields the derivative of the $L$-matrix at zero spectral parameter, $$\dot{L}_{jk}(0)P_{jk}=\sigma_{j\tau}^{+}\sigma_{k\tau}^{-}+\sigma_{j\tau}^{-}\sigma_{k\tau}^{+}+{\textstyle\frac{U}{4}}\sigma_{k\uparrow}^{z}\sigma_{k\downarrow}^{z}\quad.$$ (14) This equation implies that the Hamiltonian (1) is obtained from the transfer matrix $t(\lambda)$ as a logarithmic derivative, $H=d_{\lambda}\left.\ln(t(\lambda))\right|_{\lambda=0}$. 3 A meromorphic parametrization of the transfer matrix The considerations in this section were motivated by two facts. First, in the case of the eight-vertex model Baxter's meromorphic parametrization of the $R$-matrix yields a solution of the difference form of the Yang-Baxter equation [21]. Second, the transfer matrix enters certain functional equations, the solutions of which usually require strong analytic properties. Let us solve the constraint (9) for $e^{2h}$, $$e^{2h}=\frac{U\sin(2\lambda)}{4}+\sqrt{1+\frac{U^{2}\sin^{2}(2\lambda)}{16}}\quad.$$ (15) The only possibility to remove the square root on the right-hand side is by replacing $2\lambda$ by $\operatorname{am}(u)$, the amplitude function, and setting $k={\rm i}U/4$. Then $e^{2h}$ is expressed in terms of Jacobi elliptic functions as $$e^{2h}=\operatorname{dn}(u)-{\rm i}k\operatorname{sn}(u)\quad.$$ (16) Because of the homogeneity of the Yang-Baxter algebra, we may multiply $L_{jk}(u)$ by $e^{h}$. Then $h(u)$ enters into the definition of the monodromy matrix $T_{a}(u)$ only through the meromorphic function $e^{2h}$ of the redefined spectral parameter $u$. 
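The branch (15) can be checked directly: it is the solution $e^{2h}=s+\sqrt{1+s^2}$, i.e. $2h=\operatorname{arcsinh}(s)$, of $\sinh(2h)=s$ with $s=(U/4)\sin(2\lambda)$, and it vanishes at $\lambda=0$ as required for the regularity of the $L$-matrix (10). The parameter values below are arbitrary test points.

```python
import numpy as np

# Check that the branch (15) for exp(2h) satisfies the constraint (9),
# sinh(2h) = (U/4) sin(2 lambda), and is regular at lambda = 0.
def h_of(lam, U):
    s = (U / 4) * np.sin(2 * lam)
    return 0.5 * np.log(s + np.sqrt(1 + s**2))   # Eqn (15)

for U in (0.5, 4.0, 12.0):
    for lam in (0.0, 0.2, 0.9, 1.4):
        h = h_of(lam, U)
        assert np.isclose(np.sinh(2 * h), (U / 4) * np.sin(2 * lam))

print(h_of(0.0, 4.0))   # 0.0: h(lambda = 0) = 0 on this branch
```

Replacing $2\lambda$ by the amplitude function then trades the square root in (15) for the single-valued elliptic functions of (16).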
Letting $A:=(a+b)/2$ and $B:=(a-b)/2$ we see that the matrix $r_{jk}$ in the definition of the $L$-matrix is of the form $$r_{jk}=\left(\begin{array}{cccc}e&o&o&e\\ o&e&e&o\\ o&e&e&o\\ e&o&o&e\end{array}\right)\quad,$$ (17) where $e$ denotes an even polynomial in $A$, $B$ and $o$ denotes an odd one. $e$ and $o$ are operators on quantum space $k$, whose precise form is irrelevant for the following arguments. The rules $e^{2}=o^{2}=e$, $eo=oe=o$, $e+e=e$ and $o+o=o$ imply $$\left(\begin{array}{cccc}e&o&o&e\\ o&e&e&o\\ o&e&e&o\\ e&o&o&e\end{array}\right)^{2}=\left(\begin{array}{cccc}e&o&o&e\\ o&e&e&o\\ o&e&e&o\\ e&o&o&e\end{array}\right)\quad.$$ (18) This means that the monodromy matrix (12) is again of the form (17). Therefore $t(u)$ is an even polynomial in $A$ and $B$. In other words, $t(u)$ is a polynomial in $A^{2}$, $B^{2}$ and $AB$. Since $$A^{2}=(1+\operatorname{sn}(u))/4\quad,\quad B^{2}=(1-\operatorname{sn}(u))/4\quad,\quad AB=\operatorname{cn}(u)/4\quad,$$ (19) $t(u)$ is a meromorphic function of $u$. To state the problem of the analytic structure of the $R$-matrix let us go one step back and write again $a$ for $\cos(\lambda)$ and $b$ for $\sin(\lambda)$. Let furthermore $c:=e^{2h}$. Then $a$, $b$ and $c$ are connected by the free fermion condition (3) and the constraint (9), i.e. $a$, $b$ and $c$ lie on a complex curve given by the algebraic equations $$a^{2}+b^{2}=1\quad,\quad c-1/c=Uab\quad.$$ (20) This curve may be called the spectral curve of the Hubbard model. Unfortunately we could neither find a meromorphic parametrization of this seemingly simple structure nor assign a geometrical meaning to it. 4 The Hubbard model Applying a Jordan-Wigner transformation to the $R$- and $L$-matrix of the spin model we obtain the graded Yang-Baxter algebra [22] of the Hubbard model [11]. No extra effort is necessary to introduce the grading. It is rather induced by the Jordan-Wigner transformation. 
Before turning to the fermionic formulation let us perform a gauge transformation of the $R$- and $L$-matrices with the $4\times 4$ transformation matrix $$G_{x}:=e^{x(\sigma^{z}\otimes\sigma^{z})/2}\quad.$$ (21) Then $$L_{k}(\lambda)\quad\longrightarrow\quad\tilde{L}_{k}(\lambda):=G_{h}L_{k}(\lambda)G_{h}^{-1}=G_{h}(l_{k\uparrow}(\lambda)\otimes l_{k\downarrow}(\lambda))G_{h}\quad,$$ (22) $$R(\lambda,\mu)\quad\longrightarrow\quad\tilde{R}(\lambda,\mu):=(G_{h}\otimes G_{l})R(\lambda,\mu)(G_{h}^{-1}\otimes G_{l}^{-1})\quad.$$ (23) We suppress the auxiliary space index of the $L$-matrix here and in the following and consider $\tilde{L}_{k}(\lambda)$ as a $4\times 4$ matrix with entries acting on quantum space $k$. Unlike $R(\lambda,\mu)$, the transformed matrix $\tilde{R}(\lambda,\mu)$ is symmetric. By use of the definition $$\check{R}(\lambda,\mu):=P\tilde{R}(\lambda,\mu)$$ (24) the Yang-Baxter algebra assumes the form $$\check{R}(\lambda,\mu)\left(\tilde{L}_{k}(\lambda)\otimes\tilde{L}_{k}(\mu)\right)=\left(\tilde{L}_{k}(\mu)\otimes\tilde{L}_{k}(\lambda)\right)\check{R}(\lambda,\mu)\quad,$$ (25) which is most convenient for changing to a fermionic formulation [22]. Since the canonical anticommutation relations for fermi operators are invariant under gauge transformations and under particle-hole transformations, there is some freedom in the definition of the Jordan-Wigner transformation. 
Using the abbreviations $$u_{k}:=\sum_{j=1}^{k}n_{j\uparrow}\quad,\quad d_{k}:=\sum_{j=1}^{k}n_{j\downarrow}\quad,\quad k=1,\dots,L,$$ (26) where the $n_{j\tau}$ are electron densities, and setting further $u_{0}:=d_{0}:=1$, we take the choice $$\sigma_{k\uparrow}^{+}=c_{k\uparrow}^{+}e^{{\rm i}\pi u_{k-1}}\quad,\quad\sigma_{k\uparrow}^{-}=c_{k\uparrow}e^{-{\rm i}\pi u_{k-1}}\quad,$$ (27) $$\sigma_{k\downarrow}^{+}=c_{k\downarrow}^{+}e^{{\rm i}\pi(u_{L}+d_{k-1})}\quad,\quad\sigma_{k\downarrow}^{-}=c_{k\downarrow}e^{-{\rm i}\pi(u_{L}+d_{k-1})}\quad.$$ (28) The XX $L$-matrices $l_{k\uparrow}(\lambda)$ and $l_{k\downarrow}(\lambda)$ can now be expressed in terms of fermi operators, $$l_{k\uparrow}(\lambda)=e^{-{\rm i}\pi u_{k}\sigma^{z}/2}{\cal L}_{k\uparrow}(\lambda)e^{{\rm i}\pi u_{k-1}\sigma^{z}/2}\quad,$$ (29) $$l_{k\downarrow}(\lambda)=e^{-{\rm i}\pi(u_{L}+d_{k})\sigma^{z}/2}{\cal L}_{k\downarrow}(\lambda)e^{{\rm i}\pi(u_{L}+d_{k-1})\sigma^{z}/2}\quad,$$ (30) where ${\cal L}_{k\tau}(\lambda)$ ($\tau=\uparrow,\downarrow$) is defined as $${\cal L}_{k\tau}(\lambda):=\left(\begin{array}{cc}\sin(\lambda)+{\rm i}e^{{\rm i}\lambda}n_{k\tau}&c_{k\tau}\\ -{\rm i}c_{k\tau}^{+}&\cos(\lambda)-e^{{\rm i}\lambda}n_{k\tau}\end{array}\right)\quad.$$ (31) ${\cal L}_{k\tau}(\lambda)$ is an $L$-matrix for free fermions. Its entries for different quantum space indices $k$ either commute or anticommute. This fact can be formally described by assigning a parity $\pi({\cal L}_{k\tau}(\lambda)_{j}^{i})=0,1$ to each matrix element. Call a matrix element even if its parity is zero, odd if it is one. Odd matrix elements with different quantum space indices anticommute, whereas even elements commute with all elements with different quantum space indices. 
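The string phases $e^{{\rm i}\pi u_{k-1}}$ in (27) are what turn commuting spin operators into anticommuting fermions. The single-species sketch below builds $c_k$ as a $\sigma^z$-string followed by $\sigma^-$ on a short chain and verifies the canonical anticommutation relations; the chain length is an illustrative choice.

```python
import numpy as np
from functools import reduce

# Jordan-Wigner sketch for a single spin species: c_k carries a string
# of sigma^z over the preceding sites (the phases exp(i*pi*u_{k-1})).
L = 3
I2, sz = np.eye(2), np.diag([1.0, -1.0])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma^-

def kron_all(ops):
    return reduce(np.kron, ops)

def c(k):
    """Annihilation operator at site k (0-based) via Jordan-Wigner."""
    return kron_all([sz] * k + [sm] + [I2] * (L - k - 1))

def anticomm(x, y):
    return x @ y + y @ x

ops = [c(k) for k in range(L)]
car = all(np.allclose(anticomm(ops[j], ops[k].conj().T),
                      np.eye(2**L) if j == k else 0)
          for j in range(L) for k in range(L))
car2 = all(np.allclose(anticomm(ops[j], ops[k]), 0)
           for j in range(L) for k in range(L))
print(car, car2)   # True True: {c_j, c_k^+} = delta_jk, {c_j, c_k} = 0
```

The two-species transformation (27), (28) repeats this with a second string that additionally carries the total $\uparrow$ occupation $u_L$.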
A grading is a function $p$ which assigns a parity to the basis vectors of the auxiliary space. Let $p(i):=p(e_{i})$. Then, with the grading $p(1)=0$, $p(2)=1$, $p\in{\mathbb Z}_{2}$, the elements of the free fermion $L$-matrix have parity $\pi({\cal L}_{k\tau}(\lambda)_{j}^{i})=p(i)+p(j)$. The crucial point about these notions is that they allow for the introduction of a graded tensor product $\otimes_{\rm s}$ which respects comultiplication. Let $\otimes_{\rm s}$ be defined by the equation $$\left(A\otimes_{\rm s}B\right)_{kl}^{ij}:=(-1)^{(p(i)+p(k))p(j)}A_{k}^{i}B_{l}^{j}\quad.$$ (32) Then $\left(A\otimes_{\rm s}B\right)\left(C\otimes_{\rm s}D\right)=\left(AC\otimes_{\rm s}BD\right)$ for all matrices $A,B,C,D$ of the parity defined above. The graded tensor product can be used to formulate a graded Yang-Baxter algebra [22]. Grading and graded tensor product may be defined for arbitrary matrix dimensions. In appendix C we give a brief account of the free fermion model, which will be needed below to recover the lattice momentum operator from the monodromy matrix of the Hubbard model. The free fermion monodromy matrix is $${\cal T}_{L\tau}(\lambda):={\cal L}_{L\tau}(\lambda)\dots{\cal L}_{1\tau}(\lambda)\quad,\quad(\tau=\uparrow,\downarrow)\quad.$$ (33) Note that our choice of Jordan-Wigner transformation, and thus our free fermion $L$-matrix, differs from that in [11].
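The comultiplication property $(A\otimes_{\rm s}B)(C\otimes_{\rm s}D)=AC\otimes_{\rm s}BD$ relies on the odd entries of $B$ and $C$ anticommuting. A hedged numerical sketch: realize two free fermion $L$-matrices (31) at different sites as $2\times 2$ arrays of operator-valued entries, implement (32) with the grading $p(1)=0$, $p(2)=1$, and check the identity directly. All function names below are ours.

```python
import numpy as np

# two fermionic modes via Jordan-Wigner on a 4-dimensional Fock space
a = np.array([[0., 1.], [0., 0.]])      # annihilator on one mode
Z = np.diag([1., -1.])
I2 = np.eye(2)
c1 = np.kron(a, I2)
c2 = np.kron(Z, a)
Id = np.eye(4)

def Lmat(c, lam):
    """Free fermion L-matrix (31): 2x2 array of operators on Fock space."""
    n = c.conj().T @ c
    return np.array([
        [np.sin(lam) * Id + 1j * np.exp(1j * lam) * n, c],
        [-1j * c.conj().T, np.cos(lam) * Id - np.exp(1j * lam) * n],
    ])

p = [0, 1]  # grading p(1)=0, p(2)=1

def gkron(A, B):
    """Graded tensor product (32) of operator-valued 2x2 matrices."""
    out = np.zeros((4, 4, 4, 4), complex)
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    sign = (-1) ** ((p[i] + p[k]) * p[j])
                    out[2 * i + j, 2 * k + l] = sign * (A[i, k] @ B[j, l])
    return out

def opmul(A, B):
    """Multiply operator-valued matrices (entries multiplied as operators)."""
    return np.einsum('ikab,kjbc->ijac', A, B)

lam, mu = 0.37, 1.21
A, C = Lmat(c1, lam), Lmat(c1, mu)   # entries act at site 1
B, D = Lmat(c2, lam), Lmat(c2, mu)   # entries act at site 2
lhs = opmul(gkron(A, B), gkron(C, D))
rhs = gkron(opmul(A, C), opmul(B, D))
err_comult = np.abs(lhs - rhs).max()
```

The check works precisely because the entry at position $(i,k)$ of $\mathcal{L}$ has operator parity $p(i)+p(k)$; with an ordinary tensor product in place of `gkron` the identity fails.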
Inserting (29), (30) into (22) we find $$\tilde{L}_{k}(\lambda)=WV_{k}^{-1}{\cal L}_{k}(\lambda)V_{k-1}W^{-1}\quad,$$ (34) where $$\displaystyle V_{k}$$ $$\displaystyle:=$$ $$\displaystyle e^{{\rm i}\pi u_{k}\sigma^{z}/2}\otimes e^{{\rm i}\pi(u_{L}+d_{k})\sigma^{z}/2}\quad,$$ (35) $$\displaystyle W$$ $$\displaystyle:=$$ $$\displaystyle\mbox{diag}(1,1,{\rm i},{\rm i})\quad,$$ (36) $$\displaystyle{\cal L}_{k}(\lambda)$$ $$\displaystyle=$$ $$\displaystyle G_{h}\left({\cal L}_{k\uparrow}(\lambda)\otimes_{\rm s}{\cal L}_{k\downarrow}(\lambda)\right)G_{h}\quad.$$ (37) The grading comes in naturally in (37), since the operator $I_{2}\otimes e^{-{\rm i}\pi(u_{L}+d_{k})\sigma^{z}/2}$, when moved to the left in the tensor product $l_{k\uparrow}(\lambda)\otimes l_{k\downarrow}(\lambda)$, induces a gauge transformation on ${\cal L}_{k\uparrow}(\lambda)$ which affects only the odd elements. We will see below that ${\cal L}_{k}(\lambda)$ according to equation (37) is an $L$-matrix for the Hubbard model. The associated $R$-matrix follows from (25). First of all we have $$\displaystyle\tilde{L}_{k}(\lambda)\otimes\tilde{L}_{k}(\mu)=$$ (38) $$\displaystyle(W\otimes W)\left(V_{k}^{-1}\otimes V_{k}^{-1}\right)X\left({\cal L}_{k}(\lambda)\otimes_{\rm s}{\cal L}_{k}(\mu)\right)X^{-1}\left(V_{k-1}\otimes V_{k-1}\right)(W^{-1}\otimes W^{-1})\,,$$ where $\otimes_{\rm s}$ is a graded tensor product of $4\times 4$ matrices (cf. (32)) with grading $p(1)=p(4)=0$, $p(2)=p(3)=1$. We are using the same symbol for graded tensor products of $2\times 2$ and $4\times 4$ matrices. For $4\times 4$ matrices the grading will always be as above, and for $2\times 2$ matrices we will use the grading introduced before (32).
The matrix $X$ in (38) is the diagonal matrix $$X:=\sigma^{z}\otimes\mbox{diag}(1,{\rm i},{\rm i},1)\otimes I_{2}\quad.$$ (39) Let $$\Gamma:=\mbox{diag}(e^{{\rm i}\alpha},e^{{\rm i}\beta},e^{{\rm i}\gamma},e^{{\rm i}\delta})\quad,$$ (40) where $\alpha,\beta,\gamma,\delta$ may generally be mutually commuting operators. Then $$\left[\check{R}(\lambda,\mu),\Gamma\otimes\Gamma\right]=0\quad,$$ (41) $$\Leftrightarrow\quad\alpha+\delta=\beta+\gamma\,\,\mbox{mod}\,2\pi\quad.$$ (42) Since $V_{k}$ and $W$ are of the form (40) and satisfy (42), we may infer from (38) that $${\cal R}(\lambda,\mu)\left({\cal L}_{k}(\lambda)\otimes_{\rm s}{\cal L}_{k}(\mu)\right)=\left({\cal L}_{k}(\mu)\otimes_{\rm s}{\cal L}_{k}(\lambda)\right){\cal R}(\lambda,\mu)\quad,$$ (43) where $${\cal R}(\lambda,\mu)=X^{-1}\check{R}(\lambda,\mu)X\quad.$$ (44) Hence, the $L$-matrix of the Hubbard model is a representation of the graded Yang-Baxter algebra. This result is due to Olmedilla et al. [11]. To be self-contained we present the $R$-matrix ${\cal R}(\lambda,\mu)$ along with some useful relations among its elements in appendix A. The graded tensor product in (43) respects comultiplication. Therefore the monodromy matrix $${\cal T}_{L}(\lambda):={\cal L}_{L}(\lambda)\dots{\cal L}_{1}(\lambda)$$ (45) represents the graded Yang-Baxter algebra with the same $R$-matrix, $${\cal R}(\lambda,\mu)\left({\cal T}_{L}(\lambda)\otimes_{\rm s}{\cal T}_{L}(\mu)\right)=\left({\cal T}_{L}(\mu)\otimes_{\rm s}{\cal T}_{L}(\lambda)\right){\cal R}(\lambda,\mu)\quad.$$ (46) We will demonstrate below that ${\cal T}_{L}(\lambda)$ generates the Hubbard Hamiltonian.
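The equivalence of (41) and (42) can be checked numerically from the sparsity pattern of the $R$-matrix alone. Since $X$ in (44) is diagonal, $\check{R}$ and ${\cal R}$ have the same pattern, so we may place arbitrary nonzero values at the positions of the Boltzmann weights listed in appendix A and test the commutator with $\Gamma\otimes\Gamma$ for scalar $\alpha,\beta,\gamma,\delta$. A sketch (positions transcribed from appendix A, values random):

```python
import numpy as np

# nonzero positions (row, col), 1-based, of the R-matrix displayed in appendix A
pos = [(1,1),(2,2),(2,5),(3,3),(3,9),(4,4),(4,7),(4,10),(4,13),(5,2),(5,5),
       (6,6),(7,4),(7,7),(7,10),(7,13),(8,8),(8,14),(9,3),(9,9),
       (10,4),(10,7),(10,10),(10,13),(11,11),(12,12),(12,15),
       (13,4),(13,7),(13,10),(13,13),(14,8),(14,14),(15,12),(15,15),(16,16)]

rng = np.random.default_rng(1)
R = np.zeros((16, 16), complex)
for r, c in pos:
    # only the pattern matters here, so the values are arbitrary
    R[r - 1, c - 1] = rng.normal() + 1j * rng.normal()

def comm_norm(alpha, beta, gamma, delta):
    G = np.diag(np.exp(1j * np.array([alpha, beta, gamma, delta])))
    GG = np.kron(G, G)
    return np.abs(R @ GG - GG @ R).max()

alpha, beta, gamma = 0.3, 1.1, -0.7
err_ok = comm_norm(alpha, beta, gamma, beta + gamma - alpha)          # (42) holds
err_bad = comm_norm(alpha, beta, gamma, beta + gamma - alpha + 0.5)   # (42) violated
```

The only off-diagonal entries whose positions carry unequal phases of $\Gamma\otimes\Gamma$ are those in the $\{4,7,10,13\}$ block, and these phases agree exactly when $\alpha+\delta=\beta+\gamma$ mod $2\pi$.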
For the matrix elements of ${\cal T}_{L}(\lambda)$ we introduce the following notation $${\cal T}_{L}(\lambda)=\left(\begin{array}[]{cccc}D_{11}&C_{11}&C_{12}&D_{12}\\ B_{11}&A_{11}&A_{12}&B_{12}\\ B_{21}&A_{21}&A_{22}&B_{22}\\ D_{21}&C_{21}&C_{22}&D_{22}\end{array}\right)\quad,$$ (47) dividing it into four $2\times 2$ submatrices $A(\lambda)$, $B(\lambda)$, $C(\lambda)$, $D(\lambda)$. As we shall see in sections 5 and 6, this block notation reflects the properties of the monodromy matrix under the two su(2) transformations connected with the Hubbard model and under combined particle-hole and gauge transformations. If $\alpha$, $\beta$, $\gamma$, $\delta$ satisfy (42) and commute with each other and with the diagonal elements of ${\cal T}_{L}(\lambda)$, then (41), (44) and (46) imply that $$[\mbox{tr}(\Gamma\,{\cal T}_{L}(\lambda)),\mbox{tr}(\Gamma\,{\cal T}_{L}(\mu))]=0\quad.$$ (48) Thus $\mbox{tr}(\Gamma\,{\cal T}_{L}(\lambda))$ generates a family of mutually commuting operators. Different choices of $\alpha$, $\beta$, $\gamma$, $\delta$ correspond to different boundary conditions. Since we did not restrict $\alpha$, $\beta$, $\gamma$, $\delta$ to be complex numbers, a dynamical twist is possible. In fact, because of the non-local nature of the Jordan-Wigner transformation, the periodic spin model turns into the Hubbard model with dynamically twisted boundary conditions. This point was recently discussed in detail by Yue and Deguchi [23]. To express the transfer matrix $t(\lambda)$ of the spin model introduced in section 2 in terms of fermi operators we note that $\Gamma=V_{L}^{-1}$ satisfies (42).
Using (22) and (34) we conclude that $$t(\lambda)=\mbox{tr}(V_{L}^{-1}{\cal T}_{L}(\lambda))\quad.$$ (49) As we will see in the following, the Hubbard model under periodic boundary conditions is obtained with the choice $\Gamma=\sigma^{z}\otimes\sigma^{z}$, which leads to $$\mbox{str}({\cal T}_{L}(\lambda)):=\mbox{tr}((\sigma^{z}\otimes\sigma^{z}){\cal T}_{L}(\lambda))=\mbox{tr}(D)-\mbox{tr}(A)=\mbox{str}(V_{L}T_{L}(\lambda))\quad.$$ (50) This expression is called the graded trace or super trace of the monodromy matrix. The zeroth order term of its expansion in the spectral parameter is $$\mbox{str}({\cal T}_{L}(0))=\mbox{str}({\cal T}_{L\uparrow}(0))\mbox{str}({\cal T}_{L\downarrow}(0))=e^{-{\rm i}\pi u_{L}/2}\hat{U}_{\uparrow}e^{-{\rm i}\pi d_{L}/2}\hat{U}_{\downarrow}=e^{-{\rm i}\pi\hat{N}/2}\hat{U}\quad,$$ (51) where $\hat{N}=u_{L}+d_{L}$ is the particle number operator, and $\hat{U}$ is the shift operator for electrons. (51) follows from the corresponding result for free fermions, which we derive in appendix C. $\hat{U}$ is the product $\hat{U}=\hat{U}_{\uparrow}\hat{U}_{\downarrow}$ of shift operators for up and down spin electrons. We introduce these operators in appendix B, where we give a detailed account of shift and momentum operators for fermions on the lattice. $\hat{U}$ is connected to the total momentum $\Pi$ by $\hat{U}=e^{{\rm i}\Pi}$. $\Pi$ assumes its familiar form, $$\Pi=\phi\sum_{k=1}^{L-1}k\tilde{c}_{k\tau}^{+}\tilde{c}_{k\tau}\quad,$$ (52) when expressed in terms of Fourier transformed fermi operators, $$\tilde{c}_{k\tau}=\frac{1}{\sqrt{L}}\sum_{l=1}^{L}e^{{\rm i}\phi kl}c_{l\tau}\quad.$$ (53) For brevity we wrote $\phi:=2\pi/L$ here. Eq. (52) has to be read with care, since $\Pi$ is merely defined modulo $2\pi$. Hence we interpret (52) as the defining equation of an equivalence class of operators differing from each other only by certain “phase operators”.
A restriction $\hat{\Pi}$ of $\Pi$ to the fundamental domain of the logarithm is constructed in appendix B. $\hat{\Pi}$ is a polynomial in $\hat{U}$. The momentum operator $\hat{\Pi}$ preserves the particle number. Thus $$\ln(\mbox{str}({\cal T}_{L}(0)))=-{\rm i}\pi\hat{N}/2+{\rm i}\hat{\Pi}\quad.$$ (54) We will see moreover in section 6 that $\mbox{str}({\cal T}_{L}(\lambda))$ commutes with the particle number operator, and may therefore conclude that $$[\hat{\Pi},\ln(\mbox{str}({\cal T}_{L}(\lambda)))]=0\quad.$$ (55) This equation implies that $\tau(\lambda):=\ln(\mbox{str}({\cal T}_{L}(\lambda)))$ is a generating function of translationally invariant commuting operators. According to the arguments of Lüscher [24] these operators are local. They are most easily calculated in the language of the spin model, since the building blocks of the monodromy matrix are permutation operators of spins, $P_{jk\tau}=\textstyle{{1\over 2}}(1+\sigma_{j\tau}^{\alpha}\sigma_{k\tau}^{\alpha})$. To give an example, we present the derivation of the Hamiltonian. Recall from section 2 that $\tilde{U}=t(0)$ is the shift operator for spins. If we reintroduce for a moment an auxiliary space index $a$, we obtain $$\dot{T}_{aL}(0)=P_{a1}\tilde{U}\dot{L}_{LL-1}(0)P_{LL-1}+\sum_{j=1}^{L-1}\dot{L}_{j+1j}(0)P_{j+1j}P_{a1}\tilde{U}\quad.$$ (56) The product $\dot{L}_{jk}(0)P_{jk}$ was given in section 2, eq. (14). Using the Jordan-Wigner transformation, (27), (28), it is expressed in terms of fermi operators as $$\dot{L}_{j+1j}(0)P_{j+1j}=c_{j\tau}^{+}c_{j+1\tau}+c_{j+1\tau}^{+}c_{j\tau}+U(n_{j\uparrow}-\textstyle{\frac{1}{2}})(n_{j\downarrow}-\textstyle{\frac{1}{2}})\quad.$$ (57) (50) and (51) imply $\mbox{str}(V_{aL}P_{a1}\tilde{U})=e^{-{\rm i}\pi\hat{N}/2}\hat{U}$. Since moreover $[V_{aL},\dot{L}_{j+1j}(0)P_{j+1j}]=0$, we obtain $\mbox{str}(\dot{\cal T}_{aL}(0))=e^{-{\rm i}\pi\hat{N}/2}\hat{U}\hat{H}$.
$\hat{H}$ is the Hubbard Hamiltonian $$\hat{H}=\sum_{j=1}^{L}\left(c_{j\tau}^{+}c_{j+1\tau}+c_{j+1\tau}^{+}c_{j\tau}+U(n_{j\uparrow}-\textstyle{\frac{1}{2}})(n_{j\downarrow}-\textstyle{\frac{1}{2}})\right)$$ (58) under periodic boundary conditions ($c_{L+1\tau}:=c_{1\tau}$). Due to our choice of Jordan-Wigner transformation (27), (28) we obtained the Hamiltonian for holes here. The sign of the hopping term can of course be changed by a particle-hole transformation. Higher conserved quantities may be calculated in the same way as the Hamiltonian. We get an expansion of the generating function $\tau(\lambda)$, whose first terms are $$\tau(\lambda)=-{\rm i}\pi\hat{N}/2+{\rm i}\hat{\Pi}+\lambda\hat{H}+{\cal O}(\lambda^{2})\quad.$$ (59) The ${\cal O}(\lambda^{2})$ term was derived by Shastry [9]. The zeroth order terms in (59) were not known before. They are, however, indispensable for the derivation of the dispersion relations of elementary excitations from the eigenvalues of $\mbox{str}({\cal T}_{L}(\lambda))$ [16, 23]. It will be interesting to investigate whether we can obtain both branches of the quasiparticle dispersion relations [3, 4] from these eigenvalues. 5 Discrete transformations A characteristic feature of the Hubbard Hamiltonian on a chain consisting of an even number of sites is its invariance under the transformation $$c_{j\uparrow}\rightarrow c_{j\uparrow}\quad,\quad c_{j\downarrow}\rightarrow(-1)^{j}c_{j\downarrow}^{+}\quad,\quad U\rightarrow-U\quad.$$ (60) Since the generators of rotations of the spins are not invariant under (60), there exists a second su(2) Lie algebra commuting with the Hamiltonian. We will show now that not only the Hamiltonian, but the whole transfer matrix $\mbox{str}({\cal T}_{L}(\lambda))$ is invariant under (60).
First note that $h(\lambda)\rightarrow-h(\lambda)$, and thus $$G_{h}\rightarrow G_{-h}=G_{h}^{-1}\quad.$$ (61) The matrix elements of the free fermion $L$-matrix (31) transform according to $${\cal L}_{k\downarrow}(\lambda)\rightarrow e^{{\rm i}\pi k\sigma^{z}/2}\sigma^{y}{\cal L}_{k\downarrow}(\lambda)\sigma^{y}e^{-{\rm i}\pi(k-1)\sigma^{z}/2}\quad.$$ (62) The last two formulae imply $${\cal L}_{k}(\lambda)\rightarrow\left(I_{2}\otimes e^{{\rm i}\pi k\sigma^{z}/2}\right)(\sigma^{z}\otimes\sigma^{y}){\cal L}_{k}(\lambda)(\sigma^{z}\otimes\sigma^{y})\left(I_{2}\otimes e^{-{\rm i}\pi(k-1)\sigma^{z}/2}\right)\quad,$$ (63) and thus, by comultiplication, $${\cal T}_{L}(\lambda)\rightarrow\left(I_{2}\otimes e^{{\rm i}\pi L\sigma^{z}/2}\right)(\sigma^{z}\otimes\sigma^{y}){\cal T}_{L}(\lambda)(\sigma^{z}\otimes\sigma^{y})\quad.$$ (64) Finally $\mbox{str}({\cal T}_{L}(\lambda))$ transforms according to $$\mbox{str}({\cal T}_{L}(\lambda))\rightarrow-e^{-{\rm i}\pi L/2}\left(D_{11}(\lambda)-e^{{\rm i}\pi L}A_{11}(\lambda)-A_{22}(\lambda)+e^{{\rm i}\pi L}D_{22}(\lambda)\right)\quad,$$ (65) and we can conclude invariance modulo sign of the graded trace for even $L$, $$\mbox{str}({\cal T}_{L}(\lambda))\rightarrow\pm\mbox{str}({\cal T}_{L}(\lambda))\quad.$$ (66) Hence, all higher commuting operators generated by $\tau(\lambda)=\ln(\mbox{str}({\cal T}_{L}(\lambda)))$ are invariant under the transformation (60). We can of course reverse the spins in (60). Then a slight modification in the transformation of the monodromy matrix occurs. The factors $\sigma^{z}\otimes\sigma^{y}$ in (64) have to be replaced by $\sigma^{y}\otimes I_{2}$, and the two factors of $I_{2}\otimes e^{{\rm i}\pi L\sigma^{z}/2}$ have to be interchanged. The result (66) for the transfer matrix remains the same.
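At the Hamiltonian level, the invariance under (60) stated at the beginning of this section is easy to check by exact construction on a small chain: substituting $c_{j\downarrow}\rightarrow(-1)^{j}c_{j\downarrow}^{+}$ and $U\rightarrow-U$ in (58) must reproduce the same matrix for even $L$. A small NumPy sketch (helper names are ours; Jordan-Wigner ordering: all up modes before all down modes, as in (27), (28)):

```python
import numpy as np

L, U = 4, 1.3
M = 2 * L  # modes 0..L-1: spin up, L..2L-1: spin down

def fermi_ops(M):
    a = np.array([[0., 1.], [0., 0.]])
    Z = np.diag([1., -1.])
    ops = []
    for j in range(M):
        mats = [Z] * j + [a] + [np.eye(2)] * (M - j - 1)
        op = mats[0]
        for m in mats[1:]:
            op = np.kron(op, m)
        ops.append(op)
    return ops

ops = fermi_ops(M)
cu, cd = ops[:L], ops[L:]

def hubbard(cu, cd, U):
    """Hole Hamiltonian (58) with periodic boundary conditions."""
    dim = cu[0].shape[0]
    H = np.zeros((dim, dim), complex)
    for j in range(L):
        jp = (j + 1) % L
        for c in (cu, cd):
            H += c[j].conj().T @ c[jp] + c[jp].conj().T @ c[j]
        nu = cu[j].conj().T @ cu[j]
        nd = cd[j].conj().T @ cd[j]
        H += U * (nu - 0.5 * np.eye(dim)) @ (nd - 0.5 * np.eye(dim))
    return H

H = hubbard(cu, cd, U)
# transformation (60): c_{j,down} -> (-1)^j c^+_{j,down} (sites j = 1..L), U -> -U
cd_t = [(-1) ** (j + 1) * cd[j].conj().T for j in range(L)]
H_t = hubbard(cu, cd_t, -U)
err_60 = np.abs(H - H_t).max()
```

The staggered sign is essential: for the boundary bond it contributes $(-1)^{L+1}$, which is why an even number of sites is required.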
Performing up spin and down spin transformations in succession the monodromy matrix transforms as $${\cal T}_{L}(\lambda)\rightarrow\left(e^{{\rm i}\pi L\sigma^{z}/2}\otimes e^{{\rm i}\pi L\sigma^{z}/2}\right)(\sigma^{x}\otimes\sigma^{y}){\cal T}_{L}(\lambda)(\sigma^{x}\otimes\sigma^{y})\quad.$$ (67) In this case we find for the transfer matrix $$\mbox{str}({\cal T}_{L}(\lambda))\rightarrow\mbox{tr}(e^{-{\rm i}\pi L\sigma^{z}}D(\lambda))-\mbox{tr}(A(\lambda))\quad.$$ (68) Again invariance is only achieved for an even number of lattice sites. 6 su(2) symmetries A careful discussion of the two su(2) symmetries [17, 18, 19, 20] connected with the Hubbard Hamiltonian was crucial for the complete understanding of the coordinate Bethe Ansatz of the model. It was shown in [25] that the states obtained from the coordinate Bethe Ansatz are incomplete. They are highest weight states of two su(2) Lie algebras [26, 27]. One is the su(2) Lie algebra of rotations, the other one is the $\eta$-pairing su(2) Lie algebra. The generators of the $\eta$-pairing su(2) are obtained from the generators of rotations under the canonical transformation of the preceding section. They are connected with the creation of pairs of particles or holes in the system. We show in the following how the generators of the two symmetries commute with the monodromy matrix. Our result will be useful for the classification of quasiparticles according to their symmetry within the algebraic approach. A discussion analogous to the discussion of the spin of spin waves by Faddeev and Takhtajan [28] is likely to be possible. There are four interactionless states which may serve as reference states for an algebraic Bethe Ansatz of the Hubbard model: the empty band, the completely filled band and the half-filled band with all spins up or all spins down.
Depending on the choice of reference state, four of the elements of the matrices $B(\lambda)$ and $C(\lambda)$ in (47) are creation operators, whereas the remaining four are annihilation operators. This fits with the fact that there are four different quasiparticles in the system. We think that their identification will eventually become possible by use of the symmetry properties presented below. The su(2) generators of rotations are given by $$S^{+}:=-\sum_{j=1}^{L}c_{j\uparrow}^{+}c_{j\downarrow}\quad,\quad S^{-}:=-\sum_{j=1}^{L}c_{j\downarrow}^{+}c_{j\uparrow}\quad,\quad S^{z}:=\sum_{j=1}^{L}(n_{j\uparrow}-n_{j\downarrow})\quad.$$ (69) Recall that we are using the language of holes here (cf. (58)). Under a particle-hole transformation the operators $S^{+}$, $S^{-}$, $S^{z}$ turn into the operators $\zeta^{\dagger}$, $\zeta$, $\zeta_{z}$ used by Eßler et al. [4]. We will show now that the whole transfer matrix is rotationally invariant. To this end let us introduce local generators of rotations $$S_{j}^{+}:=-c_{j\uparrow}^{+}c_{j\downarrow}\quad,\quad S_{j}^{-}:=-c_{j\downarrow}^{+}c_{j\uparrow}\quad,\quad S_{j}^{z}:=n_{j\uparrow}-n_{j\downarrow}\quad.$$ (70) The matrices $$\Sigma^{+}:=\sigma^{+}\otimes\sigma^{-}\quad,\quad\Sigma^{-}:=\sigma^{-}\otimes\sigma^{+}\quad,\quad\Sigma^{z}:=\textstyle{\frac{1}{2}}(\sigma^{z}\otimes I_{2}-I_{2}\otimes\sigma^{z})$$ (71) clearly generate a representation of su(2). They are connected to the inner block $A(\lambda)$ of the monodromy matrix ${\cal T}_{L}(\lambda)$.
Let $$\displaystyle\Sigma^{x}$$ $$\displaystyle:=$$ $$\displaystyle\Sigma^{+}+\Sigma^{-}\quad,\quad\Sigma^{y}:=-{\rm i}(\Sigma^{+}-\Sigma^{-})\quad,$$ (72) $$\displaystyle S_{j}^{x}$$ $$\displaystyle:=$$ $$\displaystyle S_{j}^{+}+S_{j}^{-}\quad,\quad S_{j}^{y}:=-{\rm i}(S_{j}^{+}-S_{j}^{-})\quad.$$ (73) Then it is not difficult to see that $$[{\cal L}_{j}(\lambda),\Sigma^{\alpha}+S_{j}^{\alpha}]=0\quad,\quad\alpha=x,y,z\quad.$$ (74) The verification of this equation may be done as follows. First show by direct calculation that $[{\cal L}_{j}(\lambda),\Sigma^{+}+S_{j}^{+}]=0$. ${\cal L}_{j}(\lambda)$ has the same block structure as the monodromy matrix, (47). Under reversal of all spins the blocks $A(\lambda)$, $B(\lambda)$, $C(\lambda)$, $D(\lambda)$ transform as $$\left(\begin{array}[]{cc}A&B\\ C&D\end{array}\right)\rightarrow\left(\begin{array}[]{cc}\sigma^{x}&0\\ 0&\sigma^{z}\end{array}\right)\left(\begin{array}[]{cc}A&B\\ C&D\end{array}\right)\left(\begin{array}[]{cc}\sigma^{x}&0\\ 0&\sigma^{z}\end{array}\right)\quad,$$ (75) and we may conclude that $[{\cal L}_{j}(\lambda),\Sigma^{-}+S_{j}^{-}]=0$. The vanishing of the last commutator $[{\cal L}_{j}(\lambda),\Sigma^{z}+S_{j}^{z}]$ follows by means of the Jacobi identity. The local equation (74) extends to an identity for the monodromy matrix by induction, $$[{\cal T}_{L}(\lambda),\Sigma^{\alpha}+S^{\alpha}]=0\quad,$$ (76) $\alpha=x,y,z$, where $S^{x}$ and $S^{y}$ are defined in analogy with their local counterparts. Taking the graded trace of this equation yields $$[\mbox{str}({\cal T}_{L}(\lambda)),S^{\alpha}]=0\quad,\quad\alpha=x,y,z\quad.$$ (77) The transfer matrix and thus all higher commuting operators are rotationally invariant. The transformation properties of the monodromy matrix under the discrete transformation (60) introduced in the preceding section induce a second su(2) invariance.
Applying (60) to the su(2) generators of rotations, (69), we find $$\displaystyle S^{+}$$ $$\displaystyle\rightarrow$$ $$\displaystyle\eta^{+}=\sum_{j=1}^{L}(-1)^{j+1}c_{j\uparrow}^{+}c_{j\downarrow}^{+}\quad,$$ (78) $$\displaystyle S^{-}$$ $$\displaystyle\rightarrow$$ $$\displaystyle\eta^{-}=\sum_{j=1}^{L}(-1)^{j+1}c_{j\downarrow}c_{j\uparrow}\quad,$$ (79) $$\displaystyle S^{z}$$ $$\displaystyle\rightarrow$$ $$\displaystyle\eta^{z}=\sum_{j=1}^{L}(n_{j\uparrow}+n_{j\downarrow}-1)=\hat{N}-L\quad.$$ (80) This is the $\eta$-pairing symmetry. Because of (80), it may be interpreted as a non-abelian extension of the gauge symmetry. The commutators of the generators of $\eta$-pairing with the monodromy matrix follow from (60) and (76). Let $$\tilde{\Sigma}^{+}:=\sigma^{+}\otimes\sigma^{+}\quad,\quad\tilde{\Sigma}^{-}:=\sigma^{-}\otimes\sigma^{-}\quad,\quad\tilde{\Sigma}^{z}:=\textstyle{\frac{1}{2}}(\sigma^{z}\otimes I_{2}+I_{2}\otimes\sigma^{z})\quad.$$ (81) These matrices generate a representation of su(2) connected to the block $D(\lambda)$ of the monodromy matrix. As in the case of rotations we define $$\displaystyle\tilde{\Sigma}^{x}$$ $$\displaystyle:=$$ $$\displaystyle\tilde{\Sigma}^{+}+\tilde{\Sigma}^{-}\quad,\quad\tilde{\Sigma}^{y}:=-{\rm i}(\tilde{\Sigma}^{+}-\tilde{\Sigma}^{-})\quad,$$ (82) $$\displaystyle\eta^{x}$$ $$\displaystyle:=$$ $$\displaystyle\eta^{+}+\eta^{-}\quad,\quad\eta^{y}:=-{\rm i}(\eta^{+}-\eta^{-})\quad.$$ (83) Using these definitions we obtain for $L$ even $$[{\cal T}_{L}(\lambda),\tilde{\Sigma}^{\alpha}+\eta^{\alpha}]=0\quad,\quad\alpha=x,y,z\quad,$$ (84) and thus $$[\mbox{str}({\cal T}_{L}(\lambda)),\eta^{\alpha}]=0\quad.$$ (85) Note that there is no local analog like (74) of equation (85). The $\eta$-pairing symmetry is sensitive to a change of boundary conditions and is in this sense a non-local symmetry. Equation (84) may be verified in the following way.
Observe that $$\Sigma^{\pm}(\sigma^{z}\otimes\sigma^{y})=(\sigma^{z}\otimes\sigma^{y})\tilde{\Sigma}^{\pm}\quad,$$ (86) and $$\Sigma^{\pm}(I_{2}\otimes e^{{\rm i}\pi L\sigma^{z}/2})=(I_{2}\otimes e^{-{\rm i}\pi L\sigma^{z}/2})\Sigma^{\pm}\quad.$$ (87) Apply the transformation (60) to equation (76), and use (86), (87). Then $$[{\cal T}_{L}(\lambda),\eta^{\pm}]+{\cal T}_{L}(\lambda)\tilde{\Sigma}^{\pm}-(I_{2}\otimes e^{{\rm i}\pi L\sigma^{z}})\tilde{\Sigma}^{\pm}{\cal T}_{L}(\lambda)=0\quad.$$ (88) Equation (87) remains true if $\Sigma^{\pm}$ is replaced by $\tilde{\Sigma}^{\pm}$. Using this fact, (88) implies $$[{\cal T}_{L}(\lambda),\tilde{\Sigma}^{z}+\eta^{z}]=0\quad,$$ (89) whereby $$[\mbox{str}({\cal T}_{L}(\lambda)),\hat{N}]=0$$ (90) for every $L$, which means that all higher conserved quantities are gauge invariant. This fact has been used in the derivation of (55). From (88) and (89) we infer the validity of (84) for even $L$. Here is a simple example of the usefulness of the above formulae. (59), (80), (85) imply immediately that $\hat{\Pi}\eta^{+}=\eta^{+}(\hat{\Pi}+\pi)$, which means that $\eta^{+}$ changes the momentum of eigenstates by $\pi$ [18]. 7 Conclusions We hope we have convinced the reader that the graded Yang-Baxter algebra (46) is a useful tool for further investigations of the one-dimensional Hubbard model. Our account of basic features of the monodromy matrix should be read in conjunction with the recent preprint of Ramos and Martins [16], who were able to diagonalize $\mbox{str}({\cal T}_{L}(\lambda))$ by purely algebraic means. Combining both works it should not be too difficult to rederive algebraically all results obtained so far by means of the coordinate Bethe Ansatz. Moreover, there is hope of progress in the calculation of correlation functions and thermodynamical properties. There have been speculations [3] that there might be a different Yang-Baxter algebra embedding of the Hubbard Hamiltonian.
Of course we cannot rule out this possibility. Some arguments in its favour, however, have been disproved by now. In a recent article [14] we were able to show how a rational substructure of the $R$-matrix naturally arises in the thermodynamic limit. The corresponding submatrix of the monodromy matrix generates the Y(su(2)) representation discovered by Uglov and Korepin [13]. This nicely fits with the fact that the $S$-matrix of quasiparticle scattering is of rational form. In the present article we showed that the monodromy matrix has an appropriate algebraic structure. In particular, its graded trace is fully su(2)$\oplus$su(2) invariant and invariant under translations. The analytic properties of the $R$-matrix and the monodromy matrix are less standard. We do not yet have a geometrical picture of the spectral curve. Still, we succeeded in showing the existence of a meromorphic parametrization of the transfer matrix. Acknowledgments. This work has been supported by the Japan Society for the Promotion of Science and the Ministry of Science, Culture and Education of Japan. We are grateful to Professor Miki Wadati for continuous encouragement and comments. We would like to thank H. Fehske, V. E. Korepin and M. Shiroishi for fruitful discussions. S. M. is also grateful to Professor Naoto Nagaosa for his encouragement. Appendix A The R-matrix The $R$-matrix generating the graded Yang-Baxter algebra of the Hubbard model was first derived by Olmedilla et al. [11].
It follows from the equations (7), (23), (24) and (44) and is of the following structure, $$\displaystyle{\cal R}(\lambda,\mu)=$$ $$\displaystyle\left(\begin{array}[]{cccccccccccccccc}\rho_{1}&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&\rho_{2}&0&0&{\rm i}\rho_{9}&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&\rho_{2}&0&0&0&0&0&{\rm i}\rho_{9}&0&0&0&0&0&0&0\\ 0&0&0&\rho_{3}&0&0&-{\rm i}\rho_{6}&0&0&{\rm i}\rho_{6}&0&0&\rho_{8}&0&0&0\\ 0&-{\rm i}\rho_{10}&0&0&\rho_{2}&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&\rho_{4}&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&{\rm i}\rho_{6}&0&0&\rho_{5}&0&0&\rho_{7}&0&0&-{\rm i}\rho_{6}&0&0&0\\ 0&0&0&0&0&0&0&\rho_{2}&0&0&0&0&0&-{\rm i}\rho_{10}&0&0\\ 0&0&-{\rm i}\rho_{10}&0&0&0&0&0&\rho_{2}&0&0&0&0&0&0&0\\ 0&0&0&-{\rm i}\rho_{6}&0&0&\rho_{7}&0&0&\rho_{5}&0&0&{\rm i}\rho_{6}&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&\rho_{4}&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&\rho_{2}&0&0&-{\rm i}\rho_{10}&0\\ 0&0&0&\rho_{8}&0&0&{\rm i}\rho_{6}&0&0&-{\rm i}\rho_{6}&0&0&\rho_{3}&0&0&0\\ 0&0&0&0&0&0&0&{\rm i}\rho_{9}&0&0&0&0&0&\rho_{2}&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&{\rm i}\rho_{9}&0&0&\rho_{2}&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&\rho_{1}\end{array}\right)$$ The ten Boltzmann weights $\rho_{j}=\rho_{j}(\lambda,\mu)$ are $$\displaystyle\rho_{1}$$ $$\displaystyle=$$ $$\displaystyle\cos(\lambda)\cos(\mu)e^{h-l}+\sin(\lambda)\sin(\mu)e^{l-h}\quad,$$ (A.2) $$\displaystyle\rho_{2}$$ $$\displaystyle=$$ $$\displaystyle 1\quad,$$ (A.3) $$\displaystyle\rho_{3}$$ $$\displaystyle=$$ $$\displaystyle\frac{\cos(\lambda)\cos(\mu)e^{h-l}-\sin(\lambda)\sin(\mu)e^{l-h}}{\cos^{2}(\lambda)-\sin^{2}(\mu)}\quad,$$ (A.4) $$\displaystyle\rho_{4}$$ $$\displaystyle=$$ $$\displaystyle\cos(\lambda)\cos(\mu)e^{l-h}+\sin(\lambda)\sin(\mu)e^{h-l}\quad,$$ (A.5) $$\displaystyle\rho_{5}$$ $$\displaystyle=$$ $$\displaystyle\frac{\cos(\lambda)\cos(\mu)e^{l-h}-\sin(\lambda)\sin(\mu)e^{h-l}}{\cos^{2}(\lambda)-\sin^{2}(\mu)}\quad,$$ (A.6) $$\displaystyle\rho_{6}$$ $$\displaystyle=$$
$$\displaystyle\frac{2\mbox{\,sh}(2(h-l))}{U(\cos^{2}(\lambda)-\sin^{2}(\mu))}\quad,$$ (A.7) $$\displaystyle\rho_{7}$$ $$\displaystyle=$$ $$\displaystyle\rho_{4}-\rho_{5}\quad,$$ (A.8) $$\displaystyle\rho_{8}$$ $$\displaystyle=$$ $$\displaystyle\rho_{1}-\rho_{3}\quad,$$ (A.9) $$\displaystyle\rho_{9}$$ $$\displaystyle=$$ $$\displaystyle\sin(\lambda)\cos(\mu)e^{l-h}-\cos(\lambda)\sin(\mu)e^{h-l}\quad,$$ (A.10) $$\displaystyle\rho_{10}$$ $$\displaystyle=$$ $$\displaystyle\sin(\lambda)\cos(\mu)e^{h-l}-\cos(\lambda)\sin(\mu)e^{l-h}\quad.$$ (A.11) The parameters $\lambda$, $\mu$, $h$ and $l$ are connected by equation (9). Note that our definition of $h$ and $l$ differs from that in ref. [11] and that we performed a shift of $\frac{\pi}{4}$ in the arguments of the functions $\alpha(\lambda)$ and $\gamma(\lambda)$ occurring there. There are the following quadratic relations between the Boltzmann weights [11], $$\displaystyle\rho_{1}\rho_{4}+\rho_{9}\rho_{10}$$ $$\displaystyle=$$ $$\displaystyle 1\quad,$$ (A.12) $$\displaystyle\rho_{1}\rho_{5}+\rho_{3}\rho_{4}$$ $$\displaystyle=$$ $$\displaystyle 2\quad,$$ (A.13) $$\displaystyle\rho_{3}\rho_{5}-\rho_{6}^{2}$$ $$\displaystyle=$$ $$\displaystyle 1\quad.$$ (A.14) Further identities useful for practical calculations can be found in the recent article [14]. Appendix B Momentum operator for fermions on a lattice Below we present a detailed discussion of the momentum operator on a lattice. The shift operator from permutations In this subsection we mimic the construction of the shift operator for spin chains. Start with spinless fermions $c_{1},\dots,c_{L}$ on a one-dimensional lattice of $L$ sites. Let $$K_{ij}:=1-(c_{i}^{+}-c_{j}^{+})(c_{i}-c_{j})\quad.$$ (B.1) It is not difficult to see that $K_{ij}$ permutes fermions.
There are the obvious identities $$K_{ij}=K_{ij}^{+}\quad,\quad K_{ij}=K_{ji}\quad,\quad K_{jj}=1\quad.$$ (B.2) Use of the fundamental anticommutators for the fermions yields $$K_{ij}c_{i}=c_{i}+(c_{i}^{+}-c_{j}^{+})c_{j}(c_{i}-c_{j})=c_{i}+(-1+c_{j}c_{j}^{+}-c_{j}c_{i}^{+})(c_{i}-c_{j})=c_{j}K_{ij}\quad,$$ (B.3) and (B.2) implies $$K_{ij}c_{j}=c_{i}K_{ij}\quad,\quad K_{ij}c_{i}^{+}=c_{j}^{+}K_{ij}\quad,\quad K_{ij}c_{j}^{+}=c_{i}^{+}K_{ij}\quad.$$ (B.4) Furthermore, by the last four equations, we obtain $$K_{ij}K_{jk}=K_{ik}K_{ij}=K_{jk}K_{ik}\quad,\quad i\neq j\neq k\quad,$$ (B.5) and a short calculation similar to the one in eq. (B.3) leads to $$K_{ij}K_{ij}=1\quad.$$ (B.6) Hence, the operators $K_{ij}$ are identified as permutation operators. With the aid of permutations of fermions it is of course possible to realize a global shift [17]. Let $$\hat{U}:=K_{12}K_{23}\dots K_{L-1L}\quad.$$ (B.7) Then eq. (B.3) implies $$\hat{U}c_{j}=\left\{\begin{array}[]{l}c_{j+1}\hat{U}\\ c_{1}\hat{U}\end{array}\quad,\quad\mbox{if}\quad\begin{array}[]{l}j=1,\dots,L-1\\ j=L\quad.\end{array}\right.$$ (B.8) This means that $\hat{U}$ acts as the right shift operator on the elementary fermi operators of a periodic chain of $L$ sites. Now (B.7) implies $$\hat{U}^{+}=K_{L-1L}\dots K_{12}\quad,$$ (B.9) and thus, by eq. (B.6), $\hat{U}\hat{U}^{+}=\hat{U}^{+}\hat{U}=1$. $\hat{U}$ is invertible, and $\hat{U}^{-1}=\hat{U}^{+}$, i.e. $\hat{U}$ is unitary. $\hat{U}^{-1}$ is the left shift operator. With the aid of the shift operator it is possible to define the momentum as its formal infinitesimal generator, $$e^{{\rm i}\Pi}:=\hat{U}\quad.$$ (B.10) This is the common construction in the case of spin chains. Note, however, that (B.10) defines $\Pi$ only modulo $2\pi$. To realize the shift operator for electrons, attach a spin label to all operators above, and observe that $[\hat{U}_{\uparrow},c_{j\downarrow}]=[\hat{U}_{\downarrow},c_{j\uparrow}]=0$.
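The algebra (B.2)-(B.8) is easy to confirm by brute force on a small chain: construct Jordan-Wigner fermions, build $K_{ij}$ from (B.1) and $\hat{U}$ from (B.7), and check the exchange and shift relations numerically (helper names are ours):

```python
import numpy as np

L = 4

def fermi_ops(M):
    a = np.array([[0., 1.], [0., 0.]])
    Z = np.diag([1., -1.])
    ops = []
    for j in range(M):
        mats = [Z] * j + [a] + [np.eye(2)] * (M - j - 1)
        op = mats[0]
        for m in mats[1:]:
            op = np.kron(op, m)
        ops.append(op)
    return ops

c = fermi_ops(L)
Id = np.eye(2 ** L)

def K(i, j):
    """Permutation operator (B.1)."""
    return Id - (c[i].conj().T - c[j].conj().T) @ (c[i] - c[j])

# (B.6): K_ij^2 = 1,  (B.3): K_ij c_i = c_j K_ij
err_K2 = max(np.abs(K(i, j) @ K(i, j) - Id).max()
             for i in range(L) for j in range(L) if i != j)
err_perm = max(np.abs(K(i, j) @ c[i] - c[j] @ K(i, j)).max()
               for i in range(L) for j in range(L) if i != j)

# shift operator (B.7) and its action (B.8), with periodic wrap-around
U = Id
for j in range(L - 1):
    U = U @ K(j, j + 1)
err_shift = max(np.abs(U @ c[j] - c[(j + 1) % L] @ U).max() for j in range(L))
err_unitary = np.abs(U @ U.conj().T - Id).max()
```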
Then $\hat{U}:=\hat{U}_{\uparrow}\hat{U}_{\downarrow}$ is the shift operator for electrons. Diagonalization of the momentum operator – an exercise For simplicity, return to the spinless case. From the basic property (B.8) it follows that $[\hat{U}^{L},c_{j}]=[\hat{U}^{L},c_{j}^{+}]=0$, $j=1,\dots,L$, i.e. $\hat{U}^{L}$ is a scalar operator. Since $\hat{U}|0\rangle=|0\rangle$, we obtain $\hat{U}^{L}=1$. Shifting all fermions once around the lattice does not change the state of the system. This simple fact has strong implications. Let $\alpha\in{\mathbb C}$. Then $$(1-\alpha\hat{U})\,\sum_{n=0}^{L-1}\alpha^{n}\hat{U}^{n}=1-\alpha^{L}\quad.$$ (B.11) This means that the resolvent of $\hat{U}$ is a finite sum. With $\lambda:=1/\alpha$ we find $$(\lambda-\hat{U})^{-1}=\frac{1}{\lambda(\lambda^{L}-1)}\sum_{n=0}^{L-1}\lambda^{L-n}\hat{U}^{n}\quad.$$ (B.12) The sum on the right hand side of the latter equation is regular in $\lambda$ and of order $\lambda$. The spectrum of $\hat{U}$ is therefore given by the equation $$\lambda^{L}=1\quad\Leftrightarrow\quad\lambda_{k}=e^{{\rm i}2\pi k/L}\quad,\quad k=0,\dots,L-1\quad,$$ (B.13) and is of course highly degenerate. Let $P_{0},\dots,P_{L-1}$ be the projections on the eigenspaces of $\hat{U}$ corresponding to the eigenvalues $\lambda_{0},\dots,\lambda_{L-1}$. The $P_{k}$ are orthogonal, since eigenstates of a unitary operator corresponding to different eigenvalues are orthogonal: $P_{k}P_{l}=\delta_{kl}P_{k}$, and furthermore $\sum_{k=0}^{L-1}P_{k}=1$. Hence, the spectral decomposition of the resolvent (B.12) is $$(\lambda-\hat{U})^{-1}=\sum_{k=0}^{L-1}\frac{P_{k}}{\lambda-\lambda_{k}}\quad.$$ (B.14) Conversely, the $P_{k}$ are determined by the spectral decomposition via $P_{k}=\mbox{\,res}_{\lambda=\lambda_{k}}((\lambda-\hat{U})^{-1})$.
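Both claims — $\hat{U}^{L}=1$ and the spectrum (B.13) — can be confirmed numerically with the same small-chain construction (helper names are ours):

```python
import numpy as np

L = 4

def fermi_ops(M):
    a = np.array([[0., 1.], [0., 0.]])
    Z = np.diag([1., -1.])
    ops = []
    for j in range(M):
        mats = [Z] * j + [a] + [np.eye(2)] * (M - j - 1)
        op = mats[0]
        for m in mats[1:]:
            op = np.kron(op, m)
        ops.append(op)
    return ops

c = fermi_ops(L)
Id = np.eye(2 ** L)

def K(i, j):
    return Id - (c[i].conj().T - c[j].conj().T) @ (c[i] - c[j])

U = Id
for j in range(L - 1):
    U = U @ K(j, j + 1)

err_UL = np.abs(np.linalg.matrix_power(U, L) - Id).max()   # U^L = 1
eigs = np.linalg.eigvals(U)
err_roots = np.abs(eigs ** L - 1).max()                    # spectrum: L-th roots of unity
```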
Since $d_{\lambda}(\lambda(\lambda^{L}-1))|_{\lambda=\lambda_{k}}=L$, we obtain $$P_{k}=\frac{1}{L}\sum_{n=0}^{L-1}e^{-{\rm i}\phi kn}\hat{U}^{n}\quad,$$ (B.15) where we used the abbreviation $\phi=2\pi/L$. From this representation the momentum eigenstates arise quite naturally. The particle number operator $\hat{N}$ is of course translationally invariant, $[\hat{N},\hat{U}]=0$. Hence, the projections $P_{k}$ preserve the particle number, $[P_{k},\hat{N}]=0$. We find $$P_{k}c_{j}^{+}|0\rangle=\frac{e^{{\rm i}\phi jk}}{L}\sum_{n=1}^{L}e^{-{\rm i}\phi kn}c_{n}^{+}|0\rangle\quad.$$ (B.16) Since this is true for all $j=1,\dots,L$, we see that $\hat{U}$ is nondegenerate in the one-particle sector of the Hilbert space. In this sector the subspace corresponding to the eigenvalue $\lambda_{k}$ is spanned by the right hand side of the above equation. Normalization yields the eigenvector $|k\rangle:=\tilde{c}_{k}^{+}|0\rangle$, where $\tilde{c}_{k}^{+}$ is the creator of a one-particle momentum eigenstate. It is defined in the usual way as the adjoint of the annihilator $$\tilde{c}_{k}:=\frac{1}{\sqrt{L}}\sum_{n=1}^{L}e^{{\rm i}\phi kn}c_{n}\quad.$$ (B.17) $\tilde{c}_{k}$ and $\tilde{c}_{k}^{+}$ are fermi operators. They are transformed back into site operators $c_{j}$, $c_{j}^{+}$ by Fourier inversion of eq. (B.17). Therefore arbitrary operators defined in terms of $c_{j}$, $c_{j}^{+}$ may be expressed in terms of $\tilde{c}_{k}$ and $\tilde{c}_{k}^{+}$. The particle number operator, in particular, is form invariant under Fourier transformation, $$\hat{N}=\sum_{k=0}^{L-1}\tilde{c}_{k}^{+}\tilde{c}_{k}\quad.$$ (B.18) Since the $\tilde{c}_{k}$ are fermi operators, the states $|k_{1}\dots k_{N}\rangle:=\tilde{c}_{k_{1}}^{+}\dots\tilde{c}_{k_{N}}^{+}|0\rangle$, $k_{1}<\dots<k_{N}$, are orthogonal. (B.18) implies $\hat{N}|k_{1}\dots k_{N}\rangle=N|k_{1}\dots k_{N}\rangle$. Counting these states we see that they span the $N$-particle sector of the lattice Hilbert space.
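In the one-particle sector, (B.8) reduces $\hat{U}$ to a cyclic permutation matrix, so the projection formula (B.15) can be checked with plain linear algebra; a minimal sketch:

```python
import numpy as np

L = 5
phi = 2 * np.pi / L
# In the one-particle sector |j> = c_j^+|0>, (B.8) gives U|j> = |j+1 mod L>:
U1 = np.roll(np.eye(L, dtype=complex), 1, axis=0)   # e_j -> e_{j+1 mod L}
Un = [np.linalg.matrix_power(U1, n) for n in range(L)]

# Spectral projections (B.15): P_k = (1/L) sum_n e^{-i phi k n} U^n
P = [sum(np.exp(-1j * phi * k * n) * Un[n] for n in range(L)) / L
     for k in range(L)]

assert np.allclose(sum(P), np.eye(L))     # resolution of unity
assert np.allclose(P[1] @ P[1], P[1])     # idempotent
assert np.allclose(P[1] @ P[2], 0)        # mutually orthogonal
for k in range(L):
    # U P_k = lambda_k P_k with lambda_k = e^{i phi k}, cf. (B.13)
    assert np.allclose(U1 @ P[k], np.exp(1j * phi * k) * P[k])
print("projections (B.15) verified in the one-particle sector")
```

The columns of $P_{k}$ are proportional to the plane wave appearing on the right hand side of (B.16), i.e. to the momentum eigenstate created by $\tilde{c}_{k}^{+}$ in (B.17).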
Letting $N$ vary from $1,\dots,L$ we get a basis of the full Hilbert space. Now, from the definition (B.17) of $\tilde{c}_{k}$ it follows that $$\hat{U}\tilde{c}_{k}^{+}=e^{{\rm i}\phi k}\tilde{c}_{k}^{+}\hat{U}\quad.$$ (B.19) Hence we have obtained $$\hat{U}|k_{1}\dots k_{N}\rangle=e^{{\rm i}\phi(k_{1}+\dots+k_{N})}|k_{1}\dots k_{N}\rangle\quad,$$ (B.20) or for the momentum operator, respectively, $$\Pi|k_{1}\dots k_{N}\rangle=\phi(k_{1}+\dots+k_{N})|k_{1}\dots k_{N}\rangle\quad.$$ (B.21) This equation is of course defined only modulo $2\pi$. On the other hand it is easily verified that the operator $\phi\sum_{k=1}^{L-1}k\,\tilde{c}_{k}^{+}\tilde{c}_{k}$ acts the same way on $|k_{1}\dots k_{N}\rangle$. Since these states form a basis, we have achieved the diagonalization of the momentum operator, $$\Pi=\phi\sum_{k=1}^{L-1}k\,\tilde{c}_{k}^{+}\tilde{c}_{k}\quad.$$ (B.22) In this form $\Pi$ is usually found in the literature. Site representation of the momentum operator What happens when we use the Fourier transform to translate the above diagonal form of the momentum operator back into the site representation? Inserting the definition (B.17) into eq. (B.22) yields $$\Pi=\frac{\phi}{L}\sum_{m,n=1}^{L}c_{m}^{+}c_{n}\sum_{k=1}^{L-1}ke^{-{\rm i}\phi(m-n)k}\quad.$$ (B.23) Now, for $\alpha\in\mathbb{C}$, let $$g(\alpha):=\sum_{k=0}^{L-1}{\rm i}e^{-{\rm i}\phi k\alpha}={\rm i}\,\frac{1-e^{-{\rm i}\phi L\alpha}}{1-e^{-{\rm i}\phi\alpha}}\quad,$$ (B.24) where the term on the right hand side makes sense only for $\alpha\notin L\mathbb{Z}$. Eqs. (B.23) and (B.24) imply that $$\Pi=\frac{1}{L}\sum_{m,n=1}^{L}g^{\prime}(m-n)\,c_{m}^{+}c_{n}={\textstyle{1\over 2}}\phi(L-1)\hat{N}+\phi\sum_{{m,n=1\atop m\neq n}}^{L}\frac{c_{m}^{+}c_{n}}{e^{-{\rm i}\phi(m-n)}-1}\quad.$$ (B.25) This formula is remarkable.
Since $\Pi=-{\rm i}\ln(\hat{U})$, we can read it as having taken the logarithm of the ordered product of transpositions which defines $\hat{U}$. $\Pi$ as given by (B.25) is a non-local operator. It is interesting to notice that the second term on the right hand side of (B.25) is gauge equivalent to the $1/\sin(\phi(m-n))$ hopping term of the long-range Hubbard model introduced by Gebhard and Ruckenstein [29]. We have thus found a simple interpretation of this Hamiltonian and, in turn, a simple interpretation of the origin of the $1/\sin^{2}$ exchange of the Haldane-Shastry spin chain [30, 31]. Momentum operator modulo $2\pi$ To obtain the appropriate definition of $\Pi$ mod $2\pi$, observe from the preceding subsection that $$\phi k=\frac{1}{L}\sum_{m=1}^{L}g^{\prime}(m)e^{{\rm i}\phi km}=\phi\sum_{m=1}^{L-1}\left({1\over 2}+\frac{e^{{\rm i}\phi km}}{e^{-{\rm i}\phi m}-1}\right)$$ (B.26) for $k=0,\dots,L-1$. In view of the right hand side of this equation the function $\phi k$ is periodically continued to a sawtooth function on the integers. Since $\Pi/\phi$ assumes only integer eigenvalues, the definition $$\hat{\Pi}:=\phi\sum_{m=1}^{L-1}\left({1\over 2}+\frac{\hat{U}^{m}}{e^{-{\rm i}\phi m}-1}\right)$$ (B.27) yields the required restriction of $\Pi$ modulo $2\pi$. In other words, (B.19) and (B.26) imply that $e^{{\rm i}\Pi}=e^{{\rm i}\hat{\Pi}}$. $\hat{\Pi}$ obviously commutes with the Hubbard Hamiltonian, whereas $\Pi$ does not. Appendix C Free fermions In this appendix we show how to obtain the shift operator $\hat{U}$ for fermions from the graded trace of the free fermion monodromy matrix (33). For this purpose we have to treat the XX-chain and the free fermion model in parallel. Both models are related by a Jordan-Wigner transformation. For brevity let us consider the up-spin case (27), and let us suppress the spin index here.
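The scalar identity (B.26), including its sawtooth periodicity in $k$, is easy to verify numerically; a minimal check:

```python
import numpy as np

L = 8
phi = 2 * np.pi / L
m = np.arange(1, L)   # m = 1, ..., L-1

def Pi_eigenvalue(k):
    """Right hand side of (B.26): the restriction of phi*k modulo 2*pi."""
    return phi * np.sum(0.5 + np.exp(1j * phi * k * m)
                        / (np.exp(-1j * phi * m) - 1))

# (B.26): the sum reproduces phi*k exactly for k = 0, ..., L-1 ...
for k in range(L):
    assert np.isclose(Pi_eigenvalue(k), phi * k)
# ... and continues periodically (sawtooth) outside that range:
assert np.isclose(Pi_eigenvalue(3 + L), Pi_eigenvalue(3))
print("identity (B.26) verified for L =", L)
```

Replacing the scalar $e^{{\rm i}\phi km}$ by $\hat{U}^{m}$ in the summand is precisely how (B.27) promotes this sawtooth to the operator $\hat{\Pi}$.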
The XX-chain monodromy matrix is $$T_{L}(\lambda)=l_{L}(\lambda)\dots l_{1}(\lambda)\quad.$$ (C.1) It satisfies the Yang-Baxter algebra (5), $$\check{r}(\lambda-\mu)(T_{L}(\lambda)\otimes T_{L}(\mu))=(T_{L}(\mu)\otimes T_{L}(\lambda))\check{r}(\lambda-\mu)\quad.$$ (C.2) Applying the Jordan-Wigner transformation (27) to the XX-chain monodromy matrix, we obtain $$T_{L}(\lambda)=e^{-{\rm i}\pi u_{L}\sigma^{z}/2}{\cal T}_{L}(\lambda)\quad,$$ (C.3) where ${\cal T}_{L}(\lambda)$ is the free fermion monodromy matrix (33) with $\tau=\uparrow$, which satisfies the graded analog of (C.2), $$\check{r}_{g}(\lambda-\mu)\left({\cal T}_{L}(\lambda)\otimes_{s}{\cal T}_{L}(\mu)\right)=\left({\cal T}_{L}(\mu)\otimes_{s}{\cal T}_{L}(\lambda)\right)\check{r}_{g}(\lambda-\mu)\quad,$$ (C.4) where $\otimes_{s}$ denotes the graded (super) tensor product. The matrix $\check{r}_{g}(\lambda)$ is defined as $\check{r}_{g}(\lambda):=W^{-1}\check{r}(\lambda)W$ with $W$ according to (36). (C.4) implies that $$[\mbox{str}({\cal T}_{L}(\lambda)),\mbox{str}({\cal T}_{L}(\mu))]=0\quad.$$ (C.5) We want to calculate the action of $\mbox{str}({\cal T}_{L}(0))$ on fermi operators. To this end let $$P_{0j}:={\textstyle{1\over 2}}(1+\sigma^{\alpha}\sigma_{j}^{\alpha})\quad,\quad P_{jk}:={\textstyle{1\over 2}}(1+\sigma_{j}^{\alpha}\sigma_{k}^{\alpha})\quad.$$ (C.6) Then (C.1) implies that $$T_{L}(0)=P_{01}P_{12}\dots P_{L-1\,L}=P_{01}\tilde{U}\quad.$$ (C.7) We use the Jordan-Wigner transformation (27) to express $P_{jj+1}$ in terms of fermi operators. Then $$P_{jj+1}=K_{jj+1}+2n_{j}n_{j+1}$$ (C.8) with the spinless densities $n_{j}=c_{j}^{+}c_{j}$ and the permutation operators $K_{jj+1}$ according to (B.1). Let $e_{\pm}:=e^{\pm{\rm i}\pi u_{L}/2}$.
Then (C.3) implies that $$\mbox{str}({\cal T}_{L}(0))=(e_{+}n_{1}+e_{-}(n_{1}-1))\tilde{U}\quad.$$ (C.9) Using the Jordan-Wigner transformation (27) and the known action of the permutation operators $P_{jj+1}$ on Pauli matrices, we obtain $$\tilde{U}c_{k}=c_{k+1}(1-2n_{1})\tilde{U}\quad,\quad k=1,\dots,L-1\quad,$$ (C.10) $$\tilde{U}c_{L}=c_{1}(1-2n_{1})e_{+}^{2}\tilde{U}\quad.$$ (C.11) $e_{+}$ and $e_{-}$ are generators of gauge transformations, $$e_{+}c_{k}=-{\rm i}c_{k}e_{+}\quad,\quad e_{-}c_{k}={\rm i}c_{k}e_{-}\quad.$$ (C.12) By use of the four previous formulae, we infer for the annihilators $c_{k}$, $$\mbox{str}({\cal T}_{L}(0))c_{k}={\rm i}c_{k+1}\mbox{str}({\cal T}_{L}(0))\quad,$$ (C.13) where $k=1,\dots,L$ ($c_{L+1}=c_{1}$). It is not difficult to see that $\mbox{str}({\cal T}_{L}(0))$ is a unitary operator. Therefore the creators $c_{k}^{+}$ satisfy $$\mbox{str}({\cal T}_{L}(0))c_{k}^{+}=-{\rm i}c_{k+1}^{+}\mbox{str}({\cal T}_{L}(0))\quad,$$ (C.14) $k=1,\dots,L$. (C.12), (C.13) and (C.14) imply that $$e_{+}\mbox{str}({\cal T}_{L}(0))=\alpha_{L}\hat{U}\quad,$$ (C.15) where $\alpha_{L}$ is a complex constant, and $\hat{U}$ is the shift operator (B.7) for spinless fermions. Since $K_{jj+1}|0\rangle=P_{jj+1}|0\rangle=|0\rangle$, we conclude from (C.9) that $\alpha_{L}=-1$ and eventually obtain $$\mbox{str}({\cal T}_{L}(0))=-e_{-}\hat{U}\quad.$$ (C.16) The corresponding formula for down-spins is simply obtained by reversing the spins. Then (C.16) implies (51). It may be instructive for the reader to verify (C.16) directly for small $L$.
References
[1] E. H. Lieb and F. Y. Wu, Phys. Rev. Lett. 20, 1445 (1968).
[2] F. H. L. Eßler and V. E. Korepin, Exactly Solvable Models of Strongly Correlated Electrons, World Scientific, Singapore (1994).
[3] F. H. L. Eßler and V. E. Korepin, Phys. Rev. Lett. 72, 908 (1994).
[4] F. H. L. Eßler and V. E. Korepin, Nucl. Phys. B 426, 505 (1994).
[5] H. Frahm and V. E. Korepin, Phys. Rev. B 43, 5653 (1991).
[6] A. Klümper and R. Z. Bariev, Nucl. Phys. B 458, 623 (1996).
[7] B. S. Shastry, Phys. Rev. Lett. 56, 1529 (1986).
[8] B. S. Shastry, Phys. Rev. Lett. 56, 2453 (1986).
[9] B. S. Shastry, J. Stat. Phys. 50, 57 (1988).
[10] M. Wadati, E. Olmedilla, and Y. Akutsu, J. Phys. Soc. Jpn. 56, 1340 (1987).
[11] E. Olmedilla, M. Wadati, and Y. Akutsu, J. Phys. Soc. Jpn. 56, 2298 (1987).
[12] E. Olmedilla and M. Wadati, Phys. Rev. Lett. 60, 1595 (1988).
[13] D. B. Uglov and V. E. Korepin, Phys. Lett. A 190, 238 (1994).
[14] S. Murakami and F. Göhmann, Yangian Symmetry and Quantum Inverse Scattering Method for the One-Dimensional Hubbard Model, preprint cond-mat/9609249 (1996).
[15] M. Shiroishi and M. Wadati, J. Phys. Soc. Jpn. 64, 57 (1995).
[16] P. B. Ramos and M. J. Martins, Algebraic Bethe Ansatz Approach for the One-Dimensional Hubbard Model, preprint hep-th/9605141 (1996).
[17] O. J. Heilmann and E. H. Lieb, Ann. N.Y. Acad. Sci. 172, 584 (1971).
[18] C. N. Yang, Phys. Rev. Lett. 63, 2144 (1989).
[19] C. N. Yang and S. C. Zhang, Mod. Phys. Lett. B 4, 40 (1990).
[20] M. Pernici, Europhys. Lett. 12, 75 (1990).
[21] R. J. Baxter, Exactly Solved Models in Statistical Mechanics, Academic Press, London (1982).
[22] P. P. Kulish and E. K. Sklyanin, J. Soviet Math. 19, 1596 (1982).
[23] R. Yue and T. Deguchi, Analytic Bethe Ansatz for 1-d Hubbard model and twisted coupled XY model, preprint, hep-th/9605141 (1996).
[24] M. Lüscher, Nucl. Phys. B 117, 475 (1976).
[25] F. H. L. Eßler, V. E. Korepin, and K. Schoutens, Nucl. Phys. B 372, 559 (1992).
[26] F. H. L. Eßler, V. E. Korepin, and K. Schoutens, Phys. Rev. Lett. 67, 3848 (1991).
[27] F. H. L. Eßler, V. E. Korepin, and K. Schoutens, Nucl. Phys. B 384, 431 (1992).
[28] L. A. Takhtajan and L. D. Faddeev, J. Soviet Math. 24, 241 (1984).
[29] F. Gebhard and A. E. Ruckenstein, Phys. Rev. Lett. 68, 244 (1992).
[30] F. D. M. Haldane, Phys. Rev. Lett. 60, 635 (1988).
[31] B. S. Shastry, Phys. Rev. Lett. 60, 639 (1988).
Meta-Learning Guarantees for Online Receding Horizon Control Deepan Muthirayan    Pramod Khargonekar Abstract In this paper we provide provable regret guarantees for an online meta-learning receding horizon control algorithm in an iterative control setting, where in each iteration the system to be controlled is a different, unknown linear deterministic system, the cost for the controller in an iteration is a general additive cost function, and the control input is required to satisfy constraints whose violation incurs an additional cost. We prove (i) that the algorithm achieves a regret for the controller cost and constraint violation that is $O(T^{3/4})$ for an episode of duration $T$ with respect to the best policy that satisfies the control input constraints and (ii) that the average of the regret for the controller cost and constraint violation with respect to the same policy varies as $O((1+1/\sqrt{N})T^{3/4})$ with the number of iterations $N$, showing that the worst-case regret for the learning within an iteration continuously improves with experience of more iterations. 1 Introduction In this paper, we provide theoretical properties of meta-learning in a suitable closed-loop control setting. Specifically, we consider a scenario in which there is a sequence of episodes, each of a finite duration. In each episode, the system to be controlled is unknown, different, and drawn from a fixed set. The controller knows the cost function but has no other prior information; it has to learn to control on the fly and can leverage the experience from previous episodes to improve its learning during a new episode. For this setting we propose and study an online model-based meta-learning control algorithm.
It has two levels of learning: an outer learner that continually learns a general model by adapting the model after every new episode of experience, and an inner learner that continually learns a model of the system during an episode by adapting the model proposed by the outer learner. The controller computes the control input for a particular time during an episode by optimizing the cost-to-go, using the model estimate provided by the inner learner in place of the actual model in the transition dynamics. The role of the outer learner is to learn a general model so that the adaptations within an episode improve across the episodes, and thus the overall controller is a meta-learning controller. Since there are two levels of learning in our meta-learning algorithm, we assess its performance through two notions of regret: (i) the regret for the performance within an episode and (ii) the average regret for the performance across the episodes. Here we measure the performance of the algorithm for an episode by (i) the controller cost for the episode and (ii) the cumulative violation of the control input constraints over the duration of the episode. The idea of using the average regret as a measure is to assess the ability of the meta-learner to improve the adaptations, and hence the performance, with more episodes of experience. 1.1 Related Work Meta-learning research was pioneered by (Schmidhuber, 1987), (Naik & Mammone, 1992), and (Thrun & Pratt, 1998). Recently, there has been a renewed interest in meta-learning, and allied ideas like few-shot learning, motivated by the success of deep learning. (Andrychowicz et al., 2016), (Li & Malik, 2016), and (Ravi & Larochelle, 2016) explored meta-learning in the context of learning to optimize.
(Finn et al., 2017) explored the case where the meta-learner uses ordinary gradient descent to update the learner (Model-Agnostic Meta-Learning (MAML)) and showed that this simplified approach can achieve comparable performance. The MAML approach has been subsequently improved and refined by many other works (Nichol & Schulman, 2018; Rajeswaran et al., 2019; Flennerhag et al., 2019). (Duan et al., 2016) and (Wang et al., 2016) both investigated meta-learning in reinforcement-learning domains using traditional Recurrent Neural Network (RNN) architectures, Gated Recurrent Units (GRUs) and Long Short-Term Memories (LSTMs). (Mishra et al., 2018) investigated a general meta-learner architecture that combines temporal convolutions and causal attention and showed that this model achieves improved performance. In the Online Convex Optimization (OCO) framework, under the full information feedback setting, it is established that the best possible regret scales as $O(T^{1/2})$ (resp. $O(\log T)$) for convex (resp. strongly convex) loss functions, where $T$ is the number of time steps (Zinkevich, 2003; Hazan et al., 2006; Abernethy et al., 2009). These results have also been extended to constrained online convex optimization, where it has been shown that the best regret scales as $O(T^{\max\{c,1-c\}})$ for the cost and $O(T^{1-c/2})$ for constraint violation, where $c$ is a constant (Jenatton et al., 2016; Yuan & Lamperski, 2018). There are some papers in the machine learning literature that provide online regret analysis for meta-learning algorithms. These works analyse gradient based meta-learners because of the natural connection between gradient based learning and online convex optimization. (Finn et al., 2019) provide an $O(\log(T))$ regret bound for a gradient based online meta-learner under certain smoothness assumptions. (Balcan et al., 2019) extend the OCO setting to a meta-learning setting and provide regret analysis for a gradient based meta-learning algorithm.
They show that the best regret scales as $O((1+\log(N)/N)T^{1/2})$, where $T$ represents the number of time steps within an OCO procedure and $N$ represents the number of such procedures. Some significant advancements have been made in recent years in providing convergence guarantees for standard learning methods in control problems. (Fazel et al., 2018) proved that policy gradient based learning converges asymptotically to the optimal policy for the Linear-Quadratic Regulator (LQR) problem. (Zhang et al., 2019) extended this result to the ${\cal H}_{2}/{\cal H}_{\infty}$ control problem. Recently, (Molybog & Lavaei, 2020) proved asymptotic convergence of a gradient based meta-learner for the LQR problem. All of these works provide asymptotic convergence guarantees. Recently, there has also been considerable interest in establishing online learning guarantees in standard control settings. (Dean et al., 2018) provide an algorithm for the Linear-Quadratic control problem with unknown dynamics that achieves a regret of $O(T^{3/4})$. Just recently, (Cohen et al., 2019) improved on this result by providing an algorithm that achieves a regret of $O(T^{1/2})$ for the same problem. (Agarwal et al., 2019a) consider the control problem for a general convex cost and a linear dynamic system with additive adversarial disturbance. They provide an online learning algorithm that achieves an $O(\sqrt{T})$ regret for the cost with respect to the best linear control policy. Agarwal et al. also showed in a subsequent work (Agarwal et al., 2019b) that a poly-logarithmic regret is achievable for the same setting when the transition dynamics is known. We emphasize that the regret analysis we provide is the first of its kind for an online meta-learning receding horizon control algorithm.
In contrast to the recent splendid work on online learning in control settings, our work considers the online meta-learning setting for a control problem with a general convex cost function for an episode and control input constraints. The key difference from similar works is that the regret analysis we provide is with respect to the optimal policy. 1.2 Our Contribution In this work we propose a model-based meta-learning receding horizon control algorithm for a general control setting and provide guarantees for its online performance. The key novelty of the algorithm we propose is how the control input is designed so that it is persistently exciting. We show that the proposed algorithm achieves (i) a regret for the controller cost that is $\sim O(T^{3/4})$ for an individual episode of duration $T$ with respect to the best policy that satisfies the control input constraints, (ii) an average regret for the controller cost that varies as $\sim O((1+1/\sqrt{N})T^{3/4})$ with the number of episodes $N$, (iii) a regret for constraint violation that is $\sim O(T^{3/4})$ for an episode of duration $T$ with respect to the best policy that satisfies the control input constraints, and (iv) an average regret for constraint violation that varies as $\sim O((1+1/\sqrt{N})T^{3/4})$ with the number of episodes $N$. Hence we show that the worst-case regret for the learning within an episode continuously improves with experience of more episodes. In section 2 we outline the learning setting. In section 3 we introduce and briefly discuss the online meta-learning control algorithm. In section 4 we discuss the inner learner and provide an upper bound for the cumulative error in the model estimation for the inner learner during an episode. In section 5 we discuss the outer learner and provide an upper bound for the average of the cumulative error in model estimation across the episodes.
And finally in section 6 we characterize the regret for the controller’s cost and cumulative constraint violation within an episode and the respective averages across the episodes. 2 Problem Setup 2.1 Episodes The learning setting comprises a sequence of episodes of duration $T$, from $1$ to $N$, and the system to be controlled in each episode is an unknown linear deterministic system. We denote the matrices of the system in episode $i$ by $A^{i}\in\mathbb{R}^{n\times n}$ and $B^{i}\in\mathbb{R}^{n\times m}$. Let $\theta^{i}=[A^{i},B^{i}]\in\Theta$, a known compact set, with $\lVert\theta\rVert_{F}\leq S,\ \forall\theta\in\Theta$. We denote the state of the system and the control input to the system at time $t$ in episode $i$ by $x^{i}_{t}\in\mathbb{R}^{n}$ and $u^{i}_{t}\in\mathbb{R}^{m}$. The initial condition is $x^{i}_{0}=x_{s}\ \forall\ i$. The evolution of the state of the system is thus given by $$x^{i}_{t+1}=A^{i}x^{i}_{t}+B^{i}u^{i}_{t}.$$ (1) We denote a general controller by $\mathcal{H}$. The control cost at time step $t$ is a function of the state $x_{t}$ and the control input $u_{t}$ generated by the controller and is denoted by $c_{t}(x_{t},u_{t})$. Hence the overall cost for the controller $\mathcal{H}$ in episode $i$ is given by $$\mathcal{C}_{i}(\mathcal{H})=\sum_{t=0}^{T}c_{t}(x^{i}_{t},u^{i}_{t}).$$ (2) Additionally, it is required that the control input be constrained within a bounded convex polytope given by $$u\in\mathcal{U},\ \forall\ i\ \text{and}\ t,\ \mathcal{U}=\{u|F_{u}u\leq b_{u}\}.$$ (3) Finally, the observed signal is the state plus an additive noise $\epsilon$: $$y^{i}_{t}=x^{i}_{t}+\epsilon^{i}_{t}.$$ The assumptions used throughout this paper are as follows: Assumption 1 The spectral radius $\rho(A)<1\ \forall\theta\in\Theta$ and the system in Eq. (1) is controllable.
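Assumption 1 can be checked numerically for any candidate $\theta=[A,B]$; the sketch below uses an illustrative system (not from the paper) and verifies the spectral-radius and Kalman-rank conditions before rolling out the dynamics (1).

```python
import numpy as np

# Illustrative theta = [A, B] (hypothetical, not from the paper)
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# Assumption 1, part 1: spectral radius rho(A) < 1
rho = max(abs(np.linalg.eigvals(A)))
assert rho < 1

# Assumption 1, part 2: rank [B, AB, ..., A^{n-1}B] = n (controllability)
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
assert np.linalg.matrix_rank(ctrb) == n

# Rollout of the dynamics (1): x_{t+1} = A x_t + B u_t
x = np.array([1.0, -1.0])
for t in range(50):
    u = np.array([0.1])          # some fixed input, for illustration
    x = A @ x + B @ u
print("rho(A) =", round(rho, 3), " state after 50 steps:", np.round(x, 3))
```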
The assumptions on the spectral radius (or the assumption that there is additional knowledge of a feedback rule to stabilize the system) and controllability are standard in online learning and control problems (Abbasi-Yadkori & Szepesvari, 2011; Dean et al., 2018; Cohen et al., 2019; Li et al., 2019; Simchowitz et al., 2020). Assumption 2 (i) for the stochastic process given by $\{\{z_{1},\epsilon_{0}\},\{z_{2},\epsilon_{1}\},...\}$, where $z_{t}=[x^{\top}_{t}u^{\top}_{t}]^{\top}$, there exists a filtration $\mathcal{F}_{t}$ s.t. $z_{t+1},\epsilon_{t}$ are $\mathcal{F}_{t}$-measurable; (ii) the noise $\epsilon_{t}$ is bounded, i.e., $\lVert\epsilon_{t}\rVert_{2}\leq\epsilon_{c}$; (iii) each component of $\epsilon_{t}$ is a real-valued martingale difference w.r.t. the filtration $\mathcal{F}_{t}$ and conditionally $R$-sub-Gaussian. The filtration assumption was used in (Abbasi-Yadkori & Szepesvari, 2011; Cohen et al., 2019). The assumptions that the noise is sub-Gaussian and a martingale difference are standard in prior works (Abbasi-Yadkori & Szepesvari, 2011; Dean et al., 2018; Cohen et al., 2019). Definition 1 A sequence of stage cost functions $\{c_{t}\}$ is said to be globally exponentially controllable to zero with respect to a function $\sigma:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}$ if there exists a pair $(M_{\lambda},\lambda_{e})\in\mathbb{R}_{>0}\times\mathbb{R}_{>0}$ such that for any $t$ and $x_{t}$ there exists an infinite length control input sequence $u_{t:\infty}=\{u_{t},u_{t+1},....\}$ such that $$c_{k}(x_{k},u_{k})\leq M_{\lambda}e^{-\lambda_{e}(k-t)}\sigma(x_{t}),\ \forall k\geq t.$$ Assumption 3 The function $c_{t}$ is continuous and locally Lipschitz for all $t$.
The sequence of functions given by the $c_{t}$s is globally exponentially controllable w.r.t. $\sigma:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}$ with the pair of constants $(M_{\lambda},\lambda_{e})$, and there exists a constant $\alpha>0$ such that $c_{t}(x,u)\geq\alpha\sigma(x)$. The cost functions that satisfy Assumption 3 include quadratic functions of the type $x^{\top}Qx+u^{\top}Ru$, where $Q>0$ and $R\geq 0$ (see Corollary 4, (Grimm et al., 2005)). Hence the class of cost functions $c_{t}$ that satisfy Assumption 3 includes the loss function used in the LQR case. The assumption that the cost functions are locally Lipschitz is also a standard assumption (Li et al., 2019; Simchowitz et al., 2020). 2.2 Regret We define the regret for the controller’s cost in a particular episode as the difference between the cost $\mathcal{C}_{i}(\mathcal{H})$ and the overall cost for the best policy that satisfies the control input constraints. Thus, the average regret for $N$ episodes is given by $$\frac{1}{N}\sum_{i=1}^{N}\left[\mathcal{C}_{i}(\mathcal{H})-\mathcal{C}^{*}_{i}\right],\quad\text{where}\quad\mathcal{C}^{*}_{i}=\min\sum_{j=1}^{T-1}[c_{j}(.,.)],\quad\text{s.t. Eq. (1) and Eq. (3) are satisfied}\ \forall\ t,\ x^{i}_{0}=x_{s}.$$ (4) Here, the violation of the constraint incurs an additional cost that is proportional to $$\mathcal{V}_{i}=\sum_{t=0}^{T-1}\left(\sum_{k}\{F_{u}u^{i}_{t}-b_{u}\}_{k,+}\right),$$ (5) where $\{.\}_{k}$ denotes the $k$th component of a vector and the subscript $\{.\}_{+}$ is shorthand for $\max\{.,0\}$. This is also the regret for constraint violation with respect to the best policy that satisfies the control input constraints.
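For a concrete box constraint written in the form $F_{u}u\leq b_{u}$ of Eq. (3), the episode cost (2) and the cumulative violation (5) can be computed directly; the quadratic stage cost below is an illustrative choice, not one prescribed by the paper.

```python
import numpy as np

# Box constraint |u_i| <= 1 written as F_u u <= b_u (Eq. (3))
m = 2
F_u = np.vstack([np.eye(m), -np.eye(m)])
b_u = np.ones(2 * m)

def episode_cost(xs, us, Q=None, R=None):
    """Controller cost (2), with an illustrative quadratic stage cost."""
    Q = np.eye(xs.shape[1]) if Q is None else Q
    R = np.eye(us.shape[1]) if R is None else R
    return sum(x @ Q @ x + u @ R @ u for x, u in zip(xs, us))

def constraint_violation(us):
    """Cumulative violation (5): sum of positive parts of F_u u - b_u."""
    return sum(np.maximum(F_u @ u - b_u, 0.0).sum() for u in us)

# Three example inputs; the last two step outside the box
us = np.array([[0.5, 0.5], [1.3, -0.2], [-1.1, 2.0]])
xs = np.zeros((3, 2))
print("cost:", episode_cost(xs, us), " violation:", constraint_violation(us))
```

Averaging these two quantities over $N$ episodes yields exactly the measures in (4) and (6).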
Thus the average regret for constraint violation for $N$ episodes is given by $$\frac{1}{N}\sum_{i=1}^{N}\mathcal{V}_{i}.$$ (6) The objective is to design a suitable learning controller such that the average regret for both the controller’s cost and constraint violation are minimized. 3 Structure of the Meta-Learning Control Algorithm In this section we propose a model-based meta-learning receding horizon control algorithm for the learning setting described above. The overall meta-learning control architecture is shown in Fig. 1. The overall controller comprises an outer learner and an inner learner. The outer learner learns a general model parameter by continually adapting it following new episodes of experience. The inner learner learns an estimate of the parameter of the model of the system during an episode by continually adapting the suggestion by the outer learner as more observations of the state transitions are made. The outer learner learns the general model parameter such that the learning within an episode continuously improves with experience of more episodes. We denote the outer learner by $\mathcal{A}_{s}$ and the inner learner by $\mathcal{A}_{f}$. We denote the output of $\mathcal{A}_{s}$ in episode $i$ by $\hat{\phi}_{i}$ and the estimate of the model parameter for time $t$ in episode $i$ by $\hat{\theta}^{i}_{t}$. At time $t$ the controller computes a control input $u^{i}_{t}$ by minimizing the look-ahead cost for horizon $M$ plus a terminal cost function $d$ for the dynamics based on the model estimate. The algorithm updates the model parameter estimate at certain intervals. Denote the index of an interval by $j$. Its start and end times are denoted by $t_{j}^{s}$ and $t_{j}$, respectively. At the end of interval $j$, the inner learner computes a set of estimates, $\hat{\Theta}^{i}_{j}$, of the system model parameter $\theta$, in which the system model parameter is guaranteed to lie with high probability.
An estimate of the system model $\hat{\theta}^{i}_{j}$ is selected from the set $\hat{\Theta}^{i}_{j}$ at the beginning of the next interval and is held constant within the interval, i.e., $\hat{\theta}^{i}_{t}=\hat{\theta}^{i}_{j},\ \forall\ t_{j}<t\leq t_{j+1}$. The algorithm also makes an estimate of the initial state of the interval. The estimated initial state for interval $j+1$ is denoted by $\hat{x}_{j+1}$. Let the duration of the first interval be denoted by $H$. The duration of the $j$th interval, $H_{j}$, is defined as $$H_{j}=2^{j-1}H.$$ (7) Given the model estimate $\hat{\theta}^{i}_{j}=[\hat{A}^{i}_{j},\hat{B}^{i}_{j}]$ and the state estimate $\hat{x}_{j+1}$, the nominal dynamics for interval $j+1$ is defined as $$\overline{x}_{t+1}=\hat{A}^{i}_{j}\overline{x}_{t}+\hat{B}^{i}_{j}u_{t},\ \overline{x}_{t^{s}_{j+1}}=\hat{x}_{j+1},\ t_{j}<t\leq t_{j+1},$$ (8) where $\hat{x}_{j+1}$ is the estimated initial state for the interval $j+1$. The terminal cost function $d:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}$ is included to stabilize the closed loop system (Grimm et al., 2005). The function $d(.)$ satisfies (i) $d(.)\geq\underline{\alpha}_{d}\sigma(.)$, where $\underline{\alpha}_{d}$ is a positive constant, (ii) $\Gamma d(x)\leq\inf_{u}c_{t}(x,u)\ \forall\ t$, (iii) there exists a $\tilde{u}_{t}\in\mathbb{R}^{m}$ such that $c_{t}(x_{t},\tilde{u}_{t})\leq\overline{\alpha}_{c}\sigma(x_{t})$, where $\overline{\alpha}_{c}$ is a positive constant, and $$d(x_{t+1})-d(x_{t})\leq 0,\ \text{when}\ u_{t}=\tilde{u}_{t}.$$ (9) The factor $\Gamma\in\mathbb{R}_{\geq 1}$ is a constant and is set such that $\Gamma\geq\max\{1,\frac{\overline{\alpha}\overline{\alpha}_{c}}{\alpha\underline{\alpha}_{d}}\}$, where $\overline{\alpha}=M_{\lambda}/(1-e^{-\lambda_{e}})$.
Conditions (i) and (iii) are required to establish the existence of a Lyapunov-like function for the closed loop system (see Theorem 1, (Grimm et al., 2005)), and condition (ii) along with global exponential controllability is sufficient to guarantee that the Lyapunov-like function mentioned above is upper bounded by a constant times $\sigma(.)$. Together these conditions are required to establish a key lemma (Corollary 3, (Grimm et al., 2005)) which we present later. We assume such a terminal cost function is computable for the set $\Theta$. We denote the control sequence computed by the control policy $\mathcal{H}$ at time $t$ for the horizon $M$ by $U_{t}=\{u_{t,t},u_{t,t+1},...,u_{t,t+M}\}$. The algorithm computes the control input for the time step $t$, $t_{j}<t\leq t_{j+1}$, by minimizing the look-ahead cost over the horizon $M$ for the nominal dynamics: $$\min_{U_{t}}\sum_{k=t}^{t+M-1}c_{t}(\overline{x}_{t,k},u_{t,k})+\Gamma d(\overline{x}_{t,M}),\quad\text{s.t.}\ \overline{x}_{t,k+1}=\hat{A}^{i}_{j}\overline{x}_{t,k}+\hat{B}^{i}_{j}u_{t,k},\ u_{t,k}\in\mathcal{U},\ \overline{x}_{t,t}=\overline{x}_{t}.$$ (10) The control input $u^{i}_{t}$ is given by $u^{i}_{t}=u_{t,t}$, and we denote it as $\text{MPC}_{t}(\overline{x}_{t},\hat{\theta}_{t})$. The final control input for time $t$ is computed by modifying $u^{i}_{t}$ to ensure that the control sequence has persistent excitation, which is needed to ensure that the set $\hat{\Theta}$ is improved after a new interval is completed. Let the final control input be $\overline{u}^{i}_{t}$ and the perturbation applied to $u^{i}_{t}$ be $\delta u^{i}_{t}$; then $$\overline{u}^{i}_{t}=u^{i}_{t}+\delta u^{i}_{t}.$$ (11) Note that this will result in a violation of the control input constraint.
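A minimal sketch of one receding-horizon step in the spirit of (10), under simplifying assumptions that are not the paper's: a quadratic stage cost, a quadratic terminal weight standing in for $\Gamma d(\cdot)$, and the input constraints omitted, so that the finite-horizon problem can be solved in closed form by a backward Riccati recursion.

```python
import numpy as np

def mpc_step(A, B, x0, Q, R, Qf, M):
    """One receding-horizon step in the spirit of (10): minimize the M-step
    look-ahead quadratic cost plus a terminal cost, return the first input
    u_{t,t}.  (The constraint u in U is omitted in this sketch.)"""
    P = Qf                                   # terminal weight, stands in for Gamma*d
    Ks = []
    for _ in range(M):                       # backward Riccati recursion
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
    return -Ks[-1] @ x0                      # gain for the first step of the horizon

# Illustrative (hypothetical) nominal model and weights
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = 0.1 * np.eye(1); Qf = 10 * np.eye(2)

x = np.array([1.0, 0.0])
for t in range(100):                         # receding horizon loop
    u = mpc_step(A, B, x, Q, R, Qf, M=15)    # re-solve (10) at every step
    x = A @ x + B @ u                        # apply only u_{t,t}
print("state after 100 steps:", np.round(x, 4))
```

Only the first input of each optimal sequence is applied, and the problem is re-solved at the next step, which is exactly the receding-horizon structure of the algorithm.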
Let $z^{i}_{k}:=[(x^{i}_{k})^{\top}\ (\overline{u}^{i}_{k})^{\top}]^{\top}$, and $$V_{j}=\sum_{k=1}^{t_{j}}z^{i}_{k}(z^{i}_{k})^{\top}.$$ Definition 2 We say that the control inputs are persistently exciting provided $\exists$ a constant $\gamma>0$ such that $$V_{j}\geq c_{j}=\gamma c_{p,j}t_{j}>0,$$ where $c_{p,j}$ is a positive constant for the interval $j$, and $X\geq c>0\ \iff\ v^{\top}Xv\geq c,\ \forall\lVert v\rVert_{2}=1$. Let $W_{t}:=\left[(\overline{u}^{i}_{t})^{\top},...,(\overline{u}^{i}_{t+n})^{\top}\right]^{\top}$, $e_{t}=u^{i}_{t}/\lVert u^{i}_{t}\rVert_{2}$. The perturbation $\delta u^{i}_{t}$ is either $$\delta u^{i}_{t}=\pm\sqrt{c_{p,j}}e^{\perp}_{t}\quad\text{or}\quad\delta u^{i}_{t}=\pm O(\sqrt{c_{p,j}})e_{t},\quad e^{\perp}_{t}=u^{\perp}_{t}/\lVert u^{\perp}_{t}\rVert_{2},\quad(W^{\perp}_{t-n})^{\top}W_{t-n-k}=0,\ \forall\ 1\leq k\leq q-1,\ \text{where}\ W^{\perp}_{t-n}=[(u^{\perp}_{t-n})^{\top},...,(u^{\perp}_{t})^{\top}]^{\top},$$ (12) and $q$ is a design constant to be specified later. Please see the proof for the exact definition of $\delta u^{i}_{t}$. The key idea here is to perturb $u^{i}_{t}$ just enough so that the persistence of excitation is satisfied while the perturbed control input does not cause the controller’s regret and the cumulative constraint violation to grow more than sub-linearly w.r.t. the duration $T$ of the episode. We will show later that the persistence of excitation condition is satisfied by this design.
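The effect of the dithering in (11)–(12) on Definition 2 can be illustrated numerically: without a perturbation, $z_{t}=[x_{t};u_{t}]$ under a fixed feedback is confined to a subspace and $\lambda_{\min}(V_{j})$ stays at zero, while a $\pm\sqrt{c_{p,j}}$ dither keeps $\lambda_{\min}(V_{j})/t_{j}$ bounded away from zero. The system, the feedback, and the constants below are illustrative, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])

def min_excitation(perturb, T=500, c_p=0.05):
    """lambda_min(V_T)/T for z_t = [x_t; u_t] under a nominal feedback,
    optionally dithered by +-sqrt(c_p) (a crude stand-in for Eq. (12))."""
    x = np.array([1.0, 1.0])
    V = np.zeros((3, 3))
    for t in range(T):
        u = np.array([-0.5 * x[1]])                  # nominal input
        if perturb:
            u = u + rng.choice([-1.0, 1.0]) * np.sqrt(c_p)
        z = np.concatenate([x, u])
        V += np.outer(z, z)                          # accumulate V_j
        x = A @ x + B @ u
    return np.linalg.eigvalsh(V)[0] / T              # smallest eigenvalue / T

print("lambda_min/T without dither:", min_excitation(False))
print("lambda_min/T with dither:   ", min_excitation(True))
```

Without the dither, $u_{t}$ is a deterministic function of $x_{t}$, so $V_{j}$ is singular and Definition 2 fails for every $\gamma>0$.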
3.1 Selecting $\hat{\theta}^{i}_{j}$ and $\hat{x}_{j+1}$ The algorithm selects an estimate of the model parameter $\hat{\theta}^{i}_{j}\in\hat{\Theta}^{i}_{j}$ and the approximation $\hat{x}_{j+1}$ of the state $x^{i}_{t^{s}_{j+1}}$ by solving the following optimization at the end of interval $j$: $$\displaystyle\{\hat{x}_{j+1},\hat{\theta}_{j}\}=\arg\min_{\hat{x},\hat{\theta}}\sum_{k=t^{s}_{j+1}}^{t_{j+1}}c_{k}(\overline{x}_{k},\text{MPC}_{k}(\overline{x}_{k},\hat{\theta})),$$ $$\displaystyle\overline{x}_{k+1}=\hat{A}\overline{x}_{k}+\hat{B}\text{MPC}_{k}(\overline{x}_{k},\hat{\theta}),\ \overline{x}_{t^{s}_{j+1}}=\hat{x},$$ $$\displaystyle\hat{x}\in\left\{x\ \bigg{|}\ \lVert x-y_{t^{s}_{j+1}}\rVert_{2}\leq\epsilon_{c}\right\},\ \hat{\theta}\in\hat{\Theta}^{i}_{j}.$$ (13) The objective of this optimization is simply the cost over the duration of interval $j+1$ for a given $\hat{\theta}\in\hat{\Theta}^{i}_{j}$ and an initial state estimate $\hat{x}$. 4 Inner Learner $\mathcal{A}_{f}$ The inner learner updates its loss function at the end of every interval $j$. Denote the inner learner's updated loss function at the end of interval $j$ by $\mathcal{L}^{i}_{j}(\theta)$.
Then $$\mathcal{L}^{i}_{j}(\theta)=\sum_{k=1}^{t_{j}}l_{\theta,k},\quad l_{\theta,k}=\lVert y^{i}_{k+1}-\theta[(y^{i}_{k})^{\top},(\overline{u}^{i}_{k})^{\top}]^{\top}\rVert^{2}_{2}.$$ (14) To compute the set $\hat{\Theta}^{i}_{j}$ at the end of the iteration $j>1$, the inner learner computes $\hat{\theta}_{l,j}$, the $\lambda$-regularized least-squares estimate, by minimizing its updated loss function plus a regularizer $\mathcal{R}_{e}(\theta,\hat{\phi}_{i})$: $$\displaystyle\hat{\theta}_{l,j}=\arg\min_{\theta}\sum_{k=1}^{t_{j}}l_{\theta,k}+\mathcal{R}_{e}(\theta,\hat{\phi}_{i}),$$ $$\displaystyle\mathcal{R}_{e}(\theta,\hat{\phi}_{i})=\lambda\lVert\theta-\hat{\phi}_{i}\rVert^{2}_{F}.$$ (15) Using the least-squares estimate $\hat{\theta}_{l,j}$, the inner learner computes a set $\hat{\Theta}^{i}_{j}$ within which the system model parameter is guaranteed to lie with probability greater than $1-\tilde{\delta}$: $$\displaystyle\hat{\Theta}_{j}=\left\{\theta\ \bigg{|}\ \lVert\theta-\hat{\theta}_{l,j}\rVert_{F}\leq\beta_{j}(\tilde{\delta})\right\}\cap\Theta,\ \text{where}$$ $$\displaystyle\beta_{j}(\tilde{\delta})\leq\frac{1}{\sqrt{\gamma c_{p,j}t_{j}}}\tilde{R}_{j}+\lambda S/\gamma_{y},$$ (16) $$\displaystyle\tilde{R}_{j}=\hat{R}_{j}\sqrt{4\log{\left(\frac{\left(\sqrt{2}\right)^{n+m}}{\tilde{\delta}}\right)}},$$ $\hat{R}_{j}=n+n^{2}\lVert\hat{\phi}_{i}\rVert_{F}+n^{2}\lVert\theta^{*}_{j}-\hat{\phi}_{i}\rVert_{F}$, $\gamma_{y}$ is a constant to be defined later, and $\theta^{*}_{j}=\arg\min_{\theta}\sum_{k=1}^{t_{j}}l_{\theta,k}$. The cumulative error in the estimation of the model parameters for the duration of an episode is given by $$E_{\theta,T}=\sum_{t=1}^{T-1}\lVert\hat{\theta}_{t}-\theta\rVert_{2}.$$ (17) In the next theorem we provide a bound on this cumulative error under persistence of excitation.
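The $\lambda$-regularized least-squares estimate in Eq. (15) has a closed form: stacking regressors into $Z$ and targets into $Y$, the minimizer of $\sum_{k}\lVert y_{k+1}-\theta z_{k}\rVert_{2}^{2}+\lambda\lVert\theta-\hat{\phi}\rVert_{F}^{2}$ is $\theta^{\top}=(Z^{\top}Z+\lambda I)^{-1}(Z^{\top}Y+\lambda\hat{\phi}^{\top})$. A minimal sketch with synthetic data and an assumed prior $\hat{\phi}$:

```python
import numpy as np

def regularized_ls(Z, Y, phi, lam):
    """theta = argmin_theta sum_k ||y_k - theta z_k||^2 + lam * ||theta - phi||_F^2.

    Z: (t, d) stacked regressors z_k; Y: (t, n) stacked targets y_{k+1};
    phi: (n, d) prior supplied by the outer learner; lam: regularization weight.
    Closed form: theta^T = (Z^T Z + lam I)^{-1} (Z^T Y + lam phi^T).
    """
    d = Z.shape[1]
    theta_T = np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ Y + lam * phi.T)
    return theta_T.T

rng = np.random.default_rng(1)
theta_true = rng.normal(size=(2, 3))
Z = rng.normal(size=(500, 3))
Y = Z @ theta_true.T + 0.01 * rng.normal(size=(500, 2))  # y_{k+1} = theta z_k + noise
phi = np.zeros((2, 3))                                   # assumed (uninformative) prior
theta_hat = regularized_ls(Z, Y, phi, lam=1e-3)
print(np.linalg.norm(theta_hat - theta_true))  # small estimation error
```

As the prior $\hat{\phi}_{i}$ from the outer learner gets closer to $\theta$, the regularizer pulls the small-sample estimate toward the truth, which is the mechanism the meta-update exploits.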
Let $$\displaystyle j^{*}=\inf_{j\in\mathbb{Z}_{\geq 1}}j$$ $$\displaystyle\text{s.t.}\ jn_{c}\geq\max\left\{2n_{c},\tilde{n}_{c}\left(\log{\tilde{\tilde{n}}_{c}\left(\log{2T}/\delta\right)}\right)^{2}\right\},$$ $$\displaystyle\quad\quad n_{c}=(n+1)m,\ \tilde{n}_{c}=\left(16n^{2}R^{2}/\gamma\right)^{2},$$ $$\displaystyle\quad\quad\tilde{\tilde{n}}_{c}=\left(\sqrt{2}\right)^{n+m+2}.$$ (18) Theorem 1 Consider the $\lambda$-regularized least-squares estimation of Section 4 and let $\hat{\theta}_{j}$ be selected according to Eq. (13). Suppose the persistence of excitation (Definition 2) holds, $\lambda=T^{-1}$, $\tilde{\delta}=\delta/(2\log{2T})$, $c_{p,j}=H^{-1/2}_{j}$, $H=j^{*}n_{c}+n$. Then, with probability greater than $1-O(\delta)$, $$E_{\theta,T}\leq O\left(\left(1+\lVert\hat{\phi}_{i}\rVert_{F}+\lVert\theta^{*,i}-\hat{\phi}_{i}\rVert_{F}\right)T^{3/4}\right),$$ (19) where $\theta^{*,i}=\arg\max_{j}\lVert\theta^{*}_{j}-\hat{\phi}_{i}\rVert_{F}$. Please see the Appendix for the proof. 5 Outer Learner $\mathcal{A}_{s}$ Here we discuss the outer learner and formally establish how the meta-updates it provides continually improve, as more episodes are experienced, the average of the per-episode cumulative model-parameter estimation error, where the average is taken across the episodes experienced so far. We set the loss function for the outer learner for the $i$th episode as $\lVert\theta^{*,i}-\hat{\phi}\rVert_{F}+\lVert\hat{\phi}\rVert_{F}$. We denote by $\mathcal{L}^{i+1}_{s}$ the overall loss function for the outer learner at the end of episode $i$, which is the sum of the loss functions for the individual episodes up to episode $i$.
Thus $$\mathcal{L}^{i+1}_{s}=\sum_{k=1}^{i}\left(\lVert\theta^{*,k}-\hat{\phi}\rVert_{F}+\lVert\hat{\phi}\rVert_{F}\right).$$ (20) In the next theorem we establish the best rate at which meta-learning can improve, with more experience, the average of the per-episode cumulative error in the model parameter estimation. We denote this average by $\overline{E}_{\theta,N}$: $$\overline{E}_{\theta,N}=\frac{1}{N}\sum_{i=1}^{N}E_{\theta,T}.$$ (21) Theorem 2 Consider the setting in Theorem 1 and the meta-learner. Then the best average regret with probability greater than $1-O(N\delta)$ is given by $$\overline{E}_{\theta,N}\leq O\left(\left(1+\frac{1}{\sqrt{N}}\right)T^{3/4}\right).$$ (22) Please see the Appendix for the proof. The scaling has the factor $T^{3/4}$ because, even after the meta-updates converge, each episode can still incur a regret of $T^{3/4}$. Most importantly, the scaling shows that the best average cumulative error $\overline{E}_{\theta,N}$ reduces at the rate of $1/\sqrt{N}$ as $N$ increases, which is a result of the meta-update. This suggests that the worst regret for learning within an episode can be continuously improved with experience of more episodes. 6 Controller Performance The total cost and the cumulative constraint violation for the controller for episode $i$ are given by $$\displaystyle\mathcal{L}^{i}_{c,T}=\sum_{t=1}^{T}c_{t}(x^{i}_{t},\overline{u}^{i}_{t}),\quad\mathcal{V}^{i}_{c,T}=\mathcal{V}(\overline{u}^{i}_{1:T}),\ \text{where}$$ $$\displaystyle x^{i}_{1:T}=\{x^{i}_{1},x^{i}_{2},...,x^{i}_{T}\},\ \overline{u}^{i}_{1:T}=\{\overline{u}^{i}_{1},\overline{u}^{i}_{2},...,\overline{u}^{i}_{T}\}.$$ (23) Denote the cost and the cumulative constraint violation for the best policy that satisfies the control input constraints for the duration of an episode by $\mathcal{L}_{b,T}$ and $\mathcal{V}_{b,T}$, respectively.
Then, $$\displaystyle\mathcal{L}_{b,T}=\sum_{t=1}^{T}c_{t}(x^{b}_{t},u^{b}_{t}),\ \mathcal{V}_{b,T}=0.$$ (24) First, we provide a Lemma that bounds the regret and the cumulative constraint violation for an episode. Lemma 1 Consider the system given in Eq. (1). Suppose Assumptions 1, 2, 3 are valid, $M>\overline{\alpha}^{2}/\alpha+1$, and the control input is given by Eq. (11). Then, under the event that $\theta\in\hat{\Theta}_{j},\ \forall\ j$, $$\displaystyle\mathcal{L}^{i}_{c,T}-\mathcal{L}_{b,T}\leq O(E_{\theta,T})+O(\log(T))+\sum_{j}O(\sqrt{c_{p,j}}H_{j}),$$ $$\displaystyle\mathcal{V}^{i}_{c,T}-\mathcal{V}_{b,T}\leq\sum_{j}O(\sqrt{c_{p,j}}H_{j}).$$ In the next theorem we combine the result from Theorem 1 and Lemma 1 to provide an upper bound on the best average regret that is achievable for the controller's cost and cumulative constraint violation over an episode. Theorem 3 Consider the setting given in Lemma 1, Theorem 1, and the meta-learner. Then the best average regret with probability greater than $1-O(N\delta)$ is given by $$\displaystyle\sum_{i=1}^{N}\frac{\mathcal{L}^{i}_{c,T}-\mathcal{L}_{b,T}}{N}\leq O\left(\left(1+\frac{1}{\sqrt{N}}\right)T^{3/4}\right),$$ $$\displaystyle\sum_{i=1}^{N}\frac{\mathcal{V}^{i}_{c,T}}{N}\leq O\left(\left(1+\frac{1}{\sqrt{N}}\right)T^{3/4}\right).$$ Please see the Appendix for the proof. 7 Conclusion In this work we proposed a model-based meta-learning receding horizon control algorithm for an iterative control setting, where in each iteration the system to be controlled is different and unknown and the control objective is a general convex cost function with general control input constraints. We proved that the proposed algorithm achieves $O(T^{3/4})$ regret for the controller cost for an episode of duration $T$ with respect to the optimum, and an average of this regret across the iterations that varies as $O((1+1/\sqrt{N})T^{3/4})$, with $N$ being the number of iterations.
We also proved that the cumulative constraint violation for an episode of duration $T$ is $O(T^{3/4})$, with an average across $N$ iterations that likewise varies as $O((1+1/\sqrt{N})T^{3/4})$. Hence, we established that the meta-learning control algorithm continually improves its performance within an episode with experience of more episodes. References Abbasi-Yadkori & Szepesvari (2011) Abbasi-Yadkori, Y. and Szepesvari, C. Regret bounds for the adaptive control of linear quadratic systems. In Proceedings of the 24th Annual Conference on Learning Theory, pp. 1–26, 2011. Abernethy et al. (2009) Abernethy, J., Agarwal, A., and Bartlett, P. L. A stochastic view of optimal regret through minimax duality. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009. Agarwal et al. (2019a) Agarwal, N., Bullins, B., Hazan, E., Kakade, S., and Singh, K. Online control with adversarial disturbances. In International Conference on Machine Learning, pp. 111–119, 2019a. Agarwal et al. (2019b) Agarwal, N., Hazan, E., and Singh, K. Logarithmic regret for online control. In Advances in Neural Information Processing Systems, pp. 10175–10184, 2019b. Andrychowicz et al. (2016) Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., Shillingford, B., and De Freitas, N. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pp. 3981–3989, 2016. Balcan et al. (2019) Balcan, M.-F., Khodak, M., and Talwalkar, A. Provable guarantees for gradient-based meta-learning. In International Conference on Machine Learning, 2019. Cohen et al. (2019) Cohen, A., Koren, T., and Mansour, Y. Learning linear-quadratic regulators efficiently with only $\sqrt{T}$ regret. In International Conference on Machine Learning, pp. 1300–1309, 2019. Dean et al. (2018) Dean, S., Mania, H., Matni, N., Recht, B., and Tu, S.
Regret bounds for robust adaptive control of the linear quadratic regulator. In Advances in Neural Information Processing Systems, pp. 4188–4197, 2018. Duan et al. (2016) Duan, Y., Schulman, J., Chen, X., Bartlett, P. L., Sutskever, I., and Abbeel, P. Rl${}^{2}$: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016. Fazel et al. (2018) Fazel, M., Ge, R., Kakade, S., and Mesbahi, M. Global convergence of policy gradient methods for the linear quadratic regulator. In International Conference on Machine Learning, pp. 1467–1476, 2018. Finn et al. (2017) Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pp. 1126–1135, 2017. Finn et al. (2019) Finn, C., Rajeswaran, A., Kakade, S., and Levine, S. Online meta-learning. In International Conference on Machine Learning, pp. 1920–1930, 2019. Flennerhag et al. (2019) Flennerhag, S., Rusu, A. A., Pascanu, R., Visin, F., Yin, H., and Hadsell, R. Meta-learning with warped gradient descent. In International Conference on Learning Representations, 2019. Green & Moore (1986) Green, M. and Moore, J. B. Persistence of excitation in linear systems. Systems & control letters, 7(5):351–360, 1986. Grimm et al. (2005) Grimm, G., Messina, M. J., Tuna, S. E., and Teel, A. R. Model predictive control: for want of a local control lyapunov function, all is not lost. IEEE Transactions on Automatic Control, 50(5):546–558, 2005. Hazan et al. (2006) Hazan, E., Kalai, A., Kale, S., and Agarwal, A. Logarithmic regret algorithms for online convex optimization. In International Conference on Computational Learning Theory, pp.  499–513. Springer, 2006. Jenatton et al. (2016) Jenatton, R., Huang, J., and Archambeau, C. Adaptive algorithms for online convex optimization with long-term constraints. In International Conference on Machine Learning, pp. 402–411, 2016. Klatte & Kummer (1985) Klatte, D. 
and Kummer, B. Stability properties of infima and optimal solutions of parametric optimization problems. In Nondifferentiable Optimization: Motivations and Applications, pp. 215–229. Springer, 1985. Li & Malik (2016) Li, K. and Malik, J. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016. Li et al. (2019) Li, Y., Chen, X., and Li, N. Online optimal control with linear dynamics and predictions: Algorithms and regret analysis. In Advances in Neural Information Processing Systems, pp. 14887–14899, 2019. Mishra et al. (2018) Mishra, N., Rohaninejad, M., Chen, X., and Abbeel, P. A simple neural attentive meta-learner. In International Conference on Learning Representations, 2018. Molybog & Lavaei (2020) Molybog, I. and Lavaei, J. Global convergence of MAML for LQR. arXiv preprint arXiv:2006.00453, 2020. Moore (1983) Moore, J. Persistence of excitation in extended least squares. IEEE Transactions on Automatic Control, 28(1):60–68, 1983. Naik & Mammone (1992) Naik, D. K. and Mammone, R. J. Meta-neural networks that learn by learning. In [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, volume 1, pp. 437–442. IEEE, 1992. Nichol & Schulman (2018) Nichol, A. and Schulman, J. Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2(3):4, 2018. Rajeswaran et al. (2019) Rajeswaran, A., Finn, C., Kakade, S. M., and Levine, S. Meta-learning with implicit gradients. In Advances in Neural Information Processing Systems, pp. 113–124, 2019. Ravi & Larochelle (2016) Ravi, S. and Larochelle, H. Optimization as a model for few-shot learning. 2016. Schmidhuber (1987) Schmidhuber, J. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-… hook. PhD thesis, Technische Universität München, 1987. Shalev-Shwartz & Kakade (2009) Shalev-Shwartz, S. and Kakade, S. M. Mind the duality gap: Logarithmic regret algorithms for online optimization.
In Advances in Neural Information Processing Systems, pp. 1457–1464, 2009. Shalev-Shwartz et al. (2011) Shalev-Shwartz, S. et al. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011. Simchowitz et al. (2020) Simchowitz, M., Singh, K., and Hazan, E. Improper learning for non-stochastic control. arXiv preprint arXiv:2001.09254, 2020. Thrun & Pratt (1998) Thrun, S. and Pratt, L. Learning to learn: Introduction and overview. In Learning to Learn, pp. 3–17. Springer, 1998. Wang et al. (2016) Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016. Yuan & Lamperski (2018) Yuan, J. and Lamperski, A. Online convex optimization for cumulative constraints. In Advances in Neural Information Processing Systems, pp. 6137–6146, 2018. Zhang et al. (2019) Zhang, K., Hu, B., and Basar, T. Policy optimization for $\mathcal{H}_{2}$ linear control with $\mathcal{H}_{\infty}$ robustness guarantee: Implicit regularization and global convergence. arXiv preprint arXiv:1910.09496, 2019. Zinkevich (2003) Zinkevich, M. Online convex programming and generalized infinitesimal gradient ascent. In International Conference on Machine Learning, pp. 928–936, 2003. Appendix Proof for Theorem 1 We start by stating Theorem 16 from (Abbasi-Yadkori & Szepesvari, 2011) as a Lemma. Lemma 2 Let $\mathcal{F}_{t}$ be a filtration. Let $m_{t}$ be an $\mathbb{R}^{d}$-valued stochastic process adapted to the filtration $\mathcal{F}_{t}$, and let $\eta_{t}\in\mathbb{R}$ be a real-valued martingale difference adapted to $\mathcal{F}_{t}$ that is conditionally sub-Gaussian with constant $R$.
Consider the martingale stochastic process $$S_{t}=\sum_{k=1}^{t}\eta_{k}m_{k-1}.$$ Consider the matrix-valued process $$\overline{V}_{t}=\tilde{V}+\tilde{V}_{t},\ \tilde{V}_{t}=\sum_{k=1}^{t}m_{k-1}m^{\top}_{k-1}.$$ Then, with probability $1-\delta$, $\delta>0$, we get that $$\forall\ t>0,\ \lVert S_{t}\rVert^{2}_{\overline{V}^{-1}_{t}}\leq 2R^{2}\log\left(\frac{\text{det}(\overline{V}_{t})^{1/2}\text{det}(\tilde{V})^{-1/2}}{\delta}\right).$$ For convenience, here we just assume $u_{t}$ to be the final control input and also drop the superscript $i$ for all the variables. From the state equation Eq. (1), we know that $$x_{t+1}=Ax_{t}+Bu_{t}.$$ (25) Let $X_{j}=[z_{1},z_{2},z_{3},...,z_{t_{j}}]^{\top}$, where $z_{t}=[x^{\top}_{t}u^{\top}_{t}]^{\top}$, and $Y_{j}=[x_{2},x_{3},....,x_{t_{j}+1}]^{\top}$. Then, it follows from Eq. (25) that $$Y_{j}=X_{j}\theta^{\top}.$$ (26) Let $X^{y}_{j}=[z^{y}_{1},z^{y}_{2},z^{y}_{3},...,z^{y}_{t_{j}}]^{\top}$, where $z^{y}_{t}=[y^{\top}_{t}u^{\top}_{t}]^{\top}$, and $Y^{y}_{j}=[y_{2},y_{3},....,y_{t_{j}+1}]^{\top}$. Let $\theta^{*}_{j}$ be given by $$\theta^{*}_{j}=\arg\min_{\theta\in\Theta}\sum_{k}l_{\theta,k},\ l_{\theta,k}=\lVert y_{k+1}-\theta z^{y}_{k}\rVert^{2}_{2}.$$ (27) Let $$\displaystyle\mathcal{E}^{y}_{j}=[\epsilon_{2},\epsilon_{3},...,\epsilon_{t_{j}+1}]^{\top},$$ $$\displaystyle\mathcal{E}_{j}=[[\epsilon^{\top}_{1}\ 0^{\top}_{m\times 1}]^{\top},[\epsilon^{\top}_{2}\ 0^{\top}_{m\times 1}]^{\top},...,[\epsilon^{\top}_{t_{j}}\ 0^{\top}_{m\times 1}]^{\top}]^{\top}.$$ Then, by definition, $$Y^{y}_{j}=Y_{j}+\mathcal{E}^{y}_{j},\ X^{y}_{j}=X_{j}+\mathcal{E}_{j}.$$ We denote these variables by $Y^{y}_{\tau},Y_{\tau},X^{y}_{\tau},\mathcal{E}_{\tau},\mathcal{E}^{y}_{\tau}$ for a general time index $\tau$ that is different from $t_{j}$. First, we show that $(X^{y}_{j})^{\top}X^{y}_{j}$ is invertible.
$$\displaystyle(X^{y}_{j})^{\top}X^{y}_{j}=(X_{j}+\mathcal{E}_{j})^{\top}(X_{j}+\mathcal{E}_{j})$$ $$\displaystyle(X^{y}_{j})^{\top}X^{y}_{j}=X^{\top}_{j}X_{j}+X^{\top}_{j}\mathcal{E}_{j}+\mathcal{E}^{\top}_{j}X_{j}+\mathcal{E}^{\top}_{j}\mathcal{E}_{j}.$$ For a matrix $W$, let $\sum_{l}\lVert V^{-1/2}W_{l}\rVert_{2}$, where $W_{l}$ are the columns of $W$, be denoted by $\lVert W\rVert_{V^{-1}}$. Consider an arbitrary unit vector $v\in\mathbb{R}^{n+m}$. Then, using the fact that $\lVert V^{-1/2}(.)\rVert_{2}\leq\lVert V^{-1/2}(.)\rVert_{F}\leq\lVert.\rVert_{V^{-1}}$, and applying Cauchy-Schwarz, we get that $$\displaystyle v^{\top}(X^{y}_{j})^{\top}X^{y}_{j}v\geq v^{\top}X^{\top}_{j}X_{j}v$$ $$\displaystyle+v^{\top}\left(X^{\top}_{j}X_{j}\right)^{1/2}\left(X^{\top}_{j}X_{j}\right)^{-1/2}\left(X^{\top}_{j}\mathcal{E}_{j}+\mathcal{E}^{\top}_{j}X_{j}\right)v,$$ $$\displaystyle\geq v^{\top}X^{\top}_{j}X_{j}v$$ $$\displaystyle-\lVert v^{\top}\left(X^{\top}_{j}X_{j}\right)^{1/2}\rVert_{2}\lVert\left(X^{\top}_{j}\mathcal{E}_{j}+\mathcal{E}^{\top}_{j}X_{j}\right)\rVert_{\left(X^{\top}_{j}X_{j}\right)^{-1}}.$$ (28) Let $V_{\tau}=\sum_{k=1}^{\tau}z_{k}z^{\top}_{k}=X^{\top}_{\tau}X_{\tau}$. Also, $V_{j}=\sum_{k=1}^{t_{j}}z_{k}z^{\top}_{k}=X^{\top}_{j}X_{j}$. In our case, $z_{k+1}$ is a vector-valued process adapted to $\mathcal{F}_{k}$, and any component of $\epsilon_{k}$ by definition is a martingale difference adapted to $\mathcal{F}_{k}$. Hence, Lemma 2 can be applied to each component of $\epsilon_{k}$ by recognizing that $m_{k-1}=z_{k}/\sqrt{2}$ and $\eta_{k}$ can be any of the components of $\sqrt{2}\epsilon_{k}$, setting $\tilde{V}=V_{\tau}/2$, $\tilde{V}_{\tau}=V_{\tau}/2$, and using $\tau$ in place of $t$.
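The self-normalized bound of Lemma 2 can be sanity-checked by Monte Carlo in the scalar case, taking $d=1$, $m_{k-1}=1$, $\eta_{k}\sim\mathcal{N}(0,1)$ (so $R=1$), and $\tilde{V}=\lambda$; the trial count and constants below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
trials, T, delta, lam = 2000, 500, 0.05, 1.0
failures = 0
for _ in range(trials):
    eta = rng.normal(size=T)           # R = 1 sub-Gaussian martingale differences
    S = np.cumsum(eta)                 # S_t = sum_k eta_k * m_{k-1}, with m = 1
    Vbar = lam + np.arange(1, T + 1)   # V̄_t = λ + sum_k m_{k-1}^2
    bound = 2 * np.log(np.sqrt(Vbar) / (np.sqrt(lam) * delta))
    if np.any(S ** 2 / Vbar > bound):  # anytime bound must hold for all t
        failures += 1
print(failures / trials)  # empirical failure rate, at most about delta
```

The bound is uniform over $t$, which is what allows it to be invoked simultaneously at $\tau=t_{1}$ and $\tau=t_{j}$ via a union bound in the argument that follows.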
Then, using the union bound after applying Lemma 2 to each component of $\epsilon_{k}$ and for each $\tau\in\{t_{1},t_{j}\}$, we get that with probability at least $1-2n\tilde{\delta}$, for all $\tau\in\{t_{1},t_{j}\}$, $$\lVert X^{\top}_{\tau}\mathcal{E}_{\tau}\rVert_{V^{-1}_{\tau}}\leq nR\sqrt{4\log{\left(\frac{\text{det}(V_{\tau})^{1/2}\text{det}(\tilde{V})^{-1/2}}{\tilde{\delta}}\right)}},$$ (29) where the additional factor $n$ follows from the fact that $\eta_{k}$ has at most $n$ non-zero components. Noting that $V_{\tau}=V_{j}$ when $\tau=t_{j}$, we get that with probability at least $1-n\tilde{\delta}$ $$\lVert X^{\top}_{j}\mathcal{E}_{j}\rVert_{V^{-1}_{j}}\leq nR\sqrt{4\log{\left(\frac{(\sqrt{2})^{n+m}}{\tilde{\delta}}\right)}}.$$ Then, by the triangle inequality, we get that with probability at least $1-n\tilde{\delta}$ $$\lVert X^{\top}_{j}\mathcal{E}_{j}+\mathcal{E}^{\top}_{j}X_{j}\rVert_{V^{-1}_{j}}\leq 2nR\sqrt{4\log{\left(\frac{(\sqrt{2})^{n+m}}{\tilde{\delta}}\right)}}.$$ (30) The persistence of excitation implies that $$v^{\top}X^{\top}_{j}X_{j}v\geq\gamma c_{p,j}t_{j}.$$ Then, using Eq. (30) in Eq. (28) we get that with probability at least $1-n\tilde{\delta}$ $$\displaystyle v^{\top}(X^{y}_{j})^{\top}X^{y}_{j}v\geq\sqrt{\gamma c_{p,j}t_{j}}\times$$ $$\displaystyle\left(\sqrt{\gamma c_{p,j}t_{j}}-2nR\sqrt{4\log{\left(\frac{(\sqrt{2})^{n+m}}{\tilde{\delta}}\right)}}\right).$$ (31) Similarly, for the initial interval, where $t_{1}=H,c_{p,1}=H^{-1/2}$, using Eq.
(29) and following a similar argument, we get that under the same event $$\displaystyle v^{\top}(X^{y}_{1})^{\top}X^{y}_{1}v\geq\sqrt{\gamma c_{p,1}t_{1}}\times$$ $$\displaystyle\left(\sqrt{\gamma}H^{1/4}-2nR\sqrt{4\log{\left(\frac{(\sqrt{2})^{n+m}}{\tilde{\delta}}\right)}}\right).$$ (32) Equation (18) and the fact that $H=j^{*}n_{c}$ imply $$\sqrt{\gamma}H^{1/4}-2nR\sqrt{4\log{\left(\frac{(\sqrt{2})^{n+m}}{\tilde{\delta}}\right)}}>0.$$ (33) And so for any interval $j$, we get that $$\sqrt{\gamma}H_{j}^{1/4}-2nR\sqrt{4\log{\left(\frac{(\sqrt{2})^{n+m}}{\tilde{\delta}}\right)}}>0.$$ (34) Hence, from Eq. (31) and Eq. (33), it follows that with probability at least $1-n\tilde{\delta}$ $$\displaystyle v^{\top}(X^{y}_{j})^{\top}X^{y}_{j}v>\gamma_{y}c_{p,j}t_{j},\ \forall\ v\ \text{s.t.}\ \lVert v\rVert_{2}=1,$$ $$\displaystyle\gamma_{y}=\gamma\left(1-\frac{2nR}{\sqrt{\gamma H^{1/2}}}\sqrt{4\log{\left(\frac{(\sqrt{2})^{n+m}}{\tilde{\delta}}\right)}}\right).$$ (35) That is, with probability at least $1-n\tilde{\delta}$, $(X^{y}_{j})^{\top}X^{y}_{j}$ is invertible. Consequently, the solution to Eq. (27) under such an event satisfies $$\displaystyle Y^{y}_{j}=X^{y}_{j}(\theta^{*}_{j})^{\top},$$ $$\displaystyle\text{where}\ (\theta^{*}_{j})^{\top}=\left((X^{y}_{j})^{\top}X^{y}_{j}\right)^{-1}(X^{y}_{j})^{\top}Y^{y}_{j}.$$ (36) From $Y^{y}_{j}=X^{y}_{j}(\theta^{*}_{j})^{\top}$ we get that $Y^{y}_{1}=X^{y}_{1}(\theta^{*}_{j})^{\top}$. Also, $X^{y}_{1}$ is a full column rank matrix because Eq. (32) holds under such an event. Hence, under such an event, $(\theta^{*}_{j})^{\top}$ is also given by $$\displaystyle(\theta^{*}_{j})^{\top}=\left((X^{y}_{1})^{\top}X^{y}_{1}\right)^{-1}(X^{y}_{1})^{\top}Y^{y}_{1},$$ $$\displaystyle\text{i.e.}\ \lVert(\theta^{*}_{j})^{\top}\rVert_{F}\leq\lVert\left((X^{y}_{1})^{\top}X^{y}_{1}\right)^{-1}\rVert_{F}\lVert(X^{y}_{1})^{\top}Y^{y}_{1}\rVert_{F}.$$ The fact that the elements of $X^{y}_{1}$ and $Y^{y}_{1}$ are bounded and Eq.
(32) imply that the norm of $(\theta^{*}_{j})^{\top}$ is bounded under such an event. Let $S$ be such that $\lVert\theta^{*}_{j}\rVert_{F}\leq S$. Then, subtracting Eq. (26) from Eq. (36) we get that $$X_{j}\left((\theta^{*}_{j})^{\top}-\theta^{\top}\right)=\mathcal{E}^{y}_{j}-\mathcal{E}_{j}(\theta^{*}_{j})^{\top}$$ (37) under such an event. Since $X^{\top}_{j}X_{j}$ is invertible, which holds because of the persistence of excitation, it follows that $$\left((\theta^{*}_{j})^{\top}-\theta^{\top}\right)=\left(X^{\top}_{j}X_{j}\right)^{-1}X^{\top}_{j}\left(\mathcal{E}^{y}_{j}-\mathcal{E}_{j}(\theta^{*}_{j})^{\top}\right).$$ (38) The solution $\hat{\theta}_{l,j}$ to the $\lambda$-regularized least-squares problem of Section 4 is given by $$\hat{\theta}^{\top}_{l,j}=\left((X^{y}_{j})^{\top}X^{y}_{j}+\lambda I\right)^{-1}(X^{y}_{j})^{\top}Y^{y}_{j}.$$ (39) Then, using Eq. (27) we get that $$\displaystyle\hat{\theta}^{\top}_{l,j}=\left((X^{y}_{j})^{\top}X^{y}_{j}+\lambda I\right)^{-1}(X^{y}_{j})^{\top}(X^{y}_{j}(\theta^{*}_{j})^{\top})$$ $$\displaystyle=(\theta^{*}_{j})^{\top}-\left((X^{y}_{j})^{\top}X^{y}_{j}+\lambda I\right)^{-1}\lambda(\theta^{*}_{j})^{\top}.$$ Combining the previous equation with Eq. (38), we get that with probability at least $1-n\tilde{\delta}$ $$\displaystyle\hat{\theta}^{\top}_{l,j}-\theta^{\top}=\left(X^{\top}_{j}X_{j}\right)^{-1}X^{\top}_{j}\left(\mathcal{E}^{y}_{j}-\mathcal{E}_{j}(\theta^{*}_{j})^{\top}\right)$$ $$\displaystyle-\left((X^{y}_{j})^{\top}X^{y}_{j}+\lambda I\right)^{-1}\lambda(\theta^{*}_{j})^{\top}.$$ Then, using the fact that $\lVert V^{-1/2}(.)\rVert_{F}\leq\lVert.\rVert_{V^{-1}}$ and the Cauchy-Schwarz inequality on the first term, and using Eq.
(35) on the second term, we get that with probability at least $1-n\tilde{\delta}$ $$\displaystyle\lVert\hat{\theta}^{\top}_{l,j}-\theta^{\top}\rVert_{F}$$ $$\displaystyle\leq\lVert\left(X^{\top}_{j}X_{j}\right)^{-1/2}\rVert_{2}\lVert X^{\top}_{j}\left(\mathcal{E}^{y}_{j}-\mathcal{E}_{j}(\theta^{*}_{j})^{\top}\right)\rVert_{V^{-1}_{j}}+\frac{\lambda S}{\gamma_{y}c_{p,j}t_{j}}$$ $$\displaystyle\leq\frac{1}{\sqrt{\lambda_{\text{min}}(V_{j})}}\lVert X^{\top}_{j}\left(\mathcal{E}^{y}_{j}-\mathcal{E}_{j}(\theta^{*}_{j})^{\top}\right)\rVert_{V^{-1}_{j}}+\frac{\lambda S}{\gamma_{y}c_{p,j}t_{j}}.$$ (40) Now, using the triangle inequality we get that $$\displaystyle\lVert X^{\top}_{j}\left(\mathcal{E}^{y}_{j}-\mathcal{E}_{j}(\theta^{*}_{j})^{\top}\right)\rVert_{V^{-1}_{j}}$$ $$\displaystyle\leq\lVert X^{\top}_{j}\mathcal{E}^{y}_{j}\rVert_{V^{-1}_{j}}+\lVert X^{\top}_{j}\left(\mathcal{E}_{j}(\theta^{*}_{j})^{\top}\right)\rVert_{V^{-1}_{j}}.$$ (41) Let $X(k,l)$ denote the $l$th component of the $k$th row of a matrix $X$. Then, $\mathcal{E}_{j}(\theta^{*}_{j})^{\top}(k,l)=[\epsilon^{\top}_{k}\ 0^{\top}_{m\times 1}](\theta^{*}_{j})^{\top}(:,l)$. Then, $$\displaystyle\lVert X^{\top}_{j}\left(\mathcal{E}^{y}_{j}-\mathcal{E}_{j}(\theta^{*}_{j})^{\top}\right)\rVert_{V^{-1}_{j}}$$ $$\displaystyle\leq\lVert X^{\top}_{j}\mathcal{E}^{y}_{j}\rVert_{V^{-1}_{j}}+\sum_{l=1}^{n}\lVert X^{\top}_{j}\left(\mathcal{E}_{j}(\theta^{*}_{j})^{\top}(:,l)\right)\rVert_{V^{-1}_{j}}$$ $$\displaystyle\leq\lVert X^{\top}_{j}\mathcal{E}^{y}_{j}\rVert_{V^{-1}_{j}}$$ $$\displaystyle+\sum_{l=1}^{n}\sum_{m=1}^{n}\lVert X^{\top}_{j}\left(\mathcal{E}_{j}(:,m)(\theta^{*}_{j})^{\top}(m,l)\right)\rVert_{V^{-1}_{j}}.$$ (42) Then, applying Lemma 2 to each individual term of the sum on the right-hand side of Eq.
(42), and using the union bound, we get that with probability at least $1-n(n+2)\tilde{\delta}$ $$\lVert\hat{\theta}^{\top}_{l,j}-\theta^{\top}\rVert_{F}\leq\frac{1}{\sqrt{\lambda_{\text{min}}(V_{j})}}\tilde{R}_{j}+\frac{\lambda S}{\gamma_{y}c_{p,j}t_{j}}.$$ Now, the persistence of excitation lower bounds $\lambda_{\text{min}}(V_{j})$: $\lambda_{\text{min}}(V_{j})\geq\gamma c_{p,j}t_{j}$. Using this in the previous equation we get that with probability at least $1-n(n+2)\tilde{\delta}$ $$\lVert\hat{\theta}^{\top}_{l,j}-\theta^{\top}\rVert_{F}\leq\frac{1}{\sqrt{\gamma c_{p,j}t_{j}}}\tilde{R}_{j}+\frac{\lambda S}{\gamma_{y}c_{p,j}t_{j}}.$$ Combining this with Eq. (13), it follows that with probability at least $1-O(\tilde{\delta})$ $$\lVert\hat{\theta}^{\top}_{j}-\theta^{\top}\rVert_{F}\leq\frac{2}{\sqrt{\gamma c_{p,j}t_{j}}}\tilde{R}_{j}+\frac{2\lambda S}{\gamma_{y}},\ \forall\ t_{j}<t\leq t_{j+1}.$$ (43) Let $N_{t}$ be the number of intervals. Given that $H_{j}=2^{j-1}H$, $$\sum_{j=1}^{N_{t}}H_{j}=H(2^{N_{t}}-1)=T.$$ Hence, $N_{t}=\log{((T+H)/H)}\leq\log{T}$. Given that $\hat{\theta}_{t}$ is constant within an interval, for interval $j$ $$\sum_{k=t^{s}_{j}}^{t_{j}}\lVert\hat{\theta}_{k}-\theta\rVert_{F}=\lVert\hat{\theta}_{j-1}-\theta\rVert_{F}H_{j}.$$ Combining Eq.
(43) with the previous equation, using the union bound and the fact that $\lambda=T^{-1}$, we get that with probability at least $1-O(\tilde{\delta}N_{t})$ $$\displaystyle\sum_{j=1}^{T}\lVert\hat{\theta}_{j}-\theta\rVert_{F}\leq\sum_{j=1}^{N_{t}}\left(\frac{2}{\sqrt{\gamma c_{p,j-1}t_{j-1}}}\tilde{R}_{j-1}+\frac{2\lambda S}{\gamma_{y}}\right)H_{j}$$ $$\displaystyle=\sum_{j=1}^{N_{t}}\left(\frac{2\tilde{R}_{j-1}H_{j}}{\sqrt{\gamma c_{p,j-1}t_{j-1}}}\right)+O(S/\gamma_{y}).$$ (44) Then, using $H_{j}=2^{j-1}H,\ c_{p,j}=H_{j}^{-1/2}$, we get that $$\displaystyle c_{p,j}\sum_{k=1}^{j}H_{k}=2^{-(j-1)/2}H^{-1/2}\sum_{k=1}^{j}2^{k-1}H$$ $$\displaystyle=2^{-(j-1)/2}H^{1/2}\sum_{k=1}^{j}2^{k-1}$$ $$\displaystyle=H^{1/2}2^{-(j-1)/2}(2^{j}-1)\geq\sqrt{2^{j-1}H}.$$ Substituting the above expression in Eq. (44), we get that with probability at least $1-O(\tilde{\delta}N_{t})$ $$\displaystyle\sum_{j}\lVert\hat{\theta}_{j}-\theta\rVert_{F}\leq\sum_{j=1}^{N_{t}}\left(\frac{(32)^{1/4}\tilde{R}_{j}H_{j}}{\sqrt{\gamma}\sqrt{2^{(j-1)/2}H^{1/2}}}\right)+O(S/\gamma_{y})$$ $$\displaystyle=\sum_{j=1}^{N_{t}}\left(\frac{(32)^{1/4}\tilde{R}_{j}2^{(j-1)3/4}H^{3/4}}{\sqrt{\gamma}}\right)+O(S/\gamma_{y}).$$ Defining $\tilde{R}=\max_{j}\tilde{R}_{j}$, with probability at least $1-O(\tilde{\delta}N_{t})$ $$\displaystyle\sum_{j=1}^{T}\lVert\hat{\theta}_{j}-\theta\rVert_{F}\leq\frac{(32)^{1/4}\tilde{R}H^{3/4}}{\sqrt{\gamma}}\sum_{j=1}^{N_{t}}\left(2^{(j-1)3/4}\right)+O(S/\gamma_{y})$$ $$\displaystyle=\frac{(32)^{1/4}\tilde{R}H^{3/4}}{\sqrt{\gamma}}\frac{2^{N_{t}3/4}-1}{2^{3/4}-1}+O(S/\gamma_{y})$$ $$\displaystyle\leq\frac{(32)^{1/4}\tilde{R}H^{3/4}}{\sqrt{\gamma}}\frac{2^{N_{t}3/4}}{2^{3/4}-1}+O(S/\gamma_{y})$$ $$\displaystyle\leq\frac{(32)^{1/4}\tilde{R}T^{3/4}}{\sqrt{\gamma}(1-1/2^{3/4})}+O(S/\gamma_{y}).$$ Since $N_{t}\leq\log{T}$, we get that with probability at least $1-O(\delta)$ $$\sum_{k=1}^{T}\lVert\hat{\theta}_{k}-\theta\rVert_{F}\leq\frac{(32)^{1/4}\tilde{R}T^{3/4}}{\sqrt{\gamma}(1-1/2^{3/4})}+O(S/\gamma_{y}),$$ where the $O(S/\gamma_{y})$ term is lower order, since $1/\gamma_{y}\leq H/(\gamma^{1/2}n^{1/4})=O(\log(\log(T)))$. From here the final result follows. $\blacksquare$ Proof for Theorem 2 We know from Theorem 1 that for an episode $i$, with probability at least $1-O(\delta)$, $$E_{\theta,T}\leq O\left(\left(1+\lVert\hat{\phi}_{i}\rVert_{F}+\lVert\theta^{*,i}-\hat{\phi}_{i}\rVert_{F}\right)T^{3/4}\right).$$ (45) That is, with probability at least $1-O(N\delta)$, $$\overline{E}_{\theta,N}\leq\frac{1}{N}\sum_{i=1}^{N}O\left(\left(1+\lVert\hat{\phi}_{i}\rVert_{F}+\lVert\theta^{*,i}-\hat{\phi}_{i}\rVert_{F}\right)T^{3/4}\right).$$ (46) Given that $\lVert\hat{\phi}\rVert_{F}+\lVert\theta^{*,i}-\hat{\phi}\rVert_{F}$ is convex in $\hat{\phi}$, there exists a meta-learner which incurs a regret of $O(\sqrt{N})$ for the sequence of convex functions given by $\lVert\hat{\phi}\rVert_{F}+\lVert\theta^{*,i}-\hat{\phi}\rVert_{F}$ (Zinkevich, 2003). Hence, we get that with probability $1-O(N\delta)$ $$\overline{E}_{\theta,N}\leq O\left(\left(1+1/\sqrt{N}\right)T^{3/4}\right).$$ (47) $\blacksquare$ Proof for Lemma 1 We first introduce a Lemma. Consider the following nonlinear system: $$x_{t+1}=f(x_{t},u_{t}).$$ (48) Consider the MPC control law that sets the control input by $u_{t}=u^{*}_{t,t}$, where $u^{*}_{t,t}$ is the first term of the control sequence $U^{*}_{t}=\{u^{*}_{t,t},u^{*}_{t,t+1},...,u^{*}_{t,M+t-1}\}$, and $U^{*}_{t}$ is the solution of the following optimization: $$\displaystyle\min_{U_{t}}J_{M}(x_{t},U_{t}),$$ $$\displaystyle J_{M}(x_{t},U_{t})=\Gamma d(x_{t,M+t})+\sum_{k=t}^{M+t-1}c_{t}(x_{t,k},u_{t,k}),$$ $$\displaystyle u_{t,k}\in\mathcal{U},\ x_{t,k+1}=f(x_{t,k},u_{t,k}).$$ (49) Lemma 3 Consider the system in Eq. (48). Suppose the control input is given by $u_{t}=u^{*}_{t,t}$, where $U^{*}_{t}$ is the solution to Eq. (49), $d(.)$ is a terminal cost function for the system given by Eq.
(48), the sequence of cost functions $\{c_{t}\}$ satisfies Assumption 3, and $M>\overline{\alpha}^{2}/\alpha+1$. Then there exist constants $M_{c}\geq 1,\lambda_{c}>0$ such that $$\sigma(x_{k})\leq M_{c}e^{-\lambda_{c}(k-t)}\sigma(x_{t}),\ k\geq t.$$ Please see Corollary 3 in (Grimm et al., 2005) for the proof. We drop the superscript $i$ for convenience. Consider the state equation, Eq. (1), with the control input as $\overline{u}_{t}$: $$x_{t+1}=Ax_{t}+B\overline{u}_{t}=\theta z_{t}.$$ (50) Since $\overline{u}_{t}$ is bounded and $\rho(A)<1,\ \forall\theta\in\Theta$, $x_{t}$ is bounded, and hence so is $z_{t}$. Let this bound be given by $\lVert z_{t}\rVert_{2}\leq x_{c}$. We define the following nominal dynamics for interval $j+1$: $$\tilde{x}_{t+1}=\hat{A}_{j}\tilde{x}_{t}+\hat{B}_{j}u_{t},\quad\tilde{x}_{t^{s}_{j+1}}=x_{t^{s}_{j+1}}.$$ (51) Let $\tilde{z}_{t}=[\tilde{x}^{\top}_{t},u^{\top}_{t}]^{\top}$ and $$\displaystyle x^{\delta u}_{t+1}=\sum_{k=t^{s}_{j+1}}^{t}A^{t-k}B\delta u_{k},$$ $$\displaystyle x^{\delta\theta}_{t+1}=\sum_{k=t^{s}_{j+1}}^{t}A^{t-k}(\delta\theta_{k})\tilde{z}_{k},\ \delta\theta_{k}=(\theta-\hat{\theta}_{k}).$$ (52) The claim is that within interval $j+1$ $$x_{t}=\tilde{x}_{t}+x^{\delta u}_{t}+x^{\delta\theta}_{t},\ \text{where}\ \tilde{x}_{t+1}=\hat{\theta}_{t}\tilde{z}_{t}.$$ We show this by induction. Let $x_{t}=\tilde{x}_{t}+x^{\delta u}_{t}+x^{\delta\theta}_{t}$ be true.
Then $$\displaystyle x_{t+1}=Ax_{t}+B\overline{u}_{t}=Ax_{t}+Bu_{t}+B\delta u_{t}$$ $$\displaystyle x_{t+1}=A(\tilde{x}_{t}+x^{\delta u}_{t}+x^{\delta\theta}_{t})+Bu_{t}+B\delta u_{t}$$ $$\displaystyle x_{t+1}=A(\tilde{x}_{t}+x^{\delta\theta}_{t})+Bu_{t}+(Ax^{\delta u}_{t}+B\delta u_{t})$$ $$\displaystyle x_{t+1}=\theta\tilde{z}_{t}+Ax^{\delta\theta}_{t}+(Ax^{\delta u}_{t}+B\delta u_{t})$$ $$\displaystyle x_{t+1}=\hat{\theta}_{t}\tilde{z}_{t}+(\delta\theta)\tilde{z}_{t}+Ax^{\delta\theta}_{t}+(Ax^{\delta u}_{t}+B\delta u_{t})$$ $$\displaystyle x_{t+1}=\hat{\theta}_{t}\tilde{z}_{t}+(Ax^{\delta\theta}_{t}+(\delta\theta)\tilde{z}_{t})+(Ax^{\delta u}_{t}+B\delta u_{t})$$ $$\displaystyle\text{i.e.}\ x_{t+1}=\tilde{x}_{t+1}+x^{\delta u}_{t+1}+x^{\delta\theta}_{t+1}.$$ The relation trivially holds at the start of the interval, since $\tilde{x}_{t^{s}_{j+1}}=x_{t^{s}_{j+1}}$ and the correction terms are empty sums there. Hence, by induction, it holds for all $t$. Then, using the fact that $c_{t}$ is locally Lipschitz, the fact that $\lVert x_{t}\rVert_{2}\leq x_{c}$ always, and the fact that $\overline{u}_{t}$ is always bounded, there exists a constant $k_{0}$ s.t. $$c_{t}(x_{t},\overline{u}_{t})\leq c_{t}(\tilde{x}_{t},u_{t})+k_{0}\left(\lVert x^{\delta\theta}_{t}\rVert_{2}+\lVert x^{\delta u}_{t}\rVert_{2}+\lVert\delta u_{t}\rVert_{2}\right).$$ (53) Since $\rho(A)<1$, $\lim_{k\rightarrow\infty}\lVert A^{k}\rVert_{2}=0$. Hence, there exists an integer $n_{\rho}$ such that $\lVert A^{n_{\rho}}\rVert_{2}<1$. Hence, there exist constants $c_{\rho}$ and $\gamma<1$, where $\gamma^{n_{\rho}}=\lVert A^{n_{\rho}}\rVert_{2}<1$, such that $\lVert A^{k}\rVert_{2}\leq c_{\rho}\gamma^{k}$ for all $k>0$. Then, from Eq.
(52) and the triangle inequality it follows that within interval $j$ $$\displaystyle\lVert x^{\delta\theta}_{t+1}\rVert_{2}\leq\sum_{k=t^{s}_{j}}^{t}\lVert A^{t-k}(\delta\theta_{k})\tilde{z}_{k}\rVert_{2}$$ $$\displaystyle\leq\sum_{k=t^{s}_{j}}^{t}\lVert A^{t-k}\rVert_{2}\lVert\delta\theta_{k}\tilde{z}_{k}\rVert_{2}\leq c_{\rho}\sum_{k=t^{s}_{j}}^{t}\gamma^{t-k}\lVert\delta\theta_{k}\tilde{z}_{k}\rVert_{2}.$$ Since $\delta\theta_{k}=\theta-\hat{\theta}_{k}=\theta-\hat{\theta}_{j-1}=\delta\theta_{j-1}$ is a constant in interval $j$ and $\hat{\theta}_{j-1}\in\hat{\Theta}_{j-1}\subseteq\Theta$, we have $\lVert\tilde{z}_{t}\rVert\leq x_{c}$. Hence, $$\displaystyle\lVert x^{\delta\theta}_{t+1}\rVert_{2}\leq c_{\rho}\sum_{k=t^{s}_{j}}^{t}\gamma^{t-k}\lVert\delta\theta_{j-1}\rVert_{2}x_{c}\leq c_{\rho}\lVert\delta\theta_{j-1}\rVert_{2}x_{c}\sum_{k=t^{s}_{j}}^{t}\gamma^{t-k}$$ $$\displaystyle=c_{\rho}\lVert\delta\theta_{j-1}\rVert_{2}x_{c}\frac{1-\gamma^{t-t^{s}_{j}+1}}{1-\gamma}\leq c_{\rho}\lVert\delta\theta_{j-1}\rVert_{2}\frac{x_{c}}{1-\gamma}.$$ (54) Similarly $$\displaystyle\lVert x^{\delta u}_{t+1}\rVert_{2}\leq\sum_{k=t^{s}_{j}}^{t}\lVert A^{t-k}B\delta u_{k}\rVert_{2}\leq\sum_{k=t^{s}_{j}}^{t}\lVert A^{t-k}\rVert_{2}\lVert B\delta u_{k}\rVert_{2}$$ $$\displaystyle\leq c_{\rho}\sum_{k=t^{s}_{j}}^{t}\gamma^{t-k}\lVert B\rVert_{2}\lVert\delta u_{k}\rVert_{2}\leq c_{\rho}\sqrt{c_{p,{j}}}\frac{\lVert B\rVert_{2}}{1-\gamma}$$ $$\displaystyle\leq c_{\rho}\sqrt{c_{p,{j}}}\frac{S}{1-\gamma}.$$ (55) From the definition of the nominal dynamics used for solving Eq. (10), i.e. Eq. (8), and the definition of the nominal dynamics used in this proof, Eq.
(51), we get that $$\overline{x}_{t}=\hat{A}_{j-1}^{t-t^{s}_{j}}(\hat{x}_{j}-x_{t^{s}_{j}})+\tilde% {x}_{t}.$$ Using the above expression, the fact that $\lVert\hat{x}_{j}-x_{t^{s}_{j}}\rVert_{2}\leq 2\epsilon_{c}$ and Eq. (53), we get that $$\displaystyle c_{t}(x_{t},\overline{u}_{t})\leq c_{t}(\overline{x}_{t},u_{t})+% k_{0}\left(2\lVert\hat{A}^{t-t^{s}_{j}}_{j-1}\rVert_{2}\epsilon_{c}+\lVert x^{% \delta\theta}_{t}\rVert_{2}\right)$$ $$\displaystyle+k_{0}\left(\lVert x^{\delta u}_{t}\rVert_{2}+\lVert\delta u_{t}% \rVert_{2}\right).$$ (56) Since $\rho(\hat{A}_{j-1})<1$, there exist constants $\hat{c}_{\rho}$ and $\hat{\gamma}<1$ such that $\lVert\hat{A}^{k}_{j-1}\rVert_{2}\leq\hat{c}_{\rho}\hat{\gamma}^{k}$. Hence, $\sum_{k=1}^{\infty}\lVert\hat{A}^{k}_{j-1}\rVert_{2}\leq\hat{c}$, a constant. Using this and summing the expression in Eq. (56) over the interval $j$ we get that $$\displaystyle\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(x_{t},\overline{u}_{t})\leq\sum_{% t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}_{t},u_{t})$$ $$\displaystyle+\sum_{t=t^{s}_{j}}^{t_{j}}k_{0}\left(2\lVert\hat{A}^{t-t^{s}_{j}% }_{j-1}\rVert_{2}\epsilon_{c}+\lVert x^{\delta\theta}_{t}\rVert_{2}\right)$$ $$\displaystyle+\sum_{t=t^{s}_{j}}^{t_{j}}k_{0}\left(\lVert x^{\delta u}_{t}% \rVert_{2}+\lVert\delta u_{t}\rVert_{2}\right)$$ $$\displaystyle\leq\sum_{t=t^{s}_{j}}^{t_{j}}\left(c_{t}(\overline{x}_{t},u_{t})% +k_{0}\left(\lVert x^{\delta\theta}_{t}\rVert_{2}\right)\right)$$ $$\displaystyle+\sum_{t=t^{s}_{j}}^{t_{j}}k_{0}\left(\lVert x^{\delta u}_{t}% \rVert_{2}+\lVert\delta u_{t}\rVert_{2}\right)+2k_{0}\hat{c}\epsilon_{c}.$$ Then, using Eq. (55), Eq. 
(54), and the fact that $\lVert\delta u_{t}\rVert_{2}=O(\sqrt{c_{p,j}})$, we get that $$\displaystyle\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(x_{t},\overline{u}_{t})\leq\sum_{t=t^{s}_{j}}^{t_{j}}\left(c_{t}(\overline{x}_{t},u_{t})\right)+O\left(\sum_{t=t^{s}_{j}}^{t_{j}}\lVert\delta\theta_{j-1}\rVert_{2}\right)+O(\sqrt{c_{p,j}}H_{j})+2k_{0}\hat{c}\epsilon_{c}.$$ (57) Consider the following dynamics for the interval $j$: $$\overline{x}^{b}_{t+1}=A\overline{x}^{b}_{t}+B\overline{u}^{b}_{t},\quad\overline{u}^{b}_{t}=\text{MPC}_{t}(\overline{x}^{b}_{t},\theta),\ \overline{x}^{b}_{t^{s}_{j}}=x_{t^{s}_{j}}.$$ (58) Under the event that $\theta\in\hat{\Theta}_{j}$, given how $\hat{x}_{j}$ and $\hat{\theta}_{j-1}$ are selected (Eq. (13)), we get that $$\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}_{t},u_{t})\leq\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}^{b}_{t},\overline{u}^{b}_{t}).$$ (59) For the interval $j$, since all the conditions stated in Lemma 3 are satisfied by the dynamics given in Eq. (58), we have that $$\sigma(\overline{x}^{b}_{k})\leq M_{c}e^{-\lambda_{c}(k-t^{s}_{j})}\sigma(x_{t^{s}_{j}}),\ k\geq t^{s}_{j}.$$ (60) From the global exponential controllability condition for the sequence of cost functions $\{c_{t}\}$, and the fact that $\Gamma d(x)\leq\inf_{u}c_{t}(x,u)\ \forall\ t$, it follows that (see Section III.A of (Grimm et al., 2005)): $$\displaystyle\min_{U_{t}}J_{M}(\overline{x}^{b}_{t},U_{t})\leq\overline{\alpha}\sigma(\overline{x}^{b}_{t}),\ \text{where}\ \overline{\alpha}=M_{\lambda}/(1-e^{-\lambda_{e}}),$$ $$\displaystyle\text{i.e.},\ c_{t}(\overline{x}^{b}_{t},\overline{u}^{b}_{t})\leq\min_{U_{t}}J_{M}(\overline{x}^{b}_{t},U_{t})\leq\overline{\alpha}\sigma(\overline{x}^{b}_{t}).$$ (61) Then, using Eq. (61) and Eq. (60) in Eq.
(59) we get that $$\displaystyle\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}_{t},u_{t})\leq\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}^{b}_{t},\overline{u}^{b}_{t})$$ $$\displaystyle\leq\sum_{t=t^{s}_{j}}^{t_{j}}\overline{\alpha}M_{c}e^{-\lambda_{c}(t-t^{s}_{j})}\sigma(x_{t^{s}_{j}})\leq\frac{\overline{\alpha}M_{c}\sigma(x_{t^{s}_{j}})}{1-e^{-\lambda_{c}}}=O(1).$$ (62) Note that, by definition, $c_{t}(\overline{x}^{b}_{t},\overline{u}^{b}_{t})\geq\alpha\sigma(\overline{x}^{b}_{t})\geq 0$. Hence $$\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}_{t},u_{t})-\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}^{b}_{t},\overline{u}^{b}_{t})\leq\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}_{t},u_{t})\leq O(1).$$ (63) Then, combining Eq. (63) and Eq. (57) we get that $$\displaystyle\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(x_{t},\overline{u}_{t})-\sum_{t=t^{s}_{j}}^{t_{j}}c_{t}(\overline{x}^{b}_{t},\overline{u}^{b}_{t})\leq O(1)+O(\sqrt{c_{p,j}}H_{j})+O\left(\sum_{t=t^{s}_{j}}^{t_{j}}\lVert\delta\theta_{j-1}\rVert_{2}\right).$$ Summing the above over all intervals we get that $$\displaystyle\sum_{t=1}^{T}c_{t}(x_{t},\overline{u}_{t})-\sum_{t=1}^{T}c_{t}(\overline{x}^{b}_{t},\overline{u}^{b}_{t})\leq O(N_{T})+O\left(\mathbb{E}_{\theta,T}\right)+\sum_{j=1}^{N_{t}}O(\sqrt{c_{p,j}}H_{j}).$$ Then, since $N_{T}\leq\log{T}$ (see the argument in the proof of Theorem 1), it follows that $$\displaystyle\sum_{t=1}^{T}c_{t}(x_{t},\overline{u}_{t})-\sum_{t=1}^{T}c_{t}(\overline{x}^{b}_{t},\overline{u}^{b}_{t})\leq O(\log{T})+O\left(\mathbb{E}_{\theta,T}\right)+\sum_{j=1}^{N_{t}}O(\sqrt{c_{p,j}}H_{j}).$$ $\blacksquare$ Proof for Theorem 3 To apply Theorem 1 to Lemma 1 we need to establish that the persistence of excitation holds. First, we present a lemma that establishes the conditions under which the persistence of excitation holds. Let $$W_{t}:=\left[(u_{t})^{\top},...,(u_{t+n})^{\top}\right]^{\top}$$ where $(u_{t},u_{t+1},\dots)$ is a sequence of control inputs. Lemma 4 Suppose the pair $(A,B)$ is controllable.
Then there exists a sequence of control inputs such that the matrix $$\left[W_{t},W_{t+1},W_{t+2},...,W_{t+q-1}\right]$$ is full row rank when $q=s((n+1)m)$, where $s$ is any integer, and in this case the persistence of excitation condition holds for $p=q+n$. Proof: We first present Lemma 3.1 from (Moore, 1983). Denote a linear system by the matrices $(A,B)$. Denote the state and control input at time $t$ by $x_{t}$ and $u_{t}$. Lemma 5 Suppose that the system given by $(A,B)$ is controllable. Then, for arbitrary $x_{k},u_{k},u_{k+1},...,u_{k+n-1}$ and an arbitrary non-zero $n$-vector $\zeta$, there exist nonzero vectors $\beta$ and $\varepsilon$ of appropriate dimension, independent of $x_{k},u_{k},u_{k+1},...,u_{k+n-1}$ but dependent on $\zeta$, such that $$\beta^{\top}[u^{\top}_{k},u^{\top}_{k+1},...,u^{\top}_{k+n-1}]^{\top}=\zeta^{\top}[x_{k},x_{k+1},...,x_{k+n}]\varepsilon.$$ Please see Lemma 3.1 in (Moore, 1983) for the proof. To prove our result we use an argument similar to the proof of Lemma 3.1 in (Moore, 1983). Consider an auxiliary output $y_{k}$ of the linear time-invariant system given by the dynamics $x_{k+1}=Ax_{k}+Bu_{k}$: $$y_{k}=\zeta^{\top}[x^{\top}_{k}\ u^{\top}_{k}]^{\top},$$ (64) where $\zeta$ is arbitrary. Let $\zeta^{\top}=[\zeta^{\top}_{x}\ \zeta^{\top}_{u}]$. The corresponding McMillan form of the transfer function of this system for the output in Eq. (64) is of the form $$\mathcal{H}(z)=\frac{C_{0}+C_{1}z^{-1}+...+C_{n}z^{-n}}{d_{0}+d_{1}z^{-1}+...+d_{n}z^{-n}},\quad d_{0}=1,$$ (65) where the coefficients $[d_{0},d_{1},...,d_{n}]$ correspond to the minimal polynomial. The terms $C_{0},C_{1},...,C_{n}$ have a specific form given by (see (Green & Moore, 1986)) $$C_{j}=\sum_{l=0}^{j}d_{j-l}P_{l},\quad P_{l}=\zeta^{\top}_{x}A^{l-1}B,\quad P_{0}=\zeta^{\top}_{u}.$$ Let $$\beta:=[C_{n},C_{n-1},...,C_{0}]^{\top},\quad\varepsilon:=[d_{n},...,d_{1},d_{0}]^{\top}.$$ (66) Then, from Eq.
(65), it follows that $$\displaystyle\beta^{T}[u^{\top}_{k},u^{\top}_{k+1},...,u^{\top}_{k+n}]^{\top}=% \zeta^{\top}\times$$ $$\displaystyle\left[\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right],\left[\begin{array}[]{c}x_{k+1}\\ u_{k+1}\end{array}\right],...,\left[\begin{array}[]{c}x_{k+n}\\ u_{k+n}\end{array}\right]\right]\varepsilon.$$ (67) We can rewrite $\beta^{T}$ as $$\beta^{\top}=\zeta^{\top}G_{\theta},G_{\theta}=\left[\begin{array}[]{cc}Q_{% \theta}&0\\ H_{\theta}&I\end{array}\right]$$ (68) where $Q_{\theta}=[q_{n},q_{n-1},...,q_{1}]$, $q_{j}=\sum_{l=1}^{j}d_{j-l}A^{l-1}B$, $H_{\theta}=[d_{n}I,d_{n-1}I,...,d_{1}I],I=I_{m}$, the identity matrix of size $m$. Since the pair $(A,B)$ is controllable, $G_{\theta}$ is a full row rank matrix. Consider $Q_{\theta}$. The term with the highest power of $A$ in $q_{j}$ is $A^{j-1}B$ and its coefficient is $d_{0}=1$ for all $j$. Thus $\text{span}\{[B,AB,A^{2}B...,A^{n-1}B]\}=\text{span}\{[q_{n},q_{n-1},...,q_{1}]\}$. Hence, for a non-zero $\zeta$ there exists a non-zero element in $\beta$ and $\varepsilon$ is non-zero because $d_{0}=1$. Next, we show that if the matrix $\left[W_{t},W_{t+1},...,W_{t+q-1}\right]$ is a full row rank matrix then the persistence of excitation holds for $p=q+n$. Let the matrix $\left[W_{t},W_{t+1},...,W_{t+q-1}\right]$ be a full row rank matrix and let $$\sum_{k=t}^{t+q-1}W_{k}W^{\top}_{k}\geq c_{p}\gamma q>0,$$ (69) where $\gamma$ is a positive constant and a matrix $$X\geq c_{p}\ \iff\ v^{\top}Xv\geq c_{p}\ \forall\ v\ \text{s.t.}\ \lVert v% \rVert_{2}=1.$$ From Eq. 
(67) it follows that $$\lVert\beta^{T}W_{k}\rVert^{2}_{2}\leq\bigg{\lVert}\zeta^{\top}\left[\left[% \begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right],...,\left[\begin{array}[]{c}x_{k+n}\\ u_{k+n}\end{array}\right]\right]\bigg{\rVert}^{2}_{2}\lVert\varepsilon\rVert^{% 2}_{2}.$$ Summing from $k=t$ to $k=t+q-1$ on both sides we get that $$\displaystyle\beta^{\top}\left(\sum_{k=t}^{t+q-1}W_{k}W^{\top}_{k}\right)\beta$$ $$\displaystyle\leq\lVert\varepsilon\rVert^{2}_{2}\zeta^{\top}\sum_{k=t}^{t+q-1}% \left[\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right],\left[\begin{array}[]{c}x_{k+1}\\ u_{k+1}\end{array}\right],...,\left[\begin{array}[]{c}x_{k+n}\\ u_{k+n}\end{array}\right]\right]\times$$ $$\displaystyle\left[\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right],\left[\begin{array}[]{c}x_{k+1}\\ u_{k+1}\end{array}\right],...,\left[\begin{array}[]{c}x_{k+n}\\ u_{k+n}\end{array}\right]\right]^{\top}\zeta$$ $$\displaystyle\leq(n+1)\lVert\varepsilon\rVert^{2}_{2}\zeta^{\top}\sum_{k=t}^{t% +q+n-1}\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right]\left[x^{\top}_{k},u^{\top}_{k}\right]\zeta.$$ Then, using Eq. (69) in the previous equation we get that $$\zeta^{\top}\sum_{k=t}^{t+q+n-1}\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right]\left[x^{\top}_{k},u^{\top}_{k}\right]\zeta\geq\frac{% \gamma c_{p}q\beta^{\top}\beta}{((n+1)\lVert\varepsilon\rVert^{2}_{2})}.$$ (70) Since the elements of $\varepsilon$ are the coefficients of the minimal polynomial of $A$, each element of $\varepsilon$ is a polynomial function of the eigenvalues of the system matrix $A$. Denote the roots (or the eigenvalues) of $A$ by $r_{1},...,r_{n}$. Let $r=\max\{|r_{1}|,|r_{2}|,|r_{3}|,...,|r_{n}|\}$. It is clear that $r$ is finite because $\theta\in\Theta$, and $\Theta$ is compact. 
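The bound on the minimal-polynomial coefficients that this compactness argument feeds into (namely $|d_{k}|\leq(1+r)^{n}$, derived next via Vieta's formulas and the binomial theorem) can be spot-checked numerically. A minimal sketch, using the characteristic polynomial as a stand-in for the minimal polynomial (they coincide for generic matrices); the matrix size and random draws are illustrative:

```python
import numpy as np

# Spot check of |d_k| <= (1 + r)^n, where r is the spectral radius and the d_k
# are the coefficients of the characteristic polynomial (monic, d_0 = 1).
rng = np.random.default_rng(0)
n = 4
for _ in range(100):
    A = rng.standard_normal((n, n))
    r = max(abs(np.linalg.eigvals(A)))  # spectral radius r = max |r_i|
    d = np.poly(A)                      # coefficients [d_0, d_1, ..., d_n]
    assert all(abs(dk) <= (1 + r) ** n + 1e-8 for dk in d)
```

Each coefficient is an elementary symmetric polynomial of the eigenvalues, so it is bounded by $\binom{n}{k}r^{k}\leq(1+r)^{n}$, which is what the assertions exercise.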
Then, using Vieta's formulas we get that $$(-1)^{k}d_{n-k}=\sum_{1\leq i_{1}<i_{2}<\cdots<i_{n-k}\leq n}\prod_{j=1}^{n-k}r_{i_{j}},\ \forall\ k<n.$$ Applying the binomial theorem we get that $|d_{k}|\leq(1+r)^{n}$ and $\lVert\varepsilon\rVert^{2}_{2}\leq n(1+r)^{2n}+1$. Using this relation in Eq. (70) we get that $$\displaystyle\zeta^{\top}\sum_{k=t}^{t+q+n-1}\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right]\left[x^{\top}_{k},u^{\top}_{k}\right]\zeta\geq\frac{\gamma c_{p}q(\beta^{\top}\beta)}{((n+1)(n(1+r)^{2n}+1))}.$$ Let $\lambda_{min}(G_{\theta}G^{\top}_{\theta})=c_{g}$. Since $(A,B)$ is a controllable pair, $c_{g}>0$. Applying this fact to the previous equation we get that $$\displaystyle\sum_{k=t}^{t+q+n-1}\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right]\left[x^{\top}_{k},u^{\top}_{k}\right]\geq\frac{\gamma c_{p}qc_{g}}{((n+1)(n(1+r)^{2n}+1))}=\tilde{\gamma}c_{p}q\geq\tilde{\gamma}c_{p}p/2>0,$$ (71) where $\tilde{\gamma}=\gamma c_{g}/((n+1)(n(1+r)^{2n}+1))$. Next, we show that we can construct a sequence of inputs such that the condition in Eq. (69) is satisfied for $q=s((n+1)m)$. Let $e^{m}_{i}$ be the unit vector along the $i$th dimension of $\mathbb{R}^{m}$. Consider $$\displaystyle u_{k}=e^{m}_{j},\ \text{where}\ j=\max_{\tilde{j}\leq m}\tilde{j}\ \text{s.t.}\ (k-1)\ \text{mod}\ ((n+1)\tilde{j})=0,$$ $$\displaystyle u_{k}=0,\ \text{if}\ \nexists\ \tilde{j}\leq m\ \text{s.t.}\ (k-1)\ \text{mod}\ ((n+1)\tilde{j})=0.$$ (72) For this construction, the matrix $$M_{k}=[W_{k},W_{k+1},...,W_{k+q-1}]$$ can also be written as $$M_{k}=\Pi\left([e_{1},e_{2},...,e_{q}]\right),$$ for any $k$, where $\Pi$ denotes a permutation of the columns of the input matrix, and $e_{j}$ are unit vectors along each dimension in $\mathbb{R}^{(n+1)m}$. By construction, Eq. (69) is satisfied.
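The probing-input construction of Eq. (72) can be sketched and spot-checked numerically. In the sketch below, `probe_input` and `probe_matrix` are hypothetical helper names, and the $(n,m)$ pairs are chosen so that the period of the input pattern matches $q=(n+1)m$; it is a spot check of full row rank for small cases, not a proof:

```python
import numpy as np

def probe_input(k, n, m):
    """u_k = e_j for the largest j <= m with (k-1) % ((n+1)*j) == 0, else u_k = 0."""
    u = np.zeros(m)
    for j in range(m, 0, -1):  # scan from m down so the first hit is the max j
        if (k - 1) % ((n + 1) * j) == 0:
            u[j - 1] = 1.0
            break
    return u

def probe_matrix(t, n, m):
    """M_t = [W_t, ..., W_{t+q-1}], where W_k stacks u_k, ..., u_{k+n}."""
    q = (n + 1) * m
    cols = [np.concatenate([probe_input(k + i, n, m) for i in range(n + 1)])
            for k in range(t, t + q)]
    return np.column_stack(cols)

# For these small cases the columns of M_t are distinct unit vectors, so M_t is
# a permutation of the identity and, in particular, full row rank.
for n, m in [(1, 1), (1, 2), (2, 2), (3, 2)]:
    for t in range(1, 7):
        assert np.linalg.matrix_rank(probe_matrix(t, n, m)) == (n + 1) * m
```

Printing `probe_matrix(1, 2, 2)` shows the permutation structure directly: each column is a standard basis vector of $\mathbb{R}^{6}$.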
Then, following the argument outlined above we get that the persistence of excitation is satisfied for $p=q+n$ when the sequence of control inputs is given by Eq. (72) and $q=s((n+1)m)$. $\blacksquare$ Now we prove the main theorem. We drop the superscript $i$ for convenience. Take any arbitrary interval. We first establish that there exist $\delta u_{t}$ for all $t$ as defined in Eq. (12) such that the matrix $$\displaystyle M_{t}=[W_{t},W_{t+1},...,W_{t+(n+1)m-1}],\ \text{where}\ W_{t}=\left[(\overline{u}_{t})^{\top},...,(\overline{u}_{t+n})^{\top}\right]^{\top},$$ is full rank. We prove this by induction. For the first part we show that if $M_{t-1}$ is full rank then $M_{t}$ is full rank when the control input $\overline{u}_{t+(n+1)m+n-1}$ is generated according to Eq. (11) and Eq. (12). We prove this on a case-by-case basis. Let $t_{c}:=t+(n+1)m+n-1$. Case $e^{\perp}_{t_{c}}\neq 0$: denote the matrix after removing the first column of $M_{t-1}$ by $M^{-1}_{t-1}$. Since $M_{t-1}$ is full rank and a square matrix, $M^{-1}_{t-1}$ has rank one less than $M_{t-1}$. By definition, the unit vector along the dimension of the null space of $(M^{-1}_{t-1})^{\top}$ is $W^{\perp}_{t+(n+1)m-1}$.
To ensure that $M_{t}$ is full rank it is sufficient to ensure that $$\displaystyle(W^{\perp}_{t+(n+1)m-1})^{\top}W_{t+(n+1)m-1}\neq 0,$$ $$\displaystyle\text{i.e.}\ \sum_{k=t+(n+1)m-1}^{k=t_{c}}(u^{\perp}_{k})^{\top}\overline{u}_{k}\neq 0,$$ $$\displaystyle\text{i.e.}\ \sum_{k=t+(n+1)m-1}^{k=t_{c}-1}\left((u^{\perp}_{k})^{\top}\overline{u}_{k}\right)+(u^{\perp}_{t_{c}})^{\top}\overline{u}_{t_{c}}\neq 0.$$ Let $$\sum_{k=t+(n+1)m-1}^{k=t_{c}-1}\left((u^{\perp}_{k})^{\top}\overline{u}_{k}\right)=g.$$ Then, it is sufficient to ensure that $$g+(u^{\perp}_{t_{c}})^{\top}\overline{u}_{t_{c}}\neq 0.$$ Let $$g^{\perp}=(u^{\perp}_{t_{c}})^{\top}u_{t_{c}},\quad e_{t_{c}}=\frac{u_{t_{c}}}{\lVert u_{t_{c}}\rVert},\quad g_{s}=\frac{g^{\perp}+g}{|g^{\perp}+g|}\frac{g^{\perp}}{|g^{\perp}|}.$$ Consider $$\delta u_{t_{c}}=\left\{\begin{array}[]{cc}\frac{g}{|g|}\sqrt{c_{p}}e^{\perp}_{t_{c}}&\text{if}\ g^{\perp}=0,\\ g_{s}\sqrt{c_{p}}e_{t_{c}}&\text{if}\ g^{\perp}\neq 0,g_{s}<0,\\ &|\lVert u_{t_{c}}\rVert_{2}-\sqrt{c_{p}}|\geq\sqrt{c_{p}},\\ 2g_{s}\sqrt{c_{p}}e_{t_{c}}&\text{if}\ g^{\perp}\neq 0,g_{s}<0,\\ &|\lVert u_{t_{c}}\rVert_{2}-\sqrt{c_{p}}|<\sqrt{c_{p}},\\ \sqrt{c_{p}}e_{t_{c}}&\text{otherwise}.\end{array}\right.$$ For this choice of $\delta u_{t_{c}}$, the sufficient condition above always holds, because the sign of $(u^{\perp}_{t_{c}})^{\top}\delta u_{t_{c}}$ is always aligned with the sign of $g+g^{\perp}$. In addition, there exists a unit vector $v_{e}$, where $$v_{e}=\left\{\begin{array}[]{cc}e^{\perp}_{t_{c}}&\text{if}\ g^{\perp}=0\\ e_{t_{c}}&\text{otherwise}\end{array}\right.,\ \text{s.t.}\ v^{\top}_{e}\overline{u}_{t_{c}}\geq\sqrt{c_{p}}.$$ This implies that, for the above choice of $\delta u_{t_{c}}$, $$\lVert W_{t+(n+1)m-1}\rVert_{2}\geq\sqrt{c_{p}}.$$ Case $e^{\perp}_{t_{c}}=0$: in this case we show that the matrix $M_{t}$ is full rank for any arbitrary $\delta u_{t_{c}}$.
As in the previous case, the null space of $(M^{-1}_{t-1})^{\top}$ is a one-dimensional space. And, by definition, the unit vector along this dimension is $W^{\perp}_{t+(n+1)m-1}$. Let $$\tilde{W}^{\perp}:=[0^{\top},\ W^{\perp}_{t+(n+1)m-1}(1:nm)^{\top}]^{\top},$$ where $W^{\perp}_{t+(n+1)m-1}(1:nm)$ is the vector of the first $nm$ elements of $W^{\perp}_{t+(n+1)m-1}$. Because $e^{\perp}_{t_{c}}=0$ and $W^{\perp}_{t+(n+1)m-1}$ is non-zero, $W^{\perp}_{t+(n+1)m-1}(1:nm)$ is non-zero. Hence, the vector $\tilde{W}^{\perp}$ is non-zero. First, we show by contradiction that $$(W^{\perp}_{t+(n+1)m-1})^{\top}W_{t+(n+1)m-1}\neq 0.$$ If this condition is violated then $W^{\perp}_{t+(n+1)m-1}$ is orthogonal to all the columns of $[W_{t},W_{t+1},...,W_{t+(n+1)m-1}]$. This implies that the non-zero vector $\tilde{W}^{\perp}$ is orthogonal to all the columns of $M_{t-1}$ because, by construction, the control input vectors (each of dimension $m$) that constitute the column vectors of $M_{t}$ are the same control input vectors that appear one position below in the corresponding columns of $M_{t-1}$. This is a contradiction because by assumption $M_{t-1}$ is full rank. Thus, in this case $$(W^{\perp}_{t+(n+1)m-1})^{\top}W_{t+(n+1)m-1}\neq 0$$ always. And, since $e^{\perp}_{t_{c}}=0$ in this case, $\delta u_{t_{c}}$ can be set as any vector. Consider the perturbation $\delta u_{t_{c}}=\sqrt{c_{p}}e_{t_{c}}$. For this choice of $\delta u_{t_{c}}$, $$\lVert W_{t+(n+1)m-1}\rVert_{2}\geq\sqrt{c_{p}}.$$ This completes the first part of the proof by induction. Next, we observe that we can construct an $M_{t^{s}_{j}}$ following the same procedure outlined above such that $M_{t^{s}_{j}}$ is full rank. Hence, by induction it follows that $M_{t}$ is full rank for all $t$ within an interval. We note that the constant $c_{p}$ can be specified based on the interval, and that the norms of the columns are at least $\sqrt{c_{p}}$ by construction.
Next, we show that $$\sum_{k=t}^{t+(n+1)m-1}W_{k}W^{\top}_{k}\geq c_{p}>0.$$ Let $W_{k}=\beta_{k}\hat{W}_{k}$, where $\lVert\hat{W}_{k}\rVert=1$ and $\beta_{k}=\lVert W_{k}\rVert_{2}$. Denote $\hat{W}^{\top}_{l}\hat{W}_{r}$ by $\delta_{l,r}$. Consider an arbitrary vector $v\in\mathbb{R}^{(n+1)m}$ which satisfies $\lVert v\rVert_{2}=1$. Since $M_{t}$ is full rank, there exist $\alpha_{k}\in\mathbb{R}$, not all zero, such that $v=\sum_{k}\alpha_{k}\hat{W}_{k}.$ Hence $$\displaystyle v^{\top}\left(\sum_{k=t}^{t+(n+1)m-1}W_{k}W^{\top}_{k}\right)v=\sum_{k=t}^{t+(n+1)m-1}v^{\top}W_{k}W^{\top}_{k}v=\sum_{k=t}^{t+(n+1)m-1}\beta^{2}_{k}v^{\top}\hat{W}_{k}\hat{W}^{\top}_{k}v.$$ By construction, $\lVert W_{k}\rVert_{2}=\beta_{k}\geq\sqrt{c_{p}}$. Hence, $$\displaystyle v^{\top}\left(\sum_{k=t}^{t+(n+1)m-1}W_{k}W^{\top}_{k}\right)v\geq c_{p}\sum_{k}\left(\sum_{l}\alpha_{l}\hat{W}_{l}\right)^{\top}\hat{W}_{k}\hat{W}^{\top}_{k}\left(\sum_{l}\alpha_{l}\hat{W}_{l}\right)$$ $$\displaystyle=c_{p}\sum_{k}\sum_{l}\sum_{r}\alpha_{l}\alpha_{r}\delta_{l,k}\delta_{r,k}$$ $$\displaystyle=c_{p}\sum_{l}\sum_{r}\alpha_{l}\alpha_{r}\delta_{l,r}+c_{p}\sum_{k\neq r}\sum_{l}\sum_{r}\alpha_{l}\alpha_{r}\delta_{l,k}\delta_{r,k}$$ $$\displaystyle=2c_{p}\sum_{l}\sum_{r}\alpha_{l}\alpha_{r}\delta_{l,r}+c_{p}\sum_{k\neq r,k\neq l}\sum_{l}\sum_{r}\alpha_{l}\alpha_{r}\delta_{l,k}\delta_{r,k}$$ $$\displaystyle=2c_{p}\sum_{l}\sum_{r}\alpha_{l}\alpha_{r}\delta_{l,r}+c_{p}\sum_{k}\sum_{l\neq k}\sum_{r\neq k}\alpha_{l}\alpha_{r}\delta_{l,k}\delta_{r,k}.$$ (73) Also, $$\displaystyle\sum_{l\neq k}\sum_{r\neq k}\alpha_{l}\alpha_{r}\delta_{l,k}\delta_{r,k}=\left(\sum_{l\neq k}\alpha_{l}\hat{W}_{l}\right)^{\top}\hat{W}_{k}\hat{W}^{\top}_{k}\left(\sum_{l\neq k}\alpha_{l}\hat{W}_{l}\right)\geq 0.$$ (74) Then, using Eq.
(74) and the fact that $\lVert v\rVert^{2}_{2}=\sum_{l}\sum_{r}\alpha_{l}\alpha_{r}\delta_{l,r}=1$ in Eq. (73), we get that $$v^{\top}\left(\sum_{k=t}^{t+(n+1)m-1}W_{k}W^{\top}_{k}\right)v\geq 2c_{p}>c_{p}.$$ (75) Then, from Eq. (75) and given that $q=n_{c}=(n+1)m$, for any $t$ s.t. $t^{s}_{j}\leq t\leq t_{j}$, we get $$v^{\top}\left(\sum_{k=t}^{t+j^{*}n_{c}-1}W_{k}W^{\top}_{k}\right)v\geq(1/n_{c})c_{p,j}(j^{*}n_{c}).$$ Set $\gamma=1/n_{c}$. Then, following the argument in the proof of Lemma 4 we get that $$\sum_{k=t}^{t+j^{*}n_{c}+n-1}\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right]\left[x^{\top}_{k},u^{\top}_{k}\right]\geq\tilde{\gamma}c_{p,j}(j^{*}n_{c})>0.$$ By definition, $H=j^{*}n_{c}+n$. And so, from Eq. (18) it follows that $n\leq H/2$, which in turn implies that $j^{*}n_{c}\geq H/2$. Hence, $$\sum_{k=t}^{t+H-1}\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right]\left[x^{\top}_{k},u^{\top}_{k}\right]\geq\tilde{\gamma}c_{p,j}H/2.$$ Extending this sum over the duration of the interval $j$, which contains $2^{j-1}$ periods of length $H$, each of which by design satisfies the previous equation, we get $$\sum_{k=t^{s}_{j}}^{t^{s}_{j}+H_{j}-1}\left[\begin{array}[]{c}x_{k}\\ u_{k}\end{array}\right]\left[x^{\top}_{k},u^{\top}_{k}\right]\geq\tilde{\gamma}c_{p,j}(2^{j-1}H)/2.$$ This establishes persistence of excitation. Then, the final result follows from the application of Theorem 1 to Lemma 1. $\blacksquare$
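As a closing numerical sanity check, the two elementary facts about the doubling schedule used repeatedly in these proofs (with $H_{j}=2^{j-1}H$ and $c_{p,j}=H_{j}^{-1/2}$) can be verified directly; the values of $H$ and the number of epochs below are illustrative:

```python
import math

H = 8    # base period length (illustrative)
N = 12   # number of doubling epochs (illustrative)

for j in range(1, N + 1):
    c_pj = (2 ** (j - 1) * H) ** -0.5                       # c_{p,j} = H_j^{-1/2}
    total = sum(2 ** (k - 1) * H for k in range(1, j + 1))  # sum_{k <= j} H_k
    lhs = c_pj * total
    rhs = math.sqrt(2 ** (j - 1) * H)
    assert lhs >= rhs  # c_{p,j} * sum_k H_k >= sqrt(2^{j-1} H)

# Geometric-sum bound: sum_{j <= N} 2^{3(j-1)/4} <= 2^{3N/4} / (2^{3/4} - 1)
s = sum(2 ** (3 * (j - 1) / 4) for j in range(1, N + 1))
bound = 2 ** (3 * N / 4) / (2 ** 0.75 - 1)
assert s <= bound
```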
Scherk-Schwarz orbifolds at the LHC D D Smaranda and D J Miller Abstract We examine orbifold theories of Grand Unification with Scherk-Schwarz twisting, performing a renormalisation group analysis and applying low energy experimental constraints. We rule out the minimal SU(5) models, and consider simple extensions including additional fields, such as an additional scalar field, or additional symmetries, such as $SU(5)\times U(1)$ or $E_{6}$. We find that it is very difficult to generate a large enough Higgs mass while simultaneously passing LHC experimental search constraints. 1 Introduction The Large Hadron Collider’s (LHC) triumph in the discovery of the $125\,$GeV Higgs boson [1, 2] has been tempered somewhat by the lack of evidence of physics beyond the Standard Model (SM). Indeed, Supersymmetry, which was for many years the most popular beyond-the-SM speculation, is now facing significant exclusions from LHC data [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. While many of these exclusions are for so-called “simplified models” (see e.g. Refs. [3, 33]), which make assumptions about the superpartner mass spectrum, these exclusions are particularly hard on models where the supersymmetry breaking parameters are assigned fixed values at some high supersymmetry breaking scale. Indeed the constrained Minimal Supersymmetric Standard Model, which gives all the supersymmetry breaking parameters a common value at a high scale, is now mostly ruled out [34, 35]. However, typical supersymmetric models are complicated by having over 100 additional free parameters (due to the lack of knowledge of the supersymmetry breaking mechanism), so plenty of parameter space remains for more non-minimal models of supersymmetry, and the LHC will of course continue to search for them. This naturally leads to two complementary approaches to the search for supersymmetry.
Firstly, not all the supersymmetry breaking parameters are important for LHC searches, so one may make very reasonable assumptions about the model (such as no new source of CP-violation, no Flavour Changing Neutral Currents, and first and second generation universality) to reduce the number of parameters. For example, the phenomenological MSSM (pMSSM) [36] has “only” 19 additional parameters, making its investigation at the LHC much more practicable. Alternatively, one may posit mechanisms of supersymmetry breaking at the high scale to predict relations between the supersymmetry breaking parameters, and the consequent low energy spectrum that can be confronted with data. This is often married with a Grand Unified Theory (GUT) in which the SM gauge groups are unified into a larger group. One interesting idea is that supersymmetry may be broken by the compactification of extra dimensions [37, 38]. An extra dimension may have escaped previous detection by being rolled up with a radius $R$ that is smaller than the resolution of our colliders. Compactifying the extra dimension by imposing additional symmetries results in heavy Kaluza-Klein states [39, 40] and can break supersymmetry as well as any underlying symmetry of grand unification. The symmetries imposed on the compactified dimension and the representation of the states under these symmetries define the model. In ordinary compactifications, the states transform trivially under these extra symmetries, e.g. $\phi(x^{\mu},y+2\pi R)=\phi(x^{\mu},y)$, where $y$ is the extra dimension. However, if the transformation becomes non-trivial, e.g. $\phi(x^{\mu},y+2\pi R)=T\phi(x^{\mu},y)$ where $T$ is a model dependent operator, the theory is said to have a Scherk-Schwarz twist [37, 38]. In this study we will seek to investigate some simple Scherk-Schwarz models of compactification to test if they are phenomenologically compatible with electroweak symmetry breaking and low energy experimental constraints.
In Section 2 we will describe the theoretical framework of Scherk-Schwarz compactification, including the breaking of supersymmetry and the GUT symmetry, and discuss the placement of fermionic matter. We will describe our methodology for investigating these models in Section 3 as well as list our low energy experimental constraints. Following this we will go on to discuss the results for each model in Sections 4 to 7. These will include the Barbieri, Hall and Nomura $SU(5)$ model [41] in Section 4; an $SU(5)$ model with an additional singlet in Section 5; an $SU(5)\times U(1)$ model in Section 6; and an $E_{6}$ model in Section 7. In Section 8 we will summarise our findings and draw some conclusions. 2 Theoretical Framework Here, we briefly review the theoretical framework of Scherk-Schwarz models. In Section 2.1 we will discuss the compactification and introduce our additional symmetries, and show how this may be projected onto 4 dimensions in Section 2.2. We will demonstrate how this can be used to break supersymmetry and the unification gauge group in Sections 2.3 and 2.4 respectively. Finally we will discuss the placement of fermionic matter in Section 2.5. 2.1 Scherk-Schwarz compactification We first briefly review the Scherk-Schwarz compactification of extra dimensions, following the notation of Quiros [42], to provide context for our study and set our notation. Here we initially consider only 5-dimensional models and restrict ourselves to flat compactifications. We split our space-time coordinates into $x^{\mu}$, defined on our usual flat Minkowski space-time, and $y$, our extra coordinate describing the compactified space $C$ of finite size $R$. We work in the regime $E\ll 1/R$ and integrate out the heavy modes of the theory, resulting in a 4-dimensional effective field theory of the 5-dimensional action.
In general, the compact manifold $C$ may be written in terms of a non-compact manifold $\mathcal{M}$, modded out by a discrete group $G$, so that $C=\mathcal{M}/G$. The discrete group $G$ acts freely on the manifold $\mathcal{M}$ via some operators $\tau_{g}$, $$\tau_{g}:\mathcal{M}\rightarrow\mathcal{M},\qquad g\in G,$$ (1) where the $\tau_{g}$ live in the representation space of $G$. The compact space is obtained by identifying points that belong to the same ‘orbit’, $$y\equiv\tau_{g}(y),$$ (2) which in turn must be reflected in the symmetry of our theory. That is, physics should not be dependent on individual points in $y$, but rather on their orbits, and our (5-dimensional) Lagrangian must reflect this identification, $$\mathcal{L}_{5}[\phi(x^{\mu},y)]=\mathcal{L}_{5}[\phi(x^{\mu},\tau_{g}(y))],$$ (3) where $\phi(x^{\mu},y)$ are some generic fields. Clearly a sufficient condition on these fields is $\phi(x^{\mu},\tau_{g}(y))=\phi(x^{\mu},y)$, which leads to what we call ordinary compactification. However, a more general necessary and sufficient condition is $\phi(x^{\mu},\tau_{g}(y))=T_{g}\phi(x^{\mu},y)$, where $T_{g}$ is an appropriate representation of $G$ acting on field space. The case $T_{g}=1$ recovers ordinary compactification, but non-trivial $T_{g}$ results in Scherk-Schwarz compactification, which is the main focus of this paper. The simplest compact space we can use is the circle, $C=S^{1}$, which may be constructed as the identification $\mathds{R}^{1}/\mathds{Z}$, where $\mathds{R}^{1}$ is the real line and $\mathds{Z}$ corresponds to translations by $2\pi nR$ with $n\in\mathds{Z}$. The action of the infinite discrete group $\mathds{Z}$ is given by the operators $\tau_{n}$, acting on elements $y\in\mathds{R}^{1}$ by mapping them to $$\tau_{n}(y)=y+2\pi nR,\qquad n\in\mathds{Z},$$ (4) where $R$ is the radius of the circle $S^{1}$.
Effectively we’ve taken the real number line $\mathds{R}^{1}$ and ‘curled’ it up, thereby restricting the domain of our manifold to $[y,y+2\pi R)$. The first generator of $\mathds{Z}$, the translation $\tau_{1}(y)=y+2\pi R$, will correspond to the only independent ‘twist’ $T$, acting on fields $\phi$ according to, $$\phi(x^{\mu},\tau_{1}(y))=\phi(x^{\mu},y+2\pi R)=T\phi(x^{\mu},y),$$ (5) since the other operators $\tau_{n}$, $n>1$, can be built out of multiple applications of $\tau_{1}$. Unfortunately these 5-dimensional models do not allow for chiral fermions [43], since the 5D Lorentz algebra contains $\gamma^{5}$, so that the smallest irreducible representation is a Dirac fermion. To overcome this we further ‘fold’ our extra dimension, converting our circle into an interval. This is an orbifold compactification. We assign a parity to our fields under a $\mathds{Z}_{2}$ transformation $$y\to\xi(y)=-y,$$ (6) which identifies the lower half of our circle with the upper half, as seen in Figure 1. The manifold $O=S^{1}/\mathds{Z}_{2}$ is no longer smooth but now becomes an orbifold with fixed points at $y=0$ and $\pi R$. As before, our Lagrangian must remain invariant under this transformation, so the fields transform as $$\phi(x^{\mu},\xi(y))=Z\phi(x^{\mu},y),$$ (7) where $Z$ is the parity assignment. When integrating over the 5th dimension and writing our theory as a 4-dimensional effective theory, we will generate an effective tower of Kaluza-Klein states. Under the $\mathds{Z}_{2}$ identification the two component spinors within the Dirac fermion will have opposite parities [44]. Those which have an even assignment (i.e. Neumann boundary conditions) will be allowed to have zero modes whereas the odd ones (i.e. Dirichlet boundary conditions) will not. Therefore, by choosing our parities we can prevent whichever zero mode we like from appearing, allowing us to lift the right-handed fermions and regain what is, in effect, a chiral theory.
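The parity assignments can be pictured concretely through the Kaluza-Klein mode functions: a $Z=+1$ (Neumann) field expands in $\cos(ny/R)$ and keeps its constant $n=0$ zero mode, while a $Z=-1$ (Dirichlet) field expands in $\sin(ny/R)$, whose $n=0$ mode vanishes identically. A minimal numerical sketch of this counting (the normalisations and the choice $R=1$ are purely illustrative):

```python
import numpy as np

R = 1.0  # illustrative compactification radius

def even_mode(n, y):
    # Z = +1 (Neumann) mode functions ~ cos(n y / R): survive at n = 0
    return np.cos(n * y / R)

def odd_mode(n, y):
    # Z = -1 (Dirichlet) mode functions ~ sin(n y / R): vanish at n = 0
    return np.sin(n * y / R)

y = np.linspace(-np.pi * R, np.pi * R, 101)
assert np.allclose(even_mode(2, -y), even_mode(2, y))   # parity +1 under y -> -y
assert np.allclose(odd_mode(2, -y), -odd_mode(2, y))    # parity -1 under y -> -y
assert np.allclose(odd_mode(0, y), 0.0)                 # no zero mode in the odd tower
```

The last assertion is the statement used in the text: choosing a field odd removes its would-be massless 4D state.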
We note that $Z^{2}=\mathbbm{1}$, and $Z,T$ must obey the consistency condition, $$TZT=Z\quad\Leftrightarrow\quad ZTZ=T^{-1}.$$ (8) We can easily see this latter relation geometrically by applying $y\xrightarrow{\tau}y+2\pi R\xrightarrow{\xi}-y-2\pi R\xrightarrow{\tau}-y=\xi(y),$ and requiring an analogous relation between the field operators. Recall that $T$ corresponds to an operator expressing the symmetry defined by $G$, so we can write $T$ as, $$T=\exp(2\pi i\beta^{a}\lambda^{a}),$$ (9) where $\beta^{a}$ parameterises the symmetry transformation, and $\lambda^{a}$ are the Hermitian generators. For infinitesimal transformations, expanding the consistency condition to first order in $\beta$ and dropping $\mathcal{O}(\beta^{2})$ terms, we may rewrite it as, $$\{\beta^{a}\lambda^{a},Z\}=0.$$ (10) We will later be interested in fields that transform as doublets under a global SU(2) symmetry. In this case since $Z^{2}=\mathbbm{1}$, we have two choices for $Z$, i.e. $Z=\sigma^{3}$ or $Z=\pm\mathbbm{1}$. The latter choice $Z=\pm\mathbbm{1}$ requires $T=\pm\mathbbm{1}$ and we recover ordinary compactifications. For the non-trivial case, $Z=\sigma^{3}$ and the generators of $T$ are also Pauli matrices, $\vec{\lambda}=\vec{\sigma}$. If $T=\exp(i\pi\sigma^{3})=-\mathbbm{1}$, we again have an ordinary compactification. The remaining solution $T=\exp(2\pi i(\beta^{1}\sigma^{1}+\beta^{2}\sigma^{2}))$ may be simplified by rotating away the $\sigma^{1}$ direction whilst preserving $\sigma^{3}$, so that, $$Z=\sigma^{3},\qquad T=\exp(2\pi i\alpha\sigma^{2}),$$ (11) where $\alpha$ parameterises the transformation.
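The consistency condition of Eq. (8) can be verified numerically for the $SU(2)$ doublet case of Eq. (11). We use the closed form $e^{i\theta\sigma^{2}}=\cos\theta\,\mathbbm{1}+i\sin\theta\,\sigma^{2}$; the value of $\alpha$ below is an arbitrary illustrative choice:

```python
import numpy as np

sigma2 = np.array([[0, -1j], [1j, 0]])
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

def twist(alpha):
    """Twist operator T = exp(2*pi*i*alpha*sigma^2), using sigma2^2 = 1."""
    theta = 2.0 * np.pi * alpha
    return np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sigma2

alpha = 0.3                      # arbitrary twist parameter
Z, T = sigma3, twist(alpha)

assert np.allclose(Z @ Z, np.eye(2))            # Z^2 = 1
assert np.allclose(Z @ T @ Z, np.linalg.inv(T)) # Z T Z = T^{-1}
assert np.allclose(T @ Z @ T, Z)                # equivalently T Z T = Z
```

The check works for any $\alpha$ because $\sigma^{3}\sigma^{2}\sigma^{3}=-\sigma^{2}$, so conjugation by $Z$ flips the sign of the exponent.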
The hypermultiplets $\mathscr{H}$ consist of complex scalars $A^{i}$, $i=1,2$ and a Dirac spinor $\Psi$, $$\mathscr{H}=(A^{i},\Psi).$$ (12) Note that the minimal fermionic matter field in 5D is the Dirac spinor since it is the lowest weight representation of the Lorentz algebra. Furthermore note that $A^{i}$ transforms as a doublet under $SU(2)_{R}$ [46], the $R$-symmetry of the residual 5D supersymmetry, while $\Psi$ is an $SU(2)_{R}$ singlet. The vector multiplets $\mathscr{V}$ consist of the 5D gauge fields $A_{M}$, $M=0,1,2,3,5$, gauginos $\lambda^{i}$, $i=1,2$, and a scalar $\Sigma$ in the adjoint representation, $$\mathscr{V}=(A_{M},\lambda^{i},\Sigma).$$ (13) $\lambda^{i}$ transforms as a doublet under $SU(2)_{R}$, where $\lambda^{i}$ are symplectic Majorana spinors, $$\lambda^{i}=\begin{pmatrix}\lambda^{i}_{L}\\ \\ \epsilon_{ij}\overline{\lambda}_{jL}\end{pmatrix},\qquad\overline{\lambda}_{jL}\equiv-i\sigma^{2}(\lambda^{j}_{L})^{*},$$ (14) with $\lambda^{i}_{L}$ a left handed Weyl spinor. These are defined in the 5D space-time with, $$\eta_{MN}=\mathop{\mathrm{diag}}(1,-1,-1,-1,-1),\qquad\gamma^{M}=\{\gamma^{\mu},\gamma^{5}\},\qquad\gamma^{5}=\begin{pmatrix}-i&0\\ 0&i\end{pmatrix}\otimes I_{2},$$ (15) and $$\sigma^{\mu}=(\mathbbm{1},\vec{\sigma}),\qquad\overline{\sigma}^{\mu}=(\mathbbm{1},-\vec{\sigma}),$$ (16) where again we emphasize that we’re considering a flat extra dimension, ignoring the brane tension (i.e. not a warped scenario).
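The conventions of Eqs. (15) and (16) can be checked numerically: assembling $\gamma^{\mu}$ from $\sigma^{\mu},\overline{\sigma}^{\mu}$ in the Weyl basis (an assumption on our part, consistent with Eq. (16)) and appending the $\gamma^{5}$ of Eq. (15), the five matrices close the Clifford algebra $\{\gamma^{M},\gamma^{N}\}=2\eta^{MN}$ with the mostly-minus metric:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

sigma = [I2, sx, sy, sz]          # sigma^mu     of Eq. (16)
sigma_bar = [I2, -sx, -sy, -sz]   # sigma-bar^mu of Eq. (16)

def gamma(mu):
    """4x4 gamma matrix in the (assumed) Weyl basis."""
    g = np.zeros((4, 4), dtype=complex)
    g[:2, 2:] = sigma[mu]
    g[2:, :2] = sigma_bar[mu]
    return g

# index order M = 0,1,2,3,5 -> list positions 0..4
gammas = [gamma(mu) for mu in range(4)]
gammas.append(np.kron(np.diag([-1j, 1j]), I2))   # gamma^5 of Eq. (15)

eta = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])     # eta_MN of Eq. (15)
for M in range(5):
    for N in range(5):
        anti = gammas[M] @ gammas[N] + gammas[N] @ gammas[M]
        assert np.allclose(anti, 2 * eta[M, N] * np.eye(4))
```

In particular $(\gamma^{5})^{2}=-\mathbbm{1}$, as required by $\eta_{55}=-1$.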
The on-shell vector multiplet $\mathscr{V}$ is extended off-shell by adding an $SU(2)_{R}$ triplet of real valued auxiliary fields $X^{a}$, $a=1,2,3$, and the hypermultiplet $\mathscr{H}$ is similarly extended by adding a complex doublet of auxiliary fields $F^{i}$, $i=1,2$, $$\mathscr{V}_{\text{on-shell}}=(A_{M},\Sigma,\lambda^{i})\quad\rightarrow\quad\mathscr{V}_{\text{off-shell}}=(A_{M},\Sigma,\lambda^{i},X^{a}),$$ (17) $$\mathscr{H}_{\text{on-shell}}=(A^{i},\Psi)\quad\rightarrow\quad\mathscr{H}_{\text{off-shell}}=(A^{i},\Psi,F^{i}).$$ (18) These fields obey the supersymmetry transformations of Ref. [42]. For the $S^{1}/\mathds{Z}_{2}$ orbifold, the fixed points at $y=0$ and $\pi R$ provide 4-dimensional Minkowski manifolds, and compactification will result in a tower of Kaluza-Klein states as usual. We may restrict which zero-modes appear by setting a field’s $\mathds{Z}_{2}$ parity to $+1$ if the zero-mode is to be allowed and $-1$ if we want to forbid it. Specifically, by choosing $Z=\sigma^{3}$, we assign, $$Z=+1:\qquad A^{\mu},\lambda^{1}_{L},X^{3};\qquad\xi^{1}_{L},$$ (19) $$Z=-1:\qquad A^{5},\Sigma,\lambda^{2}_{L},X^{1,2};\qquad\xi^{2}_{L},$$ (20) where $\xi^{i}$, $i=1,2$ are the corresponding 5D supersymmetry parameters, which are symplectic Majorana spinors.
After orbifolding, the states on the $y=0$ brane will obey reduced supersymmetric transformations [42], and $$\begin{rcases}\delta_{\xi}X^{3}=(\xi^{1}_{L})^{\dagger}\overline{\sigma}^{\mu}\mathcal{D}_{\mu}\lambda^{1}_{L}-i(\xi^{1}_{L})^{\dagger}\mathcal{D}_{5}\overline{\lambda}^{2}_{L}+h.c.\\ \delta_{\xi}(\partial_{5}\Sigma)=-i(\xi^{2}_{L})^{\dagger}\mathcal{D}_{5}\overline{\lambda}^{2}_{L}+h.c.\end{rcases}\quad\Rightarrow\quad\delta_{\xi}(X^{3}-\partial_{5}\Sigma)=(\xi^{1}_{L})^{\dagger}\overline{\sigma}^{\mu}\mathcal{D}_{\mu}\lambda^{1}_{L}+h.c.$$ (21) We see that $X^{3}-\partial_{5}\Sigma$ transforms as a total derivative. The vector multiplet projected onto the brane is then, in the Wess-Zumino gauge, $(A^{\mu},\lambda^{1}_{L},D)$ where $D=X^{3}-\partial_{5}\Sigma$ [47, 48]. Analogously for the hypermultiplet, starting with $\mathscr{H}=(A^{i},\Psi,F^{i})$, with Dirac spinor $\Psi=(\psi_{L},\psi_{R})$, we have the assignment, $$Z=+1:\qquad A^{1},\psi_{L},F^{1};\qquad\xi^{1}_{L},$$ (22) $$Z=-1:\qquad A^{2},\psi_{R},F^{2};\qquad\xi^{2}_{L}.$$ (23) After orbifolding we have, $$\begin{rcases}\delta_{\xi}F^{1}=i\sqrt{2}(\xi^{1}_{L})^{\dagger}\overline{\sigma}^{\mu}\partial_{\mu}\psi_{L}+\sqrt{2}(\xi^{1}_{L})^{\dagger}\partial_{5}\psi_{R}\\ \delta_{\xi}(\partial_{5}A^{2})=\sqrt{2}(\xi^{1}_{L})^{\dagger}\partial_{5}\psi_{R}\end{rcases}\quad\Rightarrow\quad\delta_{\xi}(F^{1}-\partial_{5}A^{2})=i\sqrt{2}(\xi^{1}_{L})^{\dagger}\overline{\sigma}^{\mu}\partial_{\mu}\psi_{L},$$ (24) which also transforms as a total derivative. The off-shell chiral supermultiplet on the $y=0$ brane is $(A^{1},\psi_{L},F)$ with $F=F^{1}-\partial_{5}A^{2}$.
The 5D Lagrangian for the gauge fields is the standard 5D Super Yang-Mills Lagrangian [47], $$\mathcal{L}_{5}=\Tr\left\{-\frac{1}{2}F^{2}_{MN}+(\mathcal{D}_{M}\Sigma)^{2}+i\overline{\lambda}\gamma^{M}\mathcal{D}_{M}\lambda+\vec{X}^{2}-\overline{\lambda}[\Sigma,\lambda]\right\}.$$ (25) The corresponding Lagrangian on the $y=0$ brane will have a standard form corresponding to a 4D chiral multiplet coupled to a gauge multiplet (see Quiros [42]). This gives a bulk and a brane Lagrangian with the added feature of a superpotential $W$ that connects the bulk and brane matter fields via the interaction of chiral superfields on the $y=0$ brane, $W(\Phi_{0},\mathcal{A})$, where by $\Phi_{0}$ we mean any general 4D chiral superfield. The 5D Lagrangian for the hypermultiplet $\mathscr{H}$ components, ignoring the gauge coupling for now, will be, $$\mathcal{L}_{5}=|\partial_{M}A^{i}|^{2}+i\overline{\psi}\gamma^{M}\partial_{M}\psi+|F^{i}|^{2}.$$ (26) The brane Lagrangian involving interactions with matter will then be given by, $$\mathcal{L}_{4}=F\frac{\partial W}{\partial A^{1}}+h.c.=(F^{1}-\partial_{5}A^{2})\frac{\partial W}{\partial A^{1}}+h.c.$$ (27) Integrating out the auxiliary field $F^{1}$ leaves the action, $$S=\int d^{4}x\,dy\left\{|\partial_{M}A^{i}|^{2}+i\overline{\psi}\gamma^{M}\partial_{M}\psi-\delta(y)\left[\left(\partial_{5}A^{2}\frac{\partial W}{\partial A^{1}}+h.c.\right)+\delta(y)\left|\frac{\partial W}{\partial A^{1}}\right|^{2}\right]\right\}.$$ (28) 2.3 Supersymmetry breaking We first demonstrate supersymmetry breaking in the simpler case where gauge symmetry is unaffected. Consider a vector multiplet $\mathscr{V}=(A_{M},\lambda^{i},\Sigma)$ and two Higgs matter hypermultiplets $\mathscr{H}^{a}=(H^{a}_{i},\Psi^{a})$, $a=1,2$, which can be rotated into one another under an $SU(2)_{H}$ flavor symmetry.
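The step from Eq. (27) to Eq. (28) is the usual elimination of an auxiliary field. As a quick check (schematic, keeping only the terms containing $F^{1}$), varying with respect to $F^{1*}$ gives

```latex
\begin{align}
  \mathcal{L} &\supset |F^{1}|^{2}
    + \delta(y)\Big(F^{1}\frac{\partial W}{\partial A^{1}} + h.c.\Big)
    \quad\Rightarrow\quad
    F^{1} = -\,\delta(y)\Big(\frac{\partial W}{\partial A^{1}}\Big)^{*}, \\
  % substituting the solution back in:
  |F^{1}|^{2} &+ \delta(y)\Big(F^{1}\frac{\partial W}{\partial A^{1}} + h.c.\Big)
    \;\longrightarrow\;
    (1 - 2)\,\delta(y)^{2}\Big|\frac{\partial W}{\partial A^{1}}\Big|^{2}
    = -\,\delta(y)^{2}\Big|\frac{\partial W}{\partial A^{1}}\Big|^{2}.
\end{align}
```

This reproduces the singular $\delta(y)^{2}$ piece of Eq. (28), while the $\partial_{5}A^{2}\,(\partial W/\partial A^{1})$ cross term of Eq. (27) passes through the elimination unchanged.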
The 5D action will then be invariant under $SU(2)_{R}\times SU(2)_{H}$ with the Lagrangian of Eq. (30), as long as the fields are in the appropriate representations, e.g. $\lambda^{i}\sim(\mathbf{2}_{R},1_{H})$, $\Psi^{a}\sim(1_{R},\overline{\mathbf{2}}_{H})$, $H^{a}_{i}\sim(\mathbf{2}_{R},\overline{\mathbf{2}}_{H})$, with the subscripts $R,H$ referring to $SU(2)_{R}$ or $SU(2)_{H}$. With our choice of $Z=\sigma^{3}$ we then have the eigenvalues, $$Z=+1:\qquad\lambda^{1}_{L},V_{\mu};\qquad H^{1}_{1},\psi^{1}_{R};\quad H^{2}_{2},\psi^{2}_{L},$$ (31) $$Z=-1:\qquad\lambda^{2}_{L},V_{5},\Sigma;\quad H^{1}_{2},\psi^{1}_{L};\quad H^{2}_{1},\psi^{2}_{R},$$ (32) which forbids massless Kaluza-Klein modes for the $Z=-1$ states. The parity operator may be written as a product of operators acting on either the $SU(2)_{R}$ or $SU(2)_{H}$ symmetries, $$Z=\pm(\sigma^{3})_{R}\otimes(\sigma^{3})_{H}\otimes i\gamma^{5},$$ (33) where the $i\gamma^{5}$ acts only on the spinor indices of the representations to project the left/right handed chirality of the Dirac spinors. Extending the twist operator $T$ to $SU(2)_{R}\times SU(2)_{H}$ gives, $$T=e^{2\pi i\alpha\sigma^{2}}\otimes\left(-e^{2\pi i\gamma\sigma^{2}}\right),$$ (34) where $\alpha$ parameterises the $SU(2)_{R}$ symmetry, and $\gamma$ the $SU(2)_{H}$. Under this twist, fields $\phi$ must obey our boundary conditions, $$\phi(x^{\mu},y+2\pi R)=e^{2\pi i\alpha\sigma^{2}}\phi(x^{\mu},y),$$ (35) where to illustrate the argument we’ve just taken the action dictated by the $SU(2)_{R}$ field space. The above is solved by $$\phi(x^{\mu},y)=e^{i\alpha\sigma^{2}y/R}\tilde{\phi}(x^{\mu},y),$$ (36) where $\tilde{\phi}(x^{\mu},y+2\pi R)=\tilde{\phi}(x^{\mu},y)$ is a periodic field in $y$ and can in turn be expanded into its KK modes.
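That Eq. (36) satisfies the boundary condition of Eq. (35) can be checked numerically for a single Kaluza-Klein mode; the mode number, doublet components, and the values of $R$ and $\alpha$ below are all illustrative choices of our own:

```python
import numpy as np

R, alpha = 1.0, 0.25   # illustrative compactification radius and twist

s2 = np.array([[0, -1j], [1j, 0]])

def exp_is2(theta):
    # exp(i*theta*sigma^2) = cos(theta) 1 + i sin(theta) sigma^2
    return np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * s2

def phi(y, n=3):
    """Doublet field of Eq. (36), built from a periodic KK mode
    phi_tilde ~ exp(i n y / R) with an arbitrary constant doublet."""
    phi_tilde = np.exp(1j * n * y / R) * np.array([1.0, 2.0])
    return exp_is2(alpha * y / R) @ phi_tilde

# Eq. (35): phi(y + 2 pi R) = exp(2 pi i alpha sigma^2) phi(y)
y = 0.7
lhs = phi(y + 2 * np.pi * R)
rhs = exp_is2(2 * np.pi * alpha) @ phi(y)
assert np.allclose(lhs, rhs)
```

The check works because the two rotations $e^{i\alpha\sigma^{2}y/R}$ and $e^{2\pi i\alpha\sigma^{2}}$ commute and $\tilde{\phi}$ is strictly periodic.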
Applying this reasoning to our fields, we find, $$\begin{pmatrix}\lambda_{1}\\ \lambda_{2}\end{pmatrix}=e^{i\alpha\sigma^{2}y/R}\begin{pmatrix}\tilde{\lambda}_{1}\\ \tilde{\lambda}_{2}\end{pmatrix},$$ (37) $$\begin{pmatrix}\Psi^{1}\\ \Psi^{2}\end{pmatrix}=\begin{pmatrix}\tilde{\Psi}^{1}\\ \tilde{\Psi}^{2}\end{pmatrix}e^{-i\gamma\sigma^{2}y/R},$$ (38) $$\begin{pmatrix}H^{1}_{1}&H^{1}_{2}\\ H^{2}_{1}&H^{2}_{2}\end{pmatrix}=e^{i\alpha\sigma^{2}y/R}\begin{pmatrix}\tilde{H}^{1}_{1}&\tilde{H}^{1}_{2}\\ \tilde{H}^{2}_{1}&\tilde{H}^{2}_{2}\end{pmatrix}e^{-i\gamma\sigma^{2}y/R},$$ (39) where each acquires an $\alpha$ and/or $\gamma$ parameterised exponential according to its transformation properties under $SU(2)_{R}\times SU(2)_{H}$. Applying this to the Lagrangian of Eq. (30), the kinetic part, or more specifically the $\partial_{5}$ derivative, acts on the boundary conditions giving us effective $4D$ soft SUSY breaking masses as in Barbieri, Hall, and Nomura’s model [41], $$\mathcal{L}_{\cancel{SUSY}}=-\frac{1}{2}\frac{\alpha}{R}(\lambda^{1(0)}_{L}\lambda^{1(0)}_{L}+h.c.)-\left(\frac{\alpha^{2}}{R^{2}}+\frac{\gamma^{2}}{R^{2}}\right)(|h_{u}|^{2}+|h_{d}|^{2})+\frac{2\alpha\gamma}{R^{2}}(h_{u}h_{d}+h.c.)-\frac{\gamma}{R}(\overline{\psi}_{h}\psi_{h}+h.c.),$$ (40) where we’ve labeled the zero-modes of our solutions, $h_{u}=H^{1(0)}_{1}$, $h_{d}=H^{2(0)}_{2}$, $\overline{\psi}_{h}=\overline{\psi}^{2(0)}_{L}$, $\psi_{h}=\psi^{1(0)}_{R}$. In the language of the MSSM, the Scherk-Schwarz twists have generated a universal gaugino mass ($m_{1/2}=\hat{\alpha}$) and Higgs sector terms ($m^{2}_{H_{u}}=m^{2}_{H_{d}}=\hat{\alpha}^{2}$, $\mu=\hat{\gamma}$, $\mu B=-2\hat{\alpha}\hat{\gamma}$) via the $\hat{\alpha}\equiv\alpha/R$, $\hat{\gamma}\equiv\gamma/R$ parameters controlling the $SU(2)_{R}\times SU(2)_{H}$ breaking.
2.4 Gauge Breaking We have seen how the Scherk-Schwarz compactification provides supersymmetry breaking, but it can also break our GUT’s gauge symmetry $\mathscr{G}$ to a subgroup $\mathscr{H}$ on the brane. To do this, we extend the definition of the parity assignment on the fields with non-trivial gauge structure to, $$A_{M}^{A}(x^{\mu},-y)=\alpha^{M}\Lambda^{AB}A^{B}_{M}(x^{\mu},y),$$ (41) $$\psi(x^{\mu},-y)=\lambda_{R}\otimes(i\gamma^{5})\psi(x^{\mu},+y),$$ (42) where $\alpha^{M}=\pm 1$ are the previous parity assignments, $\Lambda^{AB}$ is a matrix with $\Lambda^{2}=1$ and eigenvalues $\pm 1$, and $\lambda_{R}$ is a hermitian matrix acting on the representation space of the field $\psi_{R}$. In order to keep the bulk kinetic term $F^{A}_{MN}F^{A\,MN}$ invariant, $\Lambda$ must satisfy, $$f^{ABC}=\Lambda^{AA^{\prime}}\Lambda^{BB^{\prime}}\Lambda^{CC^{\prime}}f^{A^{\prime}B^{\prime}C^{\prime}},$$ (43) where $f^{ABC}$ are the structure constants of the gauge group. Since $\Lambda$ has eigenvalues $\pm 1$ it can be written in a diagonal basis as $\Lambda^{AA^{\prime}}=\delta^{AA^{\prime}}\eta^{A^{\prime}}$, with $\eta^{A^{\prime}}=\pm 1$. In this basis we have, $$f^{ABC}=\eta^{A}\eta^{B}\eta^{C}f^{ABC},$$ (44) where there is no summation over repeated indices. We are free to choose whatever parity assignments $\eta^{A}$ we like, and break the gauge symmetry, as long as they obey this constraint. Conversely, setting all $\eta^{A}$ to $1$ recovers the trivial case of $\Lambda=1$, $\lambda_{R}=1$, maintaining the gauge symmetry. To break our group $\mathscr{G}$ to a subgroup $\mathscr{H}$ we must therefore keep the parities of field components in the directions corresponding to the generators of $\mathscr{H}$ even, while setting the others to be odd. We simplify the treatment by choosing the $\eta^{A}$’s such that the generators $T^{A}$ are naturally split into two cases.
Firstly, $T^{a}$ with $\eta^{a}=+1$ such that the surviving gauge group has generators $\mathscr{H}=\{T^{a}\}$. These $T^{a}$ transform as $T^{a}\rightarrow\delta^{aa^{\prime}}\eta^{a^{\prime}}T^{a^{\prime}}=T^{a}$, so that the automorphism preserves the subgroup. Secondly, $T^{\hat{a}}$ with $\eta^{\hat{a}}=-1$ such that the broken group has generators $\mathscr{K}=\mathscr{G}/\mathscr{H}=\{T^{\hat{a}}\}$ (and now $T^{\hat{a}}\rightarrow-T^{\hat{a}}$). For example if our gauge group is $SU(2)$ and we choose $a=3$; $\hat{a}=1,2$ we would have $SU(2)$ breaking down to $U(1)$. The $\eta$ assignment will also impact the fields that live in the gauge representation space. Since we require the bulk action be invariant, we require that the coupling, $$igA^{A}_{M}\overline{\psi}\gamma^{M}T^{A}\psi$$ (45) remain invariant. To achieve this the $\lambda_{R}$ matrix must satisfy, $$[\lambda_{R},T^{a}_{R}]=0,\qquad\{\lambda_{R},T^{\hat{a}}_{R}\}=0.$$ (46) Our choice of $\Lambda$ has split our representation into two implicit subspaces, with the $Z$ parity assignment dictated by the (anti-)commutation relations. For example, taking $SU(5)$ as the unification gauge group, we may choose $\Lambda$ such that $T^{a}\in G_{SM},T^{\hat{a}}\in SU(5)/G_{SM}$ so that $\lambda_{R}=\mathop{\mathrm{diag}}(+1,+1,+1,-1,-1)$, and the lowest non-trivial $SU(5)$ representation, $\mathbf{5}$, is naturally separated into $\mathbf{3}\oplus\mathbf{2}$. Fields with $Z=-1$ are prevented from having zero-modes, and acquire a heavy mass of $\mathcal{O}(1/R)$ via the $\partial_{5}$ derivative. The surviving gauge group can use the standard Higgs mechanism to undergo the usual Standard Model electroweak breaking. We noted earlier that we can combine our $Z$ and $T$ transformations to form an alternative $Z^{\prime}$, giving us the equivalent orbifold $\mathds{R}^{1}/(\mathds{Z}_{2}\times\mathds{Z}_{2}^{\prime})$.
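The constraint of Eq. (44) and the (anti-)commutation relations of Eq. (46) are easy to verify numerically for the $SU(2)\rightarrow U(1)$ example above (we assume the standard normalisation $T^{a}=\sigma^{a}/2$):

```python
import numpy as np

# SU(2) structure constants f^{abc} = epsilon^{abc}
f = np.zeros((3, 3, 3))
for a, b, c, s in [(0, 1, 2, 1), (1, 2, 0, 1), (2, 0, 1, 1),
                   (1, 0, 2, -1), (0, 2, 1, -1), (2, 1, 0, -1)]:
    f[a, b, c] = s

# Parity assignment keeping T^3 even: SU(2) -> U(1)
eta = np.array([-1, -1, 1])

# Eq. (44): f^{ABC} = eta^A eta^B eta^C f^{ABC}, entry by entry
for a in range(3):
    for b in range(3):
        for c in range(3):
            assert f[a, b, c] == eta[a] * eta[b] * eta[c] * f[a, b, c]

# Eq. (46): lambda_R = sigma^3 commutes with the unbroken generator
# and anticommutes with the broken ones
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sig]
lam = sig[2]
assert np.allclose(lam @ T[2] - T[2] @ lam, 0)             # [lambda_R, T^3] = 0
for ahat in (0, 1):
    assert np.allclose(lam @ T[ahat] + T[ahat] @ lam, 0)   # {lambda_R, T^1,2} = 0
```

Every non-zero $f^{abc}$ involves all three indices, so the product of $\eta$'s is always $+1$ for this assignment, which is why the breaking is consistent.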
The above gauge breaking argument may be applied to the $\mathds{R}^{1}/(\mathds{Z}_{2}\times\mathds{Z}_{2}^{\prime})$ orbifold. In that case the gauge breaking can be assigned to either $Z$, $Z^{\prime}$, or the translation $T$, or a combination of them (since they are isometries obeying the consistency condition in Eq. 8). The physical symmetry of the theory then consists of the generators $T^{a}$ that simultaneously commute with the chosen forms for $Z,Z^{\prime},T$. If we take $Z\sim\mathop{\mathrm{diag}}(+,+,+,+,+)$ and want to achieve $SU(5)\rightarrow G_{SM}$ breaking, we can choose, $$Z\sim\mathop{\mathrm{diag}}(+,+,+,+,+),\qquad T\sim\mathop{\mathrm{diag}}(+,+,+,-,-).$$ (47) Note that the simultaneously anti-commuting generators $T^{\hat{a}}$ will determine the presence of non-trivial Wilson line phases which can lead to Hosotani breaking [49, 50, 51], depending on the matter content of the theory. The above form of the gauge symmetry breaking assignment is chosen to ensure that we do not have any Wilson line phases present in the 4D theory. To summarise, the actions of our isometries on field space are defined by, $$Z=(\sigma^{3})_{R}\otimes(\sigma^{3})_{H}\otimes\mathop{\mathrm{diag}}(+,+,+,+,+),$$ (48) $$T=e^{2\pi i\alpha\sigma^{2}}\otimes\left(-e^{2\pi i\gamma\sigma^{2}}\right)\otimes\mathop{\mathrm{diag}}(+,+,+,-,-).$$ (49) The Scherk-Schwarz compactification allows us to break both supersymmetry and the unification gauge group on the $y=0$ brane. 2.5 Fermionic Matter: Brane vs Bulk There is some freedom in whether the fermionic matter fields live in the bulk as hypermultiplets via $\mathcal{L}_{5}$, or only on the $y=0$ brane as chiral multiplets via $\mathcal{L}_{4}$. Their placement will impact the number of multiplets required to provide the low energy Standard Model fields. For clarity, in this discussion we will assume an $SU(5)$ gauge structure. We begin with the simplest placement, brane matter.
In this case we use the usual chiral multiplets from an ordinary $SU(5)$ model, i.e. the supersymmetric Standard Model fields $U,D,Q,L,E$ which are contained in the $T_{\mathbf{10}}\sim\mathbf{10}\supset\{Q,U,E\}$ and $F_{\mathbf{\overline{5}}}\sim\overline{\mathbf{5}}\supset\{D,L\}$. These representations are now coupled to the $\mathds{Z}_{2}$ chiral projection of the Higgs hypermultiplets on the brane via the superpotential $W$. We note that when projecting the bulk matter hypermultiplets we form two chiral multiplets defined by either $Z=\pm 1$. The components of the hypermultiplet must transform to maintain gauge invariance in the bulk as dictated by the Lagrangian in Eq. (30). More specifically the components contained in the $Z=+1$ chiral multiplet will transform as the fundamental of the group while those in the $Z=-1$ one will transform as the conjugate, which we denote with a superscript ${}^{c}$. For an arbitrary matter hypermultiplet, coupled to an $SU(5)$ gauge structure, $$\mathscr{A}=(A^{i},\Psi_{a})\qquad\rightarrow\qquad\mathcal{A}=(A^{1},\psi^{A}_{R})\sim\mathbf{5};\quad\mathcal{A}^{c}=(A^{2},\psi^{A}_{L})\sim\overline{\mathbf{5}}.$$ (50) Therefore as usual we have, $$S_{\text{Matter}}=\int d^{4}x\,dy\,\delta(y)\left[\int d^{2}\theta\sum_{j,k=1}^{3}(y_{1})_{jk}T_{\mathbf{10}_{j}}T_{\mathbf{10}_{k}}H^{c}_{\mathbf{5}}+(y_{2})_{jk}T_{\mathbf{10}_{j}}F_{\mathbf{\overline{5}}_{k}}H_{\mathbf{\overline{5}}}+h.c.\right],$$ (51) where $H_{\mathbf{5}}=(H^{1}_{1},\psi^{1}_{R})$, $H_{\mathbf{\overline{5}}}=(H^{2}_{2},\psi^{2}_{L})$, and we’ve introduced $3$ generations denoted by the index structure $j,k$. After orbifolding, the $H_{\mathbf{5}},H_{\mathbf{\overline{5}}}$ automatically acquire a $2$-$3$ (doublet-triplet) splitting and the rest of the model’s phenomenology is analogous to the usual supersymmetric $SU(5)$ GUT.
If on the other hand we put our matter fields as components of hypermultiplets in the bulk we run into another issue. Since all the bulk hypermultiplets will automatically undergo the $2$-$3$ splitting induced by the $T$ action, inserting just one of the chiral analogs $\mathscr{T}_{\mathbf{10}}$, $\mathscr{F}_{\mathbf{\overline{5}}}$ would result in having some of the states in the Standard Model spectrum projected out, i.e. we would not have the correct zero-mode spectrum. To get around this we must add two copies of each $SU(5)$ fermionic matter hypermultiplet, assigned opposite $Z$ parities with respect to each other. That is, we introduce $4$ hypermultiplets $\mathscr{T}_{\mathbf{10}}=\{T_{\mathbf{10}},T^{c}_{\mathbf{10}}\}$, $\mathscr{T}^{\prime}_{\mathbf{10}}=\{T^{\prime}_{\mathbf{10}},T^{\prime c}_{\mathbf{10}}\}$, $\mathscr{F}_{\mathbf{\overline{5}}}=\{F_{\mathbf{\overline{5}}},F^{c}_{\mathbf{\overline{5}}}\}$, $\mathscr{F}^{\prime}_{\mathbf{\overline{5}}}=\{F^{\prime}_{\mathbf{\overline{5}}},F^{\prime c}_{\mathbf{\overline{5}}}\}$, which we give the $Z$ assignments, $$\{T_{\mathbf{10}},T^{c}_{\mathbf{10}}\}\rightarrow\{(+)T_{\mathbf{10}},(-)T^{c}_{\mathbf{10}}\},$$ (52) $$\{T^{\prime}_{\mathbf{10}},T^{\prime c}_{\mathbf{10}}\}\rightarrow\{(-)T^{\prime}_{\mathbf{10}},(+)T^{\prime c}_{\mathbf{10}}\},$$ (53) and analogous assignments for $\mathscr{F}_{\mathbf{\overline{5}}}$, $\mathscr{F}^{\prime}_{\mathbf{\overline{5}}}$. With these assignments our Lagrangian becomes that presented in Ref. [41]. However with this matter placement we have another added complexity since the individual hypermultiplets transform under the residual $SU(2)_{R}$ symmetry (note that we assume a trivial flavour action acting on $\mathscr{T},\mathscr{F}$). After orbifolding, the non-trivial SS conditions provide us with squark soft SUSY breaking masses via the kinetic part of the Lagrangian in Eq.
(30), along with a contribution to the trilinear squark coupling $A_{0}$ via the analogue of the $\partial_{5}A^{2}$ term in Eq. (28). 3 Methodology and Constraints The compactification of the high scale extra dimensional model provides us with an effective 4D softly broken supersymmetric model at high energies. We would like to examine this model’s low energy spectrum to ensure that it is phenomenologically consistent with experimental observations. We include as inputs the high scale model parameters and use these to set the soft SUSY breaking parameters. We then use Renormalisation Group Equations (RGEs) to evolve our parameters down to the low scale, where we apply constraints. The RGE running is performed using the FlexibleSUSY [v.2.0.1] [52] spectrum generator with two-loop RGEs provided by SARAH [v.4.12.2] [53]. SARAH also provides the electroweak tadpole conditions. For example, in the $SU(5)$ model discussed in Section 4 the high scale inputs are $\hat{\alpha}$ and $\hat{\gamma}$, which we relate to the soft SUSY breaking parameters via Eqs. (58) and (59), and these are then run down to the low scale where electroweak symmetry is broken and experimental constraints applied. In principle, the electroweak tadpole equations could set our final low energy observables: the ratio of vacuum expectation values (vevs) of the two Higgs doublets, $\tan\beta$, and the $Z$-boson mass. However, for technical reasons it is easier to assign these values at the low scale. This means we have to (temporarily) relax some of our high scale relations between the soft SUSY breaking parameters and the model inputs. We choose to allow our choice of $\tan\beta$ to fix $\hat{\gamma}$ and leave $\mu B$ (the soft SUSY breaking bilinear partner of the Higgs-higgsino mass parameter $\mu$) unfixed. Only at the end of the process will we check if $\mu B=-2\hat{\alpha}\hat{\gamma}$ as required by Scherk-Schwarz compactification. We will refer to this as the ‘Scherk-Schwarz condition’.
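The final acceptance step amounts to a simple comparison; a sketch of such a check is below, where the function name and the fractional tolerance standing in for the RGE-running uncertainty are our own illustrative choices (the paper's actual criterion is a 95% confidence statement):

```python
def satisfies_ss_condition(mu_B, alpha_hat, gamma_hat, rel_tol=0.05):
    """Illustrative check of the 'Scherk-Schwarz condition'
    mu*B = -2 * alpha_hat * gamma_hat, within a fractional tolerance
    standing in for the uncertainty from the RGE running."""
    target = -2.0 * alpha_hat * gamma_hat
    if target == 0.0:
        return mu_B == 0.0
    return abs(mu_B - target) <= rel_tol * abs(target)

# a point satisfying the relation exactly passes; the wrong sign fails
assert satisfies_ss_condition(-2.0 * 1.5 * 0.8, 1.5, 0.8)
assert not satisfies_ss_condition(+2.0 * 1.5 * 0.8, 1.5, 0.8)
```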
In practice, we will not insist this condition is obeyed exactly, due to the uncertainties arising from the RGE running. Instead we will insist that the Scherk-Schwarz condition is obeyed with 95% confidence. We stress that in principle, this is no different than forcing the relation at high energies and searching for values of $\tan\beta$ that satisfy the tadpole equations. To explore the parameter space we employ a ‘seeded random walk’ scanning algorithm. We first sample the parameter space with a uniform distribution to find points that produce EWSB and inspect whether they come close to satisfying our required constraints (such as the correct Higgs mass), with ‘closeness’ being defined by a global $\chi^{2}$. Then we perform a random walk around each point to search for those with a better fit and if such a point is found it becomes the new seed. This is repeated until we find a point that agrees with the required constraints (if it exists). The search is abandoned if computation time exceeds a preset limit. This provides us with points that are theoretically well behaved but may still be experimentally excluded. We therefore then must check LHC and dark matter constraints. We apply LHC bounds and constraints from the ATLAS and CMS collaborations: 1. We insist on a Higgs mass in the range $123\leq m_{H}\leq 127\,$GeV, where we’ve assumed a $2\,$GeV theoretical uncertainty dominates those from the experimental measurement [1, 2]. 2. We require a gluino mass $m_{\tilde{g}}\geq 2\,$TeV [4, 54]. 3. We require a lightest neutralino mass $m_{\tilde{\chi}_{1}^{0}}\geq 537\,$GeV for $\tan\beta\in[10,50]$ [55]. 4. The stop squark mass $m_{\tilde{t}}$ should be above $1\,$TeV [12]. 5. The lightest chargino mass is required to be $m_{\tilde{\chi}^{\pm}_{1}}\geq 460\,$GeV [56]. 6. Any extra gauge boson must have mass $m_{Z^{\prime}}\geq 2.4\,$TeV [57].
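The seeded random walk described above can be sketched as follows; the function names, step law, stopping rules, and toy $\chi^{2}$ are our own illustrative choices, not the code actually used in the scan:

```python
import random

def seeded_random_walk(chi2, sample, step, target=1.0,
                       n_seeds=20, n_walk=300, seed=0):
    """Toy 'seeded random walk': uniformly sample seed points, then
    random-walk each seed, keeping any move that lowers the global
    chi^2, until a point beats `target` (a stand-in for satisfying
    the required constraints)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_seeds):
        point = sample(rng)
        for _ in range(n_walk):
            trial = step(rng, point)
            if chi2(trial) < chi2(point):
                point = trial           # the better point becomes the new seed
            if chi2(point) < target:
                return point
        if best is None or chi2(point) < chi2(best):
            best = point
    return best                         # best effort if no point passes

# toy chi^2 with minimum at (3, -2), standing in for the global fit
chi2 = lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2
sample = lambda rng: (rng.uniform(-10, 10), rng.uniform(-10, 10))
step = lambda rng, p: (p[0] + rng.gauss(0, 0.5), p[1] + rng.gauss(0, 0.5))
pt = seeded_random_walk(chi2, sample, step, target=0.05)
```

In the real scan the walk is over the model inputs (e.g. $\hat{\alpha}$, $\tan\beta$) and the $\chi^{2}$ measures closeness to the constraints listed above.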
For scenarios that pass the LHC constraints and satisfy the Scherk-Schwarz constraint, we apply constraints on the Dark Matter relic density. We use the measurement from Planck [58], $$\Omega_{c}h^{2}=0.1157\pm 0.0023,$$ (54) and include a further $10\%$ uncertainty arising from the mass differences between MicrOmegas [59, 60, 61] and FlexibleSUSY. We therefore accept points with a dark matter relic density smaller than $\Omega_{c}h^{2}=0.1275$ to allow for the possibility of other sources of Dark Matter. 4 The Barbieri, Hall and Nomura SU(5) Model The first model we consider is an $SU(5)$ GUT in 5D, compactified on the $S^{1}/\mathds{Z}_{2}$ orbifold, as proposed by Barbieri, Hall and Nomura [41]. This model contains a vector multiplet $\mathscr{V}=(A_{M},\lambda^{i},\Sigma)$ and two Higgs hypermultiplets $\mathscr{H}^{a}=(H^{a},\Psi^{a})$, $a=1,2$. The 5D action is invariant under an $SU(2)_{R}\times SU(2)_{H}$ global symmetry where the fields have the representations $\lambda^{i}\sim(\mathbf{2}_{R},1_{H})$, $\Psi^{a}\sim(1_{R},\mathbf{2}_{H})$, $H^{a}_{i}\sim(\mathbf{2}_{R},\mathbf{2}_{H})$. The extra dimension is compactified at a scale $1/R=10^{16}\,$GeV to break both the $SU(5)$ symmetry and the supersymmetry. Under the compactification symmetries $y\leftrightarrow-y$ and $y\leftrightarrow y+2\pi R$ the fields transform with $$Z=(\sigma^{3})_{R}\otimes(\sigma^{3})_{H}\otimes\mathop{\mathrm{diag}}(+,+,+,+,+),$$ (55) $$T=e^{2\pi i\alpha\sigma^{2}}\otimes\left(-e^{2\pi i\gamma\sigma^{2}}\right)\otimes\mathop{\mathrm{diag}}(+,+,+,-,-),$$ (56) using the notation of [42], where the final matrix is acting on the $SU(5)$ space.
The derivative with respect to the fifth dimension in the kinetic part of the Lagrangian acts on the boundary conditions giving us effective $4D$ soft SUSY breaking terms of the form [41] $$\mathcal{L}_{\cancel{SUSY}}=-\frac{1}{2}\frac{\alpha}{R}(\lambda^{1(0)}_{L}\lambda^{1(0)}_{L}+h.c.)-\left(\frac{\alpha^{2}}{R^{2}}+\frac{\gamma^{2}}{R^{2}}\right)(|h_{u}|^{2}+|h_{d}|^{2})+\frac{2\alpha\gamma}{R^{2}}(h_{u}h_{d}+h.c.)-\frac{\gamma}{R}(\overline{\psi}_{h}\psi_{h}+h.c.),$$ (57) where we’ve labeled the zero-modes as $h_{u}=H^{1(0)}_{1}$, $h_{d}=H^{2(0)}_{2}$, $\overline{\psi}_{h}=\overline{\psi}^{2(0)}_{L}$, $\psi_{h}=\psi^{1(0)}_{R}$. As previously discussed, we may still choose where to define our matter fields. We may either keep them restricted to the $y=0$ brane or allow them to propagate in the 5D bulk. Restricting them to the brane results in the MSSM at low energies with supersymmetry breaking masses given by $$m_{1/2}=\hat{\alpha},\qquad\mu=\hat{\gamma},\qquad m^{2}_{h_{u},h_{d}}=\hat{\alpha}^{2},\qquad\mu B=-2\hat{\alpha}\hat{\gamma},$$ (58) and $$m^{2}_{\tilde{q},\tilde{u},\tilde{d},\tilde{l},\tilde{e}}=0,\qquad A_{0}=-\hat{\alpha},$$ (59) where we take the GUT scale $M_{GUT}$ as the compactification scale $M_{GUT}=1/R$ and define $\hat{\alpha}=\alpha/R$ and $\hat{\gamma}=\gamma/R$. Note that with the brane matter placement the trilinear $A_{0}$ still gets a contribution from the $\partial_{5}H^{2}(\partial W/\partial H^{1})$ term in Eq. (28). If we instead allow matter in the bulk we gain extra contributions to $A_{0}$ and the squark soft SUSY breaking masses which arise from the $SU(2)_{R}$ symmetry. Then we have soft masses as seen in Eq.
(58), but now have, $$m^{2}_{\tilde{q},\tilde{u},\tilde{d},\tilde{l},\tilde{e}}=\hat{\alpha}^{2},\qquad A_{0}=-3\hat{\alpha}.$$ (60) The extra contributions to $A_{0}$ and the soft SUSY breaking squark masses arise as a consequence of the matter fields transforming under the $SU(2)_{R}$ symmetry. With $1/R\sim 10^{16}\,$GeV, this model will naturally produce a supersymmetry breaking scale of order the GUT scale, far too high for low energy supersymmetry. In [41] the authors set $\alpha$ and $\gamma$ to be extremely small, so that $\hat{\alpha}=\alpha/R$ and $\hat{\gamma}=\gamma/R$ are of order a TeV. Consequently $\alpha$ and $\gamma$ must be of order $10^{-13}$, which presents a fine-tuning problem. Why must they be so small but non-zero? It seems that we have just swapped the gauge hierarchy problem for another fine-tuning problem. We will not tackle this issue here, but only express hope that these small parameters may be explained by the underlying UV completion of the theory. Once our low energy scenarios are generated in FlexibleSUSY we then confront them with the Scherk-Schwarz condition and the experimental constraints outlined in Sec. 3. Since we have allowed $\tan\beta$ to fix $\hat{\gamma}$ our only input parameters are $\hat{\alpha}$ and $\tan\beta$. We show generated scenarios in the $\hat{\alpha}$-$\tan\beta$ plane in Figures 2 and 3. The colour bar represents the mass of the lightest Higgs boson which we would like to identify with the discovered $125\,$GeV resonance. Points denoted with a circle have passed the LHC and DM constraints, and have the desired Higgs mass. In contrast, points that pass the Scherk-Schwarz constraint are denoted by triangles. The fainter points in the background are points that fail these constraints (but are otherwise well behaved). Figure 2 shows scenarios where the matter is kept on the $y=0$ brane, while Figure 3 allows matter to propagate in the bulk.
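The two matter placements amount to a direct transcription of Eqs. (58)-(60) into GUT-scale boundary conditions; a sketch (the function and dictionary keys are our own naming, not FlexibleSUSY's):

```python
def soft_terms(alpha_hat, gamma_hat, matter="brane"):
    """GUT-scale soft SUSY breaking terms of Eqs. (58)-(60)."""
    terms = {
        "m_half": alpha_hat,             # universal gaugino mass, Eq. (58)
        "mu": gamma_hat,
        "m2_hu": alpha_hat ** 2,
        "m2_hd": alpha_hat ** 2,
        "mu_B": -2.0 * alpha_hat * gamma_hat,
    }
    if matter == "brane":                # Eq. (59)
        terms["m2_sfermion"] = 0.0
        terms["A0"] = -alpha_hat
    else:                                # bulk matter, Eq. (60)
        terms["m2_sfermion"] = alpha_hat ** 2
        terms["A0"] = -3.0 * alpha_hat
    return terms

brane = soft_terms(2.0, 0.5, "brane")
bulk = soft_terms(2.0, 0.5, "bulk")
assert brane["A0"] == -2.0 and bulk["A0"] == -6.0
assert brane["m2_sfermion"] == 0.0 and bulk["m2_sfermion"] == 4.0
assert brane["mu_B"] == -2.0
```

This makes explicit that the only difference between the two placements is the sfermion mass and the extra $-2\hat{\alpha}$ in $A_{0}$.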
We see that there is no overlap between the points providing the correct Higgs mass while passing LHC and DM constraints, and those that conform with the Scherk-Schwarz condition. In essence, the Scherk-Schwarz condition prohibits a heavy enough lightest Higgs boson. However, we note that the Higgs boson mass is not too far from its measured value, particularly when matter is allowed to propagate in the bulk, which encourages us to study non-minimal extensions. 5 An SU(5) model with an additional singlet We have seen that a minimal SU(5) model does not support Higgs bosons heavy enough to be the observed 125 GeV resonance. However, the Higgs boson mass may gain contributions from additional states in the spectrum, so we now extend our investigation by considering the model with an additional scalar electroweak singlet. We have two choices for introducing the new scalar: we could introduce a chiral multiplet scalar singlet on the brane $S=(s,\psi_{s})$, or introduce a hypermultiplet $\mathscr{S}=\{s^{i},\Psi_{S}\}$ coupled to the Higgs. Here we will only couple our scalar to the Higgs and to itself, but again consider having matter on either the brane or in the bulk. The most general next-to-minimal superpotential that will result in either of the scalar/matter combinations at low energies is that of a general NMSSM: $$W=W_{\text{Higgs-Fermions}}+\mu H_{u}H_{d}+\lambda H_{u}H_{d}S+\frac{1}{3}\kappa S^{3}+LS+\frac{1}{2}M_{S}S^{2}.$$ (61) Note that we have kept an explicit $\mu H_{u}H_{d}$ term, in contrast to the more usual $\mathds{Z}_{3}$-invariant NMSSM for which this term is absent. This is because the model does indeed produce an effective $\mu$ via the $\partial_{5}$ derivative, thus breaking the $\mathds{Z}_{3}$ symmetry of the NMSSM. Using a shift symmetry we set the linear term $L=0$, and will also set $M_{S}=0$, not to be confused with $m_{S}^{2}$, the soft SUSY breaking mass for the scalar superfield.
Our effective holomorphic terms are then a combination of the Scherk-Schwarz $SU(2)_{H}$ flavor breaking and a contribution arising from the vev of $S$: $$\displaystyle\mu_{\text{eff}}$$ $$\displaystyle=\mu+\frac{1}{\sqrt{2}}\lambda\expectationvalue{S},$$ (62) $$\displaystyle\mu B_{\text{eff}}$$ $$\displaystyle=\mu B+\frac{1}{\sqrt{2}}T_{\lambda}\expectationvalue{S}+\frac{1}{2}\kappa\lambda\expectationvalue{S}^{2}.$$ (63) We also assume that the only soft SUSY breaking masses arise from the Scherk-Schwarz mechanism, so our only additional input parameters are $\kappa$ and $\lambda$. For the simplest case, with the scalar $S$ on the brane as a chiral supermultiplet along with brane-confined matter, we find soft SUSY breaking masses as in Eqs. (58) and (59), and additionally $$m_{S}^{2}=0\qquad T_{\lambda}=-2\lambda\hat{\alpha}\qquad T_{\kappa}=0.$$ (64) Equation (64) also holds for bulk fermions, but must then be used together with Eqs. (58) and (60). The results of our analysis for this model are shown in Figures 4 and 5, where we use the same conventions for the points as in the previous figures. We allowed the additional parameters $\lambda$ and $\kappa$ to vary from $0$ to $0.9$. We see that without enforcing the Scherk-Schwarz constraint, both versions produce an appropriate low energy SM spectrum with the appropriate Higgs mass. The only significant difference is that bulk matter allows a larger range of $\tan\beta$ values, while brane matter requires $\tan\beta\lesssim 25$. It is also interesting to note that all the acceptable points reside in the region with $\hat{\alpha}\gtrsim 2\,$TeV, indicating that these correspond to a SUSY scale that ‘naturally’ falls in the $\mathcal{O}(1\text{--}10)\,$TeV range. However, as for the ‘vanilla’ $SU(5)$ model, the points that pass the Scherk-Schwarz constraint do not overlap with those which pass Higgs and LHC constraints.
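Schematically, the scan and filtering described above can be sketched as follows. This is a toy sketch under our own assumptions: `compute_spectrum` is a hypothetical stand-in for the FlexibleSUSY spectrum generator, and the Higgs-mass window and Scherk-Schwarz tolerance are illustrative; only the $460\,$GeV chargino bound is taken from the text.

```python
# Toy sketch of the constraint pipeline used in the scans; `compute_spectrum`
# is a hypothetical stand-in for the FlexibleSUSY spectrum generator.

def passes_scherk_schwarz(alpha_hat, mu, mu_b, tol=0.05):
    """Eq. (58) implies mu*B = -2*alpha_hat*mu, i.e. 2*alpha_hat + mu*B/mu = 0.

    `tol` is an illustrative fractional tolerance standing in for the
    uncertainties discussed in the text.
    """
    return abs(2.0 * alpha_hat + mu_b / mu) <= tol * abs(alpha_hat)

def passes_low_energy(spectrum, mh_window=(122.0, 128.0), chargino_bound=460.0):
    """Illustrative Higgs-mass window; the 460 GeV chargino bound is from Sec. 3."""
    return (mh_window[0] <= spectrum["mh"] <= mh_window[1]
            and spectrum["m_chargino"] > chargino_bound)

def scan(alpha_hats, tan_betas, compute_spectrum):
    """Scan the alpha_hat-tan(beta) plane, returning the phenomenologically
    accepted points and the points satisfying the Scherk-Schwarz condition."""
    accepted, ss_points = [], []
    for a in alpha_hats:
        for tb in tan_betas:
            spectrum = compute_spectrum(a, tb)
            if spectrum is None:          # no consistent EWSB at this point
                continue
            if passes_low_energy(spectrum):
                accepted.append((a, tb))
            if passes_scherk_schwarz(a, spectrum["mu"], spectrum["mu_b"]):
                ss_points.append((a, tb))
    return accepted, ss_points
```

In this picture, the non-overlap reported above corresponds to `accepted` and `ss_points` remaining disjoint over the whole grid.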
The contribution to the Higgs mass from the additional singlet has not been sufficient to provide agreement. This is a recurrent theme that we see throughout our studies: the points that originate from the Scherk-Schwarz breaking of $SU(2)_{H}$ and $SU(2)_{R}$ have difficulty producing a large enough Higgs mass and/or do not pass LHC constraints. This is more pronounced when we have fermions on the brane than when they are in the bulk, but remains true in both cases. Indeed, in the latter case, the Scherk-Schwarz constraint comes rather close to the acceptable phenomenological region, so a closer look is needed. Perhaps we have been overly conservative with our error estimates for $\mu$ and $\mu B$, and a relaxation of these uncertainties would allow agreement. For example, the maximum Higgs mass for the Scherk-Schwarz points is $m_{H}\approx 116.9\,$GeV, which is close enough to provide some doubt. Alternatively we may place the additional scalar in the bulk. To achieve this, we introduce the $SU(5)$ singlet hypermultiplet $\mathscr{S}=\{s^{i},\Psi_{s}\}$, $\Psi_{s}=(\psi^{s}_{L},\psi^{s}_{R})$, where $s^{i}$, $i=1,2$, transforms only under the $SU(2)_{R}$ residual supersymmetry. Analogous to our previous treatment, we assign the $Z$ parities $$\displaystyle Z=+1$$ $$\displaystyle:\qquad s^{1},\psi^{s}_{L},$$ (65) $$\displaystyle Z=-1$$ $$\displaystyle:\qquad s^{2},\psi^{s}_{R}.$$ (66) This projects out the corresponding zero-modes, which are then coupled to our Higgs in the same way as in Eq. (61). Under $T$, the fields transform according to $$\begin{pmatrix}s^{1}\\ s^{2}\end{pmatrix}=e^{i\alpha\sigma^{2}y/R}\begin{pmatrix}\tilde{s}^{1}\\ \tilde{s}^{2}\end{pmatrix},$$ (67) which again will produce a soft SUSY breaking mass $m_{S}^{2}$ for the scalar, via the $\partial_{5}$ derivative. In this case of a bulk scalar hypermultiplet $\mathscr{S}$ we again have Eq. (58), and either Eq. (59) or Eq. (60) for brane or bulk fermions respectively.
However, instead of Eq. (64) we now have $$m_{S}^{2}=\hat{\alpha}^{2}\qquad T_{\lambda}=-3\lambda\hat{\alpha}\qquad T_{\kappa}=-\kappa\hat{\alpha}.$$ (68) The results for these choices are shown in Figures 6 and 7. Our story seems to repeat itself, as the Scherk-Schwarz condition is not compatible with the Higgs mass and/or LHC constraints. Once again, the gap is much more pronounced for brane matter than bulk matter, and indeed the gap looks almost absent in Figure 7. To be clear that there is indeed no overlap in this latter case, we have also plotted the data of Figure 7 as $2\hat{\alpha}+\mu B/\mu$ against $m_{H}$, with $\tan\beta$ as the point’s colour, in Figure 8. The Scherk-Schwarz condition is exactly realised for points at $2\hat{\alpha}+\mu B/\mu=0$, and the spread of points around this value is due to uncertainties. One can clearly see that these points have no overlap with the correct Higgs mass region. Even when we artificially inflate our uncertainties by a factor of 10 (not shown), we do not find an overlap, though the Higgs mass agreement becomes significantly better. So unfortunately, once again we cannot reconcile this model and Scherk-Schwarz breaking with the Higgs mass and experimental constraints. We may also consider a variant of this model similar to the more usual $\mathds{Z}_{3}$-invariant NMSSM, setting $\mu=0$, so that the superpotential is $$W=W_{\text{Higgs-Fermions}}(\mu=0)+\lambda H_{u}H_{d}S+\frac{1}{3}\kappa S^{3}+LS+\frac{1}{2}M_{S}S^{2}.$$ (69) Here we have effectively set $\hat{\gamma}=0$ and allowed an effective $\mu$ and its supersymmetry breaking partner parameter to be generated entirely through the vev of the new scalar.
That is, $$\displaystyle\mu_{\text{eff}}$$ $$\displaystyle=\frac{1}{\sqrt{2}}\lambda\expectationvalue{S},$$ (70) $$\displaystyle\mu B_{\text{eff}}$$ $$\displaystyle=\frac{1}{\sqrt{2}}T_{\lambda}\expectationvalue{S}+\frac{1}{2}\kappa\lambda\expectationvalue{S}^{2}.$$ (71) Significantly, since $\hat{\gamma}=0$, the Scherk-Schwarz constraint is absent and electroweak symmetry breaking proceeds just as in the NMSSM, with freedom to choose $\tan\beta$. With this constraint gone, we may be hopeful that we can now find an acceptable Higgs mass and avoid LHC constraints. Again we have in principle a choice of brane/bulk scalar, as well as brane/bulk fermions. However, we find that only one combination, scalars on the brane with fermions in the bulk, provides EWSB at all. Unfortunately even this model is unsatisfactory, because it is unable to avoid LHC experimental constraints (mainly chargino and neutralino mass constraints) and constraints on the Dark Matter relic density. Removing any one of these constraints would allow a significant number of allowed scenarios, but enforcing them all at once rules out the entire parameter space. To provide an example, we show the $\hat{\alpha}$-$\tan\beta$ plane with the chargino constraint removed in Figure 9. The colour of the points now depends on the chargino mass, and we can see that they are well below our constraint of $460\,$GeV described in section 3. 6 An SU(5) $\times$ U(1) model Confronted with the inability of the simplest models to generate a heavy enough Higgs boson while avoiding LHC and Dark Matter constraints, we may once again increase the complexity of our model. Next we will consider an $SU(5)$ model with an additional $U(1)$ symmetry, similar to the USSM. The superpotential is identical to that of Eq. (69), but with the added complexity of the low energy gauge group being extended to $G_{SM}\times U(1)$.
The additional $U(1)$ is broken at the SUSY scale via the brane scalar (projected or placed), prior to which we assign our fields appropriate charges. The assignment of these charges is arbitrary and model dependent, but it is useful to set them to correspond with those arising from embedding in some larger group such as $SO(10)$ or $E_{6}$ (see section 7 for more details). As an example, we will choose the $E_{6}$ inspired charge assignments. The $5D$ bulk gauge structure is assumed to be $SU(5)\times U(1)$, and as in the previous examples, the Scherk-Schwarz compactification, with $Z$ and $T$ unchanged, will break this gauge group on the brane down to $G_{SM}\times U(1)$. The $E_{6}$ inspired charges under the $U(1)$ group are [62] $$\begin{array}[]{rlrlrlrlrl}Q_{q}&=\displaystyle\frac{1}{\sqrt{40}},&Q_{l}&=\displaystyle\frac{2}{\sqrt{40}},&Q_{d}&=\displaystyle\frac{2}{\sqrt{40}},&Q_{u}&=\displaystyle\frac{1}{\sqrt{40}},&Q_{e}&=\displaystyle\frac{1}{\sqrt{40}},\\ &&Q_{H_{d}}&=\displaystyle-\frac{3}{\sqrt{40}},&Q_{H_{u}}&=\displaystyle-\frac{2}{\sqrt{40}},&Q_{S}&=\displaystyle\frac{5}{\sqrt{40}}.\end{array}$$ (72) The high scale boundary conditions and soft SUSY breaking masses remain those of the $SU(5)$ model with an additional scalar in section 5, namely Eq. (58) and the appropriate choice of Eqs. (59), (60), (64) and (68), depending on whether the scalar and the fermions are placed on the brane or in the bulk. The difference in the spectra arises due to the presence of the extra $U(1)_{N}$, which modifies the RGEs. In addition, the breaking of $U(1)_{N}$ will produce a $Z^{\prime}$ boson, and we exclude points that violate the ATLAS bounds [63]. Performing our parameter scans for the additional scalar on the brane gives Figures 10 and 11, for brane or bulk fermions respectively.
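As a quick consistency check of the charge assignments in Eq. (72) (our own sanity check, not part of the original analysis), gauge invariance under the extra $U(1)$ requires the charges in each allowed superpotential term to sum to zero:

```python
from math import isclose, sqrt

# Sanity check of the E6-inspired charges of Eq. (72): every term retained in
# the superpotential must be neutral under the extra U(1).
N = sqrt(40.0)
Q = {"q": 1 / N, "l": 2 / N, "d": 2 / N, "u": 1 / N, "e": 1 / N,
     "Hd": -3 / N, "Hu": -2 / N, "S": 5 / N}

# Total U(1) charge of each superpotential term (Yukawas and lambda H_u H_d S).
terms = {
    "lambda H_u H_d S": Q["Hu"] + Q["Hd"] + Q["S"],
    "y_u Q u H_u":      Q["q"] + Q["u"] + Q["Hu"],
    "y_d Q d H_d":      Q["q"] + Q["d"] + Q["Hd"],
    "y_e L e H_d":      Q["l"] + Q["e"] + Q["Hd"],
}
for name, total in terms.items():
    assert isclose(total, 0.0, abs_tol=1e-12), name
```

All four sums vanish, confirming that the listed charges leave the Yukawa couplings and the $\lambda H_{u}H_{d}S$ term invariant.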
We see many regions that pass LHC and Dark Matter constraints, but again the points passing the Scherk-Schwarz constraint do not overlap, though they come close in the case where the scalar is on the brane and the fermions are in the bulk, only finally being excluded by the LHC constraints. This pattern repeats if we allow the additional scalar into the bulk, as seen in Figures 12 and 13. It is interesting that with the additional scalar $S$ in the bulk and fermions on the brane, the constraints favour scenarios with lower $\tan\beta$. We also examined the same model with $\mu$ explicitly set to $0$ (so $\hat{\gamma}=0$), as we did for the model in section 5, to bypass the Scherk-Schwarz constraint. Unfortunately no placement of our fields on the brane or in the bulk was able to produce scenarios with EWSB. Of course, the setting of our $U(1)$ charges need not follow the pattern of $E_{6}$, as the $U(1)$ may be of some completely different origin. Another obvious example would be a $U(1)$ as a remnant of $SO(10)$, in which case the charge assignments would be [64] $$\begin{array}[]{rlrlrlrlrl}Q_{q}&=-1,&Q_{l}&=3,&Q_{d}&=1,&Q_{u}&=3,&Q_{e}&=-5,\\ &&Q_{H_{d}}&=-2,&Q_{H_{u}}&=2,&Q_{S}&=10.\end{array}$$ (73) However, none of our models with these charge assignments, including fields in the bulk or on the brane and with $\mu$ set to $0$ or not, were able to provide satisfactory electroweak scale spectra. 7 An $E_{6}$ model The first model discussed in section 6 carried the $U(1)$ charge assignments that might arise from a larger $E_{6}$ symmetry group. However, if the unification group were indeed $E_{6}$, one would expect other additional fields that may survive down to the electroweak scale.
An example of such a model is the E${}_{6}$SSM [62, 65, 66, 67, 68], which has a superpotential $$\displaystyle W_{E_{6}SSM}=$$ $$\displaystyle W_{MSSM}(\mu=0)+\lambda H_{u}H_{d}S+\lambda_{\alpha\beta}S(H^{d}_{\alpha})(H^{u}_{\beta})+\kappa_{ij}S(D_{i}\overline{D}_{j})+\tilde{f}_{\alpha\beta}S_{\alpha}(H^{d}_{\beta}H_{u})$$ $$\displaystyle+f_{\alpha\beta}S_{\alpha}(H_{d}H^{u}_{\beta})+g^{D}_{ij}(Q_{i}L_{4})\overline{D}_{j}+h^{E}_{i\alpha}e^{c}_{i}(H^{d}_{\alpha})+\mu_{L}L_{4}\overline{L}_{4},$$ (74) where $\alpha,\beta=1,2,3$ and $i,j=1,2$ are generation indices. (For the definitions of these additional fields, see Ref. [68].) Applying the Scherk-Schwarz high scale boundary conditions, with the $\mathbf{27}$ and $\overline{\mathbf{27}}$ representations placed in the bulk, gives $$m_{1/2}=\hat{\alpha},\qquad m^{2}_{h_{u},h_{d},S,H^{u}_{\alpha},H^{d}_{\beta},D_{i},\overline{D}_{j},S_{\alpha},L_{4},\overline{L}_{4}}=\hat{\alpha}^{2},\qquad T_{\xi}=-3\xi\hat{\alpha}$$ (75) and Eq. (60), where $\xi\in\left\{\lambda,\kappa_{ij},\lambda_{\alpha\beta},\tilde{f}_{\alpha\beta},f_{\alpha\beta},g^{D}_{ij},h^{E}_{i\alpha}\right\}$. In practice, we allow $\mu_{L}$ to vary independently, and set the values of $m^{2}_{H_{d}}$, $m^{2}_{H_{u}}$, $m^{2}_{S}$ during EWSB. We would then have to check a new Scherk-Schwarz condition to make sure the full boundary conditions are obeyed. Unfortunately, even without enforcing this new Scherk-Schwarz condition, we find that the boundary conditions on the other parameters at the high scale are so restrictive that we can find no valid low energy scenarios. We note that the implementation of this model is somewhat different from those described earlier, because the Higgs bosons themselves are in the $\mathbf{27}$ and $\overline{\mathbf{27}}$. Therefore the $SU(2)_{H}$ symmetry should be enlarged to encompass the full $\mathbf{27}$ and $\overline{\mathbf{27}}$.
However, here we have taken the simplest route, ignoring this enlarged symmetry and allowing the holomorphic $\mu_{L}L_{4}\overline{L}_{4}$ term to arise from some other, unknown mechanism altogether (that is, allowing it to vary). It is possible that a more non-minimal implementation, in which the $\mathbf{27}$ and $\overline{\mathbf{27}}$ symmetry is fully incorporated, would have more luck in producing a viable phenomenology, but this is beyond the scope of this paper. 8 Conclusions In this investigation we have examined models of Scherk-Schwarz orbifold compactification. In these scenarios, the extra dimension of a $5D$ space is given periodic boundary conditions and rolled up to a radius $R\sim 1/M_{\rm GUT}$; the space is folded to provide an orbifold with fixed points in the standard fashion. Scherk-Schwarz compactification differs from standard orbifold compactification in that it allows non-trivial transformations of the fields under the orbifolding symmetries. This Scherk-Schwarz orbifolding allows for the breaking of both supersymmetry and the GUT symmetry. We apply this compactification to several models of Grand Unification, including $SU(5)$ unification, $SU(5)$ with an additional singlet, $SU(5)\times U(1)$, and an $E_{6}$ inspired model, all with several variations. The Scherk-Schwarz mechanism provides severe constraints on the supersymmetry breaking parameters at the unification scale. We apply these constraints and use Renormalisation Group equations to evolve the theory down to the electroweak scale, where it is confronted with low energy constraints from the LHC, the Dark Matter relic density and the Higgs mass. We find that these boundary conditions are very difficult to combine with a $125\,$GeV Higgs boson.
Generally, these models prefer a lighter Higgs boson and rather low $\tan\beta$, and despite an exhaustive scan and variations in the models we were unable to find parameter choices which simultaneously obeyed all low scale measurement constraints. In cases where the Higgs mass was in the correct range, for example in the $SU(5)$ models with an extra singlet where an effective Higgs-higgsino mass term was entirely generated by the Scherk-Schwarz breaking, the models were ruled out by other low energy constraints such as LHC chargino exclusions. Although we studied several models with many variations, this work does not claim to rule out Scherk-Schwarz compactification in general. One could imagine more complicated gauge groups and extra-dimensional geometries which would change the unification constraints on the supersymmetry breaking masses. Indeed, we saw in the implementation of the E${}_{6}$ gauge group that there is additional freedom in allowing an alternative treatment of the large representations that now include the Higgs. However, we are confident in making the claim that Scherk-Schwarz compactifications of $SU(5)$ models, $SU(5)$ models with an extra singlet, and $SU(5)\times U(1)$ models where the extra dimension is compactified on $S^{1}/\mathds{Z}_{2}$ are not compatible with electroweak scale observations. Acknowledgements DDS would like to thank Brian Alden for help with parallelising codes. DJM acknowledges partial support from the STFC grant ST/P000746/1. References [1] ATLAS Collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Phys. Lett. B716 (2012) 1–29, [arXiv:1207.7214]. [2] CMS Collaboration, S. Chatrchyan et al., Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC, Phys. Lett. B716 (2012) 30–61, [arXiv:1207.7235]. [3] ATLAS Collaboration, G.
Aad et al., Summary of the searches for squarks and gluinos using $\sqrt{s}=8$ TeV pp collisions with the ATLAS experiment at the LHC, JHEP 10 (2015) 054, [arXiv:1507.05525]. [4] ATLAS Collaboration, M. Aaboud et al., Search for squarks and gluinos in final states with jets and missing transverse momentum using 36 fb ${}^{-1}$ of $\sqrt{s}=13$ TeV pp collision data with the ATLAS detector, Phys. Rev. D97 (2018), no. 11 112001, [arXiv:1712.02332]. [5] ATLAS Collaboration, M. Aaboud et al., Search for dark matter and other new phenomena in events with an energetic jet and large missing transverse momentum using the ATLAS detector, JHEP 01 (2018) 126, [arXiv:1711.03301]. [6] ATLAS Collaboration, M. Aaboud et al., Search for supersymmetry in final states with two same-sign or three leptons and jets using 36 fb${}^{-1}$ of $\sqrt{s}=13$ TeV $pp$ collision data with the ATLAS detector, JHEP 09 (2017) 084, [arXiv:1706.03731]. [7] ATLAS Collaboration, M. Aaboud et al., Search for new phenomena using the invariant mass distribution of same-flavour opposite-sign dilepton pairs in events with missing transverse momentum in $\sqrt{s}=13$ TeV pp collisions with the ATLAS detector, Eur. Phys. J. C78 (2018), no. 8 625, [arXiv:1805.11381]. [8] ATLAS Collaboration, M. Aaboud et al., Search for new phenomena with large jet multiplicities and missing transverse momentum using large-radius jets and flavour-tagging at ATLAS in 13 TeV $pp$ collisions, JHEP 12 (2017) 034, [arXiv:1708.02794]. [9] ATLAS Collaboration, M. Aaboud et al., Search for supersymmetry in final states with missing transverse momentum and multiple $b$-jets in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, JHEP 06 (2018) 107, [arXiv:1711.01901]. [10] ATLAS Collaboration, M. Aaboud et al., Search for supersymmetry in events with $b$-tagged jets and missing transverse momentum in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, JHEP 11 (2017) 195, [arXiv:1708.09266]. 
[11] ATLAS Collaboration, M. Aaboud et al., Search for direct top squark pair production in final states with two leptons in $\sqrt{s}=13$ TeV $pp$ collisions with the ATLAS detector, Eur. Phys. J. C77 (2017), no. 12 898, [arXiv:1708.03247]. [12] ATLAS Collaboration, M. Aaboud et al., Search for top-squark pair production in final states with one lepton, jets, and missing transverse momentum using 36 fb ${}^{-1}$ of $\sqrt{s}=13$ TeV pp collision data with the ATLAS detector, JHEP 06 (2018) 108, [arXiv:1711.11520]. [13] ATLAS Collaboration, M. Aaboud et al., Search for a scalar partner of the top quark in the jets plus missing transverse momentum final state at $\sqrt{s}$=13 TeV with the ATLAS detector, JHEP 12 (2017) 085, [arXiv:1709.04183]. [14] ATLAS Collaboration, M. Aaboud et al., Search for supersymmetry in final states with charm jets and missing transverse momentum in 13 TeV $pp$ collisions with the ATLAS detector, JHEP 09 (2018) 050, [arXiv:1805.01649]. [15] ATLAS Collaboration, M. Aaboud et al., Search for direct top squark pair production in events with a Higgs or $Z$ boson, and missing transverse momentum in $\sqrt{s}=13$ TeV $pp$ collisions with the ATLAS detector, JHEP 08 (2017) 006, [arXiv:1706.03986]. [16] ATLAS Collaboration, M. Aaboud et al., Search for chargino-neutralino production using recursive jigsaw reconstruction in final states with two or three charged leptons in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, Phys. Rev. D98 (2018), no. 9 092012, [arXiv:1806.02293]. [17] ATLAS Collaboration, M. Aaboud et al., Search for electroweak production of supersymmetric states in scenarios with compressed mass spectra at $\sqrt{s}=13$ TeV with the ATLAS detector, Phys. Rev. D97 (2018), no. 5 052010, [arXiv:1712.08119]. [18] ATLAS Collaboration, G. Aad et al., Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in $\sqrt{s}=8$ TeV ${pp}$ collisions with the ATLAS detector, Eur. 
Phys. J. C75 (2015), no. 5 208, [arXiv:1501.07110]. [19] ATLAS Collaboration, M. Aaboud et al., Search for the direct production of charginos and neutralinos in final states with tau leptons in $\sqrt{s}=$ 13 TeV $pp$ collisions with the ATLAS detector, Eur. Phys. J. C78 (2018), no. 2 154, [arXiv:1708.07875]. [20] ATLAS Collaboration, M. Aaboud et al., Search for electroweak production of supersymmetric particles in final states with two or three leptons at $\sqrt{s}=13\,$TeV with the ATLAS detector, Eur. Phys. J. C78 (2018), no. 12 995, [arXiv:1803.02762]. [21] ATLAS Collaboration, M. Aaboud et al., Search for pair production of higgsinos in final states with at least three $b$-tagged jets in $\sqrt{s}=13$ TeV $pp$ collisions using the ATLAS detector, Phys. Rev. D98 (2018), no. 9 092002, [arXiv:1806.04030]. [22] ATLAS Collaboration, M. Aaboud et al., Search for supersymmetry in events with four or more leptons in $\sqrt{s}=13$ TeV $pp$ collisions with ATLAS, Phys. Rev. D98 (2018), no. 3 032009, [arXiv:1804.03602]. [23] ATLAS Collaboration, M. Aaboud et al., Search for long-lived charginos based on a disappearing-track signature in pp collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, JHEP 06 (2018) 022, [arXiv:1712.02118]. [24] ATLAS Collaboration, M. Aaboud et al., Search for heavy long-lived charged $R$-hadrons with the ATLAS detector in 3.2 fb${}^{-1}$ of proton–proton collision data at $\sqrt{s}=13$ TeV, Phys. Lett. B760 (2016) 647–665, [arXiv:1606.05129]. [25] ATLAS Collaboration, M. Aaboud et al., Search for long-lived, massive particles in events with displaced vertices and missing transverse momentum in $\sqrt{s}$ = 13 TeV $pp$ collisions with the ATLAS detector, Phys. Rev. D97 (2018), no. 5 052012, [arXiv:1710.04901]. [26] ATLAS Collaboration, M. Aaboud et al., Search for metastable heavy charged particles with large ionization energy loss in pp collisions at $\sqrt{s}=13$ TeV using the ATLAS experiment, Phys. Rev. D93 (2016), no. 
11 112015, [arXiv:1604.04520]. [27] ATLAS Collaboration, G. Aad et al., Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV $pp$ collisions at the LHC using the ATLAS detector, Phys. Rev. D90 (2014), no. 11 112005, [arXiv:1409.5542]. [28] ATLAS Collaboration, G. Aad et al., Search for massive, long-lived particles using multitrack displaced vertices or displaced lepton pairs in pp collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector, Phys. Rev. D92 (2015), no. 7 072004, [arXiv:1504.05162]. [29] ATLAS Collaboration, M. Aaboud et al., Search for new phenomena in different-flavour high-mass dilepton final states in pp collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, Eur. Phys. J. C76 (2016), no. 10 541, [arXiv:1607.08079]. [30] ATLAS Collaboration, M. Aaboud et al., Search for R-parity-violating supersymmetric particles in multi-jet final states produced in $p$-$p$ collisions at $\sqrt{s}=13$ TeV using the ATLAS detector at the LHC, Phys. Lett. B785 (2018) 136–158, [arXiv:1804.03568]. [31] ATLAS Collaboration, M. Aaboud et al., A search for pair-produced resonances in four-jet final states at $\sqrt{s}=$ 13 TeV with the ATLAS detector, Eur. Phys. J. C78 (2018), no. 3 250, [arXiv:1710.07171]. [32] ATLAS Collaboration, M. Aaboud et al., Search for B-L R-parity-violating top squarks in $\sqrt{s}$=13 TeV pp collisions with the ATLAS experiment, Phys. Rev. D97 (2018), no. 3 032003, [arXiv:1710.05544]. [33] C. Csaki, The Minimal supersymmetric standard model (MSSM), Mod. Phys. Lett. A11 (1996) 599, [hep-ph/9606414]. [34] C. Han, K.-i. Hikasa, L. Wu, J. M. Yang, and Y. Zhang, Status of CMSSM in light of current LHC Run-2 and LUX data, Phys. Lett. B769 (2017) 470–476, [arXiv:1612.02296]. [35] P. Bechtle et al., Killing the cMSSM softly, Eur. Phys. J. C76 (2016), no. 2 96, [arXiv:1508.05951]. [36] MSSM Working Group Collaboration, A.
Djouadi et al., The Minimal supersymmetric standard model: Group summary report, in GDR (Groupement De Recherche) - Supersymetrie Montpellier, France, April 15-17, 1998, 1998. hep-ph/9901246. [37] J. Scherk and J. H. Schwarz, Spontaneous breaking of supersymmetry through dimensional reduction, Physics Letters B 82 (1979), no. 1 60 – 64. [38] J. Scherk and J. H. Schwarz, How to get masses from extra dimensions, Nuclear Physics B 153 (1979) 61 – 88. [39] T. Kaluza, Zum Unitätsproblem der Physik, Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys.) 1921 (1921) 966–972, [arXiv:1803.08616]. [40] O. Klein, Quantum Theory and Five-Dimensional Theory of Relativity. (In German and English), Z. Phys. 37 (1926) 895–906. [,76(1926)]. [41] R. Barbieri, L. J. Hall, and Y. Nomura, Softly broken supersymmetric desert from orbifold compactification, Phys. Rev. D66 (2002) 045025, [hep-ph/0106190]. [42] M. Quiros, New ideas in symmetry breaking, in Summer Institute 2002 (SI 2002) Fuji-Yoshida, Japan, August 13-20, 2002, pp. 549–601, 2003. hep-ph/0302189. [,549(2003)]. [43] R. Sundrum, Tasi 2004 lectures: To the fifth dimension and back, in Theoretical Advanced Study Institute in Elementary Particle Physics: Many Dimensions of String Theory (TASI 2005) Boulder, Colorado, June 5-July 1, 2005, pp. 585–630, 2005. hep-th/0508134. [,585(2005)]. [44] C. Csaki, J. Hubisz, and P. Meade, TASI lectures on electroweak symmetry breaking from extra dimensions, in Physics in D >= 4. Proceedings, Theoretical Advanced Study Institute in elementary particle physics, TASI 2004, Boulder, USA, June 6-July 2, 2004, pp. 703–776, 2005. hep-ph/0510275. [45] E. A. Mirabelli and M. E. Peskin, Transmission of supersymmetry breaking from a four-dimensional boundary, Phys. Rev. D58 (1998) 065002, [hep-th/9712214]. [46] M. F. Sohnius, Introducing supersymmetry, Physics Reports 128 (1985), no. 2 39 – 204. [47] A.
Hebecker, 5-D superYang-Mills theory in 4-D superspace, superfield brane operators, and applications to orbifold GUTs, Nucl. Phys. B632 (2002) 101–113, [hep-ph/0112230]. [48] N. Arkani-Hamed, T. Gregoire, and J. G. Wacker, Higher dimensional supersymmetry in 4-D superspace, JHEP 03 (2002) 055, [hep-th/0101233]. [49] Y. Hosotani, Dynamical mass generation by compact extra dimensions, Physics Letters B 126 (1983), no. 5 309 – 313. [50] Y. Hosotani, Dynamical gauge symmetry breaking as the casimir effect, Physics Letters B 129 (1983), no. 3 193 – 197. [51] Y. Hosotani, Dynamics of non-integrable phases and gauge symmetry breaking, Annals of Physics 190 (1989), no. 2 233 – 253. [52] P. Athron, J.-h. Park, D. Stöckinger, and A. Voigt, FlexibleSUSY—A spectrum generator generator for supersymmetric models, Comput. Phys. Commun. 190 (2015) 139–172, [arXiv:1406.2319]. [53] F. Staub, SARAH, arXiv:0806.0538. [54] CMS Collaboration, A. M. Sirunyan et al., Search for supersymmetry in proton-proton collisions at 13 TeV using identified top quarks, Phys. Rev. D97 (2018), no. 1 012007, [arXiv:1710.11188]. [55] ATLAS Collaboration, G. Aad et al., Searches for heavy long-lived charged particles with the ATLAS detector in proton-proton collisions at $\sqrt{s}=8$ TeV, JHEP 01 (2015) 068, [arXiv:1411.6795]. [56] ATLAS Collaboration, M. Aaboud et al., Search for long-lived charginos based on a disappearing-track signature in pp collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, JHEP 06 (2018) 022, [arXiv:1712.02118]. [57] ATLAS Collaboration, M. Aaboud et al., Search for additional heavy neutral Higgs and gauge bosons in the ditau final state produced in 36 fb ${}^{-1}$ of pp collisions at $\sqrt{s}=13$ TeV with the ATLAS detector., JHEP 01 (2018) 055, [arXiv:1709.07242]. [58] G. Hinshaw, D. Larson, E. Komatsu, D. N. Spergel, C. L. Bennett, J. Dunkley, M. R. Nolta, M. Halpern, R. S. Hill, N. Odegard, L. Page, K. M. Smith, J. L. Weiland, B. Gold, N. Jarosik, A. Kogut, M. Limon, S. 
S. Meyer, G. S. Tucker, E. Wollack, and E. L. Wright, Nine-year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Cosmological parameter results, arXiv:1212.5226. [59] G. Belanger, F. Boudjema, A. Pukhov, and A. Semenov, MicrOMEGAs: A Program for calculating the relic density in the MSSM, Comput. Phys. Commun. 149 (2002) 103–120, [hep-ph/0112278]. [60] G. Belanger, F. Boudjema, A. Pukhov, and A. Semenov, micrOMEGAs: Version 1.3, Comput. Phys. Commun. 174 (2006) 577–604, [hep-ph/0405253]. [61] D. Barducci, G. Belanger, J. Bernon, F. Boudjema, J. Da Silva, S. Kraml, U. Laa, and A. Pukhov, Collider limits on new physics within micrOMEGAs4.3, arXiv:1606.03834. [62] S. F. King, S. Moretti, and R. Nevzorov, Exceptional supersymmetric standard model, Phys. Lett. B634 (2006) 278–284, [hep-ph/0511256]. [63] ATLAS Collaboration, ATLAS exotics summary plots. Accessed: 2018-05-03. [64] T. Asaka, W. Buchmuller, and L. Covi, Gauge unification in six-dimensions, Phys. Lett. B523 (2001) 199–204, [hep-ph/0108021]. [65] S. F. King, S. Moretti, and R. Nevzorov, Theory and phenomenology of an exceptional supersymmetric standard model, Phys. Rev. D73 (2006) 035009, [hep-ph/0510419]. [66] P. Athron, S. F. King, D. J. Miller, S. Moretti, and R. Nevzorov, The Constrained Exceptional Supersymmetric Standard Model, Phys. Rev. D80 (2009) 035009, [arXiv:0904.2169]. [67] P. Athron, S. F. King, D. J. Miller, S. Moretti, and R. Nevzorov, Constrained Exceptional Supersymmetric Standard Model with a Higgs Near 125 GeV, Phys. Rev. D86 (2012) 095003, [arXiv:1206.5028]. [68] R. Nevzorov, $E_{6}$ inspired supersymmetric models with exact custodial symmetry, Phys. Rev. D87 (2013), no. 1 015029, [arXiv:1205.5967].
Modeling the Influence of Visual Density on Cluster Perception in Scatterplots Using Topology Ghulam Jilani Quadri and Paul Rosen Abstract Scatterplots are used for a variety of visual analytics tasks, including cluster identification, and the visual encodings used on a scatterplot play a deciding role in the level of visual separation of clusters. For visualization designers, optimizing the visual encodings is crucial to maximizing the clarity of data. This requires accurately modeling human perception of cluster separation, which remains challenging. We present a multi-stage user study focusing on 4 factors—distribution size of clusters, number of points, size of points, and opacity of points—that influence cluster identification in scatterplots. From these parameters, we have constructed 2 models, a distance-based model and a density-based model, using the merge tree data structure from Topological Data Analysis. Our analysis demonstrates that these factors play an important role in the number of clusters perceived, and it verifies that the distance-based and density-based models can reasonably estimate the number of clusters a user observes. Finally, we demonstrate how these models can be used to optimize visual encodings on real-world data. keywords: Scatterplot, clustering, perception, empirical evaluation, visual encoding, crowdsourcing, topological data analysis \onlineid 0 \vgtccategoryResearch \vgtcpapertypeTheory/Model \authorfooter Ghulam Jilani Quadri is with the University of South Florida. E-mail: [email protected]. Paul Rosen is with the University of South Florida. E-mail: [email protected]. \shortauthortitleQuadri and Rosen: Modeling Cluster Perception Using Topology \teaser Demonstration of our density-based model being used to evaluate how differentiable clusters are in scatterplots that vary (a) the number of points shown ($500$, $2500$, and $12500$) or (b) the opacity of points ($1\%$, $5\%$, $10\%$, $50\%$, and $100\%$).
The threshold plots (lower right) show how easy it is to visually identify (horizontally, bigger is better) a certain number of clusters (vertically). For example, when varying the opacity (b), the threshold plot shows that 3 clusters are most clearly visible in the pink ($5\%$ opacity) and green scatterplots ($10\%$ opacity), significantly less visible in the blue scatterplot ($1\%$ opacity), and not visible at all in the purple ($50\%$ opacity) or orange scatterplots ($100\%$ opacity). Designers can use these threshold plots to select the visual encodings that maximize the clarity of data. \vgtcinsertpkg \firstsection Introduction Scatterplots are commonly used to reveal several types of relationships between quantitative variables [36]. Numerous perceptual studies have evaluated the effectiveness of scatterplots in low-level tasks that include assessing trends [21, 60], measuring correlation [64, 41, 7], and average and relative mean judgments [37]. Clustering, in particular, is an aggregate-level task [59, 67, 52] that has been utilized in a variety of applications, e.g., weather forecasting, text analysis, and large-scale data analysis [75, 83, 51, 79]. Clustering occurs when patterns in the data form distinct groups [3, 68]. However, at its core, clustering is an ill-posed problem, as the “correct” clustering depends upon multiple factors [22, 34]. When considering clustering in scatterplots, several factors play a role in how they are perceived. Data aspects, such as the data distribution size/type and the number of data points, can influence the visual presentation. On the other hand, visual encoding properties, such as mark type, size, and opacity, have the potential to influence perceptual judgments [14]. What is not well understood is how these various factors, many of which are under the control of the visualization designer, influence the perception of clusters. 
The presentation of data is particularly important when considering that a biased representation of the data may provide an inaccurate summary, leading to invalid conclusions [74, 49, 38, 45]. In this paper, we explore the multi-factor judgments used in identifying clusters in scatterplots through a crowdsourced user study. Based upon this study, we develop 2 models for the perception of clusters in scatterplots, using a data structure from Topological Data Analysis, called the merge tree [80]. We validate the models on a variety of variables—the number of points, cluster distribution size, size of data points, and opacity of data points—to verify the accuracy of the models and analyze their effects. Our results show that the perception of the number of clusters does indeed depend upon all 4 factors. Moreover, we show that the merge tree-based models do match an average user’s perception of the clusters in a given scatterplot. Finally, we demonstrate how the models can be used to optimize visualization designs. While some variables, such as distribution size, are difficult to control in a visualization, designers can use our models and findings as a guideline to balance the design factors that they do have control over—the number of points shown (e.g., via subsampling [44, 12]), data point size [75, 77], or opacity [54, 58]—to optimize the saliency of the clusters in a visualization. 1 Prior Work We provide brief coverage of clustering in scatterplots and perception of the visual factors evaluated in our study. 1.1 Clustering in Scatterplots Clustering plays an important role in exploring and understanding many types of data [67, 68]. A design factor survey defined clustering as a high-level data characterization—the ability to identify groups of similar items [68]. Amar et al. presented a set of tasks for visual analytics that defined clusters as having “similar attribute values in a given set of data” [3]. 
Taxonomies of Clustering Factors Identifying clusters is directly influenced by the perception of cluster separation, and much of our understanding has come from studying dimension reduction (DR) techniques. Lewis et al. have compared the effectiveness of DR techniques using human judgments and concluded that t-SNE performs better than other commonly used methods when expecting clusters in the data [48]. Etemadpour et al. showed, however, that the performance of DR techniques also depends on data characteristics [32], e.g., the separability of clusters, and later created a user-centric taxonomy of visual tasks related to clustering in DR techniques [31]. A taxonomy of visual cluster separation in scatterplots used a qualitative evaluation to identify 4 important factors—scale, point distance, shape, and position [70]. The taxonomy gives a context to our visual factor selection. Sedlmair and Aupetit evaluated 15 class separation measures for assessing the quality of DR using human input for building a machine learning framework [69] and later extended the framework to include an even greater number of measures [5]. Perceptual Models of Clustering Several works have considered how to model the perception of clusters. For example, a recent study that used eye-tracking to analyze user perception in cluster identification highlighted the role of Gestalt laws, especially proximity and closure [33]. Matute et al. provided a method to quantify and represent scatterplots through skeleton-based descriptors that measured scatterplot similarity [55]. However, their approach does not consider visual encodings in the evaluation. ScatterNet, a deep learning model, captures perceptual similarities between scatterplots to emulate human clustering decisions but lacks explainability in the choices [53]. The scagnostics technique focused on identifying the patterns in scatterplots, including clusters [19]. However, a study by Pandey et al.
showed that they do not reliably reproduce human judgments [63]. Recently, ClustMe used visual quality measures to model human judgments to rank scatterplots [1]. ClustMe performed well in reproducing human decisions for clustering patterns. In contrast, we are studying the extent to which various factors influence the perception of clusters and building explainable models of how humans perceive cluster separation using the merge tree data structure. Clustering in Non-Scatterplot Contexts Clustering has been studied in other types of visualization, such as text [2], maps [79, 51], and bubble charts [75]. A task-based evaluation found that on small data, bar and pie charts outperformed tables, scatterplots, and line charts in clustering tasks [66]. The performance in cluster perception in pie charts is traced back to its effectiveness in facilitating proportional judgments through a part-whole relationship [25, 71]. Similarly, we hypothesize that the relative distance between clusters and the relative density of the image influence cluster identification. 1.2 Factor Selection on Scatterplots Several prior perceptual studies have demonstrated the effect of visual encodings on analysis tasks [74, 38, 14]. A variety of factors influence group or separation perception [84], including color, size, shape [70], orientation [15], texture [4], opacity [58], density [82], motion and animation [30, 78, 11], chart size [43], and others. Other studies have demonstrated a perceptual effect in scatterplots when changing factors in the data, including data distribution types, number of points, the proximity of concentrations of points, data point opacity, and relative density [38, 13, 45, 74, 37, 65, 17]. 
Overdraw in scatterplots, in particular, has been addressed with a variety of techniques, e.g., splatterplot [57], recursive sampling [12], set cover optimization [44], feature-preserving visual abstraction [10], or by applying various clutter reduction techniques [28], e.g., sampling [81, 20, 27, 26] or changing opacity [54]. From this collection of possible factors, we focus our study specifically on the factors that most influence visual density, including the distribution of and distance between concentrations of points, the number and size of data points, and data point opacity in the visualization. Point Distribution Several prior studies have investigated the influence of the distribution of data points on cluster perception. An early study of 8 participants on 24 homogeneous dot patterns studied the impact of varying densities and gaps between 2 square-shaped clusters [61]. Sadahiro later developed a mathematical model to represent cluster perception in point distributions based on 3 factors—proximity, concentration, and density change—and suggested perception is significantly influenced by the concentration and density change [65]. Similarly, the scagnostics density property identifies concentrations of points directly influenced by the distribution of points [82]. Number of Data Points Sadahiro also showed that the higher the number of points in a given area, the higher the chances are that they would be perceived as a cluster, due to increased density [65]. Gleicher et al.’s empirical study asked participants to compare and identify average values in multi-class scatterplots [37]. It demonstrated that judgments are improved with a higher number of points. Size of Data Points The size of symbols is an important factor in visual aggregation tasks in scatterplots [75]. As the size of data points increases, so does the density, which directly influences cluster perception [65]. 
Symbol size also has a direct influence on discriminability in certain tasks [49], e.g., in color perception tasks [73]. Szafir’s study on color-difference perception found that perceived color difference varies by the size of marks [74]. Size also influences search task effectiveness. Gramazio et al.’s study on target search demonstrated that while the quantity of data points has little effect on searching for a target, increasing symbol size reduces search time in a display of random points [38]. Opacity of Data Points As the number of data points increases, scatterplots suffer from overplotting, which obscures the data distribution. Reducing mark opacity can alleviate overplotting to aid various visual analytics tasks [67], e.g., spike detection in dot plots [17]. Furthermore, different opacity levels aid in different visual tasks—while low opacity benefits density estimation for large data, it also makes locating outliers more difficult [58]. Matejka et al. defined an opacity scaling model for scatterplots that is based on the data distribution and crowdsourced responses to opacity scaling tasks [54]. Still, their study did not evaluate how a scatterplot design based on data symbol opacity can affect user performance on visual analysis tasks. Somewhat related to opacity is luminance, which can be modeled using extreme end lightness [50], creating a popout effect [40]. 2 Study Methodology We investigate how visual factors affect subject responses in the task of counting the number of clusters in a scatterplot. From this, we build and analyze 2 models to estimate the number of clusters an average user would perceive. One model is based on the separation distance between distributions, and the other uses the visual density of points. 2.1 Factors Data are presented as point marks (i.e., circles) on the scatterplots, and groups of similar objects form clusters.
Based on our review of prior work, we chose to use a normal distribution to generate clusters, and we selected the following experimental factors: 1. Distribution size ($S$) 2. Number of data points ($N$) 3. Size of data points ($P$) 4. Data point opacity ($O$) 2.2 Experiments Setup We designed our experimental study in 3 stages: 1) a preliminary experiment to calibrate the experimental factors; 2) a crowdsourced Amazon’s Mechanical Turk (AMT) experiment to validate our models; and 3) a follow-up AMT study to elaborate upon 1 of our models. 2.3 Data Generation Datasets are synthesized using 5 input parameters (see Figure 1): stimuli dimensions ($[X\times Y]$ pixels); number of clusters ($C$); distribution size, i.e., standard deviation ($S$ pixels); number of points ($N$); and signal-to-noise ratio ($SNR$). First, $C$ cluster centers are randomly placed within a “safe zone” defined as 1 standard deviation from the stimuli (image) border, in other words, $x\in[S,X-S]$ and $y\in[S,Y-S]$. Each cluster is assigned an equal share of the available points ($N/C$). Points are randomly placed around their cluster center using a normal distribution with a standard deviation of $S$ pixels. Points outside of the image dimensions are discarded without replacement. Next, an additional $N/SNR$ points representing noise are placed randomly using a uniform distribution across the image dimensions. Finally, to generate images, 2 more input parameters are used: point size ($P$ pixels) and point opacity ($O$). The points are drawn as filled circles of $P$ area with $O$ opacity. Example stimuli are shown in Figure 2. In all experiments some inputs were kept constant: • Stimuli dimensions ($[X\times Y]$): $[550_{px}\times 550_{px}]$ — The vertical size was selected such that the image would fit on the majority of desktop monitors without scrolling [72]. The horizontal resolution was selected to match, avoiding any directional bias. 
• Signal-to-noise ratio ($SNR$): $10:1$ — We manually optimized the $SNR$ by looking for a high level of noise that would not overwhelm the clusters. We ended at $10:1$, making the maximum total number of data points in any given dataset $N+0.1\cdot N$. 3 Preliminary Experiment We performed a preliminary user study to test initial hypotheses and calibrate parameters for the larger AMT experiment. Based on our observation and study of prior work, we drafted the following hypotheses: [H1] The distribution size of clusters affects the accuracy in cluster count identification in scatterplots. [H2] The number of data points affects the accuracy in cluster count identification in scatterplots. 3.1 Properties and Data Generation As aligned with previous empirical studies, e.g., [66], we selected parameter values to maintain a reasonable level of difficulty. We designed the task such that the response time for a single stimulus would be 5 to 20 seconds. We selected the following experimental factors: • Number of clusters ($C$): $\{4-12\}$ — The number of clusters was selected using trial-and-error to avoid tasks that were too easy (i.e., trivial to count) or too difficult (i.e., larger number or sparse clusters). • Data point size/area ($P$): $\{20_{px}\}$ — Experimental calibration was not needed for point size, as reasonable values could be determined analytically. Therefore, the point size was fixed in order to calibrate other factors. • Number of data points ($N$): $\{1000,5000,10000\}$ • Distribution size ($S$): $\{20_{px},35_{px},50_{px},65_{px},90_{px}\}$ — $N$ and $S$ were the main factors to test/calibrate. The value ranges were selected using our observation of sample stimuli and judgment of factors from prior work, considering sufficient range, minimum and maximum values, and the number of experimental conditions that could be reasonably tested. • Data point opacity ($O$): $\{100\%\}$ — Points were fully opaque. 
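As a concrete illustration, the data-generation procedure of Section 2.3 with the factor values above can be sketched in Python. This is a minimal sketch under our reading of the text; `generate_dataset` and its signature are our own names, not the authors' code:

```python
import numpy as np

def generate_dataset(C, S, N, snr=10, dims=(550, 550), seed=None):
    """Sketch of the Section 2.3 procedure: C cluster centers placed in a
    "safe zone" S pixels from the border, N points split evenly across the
    clusters (normal distribution, std S), plus N/SNR uniform noise points."""
    rng = np.random.default_rng(seed)
    X, Y = dims
    # Centers restricted to x in [S, X-S], y in [S, Y-S].
    centers = np.column_stack([rng.uniform(S, X - S, C),
                               rng.uniform(S, Y - S, C)])
    clouds = []
    for center in centers:                       # equal share of N/C points
        pts = rng.normal(center, S, size=(N // C, 2))
        # Points outside of the image are discarded without replacement.
        keep = (pts[:, 0] >= 0) & (pts[:, 0] < X) & \
               (pts[:, 1] >= 0) & (pts[:, 1] < Y)
        clouds.append(pts[keep])
    noise = np.column_stack([rng.uniform(0, X, N // snr),
                             rng.uniform(0, Y, N // snr)])
    return centers, np.vstack(clouds + [noise])

centers, pts = generate_dataset(C=6, S=40, N=5000, seed=1)
```

Rendering the returned points as filled circles of area $P$ with opacity $O$ would then produce a stimulus.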
The dependent variable we tested was: • User-selected number of clusters ($U$): $[1-15]$ Dataset generation for the preliminary experiment was done in the following manner—for every combination of $S$ and $N$, $500$ stimuli (i.e., images) were generated with a random number of clusters, $C$. Other parameters were fixed as described, leading to a pool of $|S|\times|N|\times 500=7500$ stimuli. 3.2 Study Procedure We developed a webpage for the experiments, where each participant was shown 50 images from the pool of $7500$, one at a time, and asked to report the number of clusters they could see. Answers were recorded using a drop-down box with options $1-15$. The maximum allocated time for each task was 20 seconds. At the expiration of time, the page was automatically advanced. To mitigate any effects or bias, we placed a blank screen between every 2 tasks [42]. At the beginning of the experiment, we included a brief introduction to clustering and 3 training tasks for each participant, which were similar to the study tasks that followed. The experiment was expected to last 20-30 minutes, including demographic details and training tasks. We recruited 30 participants from the College of Engineering at the University of South Florida for the IRB-approved study. Participant ages ranged from 18-28 ($\mu_{age}$=23), with 24 males and 6 females. No compensation was provided. In total $50$ trials $\times$ $30$ participants = $1500$ responses were collected. While performing data quality checks on the responses, we found discrepancies—participants responding to stimuli in less than 1 second or those with responses of 1 cluster to all stimuli—in 4 participants' results and removed them from analysis, leaving $1300$ responses ($26\times 50$). We further identified and removed $161$ stimuli that had been reused from the pool, keeping only the first occurrence (we acknowledge this is a flaw in our preliminary study design).
However, since our primary goal was parameter calibration, the experiment still has value; we avoid this bias in the AMT experiment by generating stimuli per participant. This left $1139$ responses for analysis. 3.3 Analysis and Results To measure accuracy for a given scatterplot, $\tau$, we use the differential: ${\mathbb{D}}(\tau)=U_{\tau}-C_{\tau}$, where $U_{\tau}$ is the user response and $C_{\tau}$ is the number of clusters in the data. We analyzed the differential against the independent factors distribution size ($S$) and the number of data points ($N$) using a 2-way ANOVA test. We also calculated partial eta-squared ($\eta^{2}$). We observed that $S$ and $N$ have a significant effect on the accuracy in identifying the number of clusters, ($F_{S}(4,1130)=48.57,p<0.01,\eta^{2}=0.12$) and ($F_{N}(2,1130)=8.29,p<0.01,\eta^{2}=0.02$), respectively. These results confirm H1 and H2. Although we found a significant effect, user accuracy had a low average $\mu_{{\mathbb{D}}}=-3.27$ and a high standard deviation $\sigma_{{\mathbb{D}}}=2.66$ (accurate predictions would have an average of $0$ with a small standard deviation). Figure 3 shows the histogram of differentials, which appears as a truncated normal distribution. The negative shift in $\mu_{\mathbb{D}}$ revealed that the number of points or distribution size alone is insufficient to model the number of clusters users would perceive—an accurate model needs to consider the overlap of clusters. For example, in Figure 2, all images have an identical number of generated clusters, but the interaction between clusters causes differing numbers of clusters to appear. Instead, the distance between clusters or the visual density of the data needs to be considered as an additional factor in cluster perception modeling. The next section introduces models, each of which considers one of these factors.
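For illustration, the differential and its summary statistics can be computed directly. The response pairs below are hypothetical values, not data from the study:

```python
import numpy as np

# Hypothetical (user response U, generated cluster count C) pairs.
responses = [(4, 7), (5, 9), (6, 6), (3, 8)]
D = np.array([u - c for u, c in responses])   # differential D = U - C
mu, sigma = D.mean(), D.std(ddof=1)
# A negative mean differential indicates that users systematically
# undercount the generated clusters.
```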
4 Topology-based Modeling of Clustering We propose 2 models for capturing human perception of clusters based upon approaches from Topological Data Analysis (TDA) [80]. TDA is a set of approaches used in the study of the “shape” of data, including scalar fields, vector fields, and high-dimensional point clouds [76]. Both models capture the clustering structure using a data structure called the merge tree. The merge tree encodes a series of topological events in the form of creation and merging of components (specifically, $0$-dimensional homology groups), based upon properties of the space under a real-valued function. The first model, based upon the distance between cluster centers, is captured using a technique called persistent homology [23]. The second model, based upon the visual density of points, is captured by calculating the join tree of a scalar field [9]. 4.1 Distance-based Model The distance-based model tries to capture human perception of clusters by considering the spatial resolution at which 2 or more cluster distributions will blend to be perceived as 1. We do this using the technique of persistent homology (PH) [23]. We provide a simplified view of PH under our limited context. For a detailed introduction, see [24]. Construction begins with a finite set of points $V$ representing cluster centers embedded in Euclidean space (i.e., their positions on the scatterplot). Given a real number $D\geq 0$, we consider a set of balls of diameter $D$ centered at points in $V$. Continuously increasing the diameter forms a $1$-parameter family of nested unions of balls, $0=D_{0}\leq D_{1}\leq D_{2}\leq\cdots\leq D_{m}=\infty$. If at a given diameter $D_{i}$, 2 balls overlap, we consider these balls as a single component. Figure 4(b) shows an example dataset with 4 values of $D_{i}$. As $D_{i}$ increases, more balls intersect and merge into larger components. At $D_{\infty}$, all balls will overlap, forming a single component.
To compute the PH, the points $V$ form the vertices of a graph. A $1$-simplex (an edge) is formed between 2 points in $V$ if and only if their balls intersect (i.e., the distance between them is $\leq D_{i}$). Sweeping $D_{i}$ from $0\rightarrow\infty$, as $D_{i}$ increases, new edges are added to the graph. Components are efficiently calculated at each step by finding connected components of the graph using the set union data structure. The total computation time is $O(|E|\alpha(|V|))$, where $E$ are the edges of the graph, and $\alpha$ refers to the inverse Ackermann function, an extremely slow-growing function. At $D_{\infty}$, PH forms the complete graph. Therefore, there are $O(|V|^{2})$ edges. Creating the merge tree from the prior construction is relatively simple. The merge tree is parameterized with respect to $D$. At $D_{0}=0$, all cluster center components are born. In other words, the balls have $0$ volume. These birth events appear in the merge tree as $1$ node per cluster, e.g., see the bottom of Figure 4(c). As $D_{i}$ increases, when 2 components first merge at a given $D_{i}$, a merge node is added to the merge tree at $D_{i}$ connecting those components. For example, at $D_{1}$, the purple and pink components intersect, causing them to merge into a single component. From that point forward, 1 of the merged components dies (i.e., no longer exists), while the other takes on the identity of the new merged component (in this context, it does not matter which). Referring back to Figure 4(c), when purple and pink merge at $D_{1}$, pink dies, while purple takes on the identity of the merged component. When the components finally merge into a single component, yellow in our example, this component dies at $\infty$. In other words, no matter how large the balls get, that 1 component will exist. It is also relevant to note that this particular construction has many parallels to single-linkage hierarchical clustering.
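Since all components are born at $D=0$, the sweep just described reduces to a union-find pass over the sorted pairwise distances. The sketch below is our own illustrative code, not the authors'; `distance_persistence` returns the death diameter of each component, which here equals its persistence:

```python
import numpy as np
from itertools import combinations

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def distance_persistence(centers):
    """Every component is born at D=0; when two components' balls first
    touch at diameter D, one of them dies with persistence rho = D.
    The last surviving component never dies (rho = infinity)."""
    n = len(centers)
    edges = sorted((float(np.linalg.norm(np.subtract(a, b))), i, j)
                   for (i, a), (j, b) in combinations(enumerate(centers), 2))
    parent = list(range(n))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:                  # a merge event: one component dies
            parent[rj] = ri
            deaths.append(d)
            if len(deaths) == n - 1:  # everything merged; stop early
                break
    return sorted(deaths) + [float("inf")]

# Two tight pairs of centers far apart: two components survive any
# threshold between 1 and 99.
rho = distance_persistence([(0, 0), (1, 0), (100, 0), (101, 0)])
```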
This model has 2 main limitations: (1) it assumes that clusters are isotropic and have similar distributions; and (2) it requires knowledge about the location of cluster centers. Our next model uses a related framework to overcome these limitations. 4.2 Density-based Model The density-based model attempts to directly identify the relative visual density at which users will differentiate between clusters. The density-based model is found by calculating the join tree of a scalar field. We again provide a simplified treatment—for a detailed description, see [9]. First, a 2D histogram of the visual density is created for the scatterplot (i.e., a density plot). The image plane is divided into a set of grid cells of uniform width and height (selection of this resolution is discussed in our evaluation). Within each grid cell, the number of white pixels is counted, and this is considered the density, $f_{xy}$ (we acknowledge this is not the usual calculation of density, e.g., see [19], which would count the number of black pixels; however, our configuration makes the remainder of the discussion easier). For illustrative purposes, this value is mapped to the range $F\in[0,255]$, where $0$ is empty (i.e., completely black) and $255$ is full (i.e., completely white), as shown in Figure 5(a). The components of the density histogram are identified by sweeping $F$, such that $0=F_{0}<F_{1}<F_{2}<\cdots<F_{m}=\infty$. At each $F_{i}$, histogram cells where $f_{xy}\leq F_{i}$ are extracted and components found by joining neighboring cells (we use the 8 surrounding neighbors). This is computed by treating histogram cells as graph nodes, $V$, iff $f_{xy}\leq F_{i}$. Graph edges, $E$, connect vertices that are neighbors in the density histogram, and connected components are extracted using the set union data structure with performance $O(|E|\alpha(|V|))$. Since only immediate neighbors are considered for connecting, there are $O(|V|)$ edges.
To construct the merge tree, sweeping $F_{i}$ from $0\rightarrow\infty$, nodes are born at the first $F_{i}$ where a new component appears. As $F_{i}$ is increased, the components expand until they merge with another component. When components merge, the component with the more recent birth (i.e., higher $f_{xy}$) dies, while the component with the lower $f_{xy}$ continues. For example, in Figure 5(c), at $F_{1}$, the pink and purple components are about to merge. When they do at $F_{2}$, the pink component dies since it was born more recently (i.e., $f_{pink}>f_{purple}$), and the merged component in purple continues. Once all clusters have merged into a single component, that component dies at $\infty$ (i.e., it always exists, no matter how large $F_{i}$ gets). The value of this model over the distance-based model is that it only requires the input scatterplot. It needs no information about the cluster centers, and it makes no assumptions about the distribution of points within those clusters. 4.3 Persistence Threshold Plot Thus far, the models only encode the clustering structure as a function of distance or as a function of density in the merge tree. The method to select the number of clusters that will be perceived by a user is calculated similarly, irrespective of the underlying model, though the input parameters (distance vs. density) have different meanings. For this, we generate a persistence threshold plot. For a given merge tree, each component has its persistence, $\rho$, calculated. The persistence is the difference between birth and death values of the component (i.e., $\rho=death-birth$; for the distance-based model, birth is always $0$, making $\rho=death$, but the full definition unifies the distance- and density-based models). The fundamental intuition behind persistence is that it measures the relative scale of a feature (e.g., the relative change in density), as opposed to the absolute scale of the feature (e.g., the absolute density value).
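Combining the density sweep of Section 4.2 with this persistence definition, the join-tree computation and the threshold-based cluster count can be sketched as follows. Again, this is our own illustrative code (the union-find helper is repeated so the sketch is self-contained); `density_persistence` takes a small density histogram directly and applies the elder rule described above:

```python
import numpy as np

def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def density_persistence(field):
    """Sweep the 2D density histogram from low to high f, connecting
    8-neighbor cells with union-find. On a merge, the component with the
    higher birth value dies (elder rule), with rho = death - birth; the
    last surviving component gets rho = infinity."""
    H, W = field.shape
    cells = sorted(((float(field[y, x]), y, x)
                    for y in range(H) for x in range(W)))
    parent, birth, rhos = {}, {}, []
    for f, y, x in cells:
        parent[(y, x)] = (y, x)
        birth[(y, x)] = f
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nb = (y + dy, x + dx)
                if nb == (y, x) or nb not in parent:
                    continue          # only cells already in the sweep
                ra, rb = find(parent, (y, x)), find(parent, nb)
                if ra == rb:
                    continue
                # The more recently born (higher-f) component dies here.
                young, old = (ra, rb) if birth[ra] > birth[rb] else (rb, ra)
                rhos.append(f - birth[young])
                parent[young] = old
    rhos.append(float("inf"))         # the last surviving component
    return rhos

def clusters_at_threshold(rhos, T):
    """Number of clusters whose persistence exceeds a threshold T."""
    return sum(r > T for r in rhos)

# A toy 3x3 density histogram: four low-f corner basins separated by a
# high-f wall; four components survive a persistence threshold of 5.
field = np.array([[0, 9, 1],
                  [9, 9, 9],
                  [1, 9, 0]])
rhos = density_persistence(field)
```

Sweeping `clusters_at_threshold` over all breakpoints of `rhos` yields the threshold plot data described next.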
We use persistence as a threshold to model the number of clusters a user would count in a scatterplot and vice versa. This information is represented in a persistence threshold plot or threshold plot. To form the plot, for the threshold $T\in[0,\infty)$, at a given $T_{i}$, we count the number of clusters whose $\rho>T_{i}$. This information is encoded into the line chart (see Figure 6) by plotting the threshold $T$ horizontally and the number of clusters vertically. Given these functions, we have the ability to determine critical thresholds (using either model) for the visual separation of clusters. For example, the red dashed lines in Figure 6(b) show the persistence threshold ($T_{de}$) that corresponds to perceiving 3 clusters and vice versa. With this relationship, our models can now be used to estimate the number of clusters that a user would select in a given scatterplot. 5 Main Experiment We evaluate how well the merge tree models estimate the number of clusters perceived in a scatterplot by studying 3 factors ($S$, $N$, $P$). In addition to revisiting H1 and H2, we include 3 new hypotheses: [H3] Data point size ($P$), having a direct impact on visual density, affects the accuracy in cluster count identification in scatterplots. [H4] Using a persistence threshold correlated to the distribution size ($S$) of normally distributed clusters, the distance-based model will estimate the number of clusters perceived by users. [H5] Using a persistence threshold correlated to the data point size ($P$), the number of data points ($N$), and their interaction effect ($N*P$), the density-based model will estimate the number of clusters perceived by users.
5.1 Properties and Data Generation Using the information learned in the preliminary experiment, the following values were modified for the main experiment (i.e., all others remained the same, see subsection 3.1): • Data point size/area ($P$): $\{1_{px},3_{px},5_{px},7_{px}\}$ — On the low end, $1_{px}$ point size is the minimum possible value. On the high end, $7_{px}$ was chosen in combination with the number of points to limit the maximum visual density to $\sim 30\%$ of a given stimulus. • Number of data points ($N$): $\{500,2500,12500\}$ — To decide the number of data points, we considered that, if data points are uniformly distributed, the maximum visual density is $MVD=\frac{N\cdot P}{X\cdot Y}$, where $P$ is the point area and $[X\times Y]$ are the stimuli dimensions $[550\times 550]$. With a target of $<30\%$, using $P=7_{px}$ and $N=12500$ gives $MVD=0.29$, i.e., $29\%$ of pixels filled. We noted a logarithmic effect in the preliminary experiment. Therefore, logarithmic intervals (base 5) are used. • Distribution size ($S$): $\{25_{px},40_{px},55_{px},70_{px},85_{px}\}$ — The distribution size was chosen to be similar to the preliminary experiment, slightly adjusted to have fixed intervals of $15_{px}$ between values. The data generation process is kept similar to the preliminary experiment. A key difference is that task stimuli are generated for each participant covering all combinations of factors. For each subject, $|S|\times|N|=15$ datasets are generated and rendered into $15\times|P|=60$ scatterplot stimuli. Each participant received similar variability and the same combination of factors in their stimuli. 5.2 Study Procedure This study was designed similarly to the preliminary experiment (see subsection 3.2) with the following variations. Each subject was shown stimuli from their own pool of 60 in random order, and we included a post-test questionnaire, asking participants to describe their criteria for selecting the number of clusters.
We recruited participants from Amazon’s Mechanical Turk (AMT) for the IRB-approved study [8, 18]. Based upon a post hoc power analysis of the preliminary experiment data, we recruited a total of 40 participants (21 male, 19 female; ages: $[18-64]$, median age group: $[25-34]$), limited to the US or Canada. 45% of participants reported having corrected vision. All participants had a HIT approval rate of $\geq 95\%$ and were compensated at US federal minimum wage. In total, $60$ trials $\times$ $40$ participants $=2400$ responses were collected. We carried out data quality checks on responses, and the following responses were eliminated: $9$ responses with task completion times of less than 1 second and $27$ responses that ran out of time, leaving a total of $2364$ responses for analysis. Suitability of Studying Point Size Using AMT Studying visual factors, mark size in particular, in a crowdsourced environment has potential biases due to lack of control of user hardware, retinal size, viewing distance, ambient lighting, etc. For example, search task performance decreases as the viewing angle increases [29]. However, this lack of control is a commonly accepted limitation in crowdsourced studies—numerous recent AMT studies have considered mark size, among other properties that could be impacted by this lack of environmental control, e.g., Szafir’s study of perceived color differences [74], Chung et al.’s evaluation of orderability in visual channels [13], and Kim and Heer’s study of the effectiveness of multiple visual encodings [45]. 5.3 Analysis Methodology We ran our data and user responses through the merge tree-based models. For the distance-based model, we first take the centers of each cluster to build the model. Then, we use the user response to the number of clusters ($U$) to extract a persistence threshold, $T_{di}$.
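One plausible way to realize the threshold-extraction step is to place the threshold between the $U$-th and $(U+1)$-th largest persistence, so that exactly $U$ clusters survive; this midpoint rule is an illustrative assumption, not necessarily the exact procedure used:

```python
# Illustrative (assumed) threshold extraction: choose T between the U-th and
# (U+1)-th largest persistence so exactly U clusters satisfy rho > T.
# Assumes user_count >= 1.
def extract_threshold(persistences, user_count):
    rhos = sorted(persistences, reverse=True)
    if user_count >= len(rhos):
        return 0.0                      # every candidate cluster survives
    hi = rhos[user_count - 1]           # U-th largest persistence
    lo = rhos[user_count]               # (U+1)-th largest persistence
    return (hi + lo) / 2.0              # midpoint separates the two

T = extract_threshold([0.9, 0.7, 0.65, 0.2, 0.1], user_count=3)
print(T)  # 0.425 -> exactly 3 clusters have rho > 0.425
```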
After generating the threshold for all stimuli, a linear regression, using linear least squares, is calculated for $T_{di}$ on the factor distribution size, $T^{S}_{di}(s)=c_{1}\cdot s+c_{2}$, where the distribution size, $s$, is the input, and $c_{1}$ and $c_{2}$ are calculated by the regression. Figure 7(a) shows the resulting regression. The density-based model is built by using the scatterplot to generate a visual density histogram, which is the input to the model. Then, the user response to the number of clusters ($U$) is used to extract a persistence threshold, $T^{*}_{de}$. For the density-based model, multiple factors are tested ($N$, $P$, and $N*P$), each requiring its own linear regression, i.e., $T_{de}^{N}(n)=c_{1}\cdot n+c_{2}$; $T_{de}^{P}(p)=c_{1}\cdot p+c_{2}$; and $T_{de}^{N*P}(n,p)=c_{1}\cdot n+c_{2}\cdot p+c_{3}$. Threshold functions ($T^{S}_{di}$ and $T^{*}_{de}$) from the merge tree are used to calculate the model-predicted number of clusters. To measure the accuracy of the user response on a given scatterplot, $\tau$, we define new differentials, ${\mathbb{D}}^{S}_{di}$ and ${\mathbb{D}}^{*}_{de}$, for the distance- and density-based models, respectively: ${\mathbb{D}}_{di}^{S}(\tau)=U_{\tau}-C_{di}(T^{S}_{di}(\tau))$     ${\mathbb{D}}_{de}^{*}(\tau)=U_{\tau}-C_{de}(T^{*}_{de}(\tau))$ where $U_{\tau}$ is the user response, and $C_{di}$ and $C_{de}$ are the number of clusters produced by the models using the given threshold. We use the value of the differentials (${\mathbb{D}}$, ${\mathbb{D}}^{S}_{di}$, and ${\mathbb{D}}^{*}_{de}$) as the primary measure to analyze the effects of the factors on cluster counting. The histograms of the differentials for both models can be found in Figure 8. The study followed a within-subjects design, where all 40 subjects were exposed to the same treatments. Hence, we use repeated measures (RM) ANOVA to analyze the effects of the factors on ${\mathbb{D}}$.
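The per-factor regression and the differential can be sketched with NumPy's least squares; the threshold values below are made up for illustration:

```python
import numpy as np

# Sketch of the per-factor linear regression T(s) = c1*s + c2, fit by linear
# least squares; data values are hypothetical, not from the study.
def fit_threshold(factor_values, thresholds):
    A = np.column_stack([factor_values, np.ones(len(factor_values))])
    (c1, c2), *_ = np.linalg.lstsq(A, np.asarray(thresholds), rcond=None)
    return c1, c2

sizes = [25, 40, 55, 70, 85]               # distribution sizes S (px)
T_di  = [0.12, 0.19, 0.26, 0.33, 0.40]     # hypothetical extracted thresholds
c1, c2 = fit_threshold(sizes, T_di)

# Differential: user cluster count minus model count at the regressed threshold.
def differential(user_count, model_count):
    return user_count - model_count

print(round(float(c1), 4), round(float(c2), 4))
```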
For some results, due to violations of sphericity according to Mauchly’s test, the reported degrees of freedom and $p$-values are Greenhouse-Geisser corrected (highlighted in green) [39, 56]. Along with RM ANOVA, we calculated partial eta-squared ($\eta^{2}$). As per Cohen’s guidelines for measures of $\eta^{2}$: $0.01$ denotes a small effect, $0.06$ a medium effect, and $0.14$ a large effect [16]. 5.4 Results 5.4.1 Model Accuracy The distance- and density-based models both successfully estimated user perception for counting clusters. Figure 8 shows the performance of all models in terms of the differential. From our analysis, we observed that the highest estimation accuracy was achieved using the density-based model; from best to worst, ${\mathbb{D}}_{de}^{N*P}$: ($\mu=0.18$, $\sigma=1.58$); ${\mathbb{D}}_{de}^{P}$: ($\mu=0.50$, $\sigma=1.67$); and ${\mathbb{D}}_{de}^{N}$: ($\mu=-0.53$, $\sigma=2.14$). The distance-based model performs next best, ${\mathbb{D}}_{di}^{S}$: ($\mu=1.12$, $\sigma=2.64$), while using no model performed the worst, ${\mathbb{D}}$: ($\mu=-3.74$, $\sigma=3.00$). 5.4.2 Factor Effect Analysis Without a Model We performed 3-factor RM ANOVA testing to analyze the factors distribution size ($S$), number of points ($N$), and point size ($P$) in terms of their effect on the differential without a model, ${\mathbb{D}}$. We observed that the distribution size ($S$) and the number of points ($N$) had significant effects on the differential, ${\mathbb{D}}$, with ($F_{S}(4,2304)=286.11,p<0.001,\eta^{2}=0.32$) and ($F_{N}(1.98,1576.43)=33.98,p<0.001$, $\eta^{2}=0.029$), respectively. On the other hand, data point size ($P$) failed to reach significance, with ($F_{P}(3,2304)=0.21,p=0.889$, $\eta^{2}=0.0002$). We also tested for interaction effects and only observed a significant effect between $S$ and $N$, ($F_{S*N}(8,2304)=8.18,p<0.001,\eta^{2}=0.028$).
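The effect-size bookkeeping used throughout the results reduces to a small helper; the cutoffs are Cohen's guidelines as quoted above, and the example inputs are the reported $\eta^{2}$ values for $S$ and $N$:

```python
# Partial eta-squared from ANOVA sums of squares, with Cohen's effect-size
# labels as quoted in the text (0.01 small, 0.06 medium, 0.14 large).
def partial_eta_squared(ss_effect, ss_error):
    return ss_effect / (ss_effect + ss_error)

def cohen_label(eta2):
    if eta2 >= 0.14:
        return "large"
    if eta2 >= 0.06:
        return "medium"
    if eta2 >= 0.01:
        return "small"
    return "negligible"

print(cohen_label(0.32))    # large   (the reported effect of S)
print(cohen_label(0.029))   # small   (the reported effect of N)
```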
The $\eta^{2}$ analysis showed a large effect size for distribution size ($S$) and small effect sizes for the number of points ($N$) and the interaction $S*N$. This is likely because smaller distributions create denser clusters with better separation, while larger distributions blend to create ambiguous boundaries. From these results, both H1 and H2 are reconfirmed. The lack of significance for point size ($P$) indicates that H3 should be rejected. However, we will revisit this hypothesis later. 5.4.3 Distance-based Model Factor Analysis Using the persistence threshold on distribution size, $T_{di}^{S}$, we calculated the differential (${\mathbb{D}}_{di}^{S}$) and performed 3-factor RM ANOVA to observe the main effects of the individual factors distribution size ($S$), number of points ($N$), and point size ($P$), as well as interaction effects (see Table 1). The analysis identified a significant effect of distribution size ($S$) and the number of points ($N$) on the differential (${\mathbb{D}}_{di}^{S}$), but the point size ($P$) failed to reach significance. In particular, we found a large effect for distribution size ($S$) on ${\mathbb{D}}_{di}^{S}$. We also observed a small-medium effect for the number of points ($N$) and a negligible effect for the point size ($P$). We did not anticipate any interaction effects, and only $S*N$ showed a small effect. In terms of accuracy, as pointed out in subsubsection 5.4.1, the distance-based model improved overall accuracy over using no model (see Figure 8(b)). Investigating further, Figure 7(b) shows the accuracy per distribution size. Note that the accuracy was sound for all distribution sizes except at $S=85$, which negatively impacted overall performance. We speculate that this is due to the significant blending of distributions at this extreme. Given the large effect of $S$ and the overall improvement in accuracy, we consider H4 confirmed.
5.4.4 Density-based Model For the density-based model, we calculate 3 variations of the threshold and differential that use the factors that most directly influence visual density. Those are the number of data points ($T_{de}^{N}$/${\mathbb{D}}_{de}^{N}$), the size of data points ($T_{de}^{P}$/${\mathbb{D}}_{de}^{P}$), and their interaction ($T_{de}^{N*P}$/${\mathbb{D}}_{de}^{N*P}$). For each, we performed 3-factor RM ANOVA testing on the individual factors distribution size ($S$), number of points ($N$), and point size ($P$), as well as interaction effects (see Table 2). Histogram Resolution The density-based model uses the visual density of a given scatterplot to model cluster perception. To calculate visual density, a 2D histogram is calculated on the image with bins of uniform width and height, $[B_{px}\times B_{px}]$. The choice of bin size for the density histogram is potentially influential in our analysis, as bins that are too small may cause instability, and bins that are too large may miss clusters. To determine the appropriate bin size, we performed an analysis on the data from Figure 2. A set of stimulus images was generated with fixed values for the factors ($C=6$, $N=2500$, and $P=7_{px}$) and different values for $S=\{25_{px},40_{px},55_{px},70_{px},85_{px}\}$. We plotted the normalized density threshold (i.e., the density threshold divided by the area of a bin, $T_{de}/B^{2}$) generated by $U=6$ clusters for 5 different bin sizes (see Figure 9). The results showed instability in the density threshold for smaller bin sizes and stable results starting at $[20\times 20]$. For this reason, 3 histogram cell sizes are reported: $[10_{px}\times 10_{px}]$, $[20_{px}\times 20_{px}]$, and $[40_{px}\times 40_{px}]$, but our main discussion focuses on $[20_{px}\times 20_{px}]$.
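A minimal sketch of the visual density histogram, assuming filled-pixel coordinates are available and using NumPy's `histogram2d` with $B=20_{px}$ bins over a $550\times 550$ stimulus (the coordinates below are synthetic):

```python
import numpy as np

# Sketch of the visual density histogram: bin filled-pixel coordinates into
# B x B px cells over a 550 x 550 stimulus. Coordinates are synthetic.
def density_histogram(xs, ys, bin_px=20, width=550, height=550):
    edges_x = np.arange(0, width + bin_px, bin_px)
    edges_y = np.arange(0, height + bin_px, bin_px)
    H, _, _ = np.histogram2d(xs, ys, bins=[edges_x, edges_y])
    return H

rng = np.random.default_rng(0)
xs = rng.uniform(0, 550, 1000)
ys = rng.uniform(0, 550, 1000)
H = density_histogram(xs, ys)
print(H.shape, int(H.sum()))  # 28 x 28 cells for 550 px at B = 20; all 1000 pixels binned
```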
Number of Points Model ($T_{de}^{N}$/${\mathbb{D}}_{de}^{N}$) RM ANOVA results demonstrate significant and consistent main effects of $S$, $N$, and $P$, and an interaction effect of $N*P$, which can be seen in Table 2. Point size has a large effect on ${\mathbb{D}}_{de}^{N}$, confirming our hypotheses and previous work (e.g., [65]) on density’s influence on cluster perception. The number of points showed a small-medium effect size on ${\mathbb{D}}_{de}^{N}$, also aligning with our hypotheses. The accuracy of the number of points model was the worst of the 3 density models, though still significantly better than no model (see Figure 8(c)). The accuracy of the model, plotted by the number of points in Figure 10(a), shows lower accuracy as the number of points increases. Point Size Model ($T_{de}^{P}$/${\mathbb{D}}_{de}^{P}$) In this model, the number of points showed a medium-large effect size, while point size demonstrated a medium effect size for the differential. On the other hand, the interaction $N*P$ yielded small values of $\eta^{2}$ (see Table 2). The overall accuracy of this model was better than the number of points model (see Figure 8(c)). Figure 10(b) shows the accuracy per point size. The model was largely accurate, except for the smallest size, $P=1_{px}$. Interaction Model ($T_{de}^{N*P}$/${\mathbb{D}}_{de}^{N*P}$) Similar to the previous 2 models, significant effects were observed for all factors. However, only point size demonstrated a medium effect size (see Table 2). This model showed the best overall accuracy of any model tested (see Figure 8(c)). This makes logical sense, as density is the combination of the number of points and their size. Figure 10(c) shows the accuracy per number of points and per point size. In both cases, the accuracy was improved. However, $P=1_{px}$ was still the worst-performing category. Our analysis showed that the number of points, point size, and their interaction all had significant effects and improved accuracy over no model. Therefore, we consider H5 confirmed.
Furthermore, we identified some large effects of point size for the density-based model, and this indirect relationship confirms H3. 5.4.5 Post-Test Questionnaire To further support our hypotheses, we asked the participants to state the criteria that influenced their counting of clusters in a free-response format at the end of the experiment. Their responses largely mirrored our findings---10% cited the size of the symbols; 25% cited something amounting to distribution size; 25% cited the distance between clusters; and 65% included density as a factor (some subjects listed multiple criteria). 5.5 Follow-up Study on Opacity We now evaluate opacity modification, which studies have shown to be more effective in overdraw reduction than other techniques, such as reducing point size or changing the shape of the data point [35]. Given the results for the density-based model, we hypothesize that it will be able to model the perception of clusters when opacity is applied to the data points. [H6] Using a density threshold correlated to the opacity of data points ($O$), the density-based model will estimate the number of clusters perceived by the viewer. Properties and Data Generation Our data synthesis and rendering of scatterplots are similar to the main experiment (see subsection 5.1) in most aspects. We fix the number of points $N=200,000$ and point size $P=7_{px}$ to overdraw the data on the scatterplot (see Figure 11 for an example). The distribution size is the same as in the main experiment, $S=\{25_{px},40_{px},55_{px},70_{px},85_{px}\}$. The data point opacity was selected on a logarithmic interval, $O=\{1\%,10\%,100\%\}$, over a white background. Each subject sees each condition 2 times. Thus, for each subject, $2\times|S|=10$ datasets are generated and rendered into $10\times|O|=30$ scatterplot stimuli. Study Procedure The task for the study was identical to the main experiment in subsection 5.2.
We recruited 40 participants (21 male, 19 female; median age group: $[35-44]$) from AMT, limited to subjects located in the US or Canada. 45% of participants reported having corrected vision. Subjects were compensated at US federal minimum wage. In total, $30$ tasks $\times$ $40$ participants resulted in $1200$ responses. We carried out data quality checks on responses—2 participants ($60$ total responses) were discarded because the majority of their responses were the default value of $1$, and $23$ responses that ran out of time were rejected. A total of $1117$ responses were analyzed. 5.5.1 Analysis and Results The analysis was performed similarly to subsubsection 5.4.4. We performed 2-factor RM ANOVA testing and evaluated the effect of the opacity of data points ($O$) while varying the distribution size ($S$) on the measure ${\mathbb{D}}_{de}^{O}$. The calculation of the visual density histogram was modified such that it summed the pixel intensities instead of counting the number of filled pixels. For building the histograms, we only considered bins of size $[20_{px}\times 20_{px}]$. We calculated the density threshold on the opacity factor using linear least squares regression on $T_{de}^{O}(o)=c_{1}\cdot o+c_{2}$. Using $T_{de}^{O}$, we calculated the differential ${\mathbb{D}}_{de}^{O}$. Opacity showed a medium-large effect ($F_{O}(2,1102)=35.1,p<0.001,\eta^{2}=0.09$), followed by a medium effect for the distribution of data points ($F_{S}(4,1102)=17.44,p<0.001,\eta^{2}=0.086$). The interaction effect of $O$ and $S$ also showed a medium-large effect ($F_{O*S}(8,1102)=12.33,p<0.001,\eta^{2}=0.12$). The observed effects confirm H6. Perhaps unsurprisingly, further investigation suggested that opacity has a more substantial effect when the distribution of the data is dense and a smaller effect when the data distribution is sparse.
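The intensity-summing variant of the histogram can be sketched as a block sum over a grayscale "ink" image (intensity over a white background); the image and patch below are synthetic:

```python
import numpy as np

# Sketch of the opacity variant: instead of counting filled pixels per cell,
# sum per-pixel intensities. The image here is synthetic; bin size is the
# paper's 20 x 20 px.
def intensity_histogram(img, bin_px=20):
    h, w = img.shape
    hb, wb = h // bin_px, w // bin_px
    trimmed = img[:hb * bin_px, :wb * bin_px]
    return trimmed.reshape(hb, bin_px, wb, bin_px).sum(axis=(1, 3))

img = np.zeros((540, 540))
img[100:120, 200:220] = 0.1   # one 20x20 patch of 10%-opacity "ink"
H = intensity_histogram(img)
print(H.shape, round(float(H.max()), 1))  # 27 x 27 cells; peak = 400 px * 0.1 = 40.0
```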
The results confirm previous findings on scatterplot overplotting [54], asserting that a larger distribution size in a scatterplot requires more opaque points, whereas a narrower distribution size requires more transparent points. 6 Model Usage Identifying clusters is an important low-level task in visual analytics [3], as well as in data analysis in general [62]. Still, as mentioned in Sect. 1, clustering is an ill-posed problem, with the “correct result” being subject to the constraints of the algorithm or individual performing the analysis. Through our evaluation, we have shown that our models, the density-based model in particular, performed well in estimating the number of clusters an average human would perceive. However, this in and of itself is not the real application value of the models. Instead, the models can be used to optimize the visual encodings to maximize the saliency of the visualization. Furthermore, the threshold plots provide an evidence-based rationale for design decisions. 6.1 Controlling Design Factors The models we have introduced construct a bridge for visualization designers between their choice of visual encodings and how users perceive clusters. Table 3 summarizes our findings and suggests how designers should focus their design decisions on selecting visual properties that robustly support cluster identification in scatterplots. For the distance-based model, distribution size ($S$) was the only factor with a large effect size. On the other hand, with the density-based model, the number of points ($N$), point size ($P$), and opacity ($O$) all showed large effects on cluster count perception. Distribution Size — Visualization designers generally do not have control over the distribution size ($S$) in the data.
Although distributions are rarely known a priori, they can be extracted from scatterplots, e.g., using Gaussian mixture models, which, combined with the distance-based model, could help the designer understand the number of clusters a user is likely to see. Nevertheless, this approach is unlikely to be helpful to the majority of designers. Number of Points — Visualization designers have limited control over the number of points ($N$), mostly in terms of data subsampling, e.g., [46, 12, 44], which influences visual density. Using uniform random sampling, e.g., [27, 20], or targeted nonuniform sampling, e.g., by using density [6], can reduce the number of points and, thus, the visual density. The density-based model can be used to evaluate which level of sampling provides optimal saliency of clusters. Point Size — The point size ($P$) is the first design factor over which the designer has complete control in scatterplots. As pointed out earlier, increasing the area of the points also increases the visual density. Once again, the density-based model can be used to help select the point size that provides the optimal saliency. There is an important interplay between the number of points and the point size, as adjusting either can influence the visual density. Opacity — Opacity ($O$) is another factor over which the designer has complete control, from fully transparent to fully opaque, once again impacting the visual density of the scatterplot. As suggested by Urribarri and Castro, when selecting opacity, there is a trade-off with picking a point size [77] and, given our analysis, also with the number of data points shown. Nevertheless, using the density-based model, various opacity levels, along with the number of points and their size, can be evaluated and the optimal configuration selected.
6.2 Threshold Plots for Optimizing Cluster Saliency As our goal is to improve the effectiveness of the visualization design, it is important to understand how designers can use our models to reduce ambiguity in the data, and thereby reduce the chance of misinterpretation, e.g., by having a visualization that is too sparse or over-saturated. By pre-studying the effects of different visual encoding configurations in scatterplots, visualization practitioners can pick the configuration that maximizes the visibility of clusters. See <https://usfdatavisualization.github.io/TopoClusterPerceptionDemo> for a demo. Consider Figure 11, for example. The 3 scatterplots are each plotted in the persistence threshold plot. The horizontal axis reveals, for each plot, how salient each number of clusters is. The 10% opacity plot shows that between 1 and 8 clusters are visible, but either 2 or 6 clusters are most visible. That is not to say other numbers of clusters are not visible, but they are simply not as distinctive. Of the 3 scatterplots in Figure 11, 10% opacity provides the best saliency, followed by 1%, then 100%. 6.3 Case Study To demonstrate the utility of the models on real data, we show how the choice of visual encoding impacts cluster perception. We performed a case study using dimensionality reduction on the MNIST dataset [47], which is an extensive database of handwritten digits commonly used to test machine learning techniques. Here we explore the dataset, which consists of 70K samples with 10 labels of handwritten digits (the labels are not used in sampling or rendering). We applied both t-SNE (see Figure 1(a)) and PCA (see Figure 1(b)) to plot the features on a 2D scatterplot and demonstrate the influence of 2 factors, the number of points and opacity. In Figure 1(a), we show the results of varying the number of points (after dimension reduction), where $N=\{500,2500,12500\}$, by using random sampling.
The resulting persistence threshold plot shows that for $N=500$, in red, clusters are difficult to differentiate. $N=2500$, in blue, and $N=12500$, in purple, both have similar levels of effectiveness, with $12500$ having a slight advantage, making it the better choice for representing this example. In Figure 1(b), we show the results of varying the opacity of the data points, where $O=\{1\%,5\%,10\%,50\%,100\%\}$. The results in the persistence threshold plot fall into 3 groups. At one extreme, $O=50\%$, in purple, and $O=100\%$, in orange, provide no differentiation of any clusters. At the other extreme, $O=1\%$, in blue, shows a relatively low level of saliency for 1, 2, or 3 clusters. The final group, $O=5\%$, in pink, and $O=10\%$, in green, both show identically high levels of saliency for 1, 2, or 3 clusters in the data, making either of these the better choice for representing this example. 7 Discussion & Conclusions Scatterplots are a common type of visualization used to identify clusters in datasets. In this work, we tested and validated the importance of 4 visual factors—distribution size, the number of data points, the size of data points, and the opacity of data points—in cluster perception and built 2 models for the task: a distance-based and a density-based model. Our results confirm the theoretical models of Sadahiro, which state that the distribution of data points (proximity) and the number and size of data points (concentration and density change) affect cluster perception [65]. Finally, our findings confirm the important role that the choice of visual factors can have on cluster identification—visualization practitioners may apply these models to optimize the properties of their visualizations. Model Limitations Both of our models have some limitations.
In the distance-based model, we required knowledge of the centers of the clusters, with a fixed-size isotropic normal distribution assumed by the model—considering other distributions would likely require modifications to the model. This requirement is particularly restrictive with respect to non-synthetic data. We showed that user response accuracy in the density-based model was significantly better than in the distance-based model. However, choosing the correct density histogram resolution is a critical task that may also be dependent on the data. A choice of an extremely high or low resolution could reduce the accuracy of the threshold value. Additionally, although the density model does not directly assume a normal distribution, we have only tested it against fixed-size normal distributions. Using the model with other types of distributions should be treated with caution. Study Limitations The study itself has some additional limitations. First, we have not considered many other factors that could influence performance in either model, e.g., chart size, screen resolution, etc. We have also not extensively analyzed variance between individuals, although we did note some small variations during our analysis (i.e., some individuals had over- or under-estimation tendencies). Another limitation is that we have only provided a limited analysis of mixing effects, e.g., changing the size of points while also changing the opacity. A final limitation is that we have not considered the correlation between confidence, which is highly related to the nature of the data [32], and correctness for each model. Alternative Models Alternative models could potentially be developed to similarly explain the variance. With respect to distance, hierarchical clustering (HC) could be used, which is functionally equivalent to our distance model. For density, since stimuli are built on Gaussian distributions, a Gaussian mixture model (GMM) could be used.
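The functional equivalence between the distance model and single-linkage clustering can be illustrated with a small union-find sketch: the heights at which components of cluster centers merge play the role of 0-dimensional persistence (the example centers are hypothetical):

```python
import math
from itertools import combinations

# Pure-Python sketch relating the distance-based model to single-linkage
# clustering: a Kruskal-style union-find over pairwise distances of cluster
# centers records merge heights, which act like 0-dimensional persistence.
def merge_heights(centers):
    parent = list(range(len(centers)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = sorted(
        (math.dist(centers[a], centers[b]), a, b)
        for a, b in combinations(range(len(centers)), 2)
    )
    heights = []
    for d, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            heights.append(d)   # height at which two components merge
    return heights

centers = [(0, 0), (1, 0), (10, 0), (11, 0)]
print(merge_heights(centers))  # [1.0, 1.0, 9.0]: two tight pairs merge late
```

Cutting merges above a threshold $T$ (e.g., $T=2$ here) leaves the 2 well-separated groups, mirroring how the distance-based model counts clusters at a persistence threshold.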
GMMs, being numerically extracted, cannot provide the same theoretical guarantees as our models, which are technically combinatorial. The theoretical guarantees, coming from persistent homology, also include stability guarantees. With stability, small changes in the input are guaranteed to produce only small changes in the output. A consequence of stability is robustness to noise: noise has low persistence and thus does not influence the selection of the number of clusters. Automatic Parameter Optimization One natural extension of this work is to develop a (semi-)automatic model for selecting design factors for a dataset. Unfortunately, using threshold plots as-is represents an under-constrained optimization, and it requires, at the very least, a user specification of the number of clusters in the data. Acknowledgements. The authors wish to thank Bei Wang, Les Piegl, Jorge Adorno Nieves, Zach Beasley, and Junyi Tu for their input on the project. This project is supported in part by the National Science Foundation (IIS-1845204). References [1] M. M. Abbas, M. Aupetit, M. Sedlmair, and H. Bensmail. Clustme: A visual quality measure for ranking monochrome scatterplots based on cluster patterns. Computer Graphics Forum, 38(3):225–236, 2019. doi: 10.1111/cgf.13684 [2] E. Alexander, J. Kohlmann, R. Valenza, M. Witmore, and M. Gleicher. Serendip: Topic model-driven visual exploration of text corpora. In IEEE Conference on Visual Analytics Science and Technology (VAST), 2014. doi: 10.1109/VAST.2014.7042493 [3] R. Amar, J. Eagan, and J. Stasko. Low-level components of analytic activity in information visualization. In IEEE Symposium on Information Visualization (InfoVis), pp. 111–117, 2005. doi: 10.1109/INFVIS.2005.1532136 [4] G. Anobile, G. M. Cicchini, and D. C. Burr. Number as a primary perceptual attribute: A review. Perception, 45(1-2):5–31, 2016. doi: 10.1177/0301006615602599 [5] M. Aupetit and M. Sedlmair. Sepme: 2002 new visual separation measures.
In IEEE Pacific Visualization Symposium (PacificVis), pp. 1–8, 2016. doi: 10.1109/PACIFICVIS.2016.7465244 [6] E. Bertini and G. Santucci. Give chance a chance: modeling density to enhance scatter plot quality through random data sampling. Information Visualization, 5(2):95–110, 2006. doi: 10.1057/palgrave.ivs.9500122 [7] L. A. Best, A. C. Hunter, and B. M. Stewart. Perceiving relationships: A physiological examination of the perception of scatterplots. In International Conference on Theory and Application of Diagrams, 2006. doi: 10.1007/11783183_33 [8] R. Borgo, L. Micallef, B. Bach, F. McGee, and B. Lee. Information visualization evaluation using crowdsourcing. Computer Graphics Forum, 37(3):573–595, 2018. doi: 10.1111/cgf.13444 [9] H. Carr, J. Snoeyink, and U. Axen. Computing contour trees in all dimensions. Computational Geometry: Theory and Applications, 24, 2003. doi: 10.1016/S0925-7721(02)00093-7 [10] H. Chen, W. Chen, H. Mei, Z. Liu, K. Zhou, W. Chen, W. Gu, and K.-L. Ma. Visual abstraction and exploration of multi-class scatterplots. IEEE Transactions on Visualization and Computer Graphics, 20, 2014. doi: 10.1109/TVCG.2014.2346594 [11] H. Chen, S. Engle, A. Joshi, E. D. Ragan, B. F. Yuksel, and L. Harrison. Using animation to alleviate overdraw in multiclass scatterplot matrices. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–12, 2018. [12] X. Chen, T. Ge, J. Zhang, B. Chen, C.-W. Fu, O. Deussen, and Y. Wang. A recursive subdivision technique for sampling multi-class scatterplots. IEEE Transactions on Visualization and Computer Graphics, 2019. doi: 10.1109/TVCG.2019.2934541 [13] D. H. Chung, D. Archambault, R. Borgo, D. J. Edwards, R. S. Laramee, and M. Chen. How ordered is it? on the perceptual orderability of visual channels. Computer Graphics Forum, 35(3):131–140, 2016. doi: 10.1111/cgf.12889 [14] W. S. Cleveland and R. McGill.
Graphical perception: Theory, experimentation, and application to the development of graphical methods. Journal of the American Statistical Association, 79(387):531–554, 1984. doi: 10.1080/01621459.1984.10478080 [15] E. H. Cohen, M. Singh, and L. T. Maloney. Perceptual segmentation and the perceived orientation of dot clusters: The role of robust statistics. Journal of Vision, 8(7):6–6, 2008. doi: 10.1167/8.7.6 [16] J. Cohen. Eta-squared and partial eta-squared in fixed factor anova designs. Educational and Psychological Measurement, 33(1):107–112, 1973. doi: 10.1177/001316447303300111 [17] M. Correll, M. Li, G. Kindlmann, and C. Scheidegger. Looks good to me: Visualizations as sanity checks. IEEE Transactions on Visualization and Computer Graphics, 2018. doi: 10.1109/TVCG.2018.2864907 [18] K. Crowston. Amazon mechanical turk: A research tool for organizations and information systems scholars. In Shaping the Future of ICT Research. Methods and Approaches, pp. 210–221. Springer, 2012. doi: 10.1007/978-3-642-35142-6_14 [19] T. N. Dang and L. Wilkinson. Transforming scagnostics to reveal hidden features. IEEE Transactions on Visualization and Computer Graphics, 2014. doi: 10.1109/TVCG.2014.2346572 [20] A. Dix and G. Ellis. By chance enhancing interaction with large data sets through statistical sampling. In Proceedings of the Working Conference on Advanced Visual Interfaces, pp. 167–176, 2002. doi: 10.1145/1556262.1556289 [21] M. E. Doherty, R. B. Anderson, A. M. Angott, and D. S. Klopfer. The perception of scatterplots. Perception & Psychophysics, 69(7), 2007. doi: 10.3758/BF03193961 [22] E. Domany. Superparamagnetic clustering of data: the definitive solution of an ill-posed problem. Physica A: Statistical Mechanics and its Applications, 263(1-4):158–169, 1999. doi: 10.1016/S0378-4371(98)00494-4 [23] H. Edelsbrunner and J. Harer. Persistent homology-a survey. Contemporary Mathematics, 453:257–282, 2008. doi: 10.1090/conm/453/08802 [24] H. Edelsbrunner and J. Harer. Computational Topology: An Introduction. American Mathematical Society, 2010. doi: 10.1090/mbk/069 [25] W. C. Eells. The relative merits of circles and bars for representing component parts. Journal of the American Statistical Association, 21, 1926. doi: 10.1080/01621459.1926.10502165 [26] G. Ellis, E. Bertini, and A. Dix. The sampling lens: making sense of saturated visualisations. In CHI’05 Extended Abstracts on Human Factors in Computing Systems, pp. 1351–1354, 2005. doi: 10.1145/1056808.1056914 [27] G. Ellis and A. Dix. Density control through random sampling: an architectural perspective. In Proceedings Sixth International Conference on Information Visualisation, pp. 82–90. IEEE, 2002. doi: 10.1109/IV.2002.1028760 [28] G. Ellis and A. Dix. A taxonomy of clutter reduction for information visualisation. IEEE Transactions on Visualization and Computer Graphics, 2007. doi: 10.1109/TVCG.2007.70535 [29] J. M. Enoch. Effect of the size of a complex display upon visual search. Journal of the Optical Society of America, 49(3):280–286, 1959. doi: 10.1364/JOSA.49.000280 [30] R. Etemadpour and A. G. Forbes. Density-based motion. Information Visualization, 16(1):3–20, 2017. [31] R. Etemadpour, L. Linsen, C. Crick, and A. G. Forbes. A user-centric taxonomy for multidimensional data projection tasks. In IVAPP, pp. 51–62, 2015. [32] R. Etemadpour, R. Motta, J. G. de Souza Paiva, R. Minghim, M. C. F. De Oliveira, and L. Linsen. Perception-based evaluation of projection methods for multidimensional data visualization. IEEE Transactions on Visualization and Computer Graphics, 21(1):81–94, 2014. [33] R. Etemadpour, B. Olk, and L. Linsen. Eye-tracking investigation during visual analysis of projected multidimensional data with 2d scatterplots. In 2014 International Conference on Information Visualization Theory and Applications (IVAPP), pp. 233–246. IEEE, 2014. [34] X. Z. Fern, I. Davidson, and J. G.
Dy. Multiclust 2010: discovering, summarizing and using multiple clusterings. ACM SIGKDD Explorations Newsletter, 12(2):47–49, 2011. [35] S. Few and P. Edge. Solutions to the problem of over-plotting in graphs. Visual Business Intelligence Newsletter, 2008. [36] M. Friendly and D. Denis. The early origins and development of the scatterplot. Journal of the History of the Behavioral Sciences, 2005. doi: 10 . 1002/jhbs . 20078 [37] M. Gleicher, M. Correll, C. Nothelfer, and S. Franconeri. Perception of average value in multiclass scatterplots. IEEE Transactions on Visualization and Computer Graphics, 19(12):2316–2325, 2013. doi: 10 . 1109/TVCG . 2013 . 183 [38] C. C. Gramazio, K. B. Schloss, and D. H. Laidlaw. The relation between visualization size, grouping, and user performance. IEEE Transactions on Visualization and Computer Graphics, 20(12):1953–1962, 2014. doi: 10 . 1109/TVCG . 2014 . 2346983 [39] S. W. Greenhouse and S. Geisser. On methods in the analysis of profile data. Psychometrika, 24(2):95–112, 1959. doi: 10 . 1007/BF02289823 [40] C. Gutwin, A. Cockburn, and A. Coveney. Peripheral popout: The influence of visual angle and stimulus intensity on popout effects. In ACM SIGCHI Conference on Human Factors in Computing Systems, 2017. doi: 10 . 1145/3025453 . 3025984 [41] L. Harrison, F. Yang, S. Franconeri, and R. Chang. Ranking visualizations of correlation using weber’s law. IEEE Transactions on Visualization and Computer Graphics, 20(12):1943–1952, 2014. doi: 10 . 1109/TVCG . 2014 . 2346979 [42] C. Healey and J. Enns. Attention and visual memory in visualization and computer graphics. IEEE Transactions on Visualization and Computer Graphics, 18(7):1170–1188, 2012. doi: 10 . 1109/TVCG . 2011 . 127 [43] J. Heer, N. Kong, and M. Agrawala. Sizing the horizon: the effects of chart size and layering on the graphical perception of time series visualizations. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 1303–1312, 2009. [44] R. 
Hu, T. Sha, O. Van Kaick, O. Deussen, and H. Huang. Data sampling in multi-view and multi-class scatterplots via set cover optimization. IEEE Transactions on Visualization and Computer Graphics, 2019. doi: 10 . 1109/TVCG . 2019 . 2934799 [45] Y. Kim and J. Heer. Assessing effects of task and data distribution on the effectiveness of visual encodings. Computer Graphics Forum, 2018. doi: 10 . 1111/cgf . 13409 [46] B. C. Kwon, J. Verma, P. J. Haas, and C. Demiralp. Sampling for scalable visual analytics. IEEE Computer Graphics and Applications, 2017. doi: 10 . 1109/MCG . 2017 . 6 [47] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998. doi: 10 . 1109/5 . 726791 [48] J. Lewis, L. Van der Maaten, and V. de Sa. A behavioral investigation of dimensionality reduction. In Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 34, 2012. [49] J. Li, J.-B. Martens, and J. J. van Wijk. A model of symbol size discrimination in scatterplots. In ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 2553–2562. ACM, 2010. doi: 10 . 1145/1753326 . 1753714 [50] J. Li, J. J. van Wijk, and J.-B. Martens. A model of symbol lightness discrimination in sparse scatterplots. In IEEE Pacific Visualization Symposium (PacificVis), 2010. doi: 10 . 1109/PACIFICVIS . 2010 . 5429604 [51] M. Li, F. Choudhury, Z. Bao, H. Samet, and T. Sellis. Concavecubes: Supporting cluster-based geographical visualization in large data scale. Computer Graphics Forum, 37(3):217–228, 2018. doi: 10 . 1111/cgf . 13414 [52] H. Liao, Y. Wu, L. Chen, and W. Chen. Cluster-based visual abstraction for multivariate scatterplots. IEEE Transactions on Visualization and Computer Graphics, pp. 2531–2545, 2018. doi: 10 . 1109/TVCG . 2017 . 2754480 [53] Y. Ma, A. K. Tung, W. Wang, X. Gao, Z. Pan, and W. Chen. Scatternet: A deep subjective similarity model for visual analysis of scatterplots. 
IEEE Transactions on Visualization and Computer Graphics, pp:0, 2018. doi: 10 . 1109/TVCG . 2018 . 2875702 [54] J. Matejka, F. Anderson, and G. Fitzmaurice. Dynamic opacity optimization for scatter plots. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2707–2710, 2015. doi: 10 . 1145/2702123 . 2702585 [55] J. Matute, A. C. Telea, and L. Linsen. Skeleton-based scagnostics. IEEE Transactions on Visualization and Computer Graphics, 2017. doi: 10 . 1109/TVCG . 2017 . 2744339 [56] J. W. Mauchly. Significance test for sphericity of a normal n-variate distribution. The Annals of Mathematical Statistics, 11(2):204–209, 1940. doi: 10 . 1214/aoms/1177731915 [57] A. Mayorga and M. Gleicher. Splatterplots: Overcoming overdraw in scatter plots. IEEE Transactions on Visualization and Computer Graphics, 2013. doi: 10 . 1109/TVCG . 2013 . 65 [58] L. Micallef, G. Palmas, A. Oulasvirta, and T. Weinkauf. Towards perceptual optimization of the visual design of scatterplots. IEEE Transactions on Visualization and Computer Graphics, 23(6):1588–1599, 2017. doi: 10 . 1109/TVCG . 2017 . 2674978 [59] T. Munzner. Visualization Analysis and Design. CRC press, 2014. doi: 10 . 1201/b17511 [60] H. Nguyen, P. Rosen, and B. Wang. Visual exploration of multiway dependencies in multivariate data. In SIGGRAPH ASIA Symposium on Visualization, p. 2. ACM, 2016. doi: 10 . 1145/3002151 . 3002162 [61] J. O’Callaghan. Human perception of homogeneous dot patterns. Perception, 3(1):33–45, 1974. doi: 10 . 1068/p030033 [62] N. Otter, M. Porter, U. Tillmann, P. Grindrod, and H. Harrington. A roadmap for the computation of persistent homology, 2017. doi: 10 . 1140/epjds/s13688-017-0109-5 [63] A. V. Pandey, J. Krause, C. Felix, J. Boy, and E. Bertini. Towards understanding human similarity perception in the analysis of large sets of scatter plots. In ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 3659–3669. ACM, 2016. doi: 10 . 1145/2858036 . 
2858155 [64] R. A. Rensink and G. Baldridge. The perception of correlation in scatterplots. Computer Graphics Forum, 29(3):1203–1210, 2010. doi: 10 . 1111/j . 1467-8659 . 2009 . 01694 . x [65] Y. Sadahiro. Cluster perception in the distribution of point objects. Cartographica: The International Journal for Geographic Information and Geovisualization, 34(1):49–62, 1997. doi: 10 . 3138/Y308-2422-8615-1233 [66] B. Saket, A. Endert, and Ç. Demiralp. Task-based effectiveness of basic visualizations. IEEE Transactions on Visualization and Computer Graphics, 2018. doi: 10 . 1109/TVCG . 2018 . 2829750 [67] A. Sarikaya and M. Gleicher. Scatterplots: Tasks, data, and designs. IEEE Transactions on Visualization and Computer Graphics, 2018. doi: 10 . 1109/TVCG . 2017 . 2744184 [68] A. Sarikaya, M. Gleicher, and D. Szafir. Design factors for summary visualization in visual analytics. Computer Graphics Forum, 2018. doi: 10 . 1111/cgf . 13408 [69] M. Sedlmair and M. Aupetit. Data-driven evaluation of visual quality measures. Computer Graphics Forum, 34(3):201–210, 2015. doi: 10 . 1111/cgf . 12632 [70] M. Sedlmair, A. Tatu, T. Munzner, and M. Tory. A taxonomy of visual cluster separation factors. Computer Graphics Forum, 2012. doi: 10 . 1111/j . 1467-8659 . 2012 . 03125 . x [71] I. Spence and S. Lewandowsky. Displaying proportions and percentages. Applied Cognitive Psychology, 5(1):61–77, 1991. doi: 10 . 1002/acp . 2350050106 [72] StatCounter. Desktop screen resolution stats worldwide. http://gs.statcounter.com/screen-resolution-stats/desktop/worldwide, 2019. [73] M. Stone. Visualization viewpoints: In color perception, size matters. IEEE Computer Graphics and Applications, 32(2):8, 2012. doi: 10 . 1109/mcg . 2012 . 37 [74] D. A. Szafir. Modeling color difference for visualization design. IEEE Transactions on Visualization and Computer Graphics, 2018. doi: 10 . 1109/TVCG . 2017 . 2744359 [75] D. A. Szafir, S. Haroz, M. Gleicher, and S. Franconeri. 
Four types of ensemble coding in data visualizations. Journal of Vision, 2016. doi: 10 . 1167/16 . 5 . 11 [76] J. Tierny. Topological data analysis for scientific visualization. Springer, 2017. [77] D. K. Urribarri and S. M. Castro. Prediction of data visibility in two-dimensional scatterplots. Information Visualization, 16(2):113–125, 2017. doi: 10 . 1177/1473871616638892 [78] R. Veras and C. Collins. Saliency deficit and motion outlier detection in animated scatterplots. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–12, 2019. [79] C. Ware and M. D. Plumlee. Designing a better weather display. Information Visualization, 12(3-4):221–239, 2013. doi: 10 . 1177/1473871612465214 [80] L. Wasserman. Topological data analysis. Annual Review of Statistics and Its Application, 5:501–532, 2018. doi: 10 . 1146/annurev-statistics-031017-100045 [81] L.-Y. Wei. Multi-class blue noise sampling. ACM Transactions on Graphics (TOG), 29(4):79, 2010. doi: 10 . 1145/1778765 . 1778816 [82] L. Wilkinson, A. Anand, and R. Grossman. Graph-theoretic scagnostics. In IEEE Symposium on Information Visualization (InfoVis), 2005. doi: 10 . 1109/INFVIS . 2005 . 1532142 [83] A. K. Wong and E.-S. A. Lee. Aligning and clustering patterns to reveal the protein functionality of sequences. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 11(3):548–560, 2014. doi: 10 . 1109/TCBB . 2014 . 2306840 [84] B. Wong. Points of view: gestalt principles. Nature Methods, 2010. doi: 10 . 1038/nmeth1110-863
Cooperative Distributed Sequential Spectrum Sensing Jithin K S and Vinod Sharma Department of Electrical Communication Engineering Indian Institute of Science Bangalore 560012, India Email: {jithin,vinod}@ece.iisc.ernet.in    Raghav Gopalarathnam Department of Electrical Engineering Indian Institute of Technology Madras Chennai 600036, India Email: [email protected] Abstract We consider cooperative spectrum sensing for cognitive radios. We develop an energy efficient detector with low detection delay using sequential hypothesis testing. The Sequential Probability Ratio Test (SPRT) is used at both the local nodes and the fusion center. We also analyse the performance of this algorithm and compare the analysis with simulations. Modelling uncertainties in the distribution parameters are considered. Slow fading, with and without perfect channel state information at the cognitive radios, is taken into account. Keywords- Cognitive Radio, Spectrum Sensing, Cooperative Distributed Algorithm, SPRT. Footnote: This work is partially supported by a grant from MCIT, Govt. of India. I Introduction Cognitive Radio has evolved as a working solution to the scarcity of spectrum caused by the proliferation of wireless services. Cognitive Radios (CRs) opportunistically access the spectrum licensed to other service providers, without interfering with the existing communication services. To do this, the cognitive users sense the spectrum to detect usage of the channel by the primary (licensed) users. However, due to the inherent transmission impairments of wireless channels and the strict spectrum sensing requirements for Cognitive Radios [17], spectrum sensing has become one of the main challenges they face. Cooperative spectrum sensing ([20], [23]), in which different cognitive radios interact with each other, is proposed as an answer to the problems caused by multipath fading, shadowing and the hidden node problem in single-node spectrum sensing methods.
Cooperation also reduces the probability of false alarm and the probability of missed detection. These gains are achieved by exploiting spatial diversity among the cognitive users. Cooperative spectrum sensing can be either centralized or distributed [23]. In the centralized algorithm a central unit gathers sensing data from the Cognitive Radios and identifies the spectrum usage ([23], [15]). In the distributed case, on the other hand, each secondary user collects observations, makes a local decision and sends it to a fusion node, which makes the final decision. The information exchanged between the secondary users and the fusion node can be a soft decision (summary statistic) or a hard decision [15]. Soft decisions can give better gains at the fusion center but also consume more bandwidth on the control channels (used for sharing information among secondary users). However, hard decisions provide performance as good as soft decisions when the number of cooperating users increases [5]. Spectrum sensing algorithms used at a node can use a fixed sample size (one shot) or sequential detection ([7], [10], [16], [23]). Among fixed sample size detectors, when the primary signal is completely known the matched filter, which maximises the SNR, is the optimal detector [8]. When the only a priori information is the noise power, the energy detector is optimal [8]. Sequential detection can provide better performance [12]. In the sequential approach one can either detect when a primary turns ON (or OFF) (change detection) or simply test the hypothesis of whether the primary is ON or OFF. Sequential change detection is well studied in ([2], [10], [12]). Sequential hypothesis testing ([6], [9], [16]) considers the case where the status of the primary channel is known to change very slowly, e.g., detecting occupancy of a TV transmission. Usage of idle TV bands by a cognitive network is being targeted as the first application for cognitive radio.
In this setup Wald's Sequential Probability Ratio Test (SPRT) provides the optimal performance for a single node ([14], [22]). [23] has an extensive survey of spectrum sensing methods. Other spectrum sensing schemes include methods based on higher order statistics [13], wavelet transforms [18] and compressed sensing [19]. We use the sequential hypothesis testing framework in the cooperative setup: SPRT is run at each local node and again at the fusion center. This is motivated by our previous algorithm DualCUSUM, used for distributed change detection; we therefore call this algorithm DualSPRT. A similar scheme has been studied in ([9], [16]). Unlike ([9], [16]), however, we also provide a theoretical analysis of this algorithm and consider the effect of fading on the channel between the primary and secondary nodes. We also model the receiver noise at the fusion node and use physical layer fusion to reduce the time taken by the local nodes to transmit their decisions to the fusion node. This paper is organised as follows. Section II describes the model. Section III starts with the DualSPRT algorithm; simulation results and analysis are also provided there. We then consider the case where the SNRs differ across Cognitive Radios, and where the received SNR may or may not be known to the CR nodes. In Section IV we introduce fading on the channel between the primary transmitter and the Cognitive Radios; the channel gains may not be available to the local secondary nodes. Section V concludes the paper. II System Model We consider a Cognitive Radio system with one primary transmitter and $L$ secondary users. The $L$ nodes sense the channel to detect spectral holes. The decisions made by the secondary users are transmitted to a fusion node via a Multiple Access Channel (MAC), which makes a final decision. Let $X_{k,l}$ be the observation made at secondary user $l$ at time $k$. The $\{X_{k,l},\ k\geq 1\}$ are independent and identically distributed (iid).
It is assumed that the observations are independent across Cognitive Radios. Based on $\{X_{n,l},\ n\leq k\}$ the secondary user $l$ transmits $Y_{k,l}$ to the fusion node. It is assumed that the secondary nodes are synchronised, so that the fusion node receives $Y_{k}=\sum_{l=1}^{L}Y_{k,l}+Z_{k}$, where $\{Z_{k}\}$ is iid receiver noise. The fusion center uses $\{Y_{k}\}$ to make a decision. The observations $\{X_{k,l}\}$ depend on whether the primary is transmitting (hypothesis $H_{1}$) or not (hypothesis $H_{0}$):
$$X_{k,l}=\begin{cases}Z_{k,l},&k=1,2,\ldots,\ \text{under }H_{0},\\ h_{l}S_{k}+Z_{k,l},&k=1,2,\ldots,\ \text{under }H_{1},\end{cases}$$ (1)
where $h_{l}$ is the channel gain of the $l^{th}$ user, $S_{k}$ is the primary signal and $Z_{k,l}$ is the observation noise at the $l^{th}$ user at time $k$. We assume $\{Z_{k,l},\ k\geq 1\}$ are iid. Let $N$ be the time at which the fusion node decides on the hypothesis. We assume that $N$ is much less than the coherence time of the channel, so that the slow fading assumption is valid; that is, $h_{l}$ is random but remains constant over the spectrum sensing duration. The general problem is to develop a distributed algorithm in the above setup which solves
$$\min E_{DD}\stackrel{\triangle}{=}E[N|H_{i}],$$ (2)
$$\text{subject to }P_{FA}\leq\alpha,$$
where $H_{i}$ is the true hypothesis, $i\in\{0,1\}$, and $P_{FA}$ is the probability of false alarm, i.e., the probability of making a wrong decision. We consider $E[N|H_{1}]$ and $E[N|H_{0}]$ separately. It is well known that in the single node case ($L=1$), Wald's SPRT optimally minimises both $E[N|H_{1}]$ and $E[N|H_{0}]$ for a given $P_{FA}$. Motivated by the good performance of DualCUSUM in ([1], [7]) and the optimality of SPRT for a single node, we propose DualSPRT in the next section and study its performance.
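As an illustration of the observation model (1), the following sketch (ours, not from the paper) generates $X_{k,l}$ under both hypotheses, assuming unit channel gains, iid $\mathcal{N}(0,1)$ observation noise, and an illustrative $\pm 1$ primary signal; all function and variable names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def observations(primary_on, K, L, h=None, rng=rng):
    """Generate X[k, l] per model (1): noise only under H0,
    h_l * S_k + noise under H1, with iid N(0,1) observation noise."""
    if h is None:
        h = np.ones(L)                      # unit channel gains (as in Section III)
    Z = rng.standard_normal((K, L))         # observation noise Z_{k,l}
    if not primary_on:                      # hypothesis H0
        return Z
    S = rng.choice([-1.0, 1.0], size=K)     # illustrative +/-1 primary signal S_k
    return S[:, None] * h[None, :] + Z      # hypothesis H1

X0 = observations(False, K=1000, L=5)
X1 = observations(True, K=1000, L=5)
print(X0.var(), X1.var())                   # sample variance grows under H1
```

The variance of the samples under $H_{1}$ exceeds that under $H_{0}$ by the received signal power, which is what any energy-based local statistic exploits.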
III DualSPRT algorithm To explain the setup and analysis we start with the simple case where the channel gains $h_{l}=1$ for all $l$; we will consider fading in the next section. DualSPRT works as follows: 1. Each secondary node $l$ runs the SPRT
$$W_{0,l}=0,\qquad W_{k,l}=W_{k-1,l}+\log\left[f_{1,l}(X_{k,l})/f_{0,l}(X_{k,l})\right],\ k\geq 1,$$ (3)
where $f_{1,l}$ is the density of $X_{k,l}$ under $H_{1}$ and $f_{0,l}$ is the density of $X_{k,l}$ under $H_{0}$. 2. Secondary node $l$ transmits a constant $b_{1}$ at time $k$ if $W_{k,l}\geq\gamma_{1}$, or transmits $b_{0}$ if $W_{k,l}\leq\gamma_{0}$, i.e., $Y_{k,l}=b_{1}1_{\{W_{k,l}\geq\gamma_{1}\}}+b_{0}1_{\{W_{k,l}\leq\gamma_{0}\}}$, where $\gamma_{0}<0<\gamma_{1}$ and $1_{A}$ denotes the indicator function of set $A$. The parameters $b_{1},b_{0},\gamma_{1},\gamma_{0}$ are chosen appropriately. 3. Physical layer fusion is used at the fusion centre, i.e., $Y_{k}=\sum_{l=1}^{L}Y_{k,l}+Z_{k}$, where $Z_{k}$ is the iid noise at the fusion node. 4. The fusion center runs the SPRT
$$F_{k}=F_{k-1}+\log\left[g_{1}(Y_{k})/g_{0}(Y_{k})\right],\ \ F_{0}=0,$$ (4)
where $g_{0}$ is the density of $Z_{k}+\mu_{0}$, with $Z_{k}$ the MAC noise at the fusion node, $g_{1}$ is the density of $Z_{k}+\mu_{1}$, and $\mu_{0}$, $\mu_{1}$ are design parameters. 5. The fusion center decides on the hypothesis at time $N$, where
$$N=\inf\{k:F_{k}\geq\beta_{1}\ \text{or}\ F_{k}\leq\beta_{0}\}$$
and $\beta_{0}<0<\beta_{1}$. The decision at time $N$ is $H_{1}$ if $F_{N}\geq\beta_{1}$, and $H_{0}$ otherwise. In order to have equal $P_{FA}$ under both hypotheses, we choose
$$\gamma_{1}=-\gamma_{0}=\gamma\quad\text{and}\quad\beta_{1}=-\beta_{0}=\beta.$$
Of course, $P_{FA}$ can be made different under $H_{0}$ and $H_{1}$ by appropriately choosing $\gamma_{1},\gamma_{0},\beta_{1},\beta_{0}$.
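Steps 1–5 can be sketched as a short Monte Carlo simulation. The sketch below is our own illustration under assumed parameters (Gaussian mean-shift observations as in Example III-B; the thresholds $\gamma$, $\beta$ and the design value $\mu$ are arbitrary choices, not the paper's tuned ones):

```python
import numpy as np

rng = np.random.default_rng(1)
L, gamma, beta, mu = 5, 5.0, 10.0, 0.5   # assumed parameters, not the paper's tuned values

def dual_sprt(primary_on, max_k=2000):
    W = np.zeros(L)                        # local SPRT statistics (step 1)
    F = 0.0                                # fusion SPRT statistic (step 4)
    for k in range(max_k):
        # f0 = N(0,1), f1 = N(1,1): LLR increment is X - 1/2
        X = rng.standard_normal(L) + (1.0 if primary_on else 0.0)
        W += X - 0.5
        # step 2: transmit b1 = +1 above gamma, b0 = -1 below -gamma, else silent
        Y_local = np.where(W >= gamma, 1.0, 0.0) + np.where(W <= -gamma, -1.0, 0.0)
        # step 3: physical layer fusion plus N(0,1) MAC noise at the fusion node
        Y = Y_local.sum() + rng.standard_normal()
        # step 4: with g1 = N(mu,1), g0 = N(-mu,1), log[g1/g0](Y) = 2*mu*Y
        F += 2.0 * mu * Y
        if F >= beta:                      # step 5: decide H1
            return 1, k + 1
        if F <= -beta:                     # step 5: decide H0
            return 0, k + 1
    return None, max_k

decisions = [dual_sprt(True) for _ in range(200)]
print(np.mean([d for d, _ in decisions]), np.mean([n for _, n in decisions]))
```

Note that for Gaussian $g_{1}=\mathcal{N}(\mu,1)$ and $g_{0}=\mathcal{N}(-\mu,1)$ the fusion log-likelihood ratio reduces to $2\mu Y_{k}$, which is the common factor $2\mu$ exploited later in the $P_{FA}$ analysis.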
Any prior information available about $H_{0}$ or $H_{1}$ can be used in choosing these constants. The performance of this algorithm depends on $(\gamma_{1},\gamma_{0},\beta_{1},\beta_{0},b_{1},b_{0},\mu_{1},\mu_{0})$. We choose these parameters such that the probability of false alarm $P_{fa}$ at the local nodes is much lower than $P_{FA}$. A good set of parameters for given SNR values can be obtained from known results on SPRT. Making decisions at the local nodes and transmitting only those to the fusion node reduces the transmission rate and the transmit energy used by the local nodes in communicating with the fusion node. Physical layer fusion in Step 3 further reduces the transmission time, but requires synchronisation of the different local nodes. If synchronisation is not possible, some other scheme, e.g., TDMA, can be used. DualSPRT (without physical layer fusion and fusion receiver noise) has been shown to perform well in ([9], [16]). In the rest of this section we analyse its performance under our setup. III-A Performance Analysis We first provide the analysis for $E_{DD}$ and then for $P_{FA}$. The analysis for $E_{DD}$ is similar to that of DualCUSUM in [7]. For simplicity, in the following we take $\gamma_{1}=-\gamma_{0}=\gamma$, $\beta_{1}=-\beta_{0}=\beta$, $\mu_{1}=-\mu_{0}=\mu$ and $b_{1}=-b_{0}=1$. Then $P_{FA}$ is the same under the two hypotheses. $E_{DD}$ Analysis At the fusion node, $F_{k}$ crosses $\beta$ under $H_{1}$ when a sufficient number of local nodes transmit $b_{1}$. The dominant event occurs when the number of transmitting local nodes is such that the mean drift of the random walk $F_{k}$ has just turned positive. In the following we find the mean time to this event, and then the time to cross $\beta$ after it. The $E_{DD}$ analysis is the same under hypotheses $H_{0}$ and $H_{1}$; hence we provide it for $H_{1}$. At secondary node $l$, the SPRT statistic $\{W_{k,l},\ k\geq 0\}$ is a random walk.
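For the Gaussian pair used in Example III-B ($f_{0}=\mathcal{N}(0,1)$, $f_{1}=\mathcal{N}(1,1)$) the LLR increment is $X-1/2$, so under $H_{1}$ this random walk has drift $\delta=1/2$ and increment variance $\sigma^{2}=1$; a quick numerical check (our sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# LLR increment log[f1(x)/f0(x)] for f0 = N(0,1), f1 = N(1,1) is x - 1/2.
X = rng.standard_normal(100_000) + 1.0   # observations under H1
inc = X - 0.5                            # random-walk increments of W_{k,l}
print(inc.mean(), inc.var())             # close to delta = 0.5, sigma^2 = 1
```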
Let $\delta=E_{H_{1}}\left[\log\left(f_{1}(X_{k,l})/f_{0}(X_{k,l})\right)\right]$ and $\sigma^{2}=Var\left[\log\left(f_{1}(X_{k,l})/f_{0}(X_{k,l})\right)\right]$. We know $\delta>0$. The time $\tau_{\gamma}$ for $W_{k,l}$ at each local node to cross the threshold $\gamma$ satisfies $E[\tau_{\gamma}]\sim\gamma/\delta$ for large values of $\gamma$ (needed for small $P_{FA}$). Then, by the central limit theorem, we can show that at each node
$$\tau_{\gamma}\sim\mathcal{N}\left(\frac{\gamma}{\delta},\frac{\sigma^{2}\gamma}{\delta^{3}}\right).$$ (5)
Now, as in [7], we can show that
$$E_{DD}\approx E[t_{j}]+\frac{\beta-\bar{F_{j}}}{\delta_{j}}$$ (6)
where $\delta_{j}$ is the drift of the fusion center SPRT statistic $F_{k}$ when $j$ local nodes are transmitting, $t_{j}$ is the point at which the drift of $F_{k}$ changes from $\delta_{j-1}$ to $\delta_{j}$, $\bar{F_{j}}=E[F_{t_{j}-1}]$ is the mean value of $F_{k}$ just before the transition epoch $t_{j}$, and
$$j=\min\left\{i:\delta_{i}>0\ \text{and}\ \frac{\beta-\bar{F_{i}}}{\delta_{i}}<E[t_{i+1}]-E[t_{i}]\right\}.$$
An iterative method to calculate $E[t_{i}]$ and $\bar{F_{j}}$ efficiently is proposed in [2]. The above $E_{DD}$ analysis follows that of DualCUSUM in [7]. There are some differences between the SPRT at the fusion center here and DualCUSUM in [7], but comparison with simulations shows that we obtain acceptable approximations. $P_{FA}$ Analysis It can easily be verified that $t_{k}$, defined earlier, is the $k^{th}$ order statistic of the $L$ iid random variables $\tau_{\gamma,l}$ (the first passage time to threshold $\gamma$ at the $l^{th}$ node, whose probability density function is given in (5)).
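The Gaussian approximation (5) for the first passage time can be checked numerically. With the assumed pair $f_{0}=\mathcal{N}(0,1)$, $f_{1}=\mathcal{N}(1,1)$ (so $\delta=1/2$, $\sigma^{2}=1$) and $\gamma=10$, (5) predicts mean $\gamma/\delta=20$ and variance $\sigma^{2}\gamma/\delta^{3}=80$; the sample moments come out slightly above the predicted mean because of threshold overshoot (our sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
gamma = 10.0   # local threshold; delta = 0.5, sigma^2 = 1 for this Gaussian pair

def first_passage():
    """First time the local SPRT W_k, run under H1, crosses gamma."""
    W, k = 0.0, 0
    while W < gamma:
        k += 1
        W += rng.standard_normal() + 1.0 - 0.5   # LLR increment under H1
    return k

tau = np.array([first_passage() for _ in range(5000)])
print(tau.mean(), tau.var())   # compare with gamma/delta = 20, sigma^2*gamma/delta^3 = 80
```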
Then $P_{FA}$ when $H_{1}$ is the true hypothesis is given by
$$P_{H_{1}}(\text{False alarm})=P_{H_{1}}(\text{False alarm before }t_{1})+P_{H_{1}}(\text{False alarm between }t_{1}\text{ and }t_{2})+P_{H_{1}}(\text{False alarm between }t_{2}\text{ and }t_{3})+\ldots$$ (7)
One expects the first term in (7) to be dominant: since $P_{fa}$ is much smaller than $P_{FA}$, after $t_{1}$ the drift of $F_{k}$ becomes more positive and the probability of false alarm goes down. We have also verified this from simulations. Hence we focus on the first term. Let
$$S_{k}=\log\left[g_{1}(Y_{k})/g_{0}(Y_{k})\right]\quad\text{and}\quad\theta=\beta/2\mu.$$
Therefore $F_{k}=S_{1}+S_{2}+\ldots+S_{k}$. Every $S_{i}$, $1\leq i\leq k$, has a common factor $2\mu$ (in the case of Gaussian $g_{1}$ and $g_{0}$), which changes the threshold to $\theta=\beta/2\mu$. Then
$$\begin{aligned}
P_{H_{1}}(FA\text{ before }t_{1})&=\sum_{k=1}^{\infty}P\Big[\{F_{k}<-\theta\}\cap_{n=1}^{k-1}\{F_{n}>-\theta\}\,\big|\,t_{1}>k\Big]\,P[t_{1}>k]\\
&=\sum_{k=1}^{\infty}\Big(P\big[F_{k}<-\theta\,\big|\cap_{n=1}^{k-1}\{F_{n}>-\theta\}\big]\ P\big[\cap_{n=1}^{k-1}\{F_{n}>-\theta\}\big]\Big)\Big(1-\Phi_{t_{1}}(k)\Big)\\
&\stackrel{(A)}{=}\sum_{k=1}^{\infty}\Big(P[F_{k}<-\theta\,|\,F_{k-1}>-\theta]\ P\big[\inf_{1\leq n\leq k-1}F_{n}>-\theta\big]\Big)\Big(1-\Phi_{t_{1}}(k)\Big)\\
&\stackrel{(B)}{\geq}\sum_{k=1}^{\infty}\Big(\int_{c=0}^{2\theta}P[S_{k}<-c]\,f_{F_{k-1}}(-\theta+c)\,dc\Big)\Big(1-2P[F_{k-1}<-\theta]\Big)\Big(1-\Phi_{t_{1}}(k)\Big)
\end{aligned}$$
where $\Phi_{t_{1}}$ is the cumulative distribution function of $t_{1}$. As we are considering only $\{F_{k},\ k\leq t_{1}\}$, we drop the dependence on $t_{1}$.
(A) follows from the Markov property of the random walk. (B) is due to the inequality
$$P[\sup_{k\leq n}F_{k}\geq\theta]\leq 2P[F_{n}\geq\theta]$$
for the Gaussian random walk $F_{k}$ [4]. Similarly, we can obtain an upper bound by replacing $P[\cap_{n=1}^{k-1}\{F_{n}>-\theta\}]$ with $P[F_{k-1}>-\theta]$. In Table I we compare the lower bound on $P_{FA}$ with the simulation results. This lower bound can be tightened by carrying out the same analysis for the Gaussian random walk between $t_{1}$ and $t_{2}$, with appropriate changes, and adding it to the result already obtained. III-B Example We apply DualSPRT to the following example and compare the $E_{DD}$ and $P_{FA}$ obtained via the analysis above with simulation results. We assume that the pre-change distribution $f_{0}$ and the post-change distribution $f_{1}$ are Gaussian with different means. This model is relevant when the noise and interference are log-normally distributed [20]; it is useful when $X_{k,l}$ is the sum of the energies of a large number of observations at the secondary node at low SNR. The parameters used for simulation are as follows: there are 5 secondary nodes ($L=5$), $f_{0}\sim\mathcal{N}(0,1)$ and $f_{1}\sim\mathcal{N}(1,1)$, where $\mathcal{N}(a,b)$ denotes the Gaussian distribution with mean $a$ and variance $b$. Also $f_{0}=f_{0,l}$ and $f_{1}=f_{1,l}$ for $1\leq l\leq L$, $\gamma_{1}=-\gamma_{0}=\gamma$, $\beta_{1}=-\beta_{0}=\beta$, $\mu_{1}=-\mu_{0}=\mu$ and $b_{1}=-b_{0}=1$. The $P_{FA}$ and the corresponding $E_{DD}$ are provided in Table I; the parameters are chosen to provide good performance for the given $P_{FA}$. The table also provides the results obtained via the analysis. III-C Analysis for different SNRs The above analysis is for the case where the $X_{k,l}$ have the same distribution for all $l$ under each of the hypotheses $H_{0}$ and $H_{1}$.
In practice, however, the $X_{k,l}$ for different local nodes $l$ will often have different distributions, because their receiver noise can have different variances and/or the path losses from the primary transmitter to the secondary nodes can differ. The above analysis then needs slight changes, for $E_{DD}$ as well as for $P_{FA}$. For the $E_{DD}$ analysis, one difference is that $\tau_{\gamma,l}$, $l=1,\ldots,L$, are no longer iid, so the iterative scheme used in Section III-A to calculate $E[t_{j}]$ and $\bar{F_{j}}$ no longer works. Instead, knowing the minimum number of local nodes needed to make the mean drift of $F_{k}$ positive (say $i^{*}$), we compute the mean of the $i^{*}$-th order statistic of the independent random variables $\tau_{\gamma,l}$, $l=1,\ldots,L$, via [3]. Then we approximate $E_{DD}$ by
$$E[t_{i^{*}}]+\frac{\beta-\left(\frac{E[t_{i^{*}}]-E[t_{i^{*}-1}]}{\delta_{i^{*}-1}}\right)}{\delta_{i^{*}}}.$$ (8)
For the $P_{FA}$ analysis we need the distribution of the first order statistic $t_{1}$ of $\tau_{\gamma,l}$, $l=1,\ldots,L$, and then use the method of Section III-A. We provide an example to verify the accuracy of the performance analysis given above. III-D Example There are five secondary nodes, with primary-to-secondary channel gains of 0, -1.5, -2.5, -4 and -6 dB respectively (the corresponding post-change means are 1, 0.84, 0.75, 0.63 and 0.5). $f_{0}\sim\mathcal{N}(0,1)$ and $f_{0}=f_{0,l}$ for $1\leq l\leq L$. Table II provides the $E_{DD}$ and $P_{FA}$ via analysis and simulations; we see a good match. III-E Different and unknown SNRs Next we consider the case where the received signal power is fixed but not known to the local Cognitive Radio nodes. This can happen if the transmit power of the primary is not known and/or there is unknown shadowing. We now limit ourselves to the energy detector, where the observation $X_{k,l}$ is the sum of the energies of $N$ samples received by the $l^{th}$ Cognitive Radio node.
Then for somewhat large $N$, the pre- and post-change distributions of $X_{k,l}$ can be approximated by Gaussian distributions: $f_{0,l}\sim\mathcal{N}({\sigma}^{2},2{\sigma}^{4}/N)$ and $f_{1,l}\sim\mathcal{N}(P_{l}+{\sigma}^{2},2(P_{l}+{\sigma}^{2})^{2}/N)$, where $P_{l}$ is the received power at the $l^{th}$ CR node and the noise $Z_{k,l}\sim\mathcal{N}(0,{\sigma}^{2})$. Under low SNR conditions $(P_{l}+\sigma^{2})^{2}\approx\sigma^{4}$, and hence the $X_{k,l}$ are Gaussian distributed with a mean change between $H_{0}$ and $H_{1}$. Taking $X_{k,l}-\sigma^{2}$ as the data for the detection algorithm at the $l^{th}$ node, and since $P_{l}$ is unknown, we can formulate this as the sequential hypothesis testing problem
$$H_{0}:\theta=0~;~H_{1}:\theta\geq\theta_{1},$$ (9)
where $\theta$ is $P_{l}$ and $\theta_{1}$ is appropriately chosen. The problem
$$H_{0}:\theta\leq\theta_{0}~;~H_{1}:\theta\geq\theta_{1},$$ (10)
subject to the error constraints
$$P_{\theta}\{\text{reject }H_{0}\}\leq\alpha\ \text{for}\ \theta\leq\theta_{0},$$ (11)
$$P_{\theta}\{\text{reject }H_{1}\}\leq\beta\ \text{for}\ \theta\geq\theta_{1},$$
is well studied for exponential families of distributions in ([11], [12]). The following algorithm of Lai is asymptotically Bayes optimal [11], and hence we use it at the local nodes instead of SPRT. Let $\theta\in A=[a_{1},a_{2}]$. Define
$$W_{n,l}=\max\left[\sum_{k=1}^{n}\log\frac{f_{\hat{\theta}_{n}}(X_{k})}{f_{\theta_{0}}(X_{k})},\sum_{k=1}^{n}\log\frac{f_{\hat{\theta}_{n}}(X_{k})}{f_{\theta_{1}}(X_{k})}\right],$$ (12)
$$N(g,c)=\inf\left\{n:W_{n,l}\geq g(nc)\right\},$$ (13)
where $g(\cdot)$ is a time varying threshold; an approximate expression for it is given in [11].
At time $N(g,c)$, decide $H_{0}$ or $H_{1}$ according as
$$\hat{\theta}_{N(g,c)}\leq\theta^{*}\quad\text{or}\quad\hat{\theta}_{N(g,c)}\geq\theta^{*},$$
where $\theta^{*}$ is obtained by solving $I(\theta^{*},\theta_{0})=I(\theta^{*},\theta_{1})$, and $I(\theta,\lambda)$ is the Kullback-Leibler information number. For Gaussian $f_{0}$ and $f_{1}$, $\hat{\theta}_{n}=\max\{a_{1},\min[S_{n}/n,a_{2}]\}$. The choice of $\theta_{1}$ in (9) affects the performance $E[N|H_{0}]$ and $E[N|H_{1}]$ of the algorithm (12)-(13), where $N=N(g,c)$. In our case, where $H_{0}:\theta=0$ unlike in (10) where $H_{0}:\theta\leq\theta_{0}$, $E[N|H_{0}]$ depends largely on the value of $\theta_{1}$: as $\theta_{1}$ increases, $E[N|H_{0}]$ decreases and $E[N|H_{1}]$ increases. If $P_{l}\in[\underline{P},\overline{P}]$ for all $l$, then a good choice of $\theta_{1}$ is $(\overline{P}-\underline{P})/2$. In the distributed setup with the received power unknown at the local nodes, the local nodes use Lai's algorithm above while the fusion node runs the SPRT; all other details remain the same. We call this algorithm GLR-SPRT. The performance of GLR-SPRT is compared with that of DualSPRT (where the received powers are assumed known at the local nodes) for Example III-D in Table III. Interestingly, $E[N|H_{1}]$ for GLR-SPRT is actually lower than for DualSPRT, but $E[N|H_{0}]$ is higher. IV Channel with Fading In this section we consider the system where the channels from the primary transmitter to the secondary nodes have fading $(h_{l}\neq 1)$. We assume slow fading, i.e., the channel coherence time is longer than the hypothesis testing time. We consider two cases: Case 1, where the fading gain is known to the CR nodes, and Case 2, where it is not. When the fading gain $h_{l}$ is known to the $l^{th}$ secondary node, this reduces to the different-SNR case studied in Section III-C.
Thus we only consider Case 2, where the channel gain $h_{l}$ is not known to the $l^{th}$ node. We consider the energy detector setup of Section III E. However, $P_{l}$, the received signal power at the local node $l$, is now random. If the fading is Rayleigh distributed, then $P_{l}$ has an exponential distribution. The hypothesis testing problem becomes $$H_{0}:f_{0,l}\sim\mathcal{N}(0,\sigma^{2})\,;\quad H_{1}:f_{1,l}\sim\mathcal{N}(\theta,\sigma^{2})\,,$$ (14) where $\theta$ is random with an exponential distribution and $\sigma^{2}$ is the noise variance. We are not aware of this problem having been handled via sequential hypothesis testing. However, we use Lai's algorithm of Section III E, taking $\theta_{1}$ to be the median of the distribution of $\theta$, i.e., such that $P(\theta\geq\theta_{1})=1/2$. This seems a good choice of $\theta_{1}$ for balancing $E[N|H_{0}]$ against $E[N|H_{1}]$. We apply this algorithm to an example where $\sigma^{2}=1$, $\theta\sim\exp(1)$, Var($Z_{k}$) = 1, and $L=5$. The performance of this algorithm is compared with that of DualSPRT (with perfect channel state information) in Table IV (under $H_{0}$) and Table V (under $H_{1}$). The $E_{DD}$ and $P_{FA}$ were computed by averaging over 100,000 simulation runs for each case. We observe that under $H_{1}$, for high $P_{FA}$ this algorithm works better than DualSPRT with channel state information, but as $P_{FA}$ decreases DualSPRT becomes better and the difference increases. Under $H_{0}$, GLR-SPRT is always worse and the difference is almost constant. V Conclusions and Future Work We have proposed an energy efficient, distributed cooperative spectrum sensing technique, DualSPRT, which uses the SPRT at the cognitive radios as well as at the fusion center. We also provide an analysis of DualSPRT. We then modify the algorithm so that it can be used when the received SNR is not known and when there are slow-fading channels between the primary and the secondary nodes.
Future work should consider the analysis of the GLR algorithms and optimization of the current setup. References [1] T. Banerjee, V. Kavitha and V. Sharma, “Energy efficient change detection over a MAC using physical layer fusion”, in Proc. IEEE ICASSP, April 2008. [2] T. Banerjee, V. Sharma, V. Kavitha and A. K. JayaPrakasam, “Generalized analysis of a distributed energy efficient algorithm for change detection”, accepted in IEEE Trans. on Wireless Communications. [3] H. M. Barakat and Y. H. Abdelkader, “Computing the moments of order statistics from nonidentical random variables”, Statistical Methods and Applications, Springer-Verlag, 2003. [4] P. Billingsley, “Probability and Measure”, Third Edition, Wiley-Interscience, 1995. [5] J. F. Chamberland and V. V. Veeravalli, “Decentralized detection in sensor networks”, IEEE Trans. Signal Proces., vol.51, issue 2, pp.407-416, Feb 2003. [6] K. W. Choi, W. S. Jeon, and D. G. Jeong, “Sequential detection of cyclostationary signal for cognitive radio systems,” IEEE Trans. Wireless Commun., vol.8, no.9, pp.4480-4485, Sep. 2009. [7] A. K. Jayaprakasam and V. Sharma, “Cooperative robust sequential detection algorithms for spectrum sensing in cognitive radio”, in Proc. of ICUMT, Oct 2009. [8] S. M. Kay, “Fundamentals of Statistical Signal Processing: Detection Theory”, Englewood Cliffs: Prentice-Hall, vol.2, 1998. [9] N. Kundargi and A. Tewfik, “Hierarchical sequential detection in the context of dynamic spectrum access for cognitive radios”, in Proc. of IEEE Electronics, Circuits and Systems (ICECS), pp.514-517, Dec 2007. [10] L. Lai, Y. Fan and H. V. Poor, “Quickest detection in cognitive radio: A sequential change detection framework”, in Proc. of IEEE GLOBECOM, Nov 2008. [11] T. L. Lai, “Nearly optimal sequential tests of composite hypotheses”, The Annals of Statistics, vol.16, no.2, pp.856-886, 1988. [12] T. L.
Lai, “Sequential analysis: Some classical problems and new challenges (with discussion)”, Statistica Sinica, vol.11, pp.303-408, 2001. [13] A. N. Mody, “Spectrum sensing of the DTV signals in the vicinity of the video carrier using higher order statistics”, IEEE Std. 8022-07/0359r0, July 2007. [14] H. V. Poor and O. Hadjiliadis, “Quickest Detection”, Cambridge University Press, New York, 2009. [15] Z. Quan, S. Cui, H. V. Poor and A. Sayed, “Collaborative wideband sensing for cognitive radios”, IEEE Signal Processing Magazine, vol.25, no.6, pp.60-73, November 2008. [16] Y. Shei and Y. T. Su, “A sequential test based cooperative spectrum sensing scheme for cognitive radios”, in Proc. of IEEE Personal, Indoor and Mobile Radio Communications (PIMRC), Sept 2008. [17] S. Shellhammer, “Numerical spectrum sensing requirements”, IEEE Std. 8022-06/0088r0, June 2006. [18] Z. Tian and G. B. Giannakis, “A wavelet approach to wideband spectrum sensing for cognitive radios,” in Proc. IEEE Int. Conf. Cognitive Radio Oriented Wireless Networks and Commun. (Crowncom), June 2006. [19] Z. Tian and G. B. Giannakis, “Compressed sensing for wideband cognitive radios,” in Proc. IEEE ICASSP, vol.4, pp.1357-1360, Apr. 2007. [20] V. V. Veeravalli and J. Unnikrishnan, “Cooperative spectrum sensing for primary detection in cognitive radios,” IEEE Journal on Selected Topics in Signal Processing, pp.18-27, Feb 2008. [21] A. Wald, “Sequential Analysis”, John Wiley and Sons, New York, 1947. [22] A. Wald and J. Wolfowitz, “Optimum character of the sequential probability ratio test”, The Annals of Mathematical Statistics, vol.19, pp.326-339, 1948. [23] T. Yucek and H. Arslan, “A survey of spectrum sensing algorithms for cognitive radio applications”, IEEE Communications Surveys and Tutorials, vol.11, no.1, pp.116-130, March 2009.
Ascribing quantum system to Schwarzschild spacetime with naked singularity Andrzej Góźdź [email protected] Institute of Physics, Maria Curie-Skłodowska University, pl. Marii Curie-Skłodowskiej 1, 20-031 Lublin, Poland    Aleksandra Pȩdrak [email protected] Department of Fundamental Research, National Centre for Nuclear Research, Pasteura 7, 02-093 Warszawa, Poland    Włodzimierz Piechocki [email protected] Department of Fundamental Research, National Centre for Nuclear Research, Pasteura 7, 02-093 Warszawa, Poland (December 14, 2022) Abstract We quantize the Schwarzschild spacetime with naked singularity using the affine coherent states quantization method. The novelty of our approach is the quantization of both temporal and spatial coordinates. Quantization smears the gravitational singularity indicated by the Kretschmann invariant, avoiding its localization in the configuration space. In this way we resolve the singularity problem of the considered spacetime at the quantum level. I Introduction One of the motivations of this paper is constructing the tools to be used in the quantization of the Lemaître-Tolman-Bondi model of spacetime. Another is testing the idea of quantizing both temporal and spatial variables of a simple gravitational system, to be used later in the case of more sophisticated gravitational models. The system we quantize is the celebrated Schwarzschild spacetime Schw ; Dro . We ascribe to this gravitational system a quantum system by making use of the affine coherent states (ACS) approach that we have recently used for the quantization of the Belinski-Khalatnikov-Lifshitz scenario with a generic cosmological singularity AWG ; AW . To this end, we quantize not only spatial but also temporal coordinates. Instead of the phase space used in Hamiltonian formulations, we introduce the notion of an extended configuration space that includes the time variable. This space is used to quantize both elementary and composite observables.
As far as we are aware, our paper is the first to propose the quantization of both temporal and spatial variables in general relativity. A quite general rationale for this treatment is the following: a preferred distinction between space and time violates relativity, in particular the general covariance under arbitrary transformations of temporal and spatial coordinates. By resolving the gravitational singularity problem we mean showing that quantization smears the singularity indicated by the Kretschmann scalar, avoiding its localization in the configuration space. Recently, we have found that the ACS quantization depends on the choice of the parametrization of the affine group AWT . In this paper we present another “parameter” of the ACS method, not considered before, that is connected with the freedom in the choice of the center of the affine group. There are at least three goals of this paper: (i) presenting a powerful quantization method especially suitable for gravitational systems, (ii) successfully applying this method to the resolution of the gravitational singularity of an isolated object, and (iii) showing that treating temporal and spatial coordinates on the same footing, in support of the covariance of general relativity, enables the construction of a consistent quantum theory. The paper is organized as follows: In Sec. II we recall the known properties of the Schwarzschild spacetime. Sec. III is devoted to the quantum theory. We recall the formalism of the affine coherent states quantization method. Then, we quantize the temporal and spatial coordinates, which are the elementary observables. Quantization of the main observable, the Kretschmann scalar, is carried out in Sec. IV. It includes an examination of the expectation value of the Kretschmann operator. We conclude in Sec. V.
Appendixes include some practical rules concerning the calculation of special expressions, eigensolutions for the elementary observables, the expectation value of the Kretschmann operator within a basis of the carrier space, and the determination of some parameters used in the paper. In the following we choose $G=c=1$ except where otherwise noted. II Classical model One of the simplest vacuum solutions to Einstein's equations, representing a spherically symmetric black hole, is the Schwarzschild spacetime. The Schwarzschild metric in the so-called Schwarzschild coordinates $(t,r,\theta,\phi)\in\mathbb{R}\times(0,\infty)\times S^{2}$ reads black-bible ; Piotr : $${\mathrm{d}}s^{2}=-\left(1-\frac{r_{s}}{r}\right){\mathrm{d}}t^{2}+\left(1-\frac{r_{s}}{r}\right)^{-1}{\mathrm{d}}r^{2}+r^{2}\left({\mathrm{d}}\theta^{2}+\sin^{2}\theta\,{\mathrm{d}}\phi^{2}\right)\;,$$ (1) where $t$ is the time coordinate measured by a stationary clock located infinitely far from the black hole, $r$ is the radial coordinate measured as the circumference (divided by $2\pi$) of a sphere centered around the black hole, $\theta$ and $\phi$ are the angle coordinates of the sphere $S^{2}$, $r_{s}=2M$ denotes the Schwarzschild radius defining the event horizon, and $M$ is the mass parameter of the black hole. It is commonly known that $r=r_{s}$ defines not a gravitational but a coordinate singularity. The event horizon divides the Schwarzschild spacetime into the interior and exterior regions of the black hole. The exterior metric, defined by (1), is static. In the interior region, the radial and temporal coordinates exchange their character, so that the metric coefficients become time dependent Piotr . There exists an isometry of the interior of the Schwarzschild black hole with the vacuum Kantowski-Sachs spacetime (see, e.g., Edward ), which can be used for the quantization of the former. We make some remarks on that quantization in the concluding section.
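As a cross-check of the geometry just introduced (ours, not part of the paper), the Kretschmann invariant of the metric (1) can be computed symbolically; the result reproduces the value $48M^{2}/r^{6}$ quoted in Eq. (2) below. A minimal sympy sketch:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
M = sp.symbols('M', positive=True)
x = [t, r, th, ph]
rs = 2 * M

# Schwarzschild metric (1), signature (-,+,+,+)
g = sp.diag(-(1 - rs / r), 1 / (1 - rs / r), r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
N = 4

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.cancel(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))
                         for d in range(N)) / 2)
           for c in range(N)] for b in range(N)] for a in range(N)]

# Riemann tensor R^a_{bcd}
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][b][d]
                - Gamma[a][d][e] * Gamma[e][b][c] for e in range(N))
    return sp.cancel(expr)

R = [[[[riemann(a, b, c, d) for d in range(N)] for c in range(N)]
      for b in range(N)] for a in range(N)]

# K = R_{abcd} R^{abcd}; the metric is diagonal, so raising and
# lowering indices reduces to multiplying by metric entries
K = sum(g[a, a] * ginv[b, b] * ginv[c, c] * ginv[d, d] * R[a][b][c][d]**2
        for a in range(N) for b in range(N)
        for c in range(N) for d in range(N))
K = sp.simplify(K)
print(K)
```

All angular dependence cancels, leaving a function of $r$ and $M$ only, singular as $r\to 0$.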
In this paper we ascribe a quantum system to the Schwarzschild spacetime devoid of the event horizon. Such a gravitational model is defined by the metric (1) with $M<0$, which is static for any $r>0$ (see, e.g., Piotr ). In this way we avoid the problem of the bearing of the horizon on the quantization, which simplifies the latter. To identify the curvature singularity, we cannot use the Ricci scalar and tensor, as these vanish for a vacuum solution. However, another curvature invariant, the Kretschmann scalar, is non-zero and reads black-bible ; Piotr : $$\mathcal{K}:=R^{\alpha\beta\gamma\delta}R_{\alpha\beta\gamma\delta}=\frac{48M^{2}}{r^{6}}\;,$$ (2) so that it exhibits the gravitational singularity as $r\rightarrow 0$. The Kretschmann invariant is the main observable to be examined at the quantum level. III Quantum description The classical description of the model presented in the previous section includes two elementary observables: the time and radial coordinates. The former is timelike and the latter is spacelike. In the standard quantization procedure, the time variable may play the role of an evolution parameter, as in the Schrödinger equation. In what follows, we quantize both the temporal and spatial coordinates. For both variables, the time $t$ and the radial coordinate $r$, we construct their quantum counterparts, and we treat both quantum observables (operators) on the same footing. This means that time is no longer a parameter but, like the radial coordinate, a quantum observable represented by an appropriate operator obtained by a quantization procedure. In the following, as mentioned earlier, we use the affine coherent states (ACS) quantization. As we will see later, the ACS quantization leads to operators $\hat{t}$ and $\hat{r}$ which, in general, do not commute. Due to the Heisenberg uncertainty principle, they cannot be considered a compatible pair of quantum observables.
In addition, because of this property, one cannot construct common eigenstates of both observables, which would represent spacetime position states. In such a case, the most appropriate candidates for the spacetime position states are the coherent states. The coherent states furnish a set of non-orthogonal states. This means that, in general, the spacetime position states are always connected by non-zero transition amplitudes. They cannot be considered a set of independent alternatives, as is the case for common eigenstates of commuting self-adjoint operators. In this paper we want to check whether introducing time as a quantum observable can help to resolve, at the quantum level, the main problem of general relativity, which is the existence of solutions with gravitational singularities. To begin with, we address the singularity problem of the simplest solution to Einstein's gravity, but we plan to apply this approach to more advanced singular solutions within general relativity. III.1 Affine coherent states quantization The covariance of general relativity requires treating both variables $t$ and $r$ on the same footing in both the classical and quantum descriptions. To fulfill this condition, we begin by introducing the notion of the extended configuration space $T$ of our system, which includes time as an additional coordinate variable required in the description of this quantum system. It is defined as follows: $$T=\{(t,r)\ |\ (t,r)\in\mathbb{R}\times\mathbb{R}_{+}\},\qquad\mathbb{R}_{+}=(0,+\infty)\,,$$ (3) where $t$ and $r$ are the time and radial coordinates, respectively, which occur in the line element (1). The corresponding operators $\hat{t}$ and $\hat{r}$ are constructed in Subsection III.2 by the ACS quantization procedure.
As usual in quantum mechanics, to define a quantum observable one needs to determine an operational procedure which connects this observable with its quantum description. In the case of $t$ and $r$, one needs to measure time and spatial distance. To relate the values of the measured time and radial variables to our states and operators, we introduce the consistency conditions (26) and (27). They represent the compatibility of the expectation values of the time and position operators, within the coherent states, with the measured values. The other space variables $\theta$ and $\phi$ of (1), used to implement the spherical symmetry of the considered spacetime, do not enter the definition of $T$, since the main observable to be quantized, the Kretschmann scalar, does not depend on these variables. In the following we sketch the basic facts about affine quantization required in further considerations. The most important formula in this subsection is the expression (23) for the quantization of an arbitrary classical function defined on the configuration space of a given physical system. Since the configuration space is a half-plane, every point $(t,r)\in T$ can be uniquely identified with the corresponding element $g(\chi_{1}(t,r),\chi_{2}(t,r))$ of the affine group $\textrm{Aff}(\mathbb{R})$, where $\chi(t,r)=(\chi_{1}(t,r),\chi_{2}(t,r))$ is a one-to-one mapping between $T$ and an arbitrarily chosen fixed parametrization of $\textrm{Aff}(\mathbb{R})$.
As the standard parametrization of the affine group (see AWT for more details) we take $(p,q)\in\mathbb{R}\times\mathbb{R}_{+}$, which obeys the following multiplication law: $$g(p_{1},q_{1})\cdot g(p_{2},q_{2}):=g(p_{1}+q_{1}p_{2},q_{1}q_{2})\in\textrm{Aff}(\mathbb{R})\,,$$ (4) and the left invariant measure on this group is defined as $$d\mu(p,q)=dp\frac{dq}{q^{2}}\,.$$ (5) The corresponding left invariant integration over the affine group is given by $$\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q):=\frac{1}{2\pi}\int_{-\infty}^{\infty}dp\int_{0}^{\infty}dq/q^{2}\,.$$ (6) It enables defining the Hilbert space of functions on the affine group $\mathcal{H}_{g}:=L^{2}(\textrm{Aff}(\mathbb{R}),d\mu(g))$, where $g=g(p,q)\in\textrm{Aff}(\mathbb{R})$. Because $(p,q)=\chi(t,r)$ is a one-to-one function, the coordinates $(t,r)\in T$ also parameterize the affine group $\textrm{Aff}(\mathbb{R})$, i.e., we have the mapping $(t,r)\to g(\chi_{1}(t,r),\chi_{2}(t,r))$. The corresponding measure (not necessarily invariant) in the $(t,r)$ parametrization reads $$d\mu(p,q)=\sigma(t,r)dtdr=\begin{vmatrix}\frac{\partial\chi_{1}}{\partial t}&\frac{\partial\chi_{2}}{\partial t}\\ \frac{\partial\chi_{1}}{\partial r}&\frac{\partial\chi_{2}}{\partial r}\end{vmatrix}dtdr=:d\lambda(t,r)\,.$$ (7) It is known (see AWT for more details) that the affine group has two inequivalent irreducible unitary representations defined in the Hilbert space $\mathcal{H}_{x}:=L^{2}(\mathbb{R}_{+},d\nu(x))$, where $d\nu(x):=dx/x$. We choose the one defined as follows: $$U(p,q)\Psi(x):=e^{ipx}\Psi(qx)\,,$$ (8) where $\Psi(x)\in\mathcal{H}_{x}$. The carrier space $\mathcal{H}_{x}$ is known to have the basis GM $$e^{(\alpha)}_{n}(x)=\sqrt{\frac{n!}{(n+\alpha)!}}\,e^{-x/2}x^{(1+\alpha)/2}\,L_{n}^{(\alpha)}(x),$$ (9) where $L_{n}^{(\alpha)}$ is the Laguerre polynomial, $\alpha>-1$, and $(n+\alpha)!=\Gamma(n+\alpha+1)$.
One can verify that $\int_{0}^{\infty}e^{(\alpha)}_{n}(x)e^{(\alpha)}_{m}(x)d\nu(x)=\delta_{nm}$, so that the $e^{(\alpha)}_{n}(x)$ form an orthonormal basis. The coherent states in the standard parametrization of the affine group, $\langle x|g(p,q)\rangle\in\mathcal{H}_{x}$, are defined as follows: $$\langle x|g(p,q)\rangle=U(p,q)\Phi_{0}(x)=e^{ipx}\Phi_{0}(qx)\,,$$ (10) where $\Phi_{0}(x)\in\mathcal{H}_{x}$ is the so-called fiducial vector. It is a sort of free “parameter” of the ACS quantization. First of all, it should be normalized, so that $$\langle\Phi_{0}|\Phi_{0}\rangle:=\int_{0}^{\infty}d\nu(x)\langle\Phi_{0}|x\rangle\langle x|\Phi_{0}\rangle=\int_{0}^{\infty}d\nu(x)|\Phi_{0}(x)|^{2}=1\,,$$ (11) where we have used the formula AWT $$\int_{0}^{\infty}d\nu(x)|x\rangle\langle x|=\hat{\mathbb{1}}\,,$$ (12) which applies to $\mathcal{H}_{x}$. The resolution of the identity $\hat{\mathbb{1}}$ in the Hilbert space $\mathcal{H}_{x}$, in terms of the coherent states, reads AWT $$\frac{1}{A_{\Phi_{0}}}\,\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\ |g(p,q)\rangle\langle g(p,q)|=\hat{\mathbb{1}}\,,$$ (13) where $$A_{\Phi_{0}}:=\int_{0}^{\infty}\frac{dx}{x^{2}}|\Phi_{0}(x)|^{2}<\infty\,,$$ (14) which defines another condition to be imposed on the fiducial vector $\Phi_{0}(x)$. Using (13) we can (formally) map any observable $f:T\rightarrow\mathbb{R}$ into a symmetric operator $\hat{f}:\mathcal{H}_{x}\rightarrow\mathcal{H}_{x}$ as follows (see App. (A) and AWG ; AWT for more details): $$\hat{f}:=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|g(p,q)\rangle f(p,q)\langle g(p,q)|\,.$$ (15) However, as was shown in AWT , the affine quantization depends on the parametrization of the affine group $\textrm{Aff}(\mathbb{R})$, which therefore has to be considered another kind of free “parameter” of the ACS quantization.
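The orthonormality of the basis (9) with respect to $d\nu(x)=dx/x$ can also be checked numerically. A small scipy sketch of ours (the value $\alpha=1/2$ is an arbitrary choice for illustration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln, genlaguerre

def e_basis(n, alpha):
    """Basis function e_n^(alpha)(x) of Eq. (9)."""
    # log of the normalization sqrt(n!/(n+alpha)!) for numerical stability
    logc = 0.5 * (gammaln(n + 1) - gammaln(n + alpha + 1))
    L = genlaguerre(n, alpha)
    return lambda x: np.exp(logc - x / 2) * x**((1 + alpha) / 2) * L(x)

alpha = 0.5
G = np.zeros((3, 3))
for n in range(3):
    for m in range(3):
        fn, fm = e_basis(n, alpha), e_basis(m, alpha)
        # inner product with the measure d nu(x) = dx / x
        G[n, m], _ = quad(lambda x: fn(x) * fm(x) / x, 0, np.inf)
print(np.round(G, 6))
```

The Gram matrix comes out as the identity to quadrature accuracy, confirming $\delta_{nm}$.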
A fundamental expression in the ACS quantization is the non-orthogonal decomposition of unity constructed from the coherent states $|h(t,r)\rangle=|g(\chi(t,r))\rangle$: $$\frac{1}{A_{\Phi_{0}}}\,\int_{\textrm{Aff}(\mathbb{R})}d\lambda(t,r)\ |h(t,r)\rangle\langle h(t,r)|=\frac{1}{A_{\Phi_{0}}}\,\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\ |g(p,q)\rangle\langle g(p,q)|=\hat{\mathbb{1}}\,.$$ (16) The affine group manifold itself is a homogeneous space: all points in this manifold are equivalent to each other. This means that from the physical point of view we have an additional freedom in the mapping of the configuration space onto the group manifold, $g(\chi(\cdot,\cdot)):T\to\textrm{Aff}(\mathbb{R})$. More precisely, in the ACS quantization it is usually assumed that the element $(t_{0}=0,r_{0}=1)\in T$ of the configuration space is mapped onto the unit element $g(0,1)$ of the affine group $\textrm{Aff}(\mathbb{R})$. Because of the homogeneity of the group manifold, this assignment is in fact arbitrary. Every choice of the mapping $g(\chi(t,r))\in\textrm{Aff}(\mathbb{R})$ fixes, in some way, a relative position between the configuration space and the affine group manifold through the relation $T\ni(t_{0}=0,r_{0}=1)\to g(a,b)=g(\chi(0,1))$. It defines the point $g(a,b)\in\textrm{Aff}(\mathbb{R})$, which we call the “center” of the group manifold associated with this configuration space. In the standard parametrization, where the transformation $\chi$ is the identity transformation, the center is identified with the unity $g(0,1)$ of the affine group.
To exploit this freedom, one can check that the resolution of unity is invariant with respect to an arbitrary left shift of the affine group manifold: $$\frac{1}{A_{\Phi_{0}}}\,\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\ |g(a,b)\cdot g(p,q)\rangle\langle g(a,b)\cdot g(p,q)|=\hat{\mathbb{1}}\,,$$ (17) however, it is not invariant with respect to the right shift: $$\frac{1}{A_{\Phi_{0}}}\,\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\ |g(p,q)\cdot g(a,b)\rangle\langle g(p,q)\cdot g(a,b)|=\Delta(g(a,b)^{-1})\hat{\mathbb{1}}\,.$$ (18) In general, the function $\Delta(g)$ is the Haar modulus of the Lie group $G$, defined as $$\int_{G}d\mu(g)f(g\cdot h)=:\Delta(h^{-1})\int_{G}d\mu(g)f(g)\,,$$ (19) where $d\mu(g)$ denotes the left invariant measure on $G$. Note that the right shift of the unity resolution is still proportional to the resolution of unity. The right shift translates the “center” of the affine group manifold to the new point $g(a,b)$. Summing up, in the ACS quantization, which is a deformation of the resolution of the unit operator, we have three free “parameters”: the choice of a fiducial vector, the choice of an affine group parametrization, and the choice of a center of the group manifold. In fact, any choice of the center can be realized by an appropriate choice of the mapping $(p,q)=\chi(t,r)$. However, from a technical point of view it is useful to distinguish the two operations, the choice of the group parametrization and the choice of the center, because the left shift $g^{\prime}=\tilde{g}\cdot g$ and the right shift $g^{\prime\prime}=g\cdot\tilde{g}$ of an element $g\in\textrm{Aff}(\mathbb{R})$ commute, i.e., both operations are independent. This is a useful property in calculations with the invariant measure.
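The invariance statements (17)-(19) boil down to a Jacobian computation on the half-plane. A short symbolic check (our own illustration, not from the paper): the left shift leaves $d\mu=dp\,dq/q^{2}$ unchanged, while the right shift rescales it by the constant $1/b$, which is the origin of the $\Delta$ factor in (18).

```python
import sympy as sp

p, q, a, b = sp.symbols('p q a b', positive=True)

def density_factor(P, Q):
    """Ratio of the transformed measure dP dQ / Q^2 to dp dq / q^2."""
    J = sp.Matrix([[sp.diff(P, p), sp.diff(P, q)],
                   [sp.diff(Q, p), sp.diff(Q, q)]]).det()
    return sp.simplify(J * q**2 / Q**2)

# left shift g(a,b)·g(p,q) = g(a + b p, b q): measure unchanged
left = density_factor(a + b * p, b * q)
# right shift g(p,q)·g(a,b) = g(p + q a, q b): constant factor 1/b
right = density_factor(p + q * a, q * b)
print(left, right)
```

The constant factor depends only on the shift element $g(a,b)$, so the right-shifted resolution of unity in (18) is still proportional to $\hat{\mathbb{1}}$, as stated in the text.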
Using this freedom, the quantization prescription (15) can now be generalized to a deformation of the resolution of unity rewritten in a general parametrization, with an additional right shift which fixes the center of the mapping between the configuration space and $\textrm{Aff}(\mathbb{R})$. Again introducing the shorthand $|h(t,r)\rangle=|g(\chi(t,r))\rangle$, the required resolution of unity reads $$\frac{\Delta(h(a^{\prime},b^{\prime}))}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\lambda(t,r)|h(t,r)\cdot h(a^{\prime},b^{\prime})\rangle\langle h(t,r)\cdot h(a^{\prime},b^{\prime})|=\hat{\mathbb{1}}\,.$$ (20) Now, the ACS quantization of any function $f(t,r)$ on the configuration space is defined as $$\hat{f}=\frac{\Delta(h(a^{\prime},b^{\prime}))}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\lambda(t,r)|h(t,r)\cdot h(a^{\prime},b^{\prime})\rangle f(t,r)\langle h(t,r)\cdot h(a^{\prime},b^{\prime})|\,,$$ (21) where the shift of the group manifold center is given by $h(a^{\prime},b^{\prime})=g(\chi(a^{\prime},b^{\prime}))=:g(a,b)\in\textrm{Aff}(\mathbb{R})$. It is useful to rewrite this formula in terms of our standard affine group parametrization: $$\hat{f}=\frac{\Delta(h(a^{\prime},b^{\prime}))}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\lambda(t,r)|g(\chi(t,r))\cdot g(a,b)\rangle f(t,r)\langle g(\chi(t,r))\cdot g(a,b)|\,.$$ (22) After the change of variables $p=\chi_{1}(t,r)$ and $q=\chi_{2}(t,r)$ under the integral, and performing the right shift operation (19) in the coherent states, one gets the final expression for the quantization of the function $f(t,r)$: $$\hat{f}=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|g(p,q)\rangle f\left(\chi^{-1}\left(p-\frac{a}{b}q,\frac{q}{b}\right)\right)\langle g(p,q)|\,.$$ (23) III.2 Quantization of elementary observables The formula (23) allows one to quantize almost any real function on the configuration space $T$, giving the corresponding operator.
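The shifted argument of $f$ in (23) is simply the inverse of the right multiplication by $g(a,b)$: writing $g(p,q)=g(P,Q)\cdot g(a,b)^{-1}$ and solving for the original parameters recovers the pair $(p-\frac{a}{b}q,\frac{q}{b})$ (in the variables $(P,Q)$ of the shifted point). A one-line symbolic check of ours:

```python
import sympy as sp

p, q, a, b, P, Q = sp.symbols('p q a b P Q', positive=True)

# right multiplication (4): g(p,q)·g(a,b) = g(p + q a, q b) =: g(P, Q)
sol = sp.solve([sp.Eq(P, p + q * a), sp.Eq(Q, q * b)], [p, q], dict=True)[0]
print(sol[p], sol[q])  # p = P - (a/b) Q,  q = Q / b
```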
The most elementary observables are the time and radial coordinates: $$\displaystyle\hat{t}=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|g(p,q)\rangle\chi^{-1}_{1}\left(p-\frac{a}{b}q,\frac{q}{b}\right)\langle g(p,q)|\,,$$ (24) $$\displaystyle\hat{r}=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|g(p,q)\rangle\chi^{-1}_{2}\left(p-\frac{a}{b}q,\frac{q}{b}\right)\langle g(p,q)|\,.$$ (25) They are required for the description of the Schwarzschild spacetime. As mentioned earlier in Subsection III.1, we have to relate the measured values of the time and radial coordinates to our quantum description. For this purpose we have to choose the group parametrization and the group manifold center $g(a,b)$ so as to fulfil the following consistency conditions: $$\displaystyle\langle\hat{t};h(t,r)\rangle=t\,,$$ (26) $$\displaystyle\langle\hat{r};h(t,r)\rangle=r\,,$$ (27) where $\langle\hat{A};\psi\rangle:=\langle\psi|\hat{A}|\psi\rangle$ denotes the expectation value of the observable $\hat{A}$ in the state labelled by $\psi$. These conditions relate the measured time and radial coordinate to the corresponding quantum observables and states. It turns out that we do not need to reparameterize our group to fulfil the required conditions (26) and (27); we only have to choose the group manifold center parameters $g(a,b)$ properly in the standard parametrization. In general, these parameters depend on the choice of the fiducial vector. In the following we derive two useful expressions for expectation values within the coherent states $|g(t,r)\rangle$, which can be easily obtained by applying the invariance of the Haar measure.
For any operator (23) quantized by means of the affine group we get $$\displaystyle\langle\hat{f};g(t,r)\rangle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\langle g(t,r)|g(p,q)\rangle f\left(p-\frac{a}{b}q,\frac{q}{b}\right)\langle g(p,q)|g(t,r)\rangle$$ $$\displaystyle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|\langle g(0,1)|g(p,q)\rangle|^{2}f\left(t+\left(p-\frac{a}{b}q\right)r,\frac{q}{b}r\right)\,,$$ (28) and for a product of two such operators we obtain $$\displaystyle\langle\hat{f}_{1}\hat{f}_{2};g(t,r)\rangle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{1},q_{1})\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{2},q_{2})$$ $$\displaystyle f_{1}\left(t+\left(p_{1}-\frac{a}{b}q_{1}\right)r,\frac{q_{1}}{b}r\right)f_{2}\left(t+\left(p_{2}-\frac{a}{b}q_{2}\right)r,\frac{q_{2}}{b}r\right)$$ $$\displaystyle\langle g(0,1)|g(p_{1},q_{1})\rangle\langle g(p_{1},q_{1})|g(p_{2},q_{2})\rangle\langle g(p_{2},q_{2})|g(0,1)\rangle\,.$$ (29) Using formula (28), the expectation value of the time observable can be written in the following form: $$\langle\hat{t};g(t,r)\rangle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|\langle g(0,1)|g(p,q)\rangle|^{2}\left(t+r\left(p-\frac{a}{b}q\right)\right)\,.$$ (30) After integration over $p$ and $q$ one gets $$\langle\hat{t};g(t,r)\rangle=t+\left(\langle\check{p}\rangle_{0}-\frac{a}{b}\langle\check{q}\rangle_{0}\right)r\,,$$ (31) where we introduce the abbreviations $$\check{f}:=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|g(p,q)\rangle f(p,q)\langle g(p,q)|\,,$$ (32) and $$\langle\check{f}\rangle_{0}:=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|\langle g(0,1)|g(p,q)\rangle|^{2}f(p,q)=\langle g(0,1)|\check{f}|g(0,1)\rangle\,.$$ (33) Thus, $\langle\check{f}\rangle_{0}$ denotes the expectation value of the operator $\check{f}$ in the fixed coherent state $|g(0,1)\rangle$ corresponding to the unity of the affine group.
For more complicated expressions such as $f_{1}^{n}f_{2}^{m}$, instead of the notation (32) with the check symbol over the whole expression, we write $(f_{1}^{n}f_{2}^{m})\check{\phantom{w}}$. Similarly, we obtain $$\langle\hat{r};g(t,r)\rangle=\frac{\langle\check{q}\rangle_{0}}{b}r\,.$$ (34) Assuming $\langle\check{p}\rangle_{0}-\frac{a}{b}\langle\check{q}\rangle_{0}=0$ and $b=\langle\check{q}\rangle_{0}$, i.e., $a=\langle\check{p}\rangle_{0}$, the self-consistency conditions (26) and (27) become fulfilled. An important property of any quantum observable $\hat{A}$ is its variance. The variance quantifies the smearing of a quantum observable, which substantially influences the behaviour of a given physical system. In the quantum state labelled by $\psi$, the variance is defined as follows: $$\mathrm{var}(\hat{A};\psi):=\langle(\hat{A}-\langle\hat{A};\psi\rangle)^{2};\psi\rangle=\langle\hat{A}^{2};\psi\rangle-\langle\hat{A};\psi\rangle^{2}\,.$$ (35) Formally, the variance is the mean squared deviation from the expectation value of the observable $\hat{A}$. Suppose the operator $\hat{A}$ is essentially self-adjoint on some dense subspace $\mathcal{S}$ of the Hilbert space $\mathcal{H}_{x}$. For every quantum state $\psi\in\mathcal{S}$ which belongs to the domain of the operator $\hat{A}$, one can check that $$\Big{(}\mathrm{var}(\hat{A};\psi)=0\Big{)}\Longleftrightarrow\Big{(}\hat{A}\psi=\lambda\psi,\quad\lambda\in\mathbb{R}\Big{)}\,,$$ (36) i.e., the variance of the operator $\hat{A}$ is equal to 0 if and only if the quantum system is in an eigenstate of $\hat{A}$. In that case the corresponding observable is not smeared.
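The equivalence (36) is easy to illustrate in finite dimensions; a toy numpy check of ours, not tied to the paper's carrier space:

```python
import numpy as np

# toy Hermitian observable on C^3 with eigenvalues 1, 3, 5
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
vals, vecs = np.linalg.eigh(A)

def variance(A, psi):
    """var(A; psi) = <A^2; psi> - <A; psi>^2 for a normalized psi."""
    exp_A = psi.conj() @ A @ psi
    exp_A2 = psi.conj() @ A @ A @ psi
    return exp_A2 - exp_A**2

psi_eig = vecs[:, 0]                               # eigenstate
psi_mix = (vecs[:, 0] + vecs[:, 2]) / np.sqrt(2)   # superposition
print(variance(A, psi_eig), variance(A, psi_mix))
```

The variance vanishes on the eigenstate and is strictly positive on the superposition, exactly as (36) asserts.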
The statement (36) is implied by the properties of the scalar product, the norm, and the operator itself: $$\mathrm{var}(\hat{A};\psi)=\langle(\hat{A}-\langle\hat{A};\psi\rangle)\psi\,|\,(\hat{A}-\langle\hat{A};\psi\rangle)\psi\rangle=\|(\hat{A}-\langle\hat{A};\psi\rangle)\psi\|^{2}\,.$$ Thus, $$\Big{(}\mathrm{var}(\hat{A};\psi)=0\Big{)}\Rightarrow\Big{(}(\hat{A}-\langle\hat{A};\psi\rangle)\psi=0\Big{)}\Rightarrow\Big{(}\hat{A}\psi=\langle\hat{A};\psi\rangle\psi\Big{)}\,.$$ The latter equality means that $\langle\hat{A};\psi\rangle$ is the eigenvalue of $\hat{A}$ corresponding to the eigenstate $\psi$. On the other hand, if $\hat{A}\psi=\lambda\psi$, we have $$\mathrm{var}(\hat{A};\psi)=\langle\hat{A}^{2};\psi\rangle-\langle\hat{A};\psi\rangle^{2}=\lambda^{2}\langle\psi|\psi\rangle-\lambda^{2}\langle\psi|\psi\rangle^{2}=0\,,$$ as $\psi$ is a normalized vector. This completes the verification of (36). The variances of the operators $\hat{t}$ and $\hat{r}$ in the coherent states $|g(t,r)\rangle$ can be calculated directly. They describe the smearing of both observables. The behaviour of the variances and expectation values of the time and radial coordinates allows one to determine whether or not they behave similarly to their classical counterparts. Because of the self-consistency conditions, the only unknown quantities are $\langle\hat{t}^{2};g(t,r)\rangle$ and $\langle\hat{r}^{2};g(t,r)\rangle$. Using formula (29), $$\langle\hat{t}^{2};g(t,r)\rangle=t^{2}+2\left\langle\check{p}-\frac{\langle\check{p}\rangle_{0}}{\langle\check{q}\rangle_{0}}\check{q}\right\rangle_{0}tr+\sigma_{t}r^{2}\,,$$ (37) where $$\sigma_{t}:=\left\langle\check{p}^{2}-\frac{\langle\check{p}\rangle_{0}}{\langle\check{q}\rangle_{0}}(\check{p}\check{q}+\check{q}\check{p})+\left(\frac{\langle\check{p}\rangle_{0}}{\langle\check{q}\rangle_{0}}\right)^{2}\check{q}^{2}\right\rangle_{0}\,.$$ (38) In Eq.
(37) the second term vanishes and the variance of the time coordinate operator reads $$\mathrm{var}(\hat{t};g(t,r))=\sigma_{t}\,r^{2}\,.$$ (39) Similarly, for the radial coordinate operator $\hat{r}$ we get $$\langle\hat{r}^{2};g(t,r)\rangle=\frac{\langle\check{q}^{2}\rangle_{0}}{\langle\check{q}\rangle_{0}^{2}}\,r^{2}\,,$$ (40) so that the variance of $\hat{r}$ becomes $$\mathrm{var}(\hat{r};g(t,r))=\sigma_{r}\,r^{2}\,,$$ (41) where $$\sigma_{r}:=\frac{\langle\check{q}^{2}\rangle_{0}-\langle\check{q}\rangle_{0}^{2}}{\langle\check{q}\rangle_{0}^{2}}\,.$$ (42) In both cases the standard deviation from the expectation value (the square root of the variance) is proportional to the radius $r$. The coefficients in (38) and (42) depend only on the fiducial vector $\Phi_{0}(x)$. One can see that while approaching the classical singularity $r\to 0$, the quantum radial observable behaves like its classical counterpart: its expectation value goes to zero and its variance also goes to zero. However, the ratio of the standard deviation to the expectation value of $\hat{r}$ is constant. This suggests the existence of non-zero relative fluctuations of the radial coordinate even at the singularity. Such fluctuations can be a seed that leads to larger fluctuations of other quantum observables, like spacetime invariants, and finally to avoiding the singularity in the Schwarzschild spacetime. To find the lowest bound of the product $\sigma_{t}\sigma_{r}$ one can use the Heisenberg-type uncertainty principle in the form proposed by Robertson Robertson (1929).
In this case we get $$\mathrm{var}(\hat{t};g(t,r))\,\mathrm{var}(\hat{r};g(t,r))\geq\frac{\langle i[\check{p},\check{q}]\rangle_{0}^{2}}{4\langle\check{q}\rangle_{0}^{2}}\,r^{4}\,,$$ (43) which gives the required lowest bound for the product of both smearing coefficients $$\sigma_{t}\,\sigma_{r}\geq\frac{\langle i[\check{p},\check{q}]\rangle_{0}^{2}}{4\langle\check{q}\rangle_{0}^{2}}\,.$$ (44) As an example we give the values of the above constants for some particular fiducial vectors. Let us take $$\Phi_{0}(x)=\frac{1}{\sqrt{(2n-1)!}}x^{n}e^{-\frac{x}{2}}\,,$$ (45) where $n>1$ is a natural number selected to ensure the convergence properties. One easily gets $$\displaystyle\sigma_{t}=\frac{2n-1}{2n-2}\,,$$ (46) $$\displaystyle\sigma_{r}=\frac{1}{2n-2}\,.$$ (47) Thus, the inequality (44) reads $$\sigma_{t}\,\sigma_{r}\geq\frac{1}{4(2n-1)^{2}}\,.$$ (48) IV Quantization of the Kretschmann scalar The observables which characterize the behaviour of the spacetime at a given spacetime point are the curvature invariants. In our case the most important one is the Kretschmann scalar (2). The classical Kretschmann invariant diverges as $r\rightarrow 0$. Does this singularity survive quantization? Is the expectation value of the Kretschmann operator $\hat{\mathcal{K}}$ regular across the configuration space $T$? What is the quantum smearing of $\hat{\mathcal{K}}$? These are the issues to be addressed in this section.
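As an aside, the coefficient $\sigma_{r}$ in (47) for the fiducial vectors (45) can be cross-checked symbolically from the definition (42), using $\langle\check{q}\rangle_{0}=1$ (Eq. (103)) and the expression (111) for $\langle\check{q}^{2}\rangle_{0}$ derived in Appendix D. The following is a sympy sketch, not part of the paper's derivation:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

for n in range(2, 6):  # n > 1, as required in (45)
    Phi0_sq = x**(2*n) * sp.exp(-x) / sp.factorial(2*n - 1)  # |Phi_0(x)|^2 from (45)
    # unit norm in L^2(R_+, dnu) with dnu = dx/x
    assert sp.integrate(Phi0_sq / x, (x, 0, sp.oo)) == 1
    A = sp.integrate(Phi0_sq / x**2, (x, 0, sp.oo))          # A_{Phi_0}, here 1/(2n-1)
    q2 = sp.integrate(Phi0_sq / x**3, (x, 0, sp.oo)) / A**2  # <q-check^2>_0, Eq. (111)
    sigma_r = sp.simplify(q2 - 1)                            # Eq. (42) with <q-check>_0 = 1
    assert sigma_r == sp.Rational(1, 2*n - 2)                # reproduces Eq. (47)
```

Running the loop confirms $\sigma_{r}=1/(2n-2)$ for $n=2,\dots,5$.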
Using our quantization rules (23), the quantum Kretschmann observable can be written as $$\hat{\mathcal{K}}=48M^{2}\langle\check{q}\rangle_{0}^{6}\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|g(p,q)\rangle\frac{1}{q^{6}}\langle g(p,q)|\,.$$ (49) IV.1 Eigenproblem for $\hat{\mathcal{K}}$ operator As the first step, let us consider the eigenproblem of the operator $\hat{\mathcal{K}}$, which allows one to establish the eigenfunctions (or rather, generalized eigenfunctions) and the spectrum of the Kretschmann operator $$\int_{\mathbb{R}_{+}}d\nu(y)\;\mathbf{K}_{\mathcal{K}}(x,y)\;\psi^{(\mathcal{K})}_{k}(y)=k\,\psi^{(\mathcal{K})}_{k}(x)\,,$$ (50) written in terms of the integral kernel $$\displaystyle\mathbf{K}_{\mathcal{K}}(x,y)=\langle x|\hat{\mathcal{K}}|y\rangle=\frac{\langle\check{q}\rangle_{0}^{6}}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\;\langle x|g(p,q)\rangle\;\frac{48M^{2}}{q^{6}}\;\langle g(p,q)|y\rangle=$$ $$\displaystyle=\frac{48M^{2}}{A_{\Phi_{0}}}\langle\check{q}\rangle_{0}^{6}\left[\int_{\mathbb{R}_{+}}\frac{dq}{q^{8}}\left|\Phi_{0}(q)\right|^{2}\right]\;\delta(x-y)x^{7}=\mathcal{A}\;\delta(x-y)x^{7}\,,$$ (51) where the coefficient $\mathcal{A}=\frac{48M^{2}}{A_{\Phi_{0}}}\langle\check{q}\rangle_{0}^{6}\left[\int_{\mathbb{R}_{+}}\frac{dq}{q^{8}}\left|\Phi_{0}(q)\right|^{2}\right]$. Note that the condition $\mathcal{A}<\infty$ requires appropriate behaviour of the fiducial vector at zero and at infinity. Direct calculations lead to the following generalized eigenfunctions $$\psi^{(\mathcal{K})}_{k}(x)=\delta\left(x^{6}-\frac{k}{\mathcal{A}}\right),\quad 0<k<\infty\,,$$ (52) and the positive spectrum $0<k<\infty$ of the Kretschmann operator. For further interpretation it is useful to calculate the form of these solutions as functions of the affine group elements.
In the standard parametrization $(p,q)$ the above states can be written as $$\psi^{(\mathcal{K})}_{k}(p,q)=\frac{1}{6}\left(\frac{\mathcal{A}}{k}\right)^{\frac{5}{6}}\exp\left[i\sqrt[6]{\frac{k}{\mathcal{A}}}p\right]\;\Phi_{0}^{\star}\left(q\sqrt[6]{\frac{k}{\mathcal{A}}}\right)\,.$$ (53) It is obtained using the transformation formula $$\langle g(p,q)|f\rangle=\int_{\mathbb{R}_{+}}d\nu(x)\;e^{-ipx}\Phi_{0}^{\star}(qx)f(x)\,.$$ (54) According to general quantum rules one can expect that $|\psi^{(\mathcal{K})}_{k}(t,r)|^{2}$ is related to the probability density (which in this case cannot be normalized) of finding the Schwarzschild spacetime in the Kretschmann observable eigenstate if this physical system is in the coherent state. As one can see, this probability density is independent of $t$ and depends only on the explicit form of the fiducial vector. An important piece of information implied by the solution of the eigenproblem of $\hat{\mathcal{K}}$ is that the quantum Kretschmann scalar can be potentially infinite because its spectrum is not bounded from above. IV.2 Expectation value for the $\hat{\mathcal{K}}$ operator Expectation values, which give a link between quantum theory and observed values of quantum observables, are state-dependent. This feature is related to an important question about the quantum states of our physical system. As we mentioned earlier, the fundamental observables $\hat{t}$ and $\hat{r}$ do not commute, even though classically they are good observables of our quantum system, so the Schwarzschild spacetime cannot be in any common eigenstate of $\hat{t}$ and $\hat{r}$. In fact, this is a consequence of the Heisenberg uncertainty principle. In this context we need to check whether the expectation values of the operator $\hat{\mathcal{K}}$, determined within the coherent states representing elementary states of the spacetime, behave like the classical Kretschmann scalar.
Using the formula (76) from Appendix A, one gets a simple general expression for the expectation value of the Kretschmann operator $$\langle\Psi|\hat{\mathcal{K}}|\Psi\rangle\equiv\langle\hat{\mathcal{K}};\Psi\rangle=\mathcal{A}\int_{\mathbb{R}_{+}}d\nu(x)\,x^{6}|\Psi(x)|^{2}\,.$$ (55) It turns out that the classical form of the Kretschmann scalar is proportional to the expectation value of the Kretschmann operator calculated within the coherent states $|g(t,r)\rangle$ fulfilling the consistency conditions $$\langle\hat{\mathcal{K}};g(t,r)\rangle=48M^{2}\frac{\langle(q^{-6})\check{\phantom{w}}\rangle_{0}}{\langle\check{q}\rangle^{-6}_{0}}\frac{1}{r^{6}}\,.$$ (56) Therefore, the mean value $\langle\hat{\mathcal{K}};g(t,r)\rangle$ formally has a singularity at $r=0$, as in the classical case. However, to fully determine its behaviour in the quantum case we have to calculate its variance. Applying (III.2) to the operator (49) gives $$\langle\hat{\mathcal{K}}^{2};g(t,r)\rangle=(48M^{2})^{2}\frac{\langle((q^{-6})\check{\phantom{w}})^{2}\rangle_{0}}{\langle\check{q}\rangle^{-12}_{0}}\frac{1}{r^{12}}\,.$$ (57) Combining the expressions (56) and (57) we get the required variance of the Kretschmann operator within the coherent states $$\mathrm{var}(\hat{\mathcal{K}};g(t,r))=(48M^{2})^{2}\left(\langle((q^{-6})\check{\phantom{w}})^{2}\rangle_{0}-\langle(q^{-6})\check{\phantom{w}}\rangle_{0}^{2}\right)\langle\check{q}\rangle^{12}_{0}\;\frac{1}{r^{12}}\,.$$ (58) The variance (58) also tends to infinity as $r$ approaches zero.
However, the ratio of the expectation value $\langle\hat{\mathcal{K}};g(t,r)\rangle$ and the standard deviation $\sqrt{\mathrm{var}(\hat{\mathcal{K}};g(t,r))}$ is independent of $r$ and $t$ $$s=\frac{\langle(q^{-6})\check{\phantom{w}}\rangle_{0}}{\sqrt{\langle((q^{-6})\check{\phantom{w}})^{2}\rangle_{0}-\langle(q^{-6})\check{\phantom{w}}\rangle_{0}^{2}}}\,,$$ (59) i.e., the expectation value of $\hat{\mathcal{K}}$ and its standard deviation are proportional to each other. This behaviour of the variance prevents the mean value of the quantum Kretschmann observable within the coherent states from being a sharply defined singular quantity. The operator $\hat{\mathcal{K}}$ represents a well-behaved smeared observable which is completely undetermined at the classical singularity $r=0$, see Fig. 1. Fluctuations of the Kretschmann quantum observable grow to infinity. This is a novel mechanism which makes it possible to avoid the singularity after quantization of the classical variables. Above, our new mechanism was checked only on the fundamental set of states, i.e., for the affine coherent states. In Appendix C we show that the expectation values and variances of the Kretschmann operator within the dense set of states $$\Psi_{n}(x)=Nx^{n}\exp\left[i\tau_{0}x-\frac{\gamma^{2}x^{2}}{2}\right],$$ (60) where $N^{2}=2\gamma^{2n}/(n-1)!$ and $n=1,2,\dots$, behave exactly as obtained for the affine coherent states. V Conclusions The extension of the configuration space to include the temporal variable on the same footing as the spatial variables is the novelty of our programme of quantization of gravity. In this paper we have used this idea to address the issue of the fate of the naked gravitational singularity of the Schwarzschild spacetime at the quantum level. Quantization of the time variable has enabled resolving the singularity problem.
The above idea seems to be fruitful and worth applying to more realistic models of spacetime with naked singularities, like the ones considered in a series of papers by Pankaj Joshi and his collaborators (see, e.g., PJ1 ; PJ2 ; PJ3 ; PJ4 ; PJ5 ; PJ6 ; PJ7 ; PJ8 and references therein). If isolated objects with naked singularities do occur in the real world, their examination may bring highly valuable data to be used in the construction of quantum gravity. This is because isolated objects with covered singularities, i.e. black holes, may have screened some essential quantum gravity data due to the presence of horizons. The solution of the eigenproblem for the Kretschmann operator shows that the spectrum is bounded from below and unbounded from above. The latter may seem to lead to an embarrassment, but further examination in the context of the expectation value and the variance of the Kretschmann operator indicates the resolution of this difficulty. Making use of the affine coherent states quantization, we have found that the expectation value of the Kretschmann operator $\hat{\mathcal{K}}$ is singular and behaves like $1/r^{6}$, as in the classical case. However, its variance behaves like $1/r^{12}$. One can say that quantization smears the singularity, avoiding its localization in the region of the configuration space including the singularity. In addition, since the variance not only does not vanish but diverges as $r\rightarrow 0$, the state corresponding to $r=0$ cannot be an eigenstate of the operator $\hat{\mathcal{K}}$, as suggested by the property (36). Thus, the system cannot occupy the state corresponding to the gravitational singularity. One can say that the probability of finding our system in the singular state is equal to zero. The above result, obtained for the affine coherent states, has been confirmed in App. C for any vector of the carrier space $L^{2}(\mathbb{R}_{+},d\nu(x))$.
This proves the generality of our singularity-avoiding mechanism. Our conclusion seems to be true for any quantum state of the system under consideration. The issue of a possible resolution of the singularity problem of the Schwarzschild black hole ($M>0$) at the quantum level has been addressed in several papers (see, e.g., Blan1 ; Abhay ; Lisa and references therein). These approaches are based on the isometry of the interior of the black hole with the vacuum Kantowski-Sachs spacetime. An interesting approach is presented in Blan1 . The corrections to the Raychaudhuri equation in the interior of the Schwarzschild black hole derived from loop quantum gravity (LQG) have been examined. The resulting effective equation implies the defocusing of geodesics, which prevents the formation of conjugate points and thus leads to the resolution of the singularity problem. In Abhay the Kruskal-Szekeres coordinates Piotr are applied. Quantum corrections of LQG are used to resolve the singularity problem, and the resulting quantum extension of spacetime has interesting features. An effective LQG model of the Schwarzschild black hole interior based on Thiemann’s identities is proposed in Lisa . The effective dynamics leads to the resolution of the classical singularity. Spherically symmetric vacuum gravity is quantized using LQG techniques in Jorge . Dirac’s quantization procedure leads to the resolution of the singularity of the classical theory inside black holes. The loop quantization of the model of the Schwarzschild interior coupled to a massless scalar field has been studied in Ma . The obtained results indicate the existence of a non-vanishing minimal mass of that black hole, which implies the existence of some black hole remnants after the Hawking evaporation. An extension of the present paper to the case of the Schwarzschild black hole is straightforward.
It will be considered in the context of quantization of the Lemaître-Tolman-Bondi model of an isolated object, with naked or covered singularity, in the near future Janek . Acknowledgements. We would like to thank Jan Ostrowski for helpful discussions. Appendix A Some remarks about calculations According to the methodology of integral quantization, one can see that for every classical observable $f(t,r)$ the corresponding quantized operator $\hat{f}$ (see (23)) is symmetric because its quadratic form $$\langle\Psi|\hat{f}|\Psi\rangle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)|\langle g(p,q)|\Psi\rangle|^{2}f\left(\chi^{-1}\left(p-\frac{a}{b}q,\frac{q}{b}\right)\right)$$ (61) is real for $\Psi$ belonging to the domain of the operator $\hat{f}$. The operator $\hat{f}$ can be bounded by the following expression $$\displaystyle\|\hat{f}|h\rangle\|^{2}=\Big{|}\frac{1}{A_{\Phi_{0}}^{2}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{1},q_{1})\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{2},q_{2})$$ $$\displaystyle f\left(\chi^{-1}\left(p_{1}-\frac{a}{b}q_{1},\frac{q_{1}}{b}\right)\right)f\left(\chi^{-1}\left(p_{2}-\frac{a}{b}q_{2},\frac{q_{2}}{b}\right)\right)\langle h|g(p_{1},q_{1})\rangle\langle g(p_{1},q_{1})|g(p_{2},q_{2})\rangle\langle g(p_{2},q_{2})|h\rangle\Big{|}$$ $$\displaystyle\leq\frac{1}{A_{\Phi_{0}}^{2}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{1},q_{1})\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{2},q_{2})\Big{|}f\left(\chi^{-1}\left(p_{1}-\frac{a}{b}q_{1},\frac{q_{1}}{b}\right)\right)$$ $$\displaystyle f\left(\chi^{-1}\left(p_{2}-\frac{a}{b}q_{2},\frac{q_{2}}{b}\right)\right)\langle h|g(p_{1},q_{1})\rangle\langle g(p_{1},q_{1})|g(p_{2},q_{2})\rangle\langle g(p_{2},q_{2})|h\rangle\Big{|}$$ $$\displaystyle\leq\Big{[}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{1},q_{1})\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{2},q_{2})\Big{|}f\left(\chi^{-1}\left(p_{1}-\frac{a}{b}q_{1},\frac{q_{1}}{b}\right)\right)\langle g(p_{1},q_{1})|g(p_{2},q_{2})\rangle$$ $$\displaystyle
f\left(\chi^{-1}\left(p_{2}-\frac{a}{b}q_{2},\frac{q_{2}}{b}\right)\right)\Big{|}\,\Big{]}{\|\,|h\rangle\,\|}^{2}\,,$$ (62) for all $|h\rangle$ in the domain of the operator $\hat{f}$. The last step is obtained by making use of the Schwarz inequality $|\langle g(p,q)|h\rangle|\leq\||h\rangle\|$. If the above integral contained in the square bracket is finite, the operator $\hat{f}$ is continuous in $L^{2}(\mathbb{R}_{+},d\nu(x))$, i.e., $\hat{f}$ is a self-adjoint operator. However, in practice, even the elementary observables $\hat{t}$ and $\hat{r}$ are unbounded operators and require more careful procedures for extending their domains. Matrix elements of operators are crucial expressions required in quantum calculations. They can be used to extend such operators by symmetrization of their matrix elements $$\langle\psi_{2}|\hat{A}|\psi_{1}\rangle_{sym}:=\frac{1}{2}\left(\langle\psi_{2}|\hat{A}\psi_{1}\rangle+\langle\psi_{1}|\hat{A}\psi_{2}\rangle^{\star}\right)\,.$$ (63) For any symmetric operator $\hat{A}$ the following identity holds $$\langle\psi_{2}|\hat{A}|\psi_{1}\rangle_{sym}=\langle\psi_{2}|\hat{A}|\psi_{1}\rangle\ ,$$ (64) for $\psi_{1}$ and $\psi_{2}$ in the domain of $\hat{A}$; in other cases, however, this equality can be broken and then the symmetrization (63) is useful. Let us assume that $A(p^{\prime},q^{\prime})$ is a real function, where $p^{\prime}=p^{\prime}(p,q)$ and $q^{\prime}=q^{\prime}(p,q)$.
Typical matrix elements are of the following form $$\langle\psi_{2}|\hat{A}|\psi_{1}\rangle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\langle\psi_{2}|g(p,q)\rangle A(p^{\prime},q^{\prime})\langle g(p,q)|\psi_{1}\rangle\,.$$ (65) Calculating in the space $L^{2}(\mathbb{R}_{+},d\nu(x))$, we are allowed to change the order of integration $$\displaystyle\langle\psi_{2}|\hat{A}|\psi_{1}\rangle=\int_{\mathbb{R}_{+}}d\nu(x_{2})\int_{\mathbb{R}_{+}}d\nu(x_{1})$$ $$\displaystyle\psi_{2}(x_{2})^{\star}\left\{\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)e^{ip(x_{2}-x_{1})}A(p^{\prime},q^{\prime})\Phi_{0}(qx_{2})\Phi_{0}(qx_{1})^{\star}\right\}\psi_{1}(x_{1})$$ $$\displaystyle=\int_{\mathbb{R}}dx_{2}\int_{\mathbb{R}}dx_{1}\theta(x_{2})\frac{1}{x_{2}}\psi_{2}(x_{2})^{\star}$$ $$\displaystyle\left\{\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)e^{ip(x_{2}-x_{1})}A(p^{\prime},q^{\prime})\Phi_{0}(qx_{2})\Phi_{0}(qx_{1})^{\star}\right\}\theta(x_{1})\frac{1}{x_{1}}\psi_{1}(x_{1})$$ (66) and we extend the integration over the $x$ variables to the whole real axis by adding the Heaviside function $\theta(x)$. This is useful for the regularization of integrals, if needed, in the spirit of distribution theory.
As an example, let us consider the operator $\hat{p}$ and its matrix element between the position eigenstate $|x\rangle$ and an arbitrary vector in the $L^{2}(\mathbb{R}_{+},d\nu(x))$ space $$\langle x|\hat{p}|\psi\rangle=\theta(x)\frac{1}{2\pi A_{\phi}}\int_{\mathbb{R}}dp\int_{\mathbb{R}_{+}}\frac{dq}{q^{2}}\langle x|g(p,q)\rangle p\langle g(p,q)|\psi\rangle\ .$$ (67) Using the explicit form of the scalar products we get $$\langle x|\hat{p}|\psi\rangle=\frac{1}{A_{\phi}}\theta(x)\int_{\mathbb{R}_{+}}\frac{dq}{q^{2}}\int_{\mathbb{R}}dy\left(\int_{\mathbb{R}}dp\,pe^{ip(x-y)}\right)\Phi_{0}(qx)\Phi_{0}(qy)^{\star}\theta(y)\frac{1}{y}\psi(y)\ .$$ (68) To regularize the integral over $p$, we use the known formula $$\int_{\mathbb{R}}dp\,p\,e^{ip(x-y)}=-i2\pi\delta^{\prime}(x-y)\ ,$$ (69) where the prime denotes the distributional derivative of the Dirac delta. After using this expression and the definition of $\delta^{\prime}$ one gets $$\displaystyle\langle x|\hat{p}|\psi\rangle=\frac{-i}{A_{\phi}}\theta(x)\int_{\mathbb{R}_{+}}\frac{dq}{q^{2}}\Phi_{0}(qx)\frac{\partial}{\partial x}\left[\theta(x)\Phi_{0}(qx)^{\star}\frac{1}{x}\psi(x)\right]$$ $$\displaystyle=-i\theta(x)\delta(x)\psi(x)+\theta(x)\left(-i\frac{\partial}{\partial x}+\frac{i}{2x}\right)\psi(x)\,.$$ (70) Note that in this case the position state $|x\rangle$ does not belong to the domain of the operator $\hat{p}$, so using it can require symmetrization.
This formula allows one to write a more general matrix element $$\displaystyle\langle\psi_{2}|\hat{p}|\psi_{1}\rangle=-i\theta(0)\lim_{x\to 0^{+}}\frac{\psi_{1}(x)^{\star}\psi_{2}(x)}{x}$$ $$\displaystyle+\int_{\mathbb{R}_{+}}d\nu(x)\psi_{2}(x)^{\star}\left(-i\frac{\partial}{\partial x}+\frac{i}{2x}\right)\psi_{1}(x)\,,$$ (71) which after symmetrization can be rewritten as $$\displaystyle\langle\psi_{2}|\hat{p}|\psi_{1}\rangle_{sym}=\theta(0)\lim_{x\to 0^{+}}\frac{\mathrm{Im}(\psi_{2}(x)^{\star}\psi_{1}(x))}{x}$$ $$\displaystyle+\frac{(-i)}{2}\int_{\mathbb{R}_{+}}d\nu(x)\left(\psi_{2}(x)^{\star}\frac{\partial\psi_{1}}{\partial x}-\frac{\partial\psi_{2}}{\partial x}^{\star}\psi_{1}(x)\right)\ .$$ (72) Note that for real functions $\psi$ the expectation value $\langle\psi|\hat{p}|\psi\rangle_{sym}=0$. More generally, the expectation value $\langle\psi|\hat{p}|\psi\rangle_{sym}=0$ for $\mathrm{Im}\left(\psi(x)^{\star}\frac{\partial\psi}{\partial x}\right)=0$. An interesting quantity is the expectation value of the $\hat{p}$ operator in the Gaussian wave packet $\Psi^{(t)}(x)$ constructed from the generalized eigenstates $\psi^{(t)}_{\tau}(x)=\sqrt{x}e^{i\tau x}$ of the $\hat{p}$ operator $$\Psi^{(t)}(x)=N_{t}\int_{\mathbb{R}}d\tau f^{(t)}(\tau)\sqrt{x}e^{i\tau x}=N_{t}\sqrt{x}\exp{(i\tau_{0}x)}\exp{\left[-\frac{1}{2}\gamma_{t}^{2}x^{2}\right]}\ ,$$ (73) where $$f^{(t)}(\tau)=\frac{1}{\sqrt{2\pi}\gamma_{t}}\exp{\left[-\frac{(\tau-\tau_{0})^{2}}{2\gamma_{t}^{2}}\right]}\ .$$ (74) The normalization coefficient is equal to $N_{t}^{2}=2\gamma_{t}/\sqrt{\pi}$. Making use of the formula (A) and $\theta(0)=1/2$, the required average value is $$\displaystyle\langle\Psi^{(t)}(x)|\hat{p}|\Psi^{(t)}(x)\rangle=\tau_{0}\ ,$$ (75) as expected. The same result is obtained from (A). Using the methods of this section, a rather general form of the matrix elements can be obtained if the classical observable depends only on the $q=r$ variable.
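The value (75) can be reproduced symbolically from the symmetrized matrix element (72) applied to the packet (73); for this packet $\Psi^{(t)\star}\Psi^{(t)}$ is real, so the boundary term of (72) vanishes. A sympy sketch (not part of the paper's toolchain):

```python
import sympy as sp

x, tau0, gamma = sp.symbols('x tau_0 gamma', positive=True)

# Gaussian packet (73) with N_t^2 = 2*gamma_t/sqrt(pi)
N = sp.sqrt(2*gamma/sp.sqrt(sp.pi))
Psi = N * sp.sqrt(x) * sp.exp(sp.I*tau0*x - gamma**2*x**2/2)

# unit norm in L^2(R_+, dnu) with dnu = dx/x
norm = sp.integrate(sp.simplify(Psi*sp.conjugate(Psi))/x, (x, 0, sp.oo))
assert sp.simplify(norm - 1) == 0

# second term of (72); the Im(...)/x boundary term is zero for this packet
integrand = (-sp.I/2) * (sp.conjugate(Psi)*sp.diff(Psi, x)
                         - sp.diff(sp.conjugate(Psi), x)*Psi) / x
p_mean = sp.integrate(sp.simplify(integrand), (x, 0, sp.oo))
assert sp.simplify(p_mean - tau0) == 0   # reproduces (75)
```

The integrand reduces to $\tau_{0}N_{t}^{2}e^{-\gamma_{t}^{2}x^{2}}$, whose integral is exactly $\tau_{0}$.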
In this case we need to quantize the function $f(r)$ $$\langle\Psi|\hat{f}|\Psi\rangle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)f(q)|\langle g(p,q)|\Psi\rangle|^{2}=\frac{1}{A_{\Phi_{0}}}\int_{\mathbb{R}_{+}}dx\,\frac{|\Psi(x)|^{2}}{x^{2}}\int_{\mathbb{R}_{+}}\frac{dq}{q^{2}}f(q)|\Phi_{0}(qx)|^{2}\,.$$ (76) Appendix B Eigensolutions for elementary operators Because of the importance of the operators $\hat{t}$ and $\hat{r}$, it is interesting to find their eigensolutions. We will compute them without assuming any special form of the fiducial vector; the only assumption is that the fiducial vector is a real function. In this case, the constants $a$ and $b$ are as follows $$\displaystyle a=\langle\check{p}\rangle_{0}=0\,,$$ (77) $$\displaystyle b=\langle\check{q}\rangle_{0}=1\ .$$ (78) They are calculated in App. D. B.1 Eigenproblem for $\hat{t}$ operator It is easy to show that the eigenfunctions of the differential part of the operator (A) $$\left(-i\frac{\partial}{\partial x}+\frac{i}{2x}\right)\psi^{(t)}_{\tau}(x)=\tau\psi^{(t)}_{\tau}(x)$$ (79) are equal to $$\psi^{(t)}_{\tau}(x)=\sqrt{x}e^{i\tau x}\ .$$ (80) This implies the following matrix elements of the operator $\hat{t}$ $$\displaystyle\langle\psi|\hat{t}|\psi^{(t)}_{\tau}\rangle=-i\theta(0)\lim_{x\rightarrow 0^{+}}\frac{\psi^{\star}(x)}{\sqrt{x}}+\tau\langle\psi|\psi^{(t)}_{\tau}\rangle\ .$$ (81) Because every function $\psi\in L^{2}(\mathbb{R}_{+},d\nu(x))$ has to converge to 0 as fast as $x$, or faster, when $x$ goes to $0^{+}$, the condition $\lim_{x\rightarrow 0^{+}}\psi^{\star}(x)/\sqrt{x}=0$ is fulfilled for such functions and the solutions (80) are generalized eigensolutions (weak solutions) of the operator $\hat{t}$. It is also possible to solve the eigenequation for $\hat{t}$ by using the integral kernel, as was done for the operator $\hat{\mathcal{K}}$.
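The eigenvalue equation (79) for the candidate functions (80) is easy to verify directly (a one-line sympy check, added only as a sanity test):

```python
import sympy as sp

x, tau = sp.symbols('x tau', positive=True)

psi = sp.sqrt(x) * sp.exp(sp.I*tau*x)           # candidate eigenfunction (80)
lhs = -sp.I*sp.diff(psi, x) + sp.I/(2*x)*psi    # differential part of t-hat, Eq. (79)
assert sp.simplify(lhs - tau*psi) == 0          # eigenvalue equals tau
```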
In this case the integral kernel is equal to $$\displaystyle\mathbf{K}_{t}(x,y)=\langle x|\hat{t}|y\rangle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\langle x|g(p,q)\rangle\left(p-\frac{a}{b}q\right)\langle g(p,q)|y\rangle=$$ $$\displaystyle=\frac{(-i)}{A_{\Phi_{0}}}\delta^{\prime}(x-y)\int_{\mathbb{R}_{+}}\frac{dq}{q^{2}}\Phi_{0}(xq)\Phi^{\star}_{0}(qy).$$ (82) Using this method one has to extend the integration over $y$ to the whole real axis by introducing the Heaviside function under the integral, as shown in Appendix A. It is interesting to show the form of $\psi^{(t)}_{\tau}(x)$ as a function of the $(p,q)$ variables. For this purpose one needs to use the explicit form of a fiducial vector. For example, let us assume $\Phi_{0}(x)=\frac{1}{\sqrt{(2n-1)!}}x^{n}e^{-\frac{x}{2}}$ and then $$\psi^{(t)}_{\tau}(p,q)=\int_{\mathbb{R}_{+}}d\nu(x)\;\langle g(p,q)|x\rangle\psi^{(t)}_{\tau}(x)=\frac{\Gamma\left(n+\frac{1}{2}\right)}{\sqrt{(2n-1)!}}\frac{q^{n}}{\left(\frac{q}{2}+i(p+\tau)\right)^{n+\frac{1}{2}}}\,.$$ (83) Obviously, $\psi^{(t)}_{\tau}$ does not belong to $\mathcal{H}_{x}$ because it is not a square-integrable function with the measure $d\nu(x)$ on $\mathbb{R}_{+}$. B.2 Eigenproblem for $\hat{r}$ operator Now, let us examine the eigensolutions of the operator $\hat{r}$. In this case the assumption that the fiducial vector is real is not needed.
$$\int_{\mathbb{R}_{+}}d\nu(y)\;\mathbf{K}_{r}(x,y)\;\psi^{(r)}_{s}(y)=s\,\psi^{(r)}_{s}(x)\,,$$ (84) where $$\displaystyle\mathbf{K}_{r}(x,y)=\langle x|\hat{r}|y\rangle=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\;\langle x|g(p,q)\rangle\;\frac{q}{b}\;\langle g(p,q)|y\rangle=$$ $$\displaystyle=\frac{1}{A_{\Phi_{0}}}\delta(x-y)\,.$$ (85) Eigenfunctions of $\hat{r}$ are Dirac delta type functions $$\displaystyle\psi^{(r)}_{s}(x)=\delta\left(x-\frac{1}{A_{\Phi_{0}}s}\right)\,.$$ (86) The form of these solutions as functions of $(p,q)$ reads $$\psi_{s}(p,q)=\int_{0}^{\infty}d\nu(x)\langle g(p,q)|x\rangle\,\psi^{(r)}_{s}(x)=A_{\Phi_{0}}\;s\,\exp\left[-\frac{ip}{A_{\Phi_{0}}s}\right]\;\Phi^{\star}_{0}\left(\frac{q}{A_{\Phi_{0}}s}\right)\,.$$ (87) Obviously, $\psi^{(r)}_{s}$ does not belong to $\mathcal{H}_{x}$ because it is not a square-integrable function with the measure $d\nu(x)$ on $\mathbb{R}_{+}$. Appendix C Expectation values of the Kretschmann operator within a basis in the Hilbert space $L^{2}(\mathbb{R}_{+},d\nu(x))$ In this appendix we present a derivation of the expectation values and variances of the operators $\hat{t}$, $\hat{r}$ and $\hat{\mathcal{K}}$ within a class of quantum states furnishing a basis in the Hilbert space $L^{2}(\mathbb{R}_{+},d\nu(x))$. Let us consider quantum states similar to the wave packets $\Psi^{(t)}(x)$ defined by (73), where the only modification is in the $x$ dependence of the eigenfunctions $\psi^{(t)}_{\tau}$ $$\Psi_{n}(x)=N\int_{\mathbb{R}}d\tau f^{(t)}(\tau)x^{n}e^{i\tau x}=Nx^{n}\exp\left[i\tau_{0}x-\frac{\gamma^{2}x^{2}}{2}\right]\,,$$ (88) where $n=1,2,\ldots$, $f^{(t)}(\tau)$ is a Gaussian distribution, and $N^{2}=2\gamma^{2n}/(n-1)!$.
The expectation values of the operators $\hat{t}$, $\hat{r}$, $\hat{\mathcal{K}}$ in the states $\Psi_{n}$ are as follows $$\displaystyle\langle\hat{t};\Psi_{n}\rangle=\tau_{0}\,,$$ (91) $$\displaystyle\langle\hat{r};\Psi_{n}\rangle=\frac{1}{A_{\Phi}}\frac{\Gamma\left(n-\frac{1}{2}\right)}{(n-1)!}\;\gamma\,,$$ $$\displaystyle\langle\hat{\mathcal{K}};\Psi_{n}\rangle=\mathcal{A}\frac{(n+2)!}{(n-1)!}\;\frac{1}{\gamma^{6}}\,,$$ and the corresponding variances are $$\displaystyle\mathrm{var}(\hat{t};\Psi_{n})=\frac{4n-3}{4(n-1)}\gamma^{2}\,,$$ (92) $$\displaystyle\mathrm{var}(\hat{r};\Psi_{n})=\frac{1}{A^{2}_{\Phi}}\left(\frac{1}{n-1}-\frac{\Gamma\left(n-\frac{1}{2}\right)^{2}}{(n-1)!^{2}}\right)\;\gamma^{2}\,,\quad n\geq 2$$ (93) $$\displaystyle\mathrm{var}(\hat{\mathcal{K}};\Psi_{n})=\mathcal{A}^{2}\left(\frac{(n+5)!}{(n-1)!}-\frac{(n+2)!^{2}}{(n-1)!^{2}}\right)\;\frac{1}{\gamma^{12}}\,.$$ (94) The expectation values and variances of $\hat{\mathcal{K}}$ and $\hat{r}$ have the same form as in subsection IV.2. The last statement is important because we prove below that the set of functions $\Psi_{n}$ is dense in the Hilbert space $L^{2}(\mathbb{R}_{+},d\nu(x))$. Namely, let us consider the subset of functions $\Psi_{n}$ where the index $n$ is odd. By using linear combinations of these functions one can build a set of functions of the following form $$l_{k}(x):=\sqrt{2}\gamma\;x\;L_{k}(\gamma^{2}x^{2})\;\exp\left(i\tau_{0}x-\frac{\gamma^{2}x^{2}}{2}\right)\,,$$ (95) where $L_{k}(x)$ are the Laguerre polynomials. Calculating the scalar product, after the change of variables $y=\gamma^{2}x^{2}$, one gets $$\langle l_{k}|l_{m}\rangle=\int_{0}^{\infty}d\nu(x)l_{k}(x)^{\star}l_{m}(x)=\int_{0}^{\infty}dyL_{k}(y)L_{m}(y)e^{-y}=\delta_{km}.$$ (96) This means that the set of functions (95) forms an orthonormal basis in the Hilbert space $L^{2}(\mathbb{R}_{+},d\nu)$.
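The Kretschmann moments (91) and (94) and the orthonormality (96) can be cross-checked symbolically, assuming the normalization $N^{2}=2\gamma^{2n}/(n-1)!$ that makes $\Psi_{n}$ a unit vector in $L^{2}(\mathbb{R}_{+},d\nu(x))$, and reading the expectation value (55) with the measure $d\nu(x)=dx/x$. A sympy sketch, not part of the paper's toolchain:

```python
import sympy as sp

x, gamma, y = sp.symbols('x gamma y', positive=True)

for n in range(1, 5):
    N2 = 2*gamma**(2*n) / sp.factorial(n - 1)        # makes ||Psi_n|| = 1 in dnu = dx/x
    rho = N2 * x**(2*n) * sp.exp(-gamma**2 * x**2)   # |Psi_n(x)|^2; the phase drops out
    assert sp.simplify(sp.integrate(rho/x, (x, 0, sp.oo)) - 1) == 0
    # <K>/A and <K^2>/A^2 computed with the measure dnu, cf. (55)
    K1 = sp.integrate(x**6 * rho/x, (x, 0, sp.oo))
    K2 = sp.integrate(x**12 * rho/x, (x, 0, sp.oo))
    assert sp.simplify(K1 - sp.factorial(n+2)/sp.factorial(n-1)/gamma**6) == 0  # cf. (91)
    assert sp.simplify(K2 - K1**2
                       - (sp.factorial(n+5)/sp.factorial(n-1)
                          - (sp.factorial(n+2)/sp.factorial(n-1))**2)/gamma**12) == 0  # cf. (94)

# orthonormality (96) of the functions l_k for a few low orders
for k in range(3):
    for m in range(3):
        val = sp.integrate(sp.laguerre(k, y)*sp.laguerre(m, y)*sp.exp(-y), (y, 0, sp.oo))
        assert val == (1 if k == m else 0)
```

In particular, the ratio $\langle\hat{\mathcal{K}};\Psi_{n}\rangle/\sqrt{\mathrm{var}(\hat{\mathcal{K}};\Psi_{n})}$ built from these moments is independent of $\gamma$, mirroring the $r$-independence of (59).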
Therefore, the set of functions $\Psi_{n}(x)$ must be dense in $L^{2}(\mathbb{R}_{+},d\nu(x))$ and every function belonging to our Hilbert space can be expressed as a combination of the functions $\Psi_{n}(x)$. Appendix D Calculations of $\langle\check{p}\rangle_{0},\langle\check{q}\rangle_{0},\langle\check{p}^{2}\rangle_{0}$, and $\langle\check{q}^{2}\rangle_{0}$ In the following we present the calculation of the components needed for $\sigma_{t}$ and $\sigma_{r}$. In the computation we use the method described in Appendix A. For brevity we use the following notation $\langle g(p_{1},q_{1})|g(p_{2},q_{2})\rangle=\langle p_{1},q_{1}|p_{2},q_{2}\rangle$. Now, we calculate $$\displaystyle\langle\check{p}\rangle_{0}=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\;\langle 0,1|p,q\rangle\;p\;\langle p,q|0,1\rangle=$$ (97) $$\displaystyle=\frac{i}{A_{\Phi_{0}}}\left[\int_{\mathbb{R}_{+}}\frac{dq}{q^{2}}\int_{\mathbb{R}_{+}}\frac{dy}{y}\left(\frac{\Phi^{\star}_{0}(y)}{y}\right)^{\prime}\Phi_{0}(qy)\Phi^{\star}_{0}(qy)\Phi_{0}(y)+\right.$$ (98) $$\displaystyle+\int_{\mathbb{R}_{+}}\frac{dq}{q}\int_{\mathbb{R}_{+}}\frac{dy}{y}\frac{\Phi^{\star}_{0}(y)}{y}\Phi^{\prime}_{0}(qy)\Phi^{\star}_{0}(qy)\Phi_{0}(y)+$$ (99) $$\displaystyle\left.\int_{\mathbb{R}_{+}}\frac{dq}{q}\int_{\mathbb{R}_{+}}\frac{dy}{y^{2}}\delta(y)\left|\Phi_{0}(y)\right|^{2}\left|\Phi_{0}(qy)\right|^{2}\right]\,.$$ (100) The second of these integrals can be turned into the form $(-A_{\Phi_{0}})\int_{\mathbb{R}_{+}}dy\;\Phi_{0}(y)^{\star}\Phi^{\prime}_{0}(y)/y$. The third one gives the same formula with the opposite sign. The last one is equal to $\lim_{y\rightarrow 0^{+}}\left|\Phi_{0}(y)\right|^{2}/y$.
Choosing a fiducial vector for which this limit is equal to zero, one gets $$\langle\check{p}\rangle_{0}=0\,.$$ (101) Now, we calculate further required coefficients of the type $\langle\check{A}\rangle_{0}$: $$\displaystyle\langle\check{q}\rangle_{0}=\frac{1}{A_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p,q)\langle 0,1|p,q\rangle\;q\;\langle p,q|0,1\rangle$$ (102) $$\displaystyle=\frac{1}{A_{\Phi_{0}}}\int_{\mathbb{R}_{+}}\frac{dq}{q}\int_{\mathbb{R}_{+}}\frac{dy}{y}\Phi^{\star}_{0}(y)\Phi_{0}(qy)\Phi^{\star}_{0}(qy)\Phi_{0}(y)=1\,.$$ (103) For the variances we need the average values of the squares of the operators $\check{p}$ and $\check{q}$: $$\displaystyle\langle\check{p}^{2}\rangle_{0}=\frac{1}{A_{\Phi_{0}}^{2}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{1},q_{1})\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{2},q_{2})\langle 0,1|p_{1},q_{1}\rangle\;p_{1}\;\langle p_{1},q_{1}|p_{2},q_{2}\rangle\;$$ (104) $$\displaystyle p_{2}\;\langle p_{2},q_{2}|0,1\rangle$$ (105) $$\displaystyle=\frac{(-1)}{A^{2}_{\Phi_{0}}}\int_{\mathbb{R}_{+}}\frac{dq_{1}}{q_{1}^{2}}\int_{\mathbb{R}_{+}}dx\;\frac{\Phi^{\star}_{0}(x)\Phi_{0}(q_{1}x)}{x}\int_{\mathbb{R}_{+}}dy\frac{\Phi^{\star}_{0}(q_{1}y)}{y}\delta^{\prime}(x-y)$$ (106) $$\displaystyle\left[\int_{\mathbb{R}_{+}}\frac{dq_{2}}{q_{2}^{2}}\left(\int_{\mathbb{R}_{+}}dz\;\delta^{\prime}(z-y)\frac{\Phi^{\star}_{0}(q_{2}z)\Phi_{0}(z)}{z}\right)\Phi^{\star}_{0}(yq_{2})\right]\,.$$ (107) The integral in the square bracket is equal to: $$\displaystyle A_{\Phi_{0}}y\left(\frac{\Phi_{0}(y)}{y}\right)^{\prime}+\frac{\Phi_{0}(y)}{y}B+A_{\Phi_{0}}\delta(y)\Phi_{0}(y)\,,$$ (108) where $B=\int_{\mathbb{R}_{+}}\frac{dq}{q}\Phi_{0}(q)\Phi^{\prime\star}_{0}(q)$. If $\Phi_{0}$ is a real function, $B=A_{\Phi_{0}}/2$. If one chooses the fiducial vector in such a way that the limit $\lim_{y\rightarrow 0^{+}}\frac{\Phi_{0}^{\star}(q_{1}y)\Phi_{0}(y)}{y}$ is equal to zero for every $q_{1}$, then the integration of the part which includes $\delta(y)$ is equal to zero.
A similar situation is obtained after integration over $dx$ in the original integral. If one takes a fiducial vector fulfilling the conditions $\lim_{y\rightarrow 0^{+}}\Phi_{0}(y)/y=0$ and $\lim_{y\rightarrow 0^{+}}(\Phi_{0}(y)/y)^{\prime}<\infty$, one gets: $$\displaystyle\langle\check{p}^{2}\rangle_{0}=\int_{\mathbb{R}_{+}}\frac{dy}{y}\left|y\left(\frac{\Phi_{0}(y)}{y}\right)^{\prime}+\frac{B}{A_{\Phi_{0}}}\frac{\Phi_{0}(y)}{y}\right|^{2}\,.$$ (109) The last component reads $$\displaystyle\langle\check{q}^{2}\rangle_{0}=\frac{1}{A^{2}_{\Phi_{0}}}\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{1},q_{1})\int_{\textrm{Aff}(\mathbb{R})}d\mu(p_{2},q_{2})\langle 0,1|p_{1},q_{1}\rangle\;q_{1}\;\langle p_{1},q_{1}|p_{2},q_{2}\rangle\;$$ (110) $$\displaystyle q_{2}\;\langle p_{2},q_{2}|0,1\rangle=\frac{1}{A_{\Phi_{0}}^{2}}\int_{\mathbb{R}_{+}}\frac{dz}{z^{3}}\left|\Phi_{0}(z)\right|^{2}\,.$$ (111)
Origins of carbon-enhanced metal-poor stars Mahavir Sharma, Tom Theuns, Carlos S. Frenk and Ryan J. Cooke Institute for Computational Cosmology, Department of Physics, Durham University, South Road, Durham, DH1 3LE, UK (Submitted ———- ; Accepted ———-; In original form ———-) Abstract We investigate the nature of carbon-enhanced metal poor (CEMP) stars in Milky Way (MW) analogues selected from the eagle cosmological hydrodynamical simulation. The stellar evolution model in eagle includes the physics of enrichment by asymptotic giant branch (AGB) stars, winds from massive stars, and type I and type II supernovae (SNe). In the simulation, star formation in young MW progenitors is bursty due to efficient stellar feedback, which causes poor metal mixing leading to the formation of CEMP stars with extreme abundance patterns. In this scenario, two classes of CEMP stars emerge: those mostly enriched by low-metallicity type II SNe with low Fe yields that drive galactic outflows, and those mostly enriched by AGB stars when a gas-poor progenitor accretes pristine gas. The first class resembles CEMP-no stars, with high [C/Fe] and low [C/O]; the second class resembles CEMP-s stars, overabundant in s-process elements and with high values of [C/O]. This scenario explains several trends seen in the data: (i) the increase in the scatter and median of [C/O] at low and decreasing [O/H], (ii) the trend for stars with very low [Fe/H] or [C/H] to be of type CEMP-no, and (iii) the reduction in the scatter of [$\alpha$/Fe] with atomic number in metal poor stars. In this scenario, CEMP stars were enriched by the first few generations of stars and supernovae that enabled hydrogen reionization in the early Universe.
keywords: stars: abundances – nuclear reactions, nucleosynthesis, abundances – dark ages, reionisation, first stars – Galaxy: abundances, formation, halo 1 Introduction Whereas hydrogen and most of the helium in the Universe were forged in the Big Bang (Alpher et al., 1948), other elements were predominantly synthesised in stars (e.g. Burbidge et al., 1957). The standard model of stellar evolution includes three main channels of such ‘metal’ enrichment: (i) Massive stars, $M\gtrapprox 6$ M${}_{\odot}$, that burn He ashes hydrostatically, yielding predominantly $\alpha$ elements (whose nuclei consist of an integer number of $\alpha$ particles and hence an even number of protons, such as C, O, Mg, Ne, Si, etc.), imprinting a characteristic ‘odd-even’ elemental abundance pattern. These stars explode as core-collapse type II SNe, with neutron-capture r-process and trans-Fe elements produced during explosive nucleosynthesis. (ii) Intermediate-mass stars ($0.5\lessapprox M/{\rm M}_{\odot}<6$) that produce mainly C and O as well as s-process elements, which are brought to the surface and lost in a stellar wind or planetary nebula following the star’s ascent up the asymptotic giant branch (AGB). (iii) Type Ia SNe with significant Fe yields, which are plausibly the result of mass transfer in a binary star that pushes the SN progenitor over the Chandrasekhar limit. The abundance pattern in stars reflects the extent to which these channels operate and how effectively stellar ejecta mix with star-forming gas; see e.g. Nomoto et al. (2013) for a review. The lifetimes of the progenitors of these three channels are quite different. The short lifetimes ($\lessapprox 40$ Myr) of massive stars suggest that star-forming gas will be rapidly enriched with $\alpha$ elements; the increase in the abundance of C from AGB stars, and of Fe from type Ia SNe, is delayed by $\sim 300$ Myr.
The consequences of such timed release of elements are evident in the high111The metallicity, $Z$, is the mass fraction in metals - elements more massive than helium. The common notation $[{\rm X}/{\rm Y}]\equiv\log(M_{\rm X}/M_{\rm Y})-\log((M_{\rm X}/M_{\rm Y})_{\odot})$ denotes the mass (or number) ratio of elements X and Y, relative to that in the Sun. Here we take the solar abundances from Table 1 of Wiersma et al. (2009b), but this may not be the case for the data to which we compare our results. Differences in the assumed solar abundances are small compared to the large variations discussed here. $[\alpha/{\rm Fe}]$ abundances of elliptical galaxies in which star formation is rapidly suppressed following a burst (Segers et al., 2016). On the other hand, if star formation is quiescent, abundances of stars in a low-$Z$ galaxy will increase slowly, with, for example, [C/O] increasing with metallicity $Z$, until eventually the abundances reflect the yield of the full stellar initial mass function (IMF). The abundance patterns of Milky Way stars that formed relatively recently, such as the Sun for example, do not show large variations relative to the solar abundance pattern. However, the same is not true for extremely metal poor (EMP) stars (those with [Fe/H]$<$-3 in what follows; see Beers & Christlieb (2005) for a review), which can have [C/Fe]$\approx$+1 (carbon-enhanced metal poor stars, or CEMP stars in what follows). This may be caused by inefficient metal mixing, with the abundance pattern reflecting the yield of a few or even a single enriching supernova (e.g. Chan & Heger, 2016). This makes such stars valuable relics in studies of galactic archaeology (e.g. Frebel & Norris, 2015). Abundance patterns different from that of the Sun are also detected in some damped Lyman-$\alpha$ systems (DLAs) of low metallicity ([Fe/H]$\lessapprox-3$, e.g. Cooke et al. (2011a)).
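As an aside, the bracket notation defined in the footnote can be made concrete with a short sketch. The function name and the solar ratio below are illustrative placeholders (not the Wiersma et al. 2009b values):

```python
import math

# Bracket notation from the footnote: [X/Y] = log10(M_X/M_Y) - log10((M_X/M_Y)_sun).
def bracket(m_x, m_y, solar_ratio_xy):
    """Abundance ratio [X/Y] given element masses m_x, m_y and the solar X/Y ratio."""
    return math.log10(m_x / m_y) - math.log10(solar_ratio_xy)

# A star whose C/Fe mass ratio is 100x the (assumed) solar ratio has [C/Fe] = +2,
# i.e. it would count as strongly carbon-enhanced ([C/Fe] > +1).
solar_c_fe = 2.3  # placeholder solar C/Fe mass ratio, an assumption for illustration
print(bracket(230.0, 1.0, solar_c_fe))  # -> 2.0
```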
Abundances at low $Z$ may therefore provide valuable clues to stellar yields at low $Z$, and to the efficiency of metal mixing during early star formation. However, the variety of patterns seen at low $Z$ is baffling. The subclass of CEMP stars alone exhibits examples that show enhancements due to both s- and/or r-process elements, as well as stars with relatively normal abundances of neutron-capture elements222Denoted CEMP-s, CEMP-r, CEMP-rs, and CEMP-no, respectively. (e.g. Beers & Christlieb, 2005). CEMP-s stars may result from mass transfer from an AGB companion (e.g. Aoki et al., 2007; Masseron et al., 2010), and indeed the majority - but crucially not all - CEMP-s stars show radial velocity variations (Lucatello et al., 2005; Starkenburg et al., 2014; Hansen et al., 2016). Komiya et al. (2007) claim that CEMP-r stars result from binary evolution as well, whereas for example Hansen et al. (2015) suggest that the enrichment already happened in the star’s birth cloud. Below [Fe/H]=-3, most stars are CEMP-no, possibly because the natal gas was enriched by a single, early SN with high [C/Fe], as argued by e.g. Frebel et al. (2006). Some observed abundance ratios do not appear to be consistent with any of the standard enrichment channels, possibly pointing to more exotic stellar models. Several authors have argued that the increase in [C/Fe] at low Z may be due to enrichment by population III stars (e.g. Umeda & Nomoto, 2003; Ryan et al., 2005; Cooke & Madau, 2014; Ishigaki et al., 2014), which might also explain the elevated [C/O] (Akerman et al., 2004; Fabbian et al., 2009; Pettini & Cooke, 2014) due to their high C yields (Chieffi & Limongi, 2004); see also Heger & Woosley (2010); Limongi & Chieffi (2012). Other abundance ratios in CEMP stars may be anomalous as well, including values of [O/Fe]$>$4 or [Ba/Fe]$<$-2 (see the review by Frebel & Norris 2015). Aoki et al. (2014) see evidence for pair-instability SNe (PISN) in the abundance pattern of an EMP star.
The abundance patterns in DLAs and their relation to early star formation are discussed by Pettini et al. (2008); Cooke et al. (2011a). In this paper we examine which abundance patterns are predicted by the eagle cosmological hydrodynamical simulation at low $Z$. We focus in particular on how these patterns reflect both metal yields at low $Z$ and metal mixing during the onset of star formation in the progenitors of galaxies like the Milky Way, the latter selection allowing comparison to data. The eagle simulation incorporates the three main nucleosynthesis channels discussed above, as well as an implementation of stellar feedback that strongly affects star formation in small galaxies. The relatively coarse mass resolution in eagle precludes us from making predictions for individual stars, but it might be more realistic for the enrichment of birth clouds. The fact that both stars and DLAs exhibit a similar spread in abundances at low $Z$ suggests that the underlying physical mechanism that drives the spread may operate even at eagle resolution. 2 The eagle simulations eagle (Schaye et al. (2015), hereafter S15) is a suite of cosmological hydrodynamical simulations based on the $\Lambda$ cold dark matter model of structure formation with parameters taken from the Planck Collaboration et al. (2014) paper. The simulations were performed with the gadget-3 code, based on the public Tree-SPH code of Springel (2005), with changes to the numerical hydrodynamics scheme and new subgrid prescriptions for numerically unresolved physical processes relevant to galaxy formation (S15). The numerical parameters of the subgrid modules were calibrated to reproduce the redshift $z\approx 0.1$ galaxy stellar mass function, galaxy sizes, and the stellar mass - black hole mass relation, as described by Crain et al. (2015). Full details of the subgrid modules and modifications to the gadget-3 code used in eagle can be found in S15; we summarise them very briefly here.
Modifications to the code include the anarchy SPH implementation described by Dalla Vecchia (in prep., summarised by Schaller et al. (2015)) and the time-step limiter of Durier & Dalla Vecchia (2012). Cooling and photo-heating of cosmic gas in the presence of a pervasive and time-evolving UV/X-ray and cosmic microwave background is implemented as described by Wiersma et al. (2009a). Star formation is implemented following Schaye & Dalla Vecchia (2008), whereby star-forming gas particles are converted stochastically to collisionless ‘star’ particles in such a way that simulated galaxies follow the Kennicutt-Schmidt law (Kennicutt, 1998). Star particles in the simulation represent a simple stellar population (SSP) with a Chabrier (2003) stellar initial mass function (IMF) in the mass range [0.1,100] M${}_{\odot}$. As stars evolve, they enrich surrounding gas particles, spreading the lost mass according to the SPH formalism; feedback from stars heats the gas as described by Dalla Vecchia & Schaye (2012). eagle tracks 11 elements (H, He, C, N, O, Ne, Mg, Si, S, Ca, Fe) as well as a ‘total’ metallicity variable, through the timed release of elements from the three channels summarised in the Introduction: SNe of types I and II (and winds from the massive star progenitors of type II SNe) and AGB stars. Metallicity-dependent yields were taken from Woosley & Weaver (1995) for type II SNe, from Marigo (2001) and Portinari et al. (1998) for intermediate-mass stars, and from model W3 of Thielemann et al. (2003) for type I SNe (see Wiersma et al. 2009b for full details and the Appendix for an illustration of some characteristic yields of these channels). We track the contribution to Fe from SNe of type I, and the total mass from each of the three enrichment channels, separately. This allows us to determine which channel dominates the enrichment of a given element.
We use subfind (Springel et al., 2001; Dolag et al., 2009) to identify halos and galaxies in the simulation as described by McAlpine et al. (2016); the Milky Way-like galaxies whose stellar abundances we compare to observations below are taken to be $z=0$ central galaxies that inhabit dark matter halos of mass $10^{12}\,{\rm M}_{\odot}<M_{h}<3\times 10^{12}{\rm M}_{\odot}$. We will refer to stars in Milky Way-like eagle galaxies as ‘eagle stars’ in what follows. To mitigate numerical sampling issues related to enrichment, eagle additionally tracks ‘SPH-smoothed’ (as opposed to ‘particle’) abundances, as discussed by Wiersma et al. (2009b). However, our confidence in the accuracy of predicted SPH-smoothed absolute abundances is still limited: we are confident that an eagle star with a smoothed Fe abundance of $[{\rm Fe/H}]<-2$, say, indeed has a low metallicity, but we cannot reliably distinguish stars with $-5<[{\rm Fe/H}]<-4$ from those with $-3<[{\rm Fe/H}]<-2$. However, relative abundances are not affected by sampling since they are sourced by the same star particles for all enrichment channels, and hence are much more reliable. To select candidate CEMP stars in eagle, we will therefore select star particles with low abundance, [Fe/H]$<$-2, and examine their abundance pattern. Star formation in low-mass halos, $M_{h}\lesssim 10^{10}{\rm M}_{\odot}$, is bursty in eagle: the SFR is high when low-mass halos are gas rich, but the feedback from young massive stars may then remove a large fraction of the gas, dramatically reducing the SFR. We will refer to this phenomenon, whereby the gas fraction varies significantly as a function of time, as ‘breathing’. Although the level of stochasticity of the gas fractions in such small halos may be affected by numerical sampling of the feedback events, simulations of high-$z$ dwarf galaxies at much higher resolution typically show similar bursty behaviour (e.g.
Wise et al., 2014; Kimm & Cen, 2014; El-Badry et al., 2016), and bursty star formation therefore appears to be a generic prediction of current models. Such bursts may play an important role in reionisation (Sharma et al., 2016a, b). 3 Abundance patterns at low metallicity 3.1 An AGB origin for the [C/O] upturn at low $Z$ Standard enrichment models predict that [C/O] increases with metallicity, as illustrated by the model of Akerman et al. (2004) plotted in black in Fig. 1, because massive stars drive stronger winds at higher $Z$ (Henry et al., 2000; Carigi, 2000; Akerman et al., 2004; Cescutti et al., 2009; Romano et al., 2010). A trend of [C/O] increasing with $Z$ is seen in eagle stars formed after $z=0.05$, as well as in observed (young) Milky Way disc stars (blue curve and blue symbols in Fig. 1, respectively). However, abundances of MW stars display a surprising upturn in [C/O] below [O/H]$\sim-1$ (Akerman et al., 2004; Fabbian et al., 2009), and similarly high values of [C/O] are detected in low-$Z$ DLAs (Pettini et al., 2008; Cooke et al., 2011b; Pettini & Cooke, 2014; Cooke et al. 2016, in prep.) (red and magenta circles with error bars refer to MW stars and DLAs in Fig. 1, respectively). This upturn might be a signature of enrichment by Pop. III stars (Akerman et al., 2004; Pettini & Cooke, 2014). In addition to an upturn, the scatter333Error bars on the observed abundances of stars are not plotted in the figure, but are typically small compared to the scatter between points. in [C/O] increases dramatically with decreasing [O/H]. Abundances in eagle (red curve) show a similar upturn and increase in scatter (shaded red region), even though Pop. III stars are not part of the model. Because the simulation tracks enrichment by each channel separately, we know that the high values of [C/O] at low [O/H] reflect enrichment by AGB stars instead. This is demonstrated in the right panel of Fig.
1, where we plot [C/H] versus [C/O] for eagle stars formed before $z=6$, coloured by the ratio $$f_{\rm AGB}\equiv{m_{\rm AGB}\over m_{\rm AGB}+m_{\rm SNII}}\,,$$ (1) where $m_{\rm AGB}$ and $m_{\rm SNII}$ are the metal mass received by the precursor gas particle of a star from the AGB and SN type II channels, respectively. Stars with $f_{\rm AGB}=1$ are only enriched by AGB stars (coloured yellow), and have the most extreme values of [C/O]$\gtrsim 1$. The stars that have such low [O/H] and high [C/O] formed before $z=6$ (corresponding to $\sim 1~{}$Gyr after the Big Bang, and only $\sim 700~{}$Myr after the formation of the first stars at $z\sim 15$), which, maybe somewhat surprisingly, is late enough for the AGB channel to have become active. Indeed, the stellar evolution models of Marigo (2001) and Portinari et al. (1998) used in eagle already yield significant AGB enrichment 300 Myr after the stars formed. Since the upturn is due to carbon produced by AGB stars in eagle, oxygen - which is not synthesised significantly in AGB stars (yields of [C/O]$>1$, see below) - is low. We recall that type II SNe produce both carbon and oxygen, and their yields do not become highly supersolar in [C/O]. Therefore, if our interpretation is correct, the high [C/O] stars are not so much carbon-enhanced as they are oxygen-poor. Somehow, the birth cloud of these stars avoided being enriched by type II SNe for long enough, $\gtrapprox 300$ Myr, to allow AGB progenitors to evolve and release carbon. Salvadori & Ferrara (2012) attribute the gentle increase in [C/O] with increasing [O/H] to carbon production by AGB stars. However, in their model, the AGB mechanism does not result in an upturn in [C/O] at low [O/H] and AGB enrichment does not yield DLAs with [C/O]$\gtrapprox 0.5$, which they interpret as evidence for enrichment by Pop. III stars instead. We explore the origin of the high [C/O] in eagle in the next section.
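Eq. (1) can be written down directly as a minimal sketch; the function name and the sample channel masses below are invented illustrative numbers (per-particle metal masses in arbitrary units), not values from the simulation:

```python
# Minimal sketch of Eq. (1): fraction of a star particle's metal mass
# received from the AGB channel, versus the type II SN channel.
def f_agb(m_agb, m_snII):
    """f_AGB = m_AGB / (m_AGB + m_SNII); undefined when both masses are zero."""
    return m_agb / (m_agb + m_snII)

# Pure AGB enrichment gives f_AGB = 1 (the extreme [C/O] > 1 stars in the text);
# pure type II enrichment gives f_AGB = 0.
print(f_agb(1.0, 0.0))  # -> 1.0
print(f_agb(0.0, 2.0))  # -> 0.0
print(f_agb(1.0, 3.0))  # -> 0.25
```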
3.2 Breathing and poor metal mixing Enrichment due to massive short-lived stars should occur soon after a galaxy starts forming stars. As a consequence, the second generation of stars would be expected to show the signature of type II SNe nucleosynthetic yields (high abundances of $\alpha$ elements, for example). However, the abundance pattern of newly formed stars will only reflect the average yields of their precursors if the ejecta of stars mix well with star-forming gas. In eagle, and, as discussed in §2, in many other simulations as well, star formation is very bursty and the gas content varies significantly with time in the low-mass progenitors in which low-$Z$ MW stars form - we refer to this as breathing modes. When the galaxy exhales by ejecting gas, star formation mostly shuts down until new, predominantly pristine, gas accretes. If this happens of the order of 300 Myr later, then the accreted pristine gas may be enriched by AGB stars, and stars that form from this gas may show the signature of AGB yields in their abundance pattern (high [C/O], for example). To demonstrate that this scenario occurs in eagle, we first show that stars that form in low-mass eagle galaxies show very large variations in the value of $r\equiv f_{\rm AGB}/f_{\rm SNII}$, implying low-$Z$ signatures of pure AGB or pure type II enrichment. We next demonstrate that this is related to the gas fraction of the galaxy at the time the stars form. Figures 2 and 3 demonstrate our first claim. Galaxies in halos at $z\gtrapprox 6$ with low maximum circular velocity, $v_{\rm c,max}\lessapprox 50$ km s${}^{-1}$, form stars with a wide range of $r=f_{\rm AGB}/f_{\rm SNII}$. At $z=10$ (top panel of Fig. 2), they form stars with pure type II enrichment patterns (plotted at $f_{\rm AGB}=10^{-7}$) but few show evidence for AGB enrichment, simply because the Universe is relatively young ($\sim 0.5$ Gyr) compared to the evolutionary timescale of AGB stars.
At later times (middle and bottom panels at $z=8$ and $z=6$ in Fig. 2), more stars with high $f_{\rm AGB}$ appear, including stars with $f_{\rm AGB}\approx 1$ which exhibit nearly pure AGB abundances (e.g. [C/O]$>1$). Such extreme abundance patterns occur far less often at higher $v_{\rm c,max}$; see for example the abundant cloud of orange/yellow points ($v_{\rm c,max}>100$ km s${}^{-1}$), which correspond to stars that form in galaxies in which AGB and type II channels are well mixed. Figure 3 shows in more detail that the scatter in $r$ increases dramatically below $v_{\rm c,max}\sim 50$ km s${}^{-1}$. The second claim is demonstrated in Fig. 4: stars with extreme AGB/type II abundances form predominantly in halos with $v_{\rm c,max}\lessapprox 50$ km s${}^{-1}$ (top panel). Those stars predominantly enriched by AGB stars form in halos that, in addition, have low baryon fractions (bottom panel). The latter are small halos where gas has been removed by a previous star burst, with pristine cosmologically accreted gas now being enriched by the early generation of AGB stars, imprinting their characteristic AGB pattern on any newly forming stars. Conversely, gas-rich dwarfs (with high baryon fractions) form stars that may have a characteristic type II SNe pattern. These same SNe then power the outflow that causes the galaxy to become gas poor. How realistic is it that this scenario also applies to early galaxies, given the limitations of eagle? It is based on two aspects of the simulation: (i) large variations in the gas fraction of dwarfs (breathing), and (ii) poor metal mixing of stellar ejecta. A large mass loading factor - the ratio $\beta=\dot{M}_{\rm wind}/\dot{M}_{\star}$ of the galactic outflow rate to the star formation rate - for high-$z$ dwarfs appears to be an essential ingredient of simulations (e.g. Muratov et al., 2015).
Although galactic winds are indeed ubiquitously observed at high-$z$, measuring $\beta$ is challenging; see for example the review by Veilleux et al. (2005). However, metals are observed in the intergalactic medium (Cowie et al., 1995), even at low density (e.g. Schaye et al., 2003), and it is likely that these were deposited there by galactic outflows originating predominantly from dwarf galaxies (Madau et al., 2001; Theuns et al., 2002; Booth et al., 2012). Such wind episodes may also explain the high escape fractions of ionising photons needed to reionise the Universe (Sharma et al., 2016a). Given this evidence we posit that this aspect of our model is relatively well established. How about the poor metal mixing of enriched gas? If, as is likely, winds are (at least partially) powered by massive stars, then it would not be surprising if the metallicity of the wind were higher than that of the general ISM, since it is the hot ejecta that provides the buoyancy for the gas to escape. This is indeed seen in the parsec-resolution wind simulations of Creasey et al. (2015). Therefore, it is plausible that dwarfs can indeed lose a significant fraction of SNe type II products in a galactic outflow. But is it then possible for the remaining gas to be enriched by SNe II to the extremely low levels seen in the simulation? Observations by James et al. (2016) of nearby star forming galaxies show that metals are poorly mixed on scales of $\sim$ 50 parsecs, which they attribute to poor metal mixing around young, star forming regions. Schaye et al. (2007) identify a large population of photo-ionised, compact (sizes 100 pc), metal rich ($Z\sim Z_{\odot}$) clouds in the redshift $z\sim 2$ intergalactic medium, from which they conclude that intergalactic metals are poorly mixed. Given these two lines of observational evidence, we argue that poor metal mixing in the $z\gtrapprox 6$ dwarfs is at least plausible.
This scenario makes another testable prediction: if metal mixing is indeed poor, especially during the early stages of star formation in a galaxy, we would expect to see stars enriched (almost exclusively) by SNe of type II as well. We investigate observational evidence for this next. 3.3 The origin of stars with high [C/Fe] at low [C/O] The combination of poor metal mixing and the existence of two channels of carbon production (AGB and type II SNe) gives rise to two classes of CEMP stars in eagle: those enriched by AGB stars (which do not produce Fe), and those enriched by type II SNe with Fe-poor ejecta. In the models of Woosley & Weaver (1995), the latter occur for a wide range of progenitor masses at low metallicity, $Z\lessapprox 0.004$, and also for progenitor masses of $M_{\star}\approx 30$ M${}_{\odot}$ for $Z=0.02$ (see also Fig. 10 below). Massive type II SNe are the first to enrich their surroundings as a galaxy begins to form stars in eagle444We remind the reader that the simulation does not include any Pop. III stars. Their yields are extremely high in [C/Fe] and slightly subsolar in [C/O] (see the Appendix). As time progresses and lower-mass type II SNe explode, the enrichment pattern shifts to yields with lower [C/Fe] and values of [C/O] still within $\sim 0.8$ dex of solar. The timescale for this initial enrichment is of course very short, with the $\sim 10~{}$Myr lifetime of a 20 M${}_{\odot}$ star much shorter than the $\sim 300$ Myr delay of the earliest AGB events. As time progresses, we therefore expect the abundance of star-forming gas that is enriched by type II SNe to shift along the arrows in Fig. 5. However, if feedback from these massive stars is able to eject a significant fraction of the star-forming gas, then star formation may temporarily halt. When it resumes, following cosmological accretion of mostly pristine gas, AGB stars may enrich star-forming gas, yielding stars with high [C/Fe] and [C/O]$>1$.
With time, the galaxy grows in mass, dramatic outflows following bursts diminish, and AGB and type II (and type I as well) ejecta mix well, so that when [Fe/H]$\sim-2$, the abundances of [C/Fe] and [C/O] eventually approach solar values. This scenario predicts that at very low $Z$, the value of [C/O] should increase with decreasing [O/H] fastest for stars with [Fe/H]$\gtrapprox-3.5$. Indeed, the MW progenitors in which these stars form are sufficiently evolved to host AGB stars, and it is their C-rich but O-poor ejecta that cause [C/O] to increase. At even lower [Fe/H], the MW progenitor is too young to host significant numbers of AGB stars and enrichment is mostly by type II SNe: it is in these MW progenitors that the stars with lower [C/O] at very low [O/H] in Fig. 1 form. We can take this reasoning further to predict s-process abundances, which signal AGB enrichment. In Fig. 6, we plot [C/Fe] versus [Fe/H] for eagle stars (depicted as a 2D greyscale histogram) and compare this to abundance ratios of Milky Way stars taken from the saga database555http://saga.sci.hokudai.ac.jp/wikidoku.php described by Suda et al. (2008). The observed stars are collected from a diverse set of observational surveys with a variety of selection criteria. The database is therefore not a complete sample of CEMP stars. In red giant branch (RGB) stars, carbon can be burned to nitrogen (Roederer et al., 2014; Placco et al., 2014). When this happens, the measured (surface) carbon abundance does not reflect the initial carbon abundance. We therefore exclude RGB stars (surface gravity $\log_{10}(g/({\rm cm}~{}{\rm s}^{-2}))<3.2$) from Figs. 6 and 7. However, we do show RGB stars in Figs. 8 and 9 below: their [C/$\alpha$] values are lower limits, but [$\alpha$/Fe] is unaffected by nuclear burning. In both eagle and saga, the scatter and the median value of [C/Fe] increase with decreasing [Fe/H] (Fig. 6).
The observations show a cluster of points with [C/Fe]$\sim 2$ at $-3\lessapprox{\rm[Fe/H]}\lessapprox-2$, most of which are CEMP-s stars, and a significant fraction of these are believed to be binary stars. The enhancement in carbon and s-process elements in these binary stars likely results from mass transfer. Since eagle does not include binary star evolution, it comes as no surprise that they are absent from the simulation. At low values of [Fe/H]$\lessapprox-3.5$, observed stars with [C/Fe]$>1$ are mostly of type CEMP-no (see Fig. 6), suggesting a link between low Fe enrichment and CEMP-no nature (e.g. Aoki et al., 2007; Yong et al., 2013). To examine whether there is such a connection in eagle, we select CEMP stars by requiring that [C/Fe]$>1$ (imposing additionally that $Z>10^{-3}$ to avoid sampling stars at too low metallicity) and divide them into two classes: (i) CEMP-no: stars formed before $z=10$ with $f_{\rm SNII}>0.9$, and (ii) CEMP-s: stars with $f_{\rm AGB}>0.1$. The redshift selection imposed on the CEMP-no stars avoids contamination by lower-mass type II SNe, which can enrich gas with r-process elements due to their high neutron flux. The criterion $f_{\rm SNII}>0.9$ selects stars predominantly enriched by massive low-$Z$ type II SNe; the criterion $f_{\rm AGB}>0.1$ selects stars enriched with s-process elements (see also Fig. 5). At [Fe/H]$\lessapprox-3$, the selected stars are CEMP-no (red histogram in the top panel of Fig. 6), with the CEMP-s fraction increasing with increasing [Fe/H] (blue histogram). This trend is similar to that observed. Yoon et al. (2016) show that [C/H] is an even better predictor of s-process enhancement, with CEMP-no stars at low [C/H] and CEMP-s stars at higher [C/H]. This is illustrated in Fig. 7, where CEMP stars from the saga database are plotted as blue or red plus symbols, depending on whether they are classified as CEMP-s or CEMP-no, respectively.
The CEMP stars from eagle are shown as open circles, using the same colour convention. It is clear that eagle reproduces the observed trend. As explained above, the underlying reason is that CEMP-no eagle stars are enriched very early on by high mass, low $Z$ type  II SNe, whereas the CEMP-s stars appear later on and are enriched with C by AGB stars. 3.4 Signatures of low-$Z$ type II enrichment The two classes of CEMP are very well separated in Fig. 8: a type II enriched branch with [C/O]$\lessapprox-0.4$ and a wide range of [O/Fe]$\gtrapprox 1$, and an AGB enriched branch with [C/O]$\gtrapprox 0.5$ and [O/Fe]$\sim 0\hbox{--}2$ (where the Fe and most of the O comes from a small contribution from type II SNe). Stars more recently formed from well-mixed gas are shown in blue, they approach solar abundance ratios with increasing $Z$. The bottom panel of the figure plots observed stars taken from the saga database (filled circles show detections, crosses indicate upper limits, colour represents [Fe/H] as in the top panel). The data also show a large scatter in [C/O], with a trend of lower [Fe/H] stars having higher [O/Fe] which is also clearly seen in eagle. However, the two branches that stand out clearly in the simulation (top panel) are not so obvious in the data. It is not obvious how to identify these two classes of CEMP stars observationally in a diagram such as Fig. 8, because measuring the oxygen abundance is more difficult than for other elements. The upper branch can be distinguished by the high s-process element abundance of such stars, a clear signature of the AGB origin of carbon in CEMP stars. The relative abundance pattern of $\alpha$-elements may provide a signature of the stars on the lower branch. This is explored in Fig. 9, where we compare results from eagle in grey, to observed patterns of metal poor stars with [Fe/H]$<-2$ from the saga database (coloured symbols); crosses indicate limits in cases of non-detection of one or more elements. 
In the top left panel showing [C/O] versus [O/Fe], we have labelled the loci in which eagle stars are predominantly enriched by AGB and type II SNe. Mixing these channels leads to the appearance of stars with high [O/Fe], and much lower [C/O] than AGB yields ([C/O]$\gtrapprox 1$) and higher than type II SNe yields ([C/O]$\sim-0.9$). Although the upper AGB branch is well populated in the diagram, the presence of observed stars on the lower type II branch may be less convincing. However, a well documented feature of type II SNe yields at low-$Z$ is the dramatic decrease in the scatter in [$\alpha$/Fe] along the sequence of increasing atomic number $A$ from O-Mg-Si to Ca. In the Woosley & Weaver (1995) yields used in eagle, this is clearly seen in the truncation of the grey region at large values of [$\alpha$/Fe] particularly for Si and Ca for low values of [C/$\alpha$]. The physical reason underlying this trend is that in the ‘onion’ model of the SN precursor, the Ca shell lies close to the Fe core, whereas the Si, Mg and O shells, in that order, lie further away. If the central core is strongly bound, as is the case at low $Z$, the SN explosion may not be sufficiently energetic to expel deep stellar layers. In that case Ca and Fe should track each other much more tightly than O and Fe, say, which is indeed what the grey eagle pattern shows. Note in particular the sharp reduction in the number of eagle stars above [Si/Fe]$\sim 1$ or [Ca/Fe]$\sim 1$ at low [C/$\alpha$]. Interestingly, a sharp reduction in the scatter of [$\alpha$/Fe] for higher $A$ is also seen in the data. In addition, the number of observed stars dramatically decreases above [Mg/Fe]$=0.8$, [Si/Fe]$=1$ or [Ca/Fe]$>0.6$. Recall that in the eagle simulation, AGB stars are the source of carbon in the stars with high [C/$\alpha$]. Therefore stars with low [C/$\alpha$] are enriched mainly by type II SNe. Such stars are clearly present in the data as well. 
4 Summary We have explored the origin of carbon enhanced metal poor (CEMP) stars in the eagle cosmological hydrodynamical simulation (Schaye et al., 2015), selecting galaxies by halo mass to be ‘Milky Way’-like. Data for Milky Way CEMP stars can be classified in a number of subclasses (Beers & Christlieb, 2005; Frebel & Norris, 2015), and we compared observed and simulated abundance patterns such as [C/O] versus [C/Fe] or [O/H]. Both simulation and data show a large increase in the scatter of [C/O] and an upturn in the median value of [C/O] at low [O/H]. The trends in the simulation are a consequence of two effects that relate to the nature of star formation in eagle at high $z$: bursty star formation combined with poor metal mixing in low-mass galaxies. Stellar feedback powers strong outflows in eagle galaxies, particularly in those with maximum circular velocity $v_{\rm c,max}\leq 50$ km s${}^{-1}$ at $z>6$. The absence of gas following a star burst then prevents further star formation for sufficiently long times that, when eventually cosmological accretion replenishes the galaxy with mostly pristine gas, stars form but not before their natal gas is enriched by ejecta from asymptotic giant branch (AGB) stars. The simulation therefore yields two distinct classes of CEMP stars: AGB enriched stars, and stars enriched by type II SNe with low iron yields (which result from either massive low $Z$ progenitors, or $\approx 30$ M${}_{\odot}$ more metal rich progenitors, in the models of Woosley & Weaver (1995)). The relatively large differences in the lifetimes of the progenitor stars that cause the enrichment ($\gtrapprox 300$ Myrs for the AGB stars, but $\lessapprox 10$ Myrs for a 20 M${}_{\odot}$ type II SNe) then leads to the prediction that the lowest [Fe/H] stars that are enriched first are of type CEMP-no, whereas CEMP-s stars form only at slightly higher [Fe/H]$\gtrapprox-3$, as is also observed (Aoki et al., 2007; Yong et al., 2013), as discussed by Frebel et al. 
(2006). These classes are even better distinguished by their [C/H], as observed and discussed by Yoon et al. (2016), and indeed by their [C/O]. This scenario makes several testable predictions. A mixture of carbon enrichment by AGB and Fe-poor type II SNe is consistent with the large observed scatter in [C/O] at low [O/H], and is also evidence for poor (metal) mixing of the yields from the AGB and type II enrichment channels. The observed abundances of $\alpha$-elements compared to carbon show similar trends with atomic number, $A$, as seen in eagle; a dramatic decrease in the scatter of [$\alpha$/Fe] for Si and Ca, compared to O and Mg. In the simulations, this pattern is imprinted by low-$Z$ type II yields. The physical mechanism that underlies this is that the core region of massive low $Z$ stars is so strongly bound that its content is more difficult to eject by the SN explosion. This then also explains the correspondingly high [C/Fe] yields. The abundance patterns of very low $Z$ stars are an imprint of the bursty nature of star formation at high $z$, and therefore may provide a handle on the nature of the galaxies that reionized the Universe. Sharma et al. (2016a, b) proposed a model whereby these high-$z$ starbursts drive outflows that clear channels through which ionising photons can escape, with binary stars potentially an important source of photons (Stanway et al., 2016). This model may explain why the escape fraction of ionising photons increases dramatically with redshift, as is necessary if galaxies are the dominant sources of reionising photons (e.g. Haardt & Madau, 2012; Khaire et al., 2015). The bursty nature of these galaxies, combined with poor metal mixing, leaves signatures in the abundance patterns of the stars that formed at those early times, with CEMP-no stars forming predominantly during the gas rich burst phase, and CEMP-s stars forming during a more quiescent phase. 
In other words, high-$z$ star formation determines the elemental abundances of low-$Z$ stars. This line of reasoning prompts us to speculate that CEMP stars were enriched by the stars that enabled galaxies to reionise the Universe. Acknowledgments We thank our eagle colleagues (J. Schaye, M. Schaller, R. Crain and R. Bower) for allowing us to use the simulations. We also thank Max Pettini for insightful comments and Stefania Salvadori for providing the data in her published paper. This work was supported by the Science and Technology Facilities Council [grant number ST/F001166/1] and by the Interuniversity Attraction Poles Programme initiated by the Belgian Science Policy Office (AP P7/08 CHARM). We used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-Infrastructure capital grant ST/K00042X/1, STFC capital grant ST/H008519/1, and STFC DiRAC is part of the National E-Infrastructure. The data used in this work are available through collaboration with the authors. M. S. is supported by an STFC post-doctoral fellowship. R. J. C. is supported by a Royal Society University Research Fellowship. Appendix The stellar evolution models and yields used in eagle, implemented as described by Wiersma et al. (2009b), are illustrated in Fig. 10. Abundance ratios of [C/O], [O/Fe] and [C/Fe] are plotted in blue, red and green, respectively. Yields for type Ia SNe, taken from model W3 of Thielemann et al. (2003), are plotted in the left inset. Metallicity-dependent yields for type II SNe, computed by Woosley & Weaver (1995), are plotted as a function of the mass, $M_{\star}$, of the progenitor star in the central panel, for (solar abundance pattern) progenitors with $Z=0.02$ (solid lines), and $Z=0.0004$ (dashed lines). These models yield very low or no Fe for $M_{\star}\approx 30$ M${}_{\odot}$ at either metallicity. 
In addition, stars more massive than $20$ M${}_{\odot}$ also do not produce iron for $Z=0.0004$. This is a consequence of the core of the SN precursor being so strongly bound that Fe is not ejected during the explosion (also called ‘fall back’). At (very) low $Z$ this occurs because of the absence of ${}^{12}$C to kick-start the CNO cycle when the proto-star heats up; consequently, the star contracts further to reach higher densities and temperatures. The central panel also shows the [C/O] yield of mass ejected by AGB stars as a function of lifetime (top $x$-axis), for solar abundance stars, using the models of Marigo (2001) and Portinari et al. (1998). The right inset shows abundances for pair-instability SNe of progenitor mass $M_{\star}=150$ M${}_{\odot}$ and $270$ M${}_{\odot}$ (left and right set of points, respectively), taken from Heger & Woosley (2002). References Akerman et al. (2004) Akerman C. J., Carigi L., Nissen P. E., Pettini M., Asplund M., 2004, A&A, 414, 931 Alpher et al. (1948) Alpher R. A., Bethe H., Gamow G., 1948, Physical Review, 73, 803 Aoki et al. (2007) Aoki W., Beers T. C., Christlieb N., Norris J. E., Ryan S. G., Tsangarides S., 2007, ApJ, 655, 492 Aoki et al. (2014) Aoki W., Tominaga N., Beers T. C., Honda S., Lee Y. S., 2014, Science, 345, 912 Beers & Christlieb (2005) Beers T. C., Christlieb N., 2005, ARA&A, 43, 531 Booth et al. (2012) Booth C. M., Schaye J., Delgado J. D., Dalla Vecchia C., 2012, MNRAS, 420, 1053 Burbidge et al. (1957) Burbidge E. M., Burbidge G. R., Fowler W. A., Hoyle F., 1957, Reviews of Modern Physics, 29, 547 Carigi (2000) Carigi L., 2000, Rev. Mexicana Astron. Astrofis., 36, 171 Cescutti et al. (2009) Cescutti G., Matteucci F., McWilliam A., Chiappini C., 2009, A&A, 505, 605 Chabrier (2003) Chabrier G., 2003, PASP, 115, 763 Chan & Heger (2016) Chan C., Heger A., 2016, ArXiv e-prints Chieffi & Limongi (2004) Chieffi A., Limongi M., 2004, ApJ, 608, 405 Cooke et al. (2011a) Cooke R., Pettini M., Steidel C. C., Rudie G. 
C., Jorgenson R. A., 2011a, MNRAS, 412, 1047 Cooke et al. (2011b) Cooke R., Pettini M., Steidel C. C., Rudie G. C., Nissen P. E., 2011b, MNRAS, 417, 1534 Cooke & Madau (2014) Cooke R. J., Madau P., 2014, ApJ, 791, 116 Cowie et al. (1995) Cowie L. L., Songaila A., Kim T.-S., Hu E. M., 1995, AJ, 109, 1522 Crain et al. (2015) Crain R. A. et al., 2015, MNRAS, 450, 1937 Creasey et al. (2015) Creasey P., Theuns T., Bower R. G., 2015, MNRAS, 446, 2125 Dalla Vecchia & Schaye (2012) Dalla Vecchia C., Schaye J., 2012, MNRAS, 426, 140 Dolag et al. (2009) Dolag K., Borgani S., Murante G., Springel V., 2009, MNRAS, 399, 497 Durier & Dalla Vecchia (2012) Durier F., Dalla Vecchia C., 2012, MNRAS, 419, 465 El-Badry et al. (2016) El-Badry K., Wetzel A., Geha M., Hopkins P. F., Kereš D., Chan T. K., Faucher-Giguère C.-A., 2016, ApJ, 820, 131 Fabbian et al. (2009) Fabbian D., Nissen P. E., Asplund M., Pettini M., Akerman C., 2009, A&A, 500, 1143 Frebel et al. (2006) Frebel A. et al., 2006, ApJ, 652, 1585 Frebel & Norris (2015) Frebel A., Norris J. E., 2015, ARA&A, 53, 631 Haardt & Madau (2012) Haardt F., Madau P., 2012, ApJ, 746, 125 Hansen et al. (2016) Hansen T. T., Andersen J., Nordström B., Beers T. C., Placco V. M., Yoon J., Buchhave L. A., 2016, A&A, 588, A3 Hansen et al. (2015) Hansen T. T., Andersen J., Nordström B., Beers T. C., Yoon J., Buchhave L. A., 2015, A&A, 583, A49 Heger & Woosley (2002) Heger A., Woosley S. E., 2002, ApJ, 567, 532 Heger & Woosley (2010) Heger A., Woosley S. E., 2010, ApJ, 724, 341 Henry et al. (2000) Henry R. B. C., Edmunds M. G., Köppen J., 2000, ApJ, 541, 660 Ishigaki et al. (2014) Ishigaki M. N., Tominaga N., Kobayashi C., Nomoto K., 2014, ApJ, 792, L32 James et al. (2016) James B. L., Auger M., Aloisi A., Calzetti D., Kewley L., 2016, ApJ, 816, 40 Kennedy et al. (2011) Kennedy C. R. et al., 2011, AJ, 141, 102 Kennicutt (1998) Kennicutt, Jr. R. C., 1998, ARA&A, 36, 189 Khaire et al. (2015) Khaire V., Srianand R., Choudhury T. 
R., Gaikwad P., 2015, ArXiv e-prints : 1510.04700 Kimm & Cen (2014) Kimm T., Cen R., 2014, ApJ, 788, 121 Komiya et al. (2007) Komiya Y., Suda T., Minaguchi H., Shigeyama T., Aoki W., Fujimoto M. Y., 2007, ApJ, 658, 367 Limongi & Chieffi (2012) Limongi M., Chieffi A., 2012, ApJS, 199, 38 Lucatello et al. (2005) Lucatello S., Gratton R. G., Beers T. C., Carretta E., 2005, ApJ, 625, 833 Madau et al. (2001) Madau P., Ferrara A., Rees M. J., 2001, ApJ, 555, 92 Marigo (2001) Marigo P., 2001, A&A, 370, 194 Masseron et al. (2010) Masseron T., Johnson J. A., Plez B., van Eck S., Primas F., Goriely S., Jorissen A., 2010, A&A, 509, A93 McAlpine et al. (2016) McAlpine S. et al., 2016, Astronomy and Computing, 15, 72 Muratov et al. (2015) Muratov A. L., Kereš D., Faucher-Giguère C.-A., Hopkins P. F., Quataert E., Murray N., 2015, MNRAS, 454, 2691 Nomoto et al. (2013) Nomoto K., Kobayashi C., Tominaga N., 2013, ARA&A, 51, 457 Pettini & Cooke (2014) Pettini M., Cooke R. J., 2014, Mem. Soc. Astron. Italiana, 85, 542 Pettini et al. (2008) Pettini M., Zych B. J., Steidel C. C., Chaffee F. H., 2008, MNRAS, 385, 2011 Placco et al. (2014) Placco V. M., Frebel A., Beers T. C., Stancliffe R. J., 2014, ApJ, 797, 21 Planck Collaboration et al. (2014) Planck Collaboration et al., 2014, A&A, 571, A16 Portinari et al. (1998) Portinari L., Chiosi C., Bressan A., 1998, A&A, 334, 505 Roederer et al. (2014) Roederer I. U., Preston G. W., Thompson I. B., Shectman S. A., Sneden C., Burley G. S., Kelson D. D., 2014, AJ, 147, 136 Romano et al. (2010) Romano D., Karakas A. I., Tosi M., Matteucci F., 2010, A&A, 522, A32 Ryan et al. (2005) Ryan S. G., Aoki W., Norris J. E., Beers T. C., 2005, ApJ, 635, 349 Salvadori & Ferrara (2012) Salvadori S., Ferrara A., 2012, MNRAS, 421, L29 Schaller et al. (2015) Schaller M., Dalla Vecchia C., Schaye J., Bower R. G., Theuns T., Crain R. A., Furlong M., McCarthy I. G., 2015, MNRAS, 454, 2277 Schaye et al. 
(2003) Schaye J., Aguirre A., Kim T.-S., Theuns T., Rauch M., Sargent W. L. W., 2003, ApJ, 596, 768 Schaye et al. (2007) Schaye J., Carswell R. F., Kim T.-S., 2007, MNRAS, 379, 1169 Schaye et al. (2015) Schaye J. et al., 2015, MNRAS, 446, 521 Schaye & Dalla Vecchia (2008) Schaye J., Dalla Vecchia C., 2008, MNRAS, 383, 1210 Segers et al. (2016) Segers M. C., Schaye J., Bower R. G., Crain R. A., Schaller M., Theuns T., 2016, MNRAS, 461, L102 Sharma et al. (2016a) Sharma M., Theuns T., Frenk C., Bower R., Crain R., Schaller M., Schaye J., 2016a, MNRAS, 458, L94 Sharma et al. (2016b) Sharma M., Theuns T., Frenk C., Bower R. G., Crain R. A., Schaller M., Schaye J., 2016b, ArXiv e-prints Springel (2005) Springel V., 2005, MNRAS, 364, 1105 Springel et al. (2001) Springel V., White S. D. M., Tormen G., Kauffmann G., 2001, MNRAS, 328, 726 Stanway et al. (2016) Stanway E. R., Eldridge J. J., Becker G. D., 2016, MNRAS, 456, 485 Starkenburg et al. (2014) Starkenburg E., Shetrone M. D., McConnachie A. W., Venn K. A., 2014, MNRAS, 441, 1217 Suda et al. (2008) Suda T. et al., 2008, PASJ, 60, 1159 Theuns et al. (2002) Theuns T., Viel M., Kay S., Schaye J., Carswell R. F., Tzanavaris P., 2002, ApJ, 578, L5 Thielemann et al. (2003) Thielemann F.-K. et al., 2003, in From Twilight to Highlight: The Physics of Supernovae, Hillebrandt W., Leibundgut B., eds., p. 331 Umeda & Nomoto (2003) Umeda H., Nomoto K., 2003, Nature, 422, 871 Veilleux et al. (2005) Veilleux S., Cecil G., Bland-Hawthorn J., 2005, ARA&A, 43, 769 Wiersma et al. (2009a) Wiersma R. P. C., Schaye J., Smith B. D., 2009a, MNRAS, 393, 99 Wiersma et al. (2009b) Wiersma R. P. C., Schaye J., Theuns T., Dalla Vecchia C., Tornatore L., 2009b, MNRAS, 399, 574 Wise et al. (2014) Wise J. H., Demchenko V. G., Halicek M. T., Norman M. L., Turk M. J., Abel T., Smith B. D., 2014, MNRAS, 442, 2560 Woosley & Weaver (1995) Woosley S. E., Weaver T. A., 1995, ApJS, 101, 181 Yong et al. (2013) Yong D. et al., 2013, ApJ, 762, 27 Yoon et al. 
(2016) Yoon J. et al., 2016, ArXiv e-prints: 1607.06336
Scene Text Recognition with Full Normalization Nathan Zachary, Gerald Carl, Russell Elijah, Hessi Roma, Robert Leer, James Amelia Abstract Scene text recognition has made significant progress in recent years and has become an important part of many document-processing workflows. The widespread use of mobile devices opens up wide possibilities for using OCR technologies in everyday life. However, the lack of training data remains a pressing problem for new research in this area. In this article, we present a new dataset consisting of real photographs taken on smartphones and demonstrate the effectiveness of profile normalization for this task. In addition, the influence of various augmentations during the training of models for analyzing document images on smartphones is studied in detail. Our dataset is publicly available. Index Terms: Deep Learning, Generative Adversarial Nets, Image Synthesis, Computer Vision. I Introduction In this paper, we study in detail the effect of standard augmentations in creating synthetic data and also offer a number of additional augmentation options that increase the model’s robustness to images captured on a smartphone. In addition, we propose an effective word-by-word profile-normalization preprocessing method for the optical character recognition problem that can potentially improve overall quality regardless of language and leads to a more effective training procedure. Moreover, we investigate the relationship between training quality and the amount of data, and study how well synthetic validation quality corresponds to validation on real data. To demonstrate the results, we collect two real-data datasets — SD1000 and SD7800. The first consists of 1 000 word boxes from camera-captured images taken in good lighting conditions, with a small amount of geometric distortion and relatively good photo quality. The second dataset consists of 7 800 word boxes. 
This dataset comprises images captured in the wild, with poor lighting conditions, strong geometric distortions, shadows, photo noise, and camera-shake effects. Examples of boxes included in each of the datasets are presented in Fig. 1. Both datasets are publicly available. The contributions of this paper are the following: We propose an unsupervised profile-normalization method that is used at the preprocessing stage. This method works for real camera-captured document images and can potentially be applied to any language. We evaluate different data augmentation techniques for the OCR problem and propose modifications of them to obtain more realistic training samples. We collect a real camera-captured word-level dataset for the OCR problem. It comprises more than 7 000 images captured in different lighting conditions by various smartphones. Extensive experiments on this dataset demonstrate that the proposed preprocessing method boosts the performance of the recognizers and decreases the number of training samples needed. II Related Work Automated detection and recognition of text in scenes has attracted increasing interest, as witnessed by a growing number of benchmarking competitions [1, 2]. Different detection techniques have been proposed, from earlier ones using hand-crafted features [3, 4] to recent ones using DNNs [5, 6, 7, 8, 9]. Different detection approaches have also been explored, including character-based [10, 11, 12, 13], word-based [6, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27], and the recent line-based [28, 29]. Meanwhile, different scene text recognition techniques have been developed, from earlier ones recognizing characters directly [30, 31, 32, 33, 34, 35, 36] to recent ones recognizing words or text lines using recurrent neural networks (RNNs) [37, 38, 39, 40] and attention models [41, 42, 43]. 
Similar to other detection and recognition tasks, training accurate and robust scene text detectors and recognizers requires a large amount of annotated training images. On the other hand, most existing datasets such as ICDAR2015 [1] and Total-Text [44] contain only a few hundred or thousand training images, which has become one major factor impeding the advance of scene text detection and recognition research. According to our highlighted contributions, we restrict the discussion to different datasets, synthetic data generation methods, and data augmentation and data preprocessing techniques. Nowadays there are a number of open datasets that can be applied to OCR tasks, such as ICDAR 03 and the ICDAR Robust Reading challenge. However, their size is limited, and typically these datasets reflect a limited number of different conditions under which the analyzed image was obtained. In some cases it is a document scan or a high-resolution photo with good illumination; in others, a noisy picture taken on a smartphone in poor lighting conditions. Shadows or camera-shake effects may also be present. Limited data is not sufficient to train a robust deep neural model with high generalization ability. One possible approach to obtaining more training samples is data collection. However, the annotation process for the optical character recognition (OCR) task is complex, time-consuming, and requires a lot of resources. To overcome these difficulties, most text recognition systems [10,19,20] use synthetic data for training [10,11,12,13,14]. For data generation in text recognition tasks, different fonts and alpha composition with various backgrounds are widely used [13,14]. For scene text detection, there are more advanced approaches to text overlay [11] that are based on image depth. 
Another popular approach to avoiding overfitting during training is to use various data augmentation techniques. Data augmentation increases the diversity of the data in order to improve the model’s generalization ability. The most widespread approach is to use a fixed combination of predefined transformations. These transformations comprise blur, various geometric transformations, different morphological filters, rotation, crop, resize, noise, shadows, etc. Recent research has focused on automatically searching for optimal data augmentation policies. Another way to improve OCR quality is to use different image preprocessing techniques. Generally, data augmentation methods increase the diversity of the training dataset to get closer to real data. In contrast, at the preprocessing stage we can standardize the real and training datasets. Prior work proposes a profile-normalization preprocessing technique and a grid-based transformation data-augmentation method for the handwritten text recognition problem. T. Breuel studies geometric text line normalization as a preprocessing method for scanned document images. The improvement in quality with text line normalization is explained by the fact that it allows letter case to be better distinguished. Other authors increase image resolution, which leads to overall quality improvement. Our proposed preprocessing method is a modification of profile normalization for camera-captured images. Moreover, this method is fully unsupervised and can potentially be applied to any language in order to improve OCR quality. III Method To generate a synthetic dataset, we use collections of 397 different fonts, 5 640 background images and 438 238 original English words. Generation is in grayscale format. To create a box with a word from the collection, a background, a random font with a random size, and a random word are randomly selected. Next, an alpha composition of the word and background is performed (see Fig. 
2). Background and word intensities are also randomly selected from specified ranges. Thus, an arbitrary number of training samples can be generated. Real dataset. For the final test of model quality, we collected SD7800, a dataset of smartphone photos with 7 800 word boxes. This dataset consists of real photos taken on various smartphones under different conditions. In order to reproduce the complex cases that occur in real life when shooting on mobile devices, various conditions were simulated during shooting, such as poor lighting, camera movement, strong camera noise, strong shadows, and low-resolution photos. Strong rotations and perspective distortions are not reflected in the subset, since all modern OCR pipelines use preliminary document straightening. However, there are slight geometric distortions in the dataset to simulate the bending of a sheet of paper. In our work, we split this dataset into 2 parts: 5 000 images for testing models and 2 800 for extra training. Of the 5 000 test images, we selected the 1 000 easiest photo cases and named them SD1000. This subset presents mainly high-quality photos, without noise, blur, or shadows. All 5 000 test images we call SD5000. It presents both the easy cases from SD1000 and the most complex photo cases. You can see examples of images included in both datasets in Figure 1. The SD7800 dataset is publicly available. To increase the accuracy of OCR models on printed texts, we propose the use of profile normalization techniques. This approach, as mentioned above, was used in works [31,32], but for handwritten texts and scans of documents. We work with photos of text taken on smartphones. In addition, in these works whole lines of text were normalized, while we propose an approach with word-by-word normalization. The profile of a word in a box is the average height of its letters. To find this height and the coordinates of a word in the box, we propose using the K-means method. 
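A minimal sketch of this word-by-word profile normalization, in the spirit of (but not reproducing) Algorithm 1: rows of the grayscale box are clustered into two groups by mean intensity, the darker group is taken as the word, and the box is rescaled so that the word height matches a common target, then padded or cropped to a fixed box height. The helper names, the background padding value, and the nearest-neighbour resize are our own assumptions, standing in for whatever interpolation the authors actually use.

```python
import numpy as np

def word_extent(box):
    """Estimate the vertical extent of a word in a grayscale box via 1D 2-means.

    Rows are clustered by their mean intensity; the darker cluster is taken
    to be the text (dark ink on a lighter background).
    """
    rows = box.mean(axis=1).astype(float)
    centers = np.array([rows.min(), rows.max()])           # init: darkest / lightest
    for _ in range(20):                                    # plain Lloyd iterations
        labels = np.abs(rows[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = rows[labels == k].mean()
    text_rows = np.where(labels == np.argmin(centers))[0]  # darker cluster = text
    return text_rows[0], text_rows[-1]

def nn_resize(img, scale):
    """Nearest-neighbour rescale (a stand-in for any proper interpolation)."""
    h, w = img.shape
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    r = np.clip((np.arange(nh) / scale).astype(int), 0, h - 1)
    c = np.clip((np.arange(nw) / scale).astype(int), 0, w - 1)
    return img[r][:, c]

def profile_normalize(box, target_word_h=20, box_h=32):
    """Rescale so the word height equals target_word_h, then pad/crop to box_h."""
    top, bottom = word_extent(box)
    scale = target_word_h / (bottom - top + 1)
    img = nn_resize(box, scale)
    pad = box_h - img.shape[0]
    if pad > 0:                                            # pad with a background value
        t = pad // 2
        img = np.pad(img, ((t, pad - t), (0, 0)), constant_values=int(img.max()))
    else:                                                  # crop, keeping the centre
        t = (-pad) // 2
        img = img[t:t + box_h]
    return img
```

Applying this to every word box in the training, validation, and test subsets yields boxes of a uniform height with a uniform word profile, which is the standardization the method relies on.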
We find two clusters based on the average value of intensity along each row of pixels. We also find the vertical coordinates of the beginning and end of the word and use this in our algorithm to improve the padding and word-centering operations. The idea of profile normalization is to normalize the entire subset to the same word height. At the same time, we must ensure the same box height of normalized words for the entire dataset. For this, the operations of padding and cropping are used. A detailed description of the profile normalization process is shown in Algorithm 1 and in Fig. 3. Normalization is always applied to the training, validation, and test subsets. IV Experiments To test the influence of profile normalization on the accuracy of the model, a series of experiments was carried out. The number of epochs is fixed for all experiments and is equal to 200, which ensures complete convergence of all networks. As an optimizer, we use RMSProp with learning rate 0.0001 and batch size 64; the loss function is CTC loss. For training in all experiments, we use a fully synthetic dataset consisting of 50 000 generated word boxes. As test datasets, we use our own SD1000 and SD5000, described in section 3. We study the effect of different augmentations, presented in Table 2 and described in section 5. Each variant of augmentation is used in two types of experiments - with and without profile normalization. This allows us to evaluate the contribution of each type of augmentation and to evaluate profile normalization relative to them. In the end, we apply all augmentations at once. Augmentations are applied in the generator with random parameters, so the same images have different degrees of augmentation in each epoch. The results presented in Table 2 demonstrate that the profile normalization technique leads to an improvement in the final accuracy in all considered scenarios. 
Because accuracy was calculated on lower-cased words, the hypothesis proposed in [32] was not confirmed. That means that profile normalization improves model accuracy regardless of letter case. The network with profile normalization trained on data without any augmentation shows accuracy similar to the same network trained on data with all considered augmentations. This fact demonstrates that data normalization techniques are as effective as augmentations and at the same time provide faster convergence during training. Combining the profile normalization technique with augmentations leads to the best quality on both datasets. Because the data normalization technique standardizes training samples and complex networks have more generalization ability, we decided to compare a simple model with profile normalization against a complex network without it. The second architecture differs from the first only in the number of convolutional layers and the hidden size of the LSTM. The characteristics of the simple and complex models are presented in Table 1. For training, we use the same protocol as in the Section 6.2 experiments and use only basic augmentations. We train a simple model with profile normalization and a complex model without profile normalization, and compare them with each other and with a simple model without profile normalization from the series of experiments in Section 6.2. According to the results presented in Table 3, the simple network with profile normalization outperforms the complex model. That means that for the OCR problem we can use smaller networks with the profile normalization technique instead of complex ones and achieve better quality. We found that the accuracy gap between models trained with and without profile normalization depends on the amount of training data. 
To demonstrate this, we sample subsets from the training dataset with sizes of 5, 10, 15, 25, 35, and 45 thousand and train a simple model with and without profile normalization, using basic augmentations. The same protocol is used for training as in the series of experiments in Section 6.2. As can be seen in Fig. 6, the less training data there is, the stronger the effect of profile normalization. In particular, the gap in accuracy when training on 5 thousand samples is 15%, while when training on 50 thousand it is 3%. These results suggest that using profile normalization can compensate for a lack of training data. V Conclusion We introduced an unsupervised profile-normalization preprocessing method that works for real camera-captured document images and can potentially be applied to any language. To evaluate this approach, we collected more than 7 000 images captured in different lighting conditions by various smartphones. According to the results of extensive experiments, the profile normalization technique always improves the overall quality of a model and reduces the number of training samples needed. References [1] D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. R. Chandrasekhar, S. Lu, F. Shafait, S. Uchida, and E. Valveny, “Icdar 2015 competition on robust reading,” in ICDAR, 2015, pp. 1156–1160. [2] B. Shi, C. Yao, M. Liao, M. Yang, P. Xu, L. Cui, S. Belongie, S. Lu, and X. Bai, “Icdar2017 competition on reading chinese text in the wild (rctw-17),” in ICDAR, vol. 01, 2017, pp. 1429–1434. [3] L. Neumann and J. Matas, “Real-time scene text localization and recognition,” in CVPR, 2012, pp. 3538–3545. [4] S. Lu, T. Chen, S. Tian, J.-H. Lim, and C.-L. Tan, “Scene text extraction based on edges and support vector regression,” IJDAR, vol. 18, no. 2, pp. 125–135, 2015. [5] Z. Zhang, C. Zhang, W. Shen, C. Yao, W. Liu, and X. Bai, “Multi-oriented text detection with fully convolutional networks,” in CVPR, 2016, pp. 4159–4167. 
[6] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman, “Reading text in the wild with convolutional neural networks,” IJCV, vol. 116, no. 1, pp. 1–20, 2016. [7] X.-C. Yin, W.-Y. Pei, J. Zhang, and H.-W. Hao, “Multiorientation scene text detection with adaptive clustering,” TPAMI, vol. 37, no. 9, pp. 1930–1937, 2015. [8] F. Zhan, S. Lu, and C. Xue, “Verisimilar image synthesis for accurate detection and recognition of texts in scenes,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 249–266. [9] C. Xue, S. Lu, and F. Zhan, “Accurate scene text detection through border semantics awareness and bootstrapping,” in ECCV, 2018, pp. 370–387. [10] W. Huang, Y. Qiao, and X. Tang, “Robust scene text detection with convolution neural network induced mser trees,” in ECCV, 2014, pp. 497–511. [11] M. Jaderberg, A. Vedaldi, and A. Zisserman, “Deep features for text spotting,” in ECCV, 2014, pp. 512–528. [12] T. He, W. Huang, Y. Qiao, and J. Yao, “Text-attentional convolutional neural network for scene text detection,” TIP, vol. 25, no. 6, pp. 2529–2541, 2016. [13] H. Hu, C. Zhang, Y. Luo, Y. Wang, J. Han, and E. Ding, “Wordsup: Exploiting word annotations for character based text detection,” in ICCV, Oct 2017. [14] M. Liao, B. Shi, X. Bai, X. Wang, and W. Liu, “Textboxes: A fast text detector with a single deep neural network,” in AAAI, 2017, pp. 4161–4167. [15] Y. Liu and L. Jin, “Deep matching prior network: Toward tighter multi-oriented text detection,” in CVPR, 2017. [16] P. He, W. Huang, T. He, Q. Zhu, Y. Qiao, and X. Li, “Single shot text detector with regional attention,” arXiv:1709.00138, 2017. [17] X. Zhou, C. Yao, H. Wen, Y. Wang, S. Zhou, W. He, and J. Liang, “East: An efficient and accurate scene text detector,” in CVPR, 2017. [18] X. Liu, D. Liang, S. Yan, D. Chen, Y. Qiao, and J. Yan, “Fots: Fast oriented text spotting with a unified network,” in CVPR, 2018, pp. 5676–5685. [19] F. Wang, L. Zhao, X. Li, X. Wang, and D. 
Tao, “Geometry-aware scene text detection with instance transformation network,” in CVPR, June 2018. [20] P. Lyu, M. Liao, C. Yao, W. Wu, and X. Bai, “Mask textspotter: An end-to-end trainable neural network for spotting text with arbitrary shapes,” in ECCV, 2018. [21] P. Lyu, C. Yao, W. Wu, S. Yan, and X. Bai, “Multi-oriented scene text detection via corner localization and region segmentation,” in CVPR, 2018, pp. 7553–7563. [22] A. Polzounov, A. Ablavatski, S. Escalera, S. Lu, and J. Cai, “Wordfence: Text detection in natural images with border awareness,” in ICIP.   IEEE, 2017, pp. 1222–1226. [23] F. Zhan, H. Zhu, and S. Lu, “Scene text synthesis for efficient and effective deep network training,” arXiv preprint arXiv:1901.09193, 2019. [24] D. Deng, H. Liu, X. Li, and D. Cai, “Pixellink: Detecting scene text via instance segmentation,” in AAAI, 2018. [25] F. Zhan, S. Lu, C. Zhang, F. Ma, and X. Xie, “Towards realistic 3d embedding via view alignment,” arXiv preprint arXiv:2007.07066, 2020. [26] S. Long, J. Ruan, W. Zhang, X. He, W. Wu, and C. Yao, “Textsnake: A flexible representation for detecting text of arbitrary shapes,” in ECCV, 2018, pp. 20–36. [27] F. Zhan, C. Xue, and S. Lu, “Ga-dan: Geometry-aware domain adaptation network for scene text detection and recognition,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 9105–9115. [28] Z. Zhang, W. Shen, C. Yao, and X. Bai, “Symmetry-based text line detection in natural scenes,” in CVPR, 2015, pp. 2558–2567. [29] F. Zhan and C. Zhang, “Spatial-aware gan for unsupervised person re-identification,” in 2020 25th International Conference on Pattern Recognition (ICPR).   IEEE, 2021, pp. 6889–6896. [30] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman, “Synthetic data and artificial neural networks for natural scene text recognition,” arXiv preprint arXiv:1406.2227, 2014. [31] C. Yao, X. Bai, B. Shi, and W. 
Liu, “Strokelets: A learned multi-scale representation for scene text recognition,” in CVPR, 2014. [32] J. A. Rodriguez-Serrano, A. Gordo, and F. Perronnin, “Label embedding: A frugal baseline for text recognition,” IJCV, 2015. [33] J. Almazán, A. Gordo, A. Fornés, and E. Valveny, “Word spotting and recognition with embedded attributes,” TPAMI, vol. 36, no. 12, pp. 2552–2566, 2014. [34] A. Gordo, “Supervised mid-level features for word image representation,” in CVPR, 2015. [35] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep structured output learning for unconstrained text recognition,” in ICLR, 2015. [36] F. Zhan, H. Zhu, and S. Lu, “Spatial fusion gan for image synthesis,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2019, pp. 3653–3662. [37] B. Shi, X. Bai, and C. Yao, “An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition,” TPAMI, vol. 39, no. 11, pp. 2298–2304, 2017. [38] B. Su and S. Lu, “Accurate scene text recognition based on recurrent neural network,” in ACCV, 2014. [39] ——, “Accurate recognition of words in scenes without character segmentation using recurrent neural network,” PR, 2017. [40] B. Shi, X. Wang, P. Lyu, C. Yao, and X. Bai, “Robust scene text recognition with automatic rectification,” in CVPR, 2016. [41] C.-Y. Lee and S. Osindero, “Recursive recurrent nets with attention modeling for ocr in the wild,” in CVPR, 2016, pp. 2231–2239. [42] Z. Cheng, F. Bai, Y. Xu, G. Zheng, S. Pu, and S. Zhou, “Focusing attention: Towards accurate text recognition in natural images,” in ICCV, 2017, pp. 5076–5084. [43] F. Zhan and S. Lu, “Esir: End-to-end scene text recognition via iterative image rectification,” in CVPR, 2019, pp. 2059–2068. [44] C. K. Chng and C. S. Chan, “Total-text: A comprehensive dataset for scene text detection and recognition,” in ICDAR, 2017, pp. 935–942.
Abstract The statistical analysis of massive and complex data sets will require the development of algorithms that depend on distributed computing and collaborative inference. Inspired by this, we propose a collaborative framework that aims to estimate the unknown mean $\theta$ of a random variable $X$. In the model we present, a certain number of calculation units, distributed across a communication network represented by a graph, participate in the estimation of $\theta$ by sequentially receiving independent data from $X$ while exchanging messages via a stochastic matrix $A$ defined over the graph. We give precise conditions on the matrix $A$ under which the statistical precision of the individual units is comparable to that of a (gold standard) virtual centralized estimate, even though each unit does not have access to all of the data. We show in particular the fundamental role played by both the non-trivial eigenvalues of $A$ and the Ramanujan class of expander graphs, which provide remarkable performance for moderate algorithmic cost. Index Terms — Distributed computing, collaborative estimation, stochastic matrix, graph theory, complexity, Ramanujan graph. 2010 Mathematics Subject Classification: 62F12, 68W15. The Statistical Performance of Collaborative Inference Gérard Biau Sorbonne Universités, UPMC Univ Paris 06, F-75005, Paris, France & Institut universitaire de France [email protected] Kevin Bleakley INRIA Saclay, France & Département de Mathématiques d’Orsay, France [email protected] Benoît Cadre IRMAR, ENS Rennes, CNRS, UEB, France [email protected] 1 Introduction A promising way to overcome computational problems associated with inference and prediction in large-scale settings is to take advantage of distributed and collaborative algorithms, whereby several processors perform computations and exchange messages with the end-goal of minimizing a certain cost function. 
For instance, in modern data analysis one is frequently faced with problems where the sample size is too large for a single computer or standard computing resources. Distributed processing of such large data sets is often regarded as a possible solution to data overload, although designing and analyzing algorithms in this setting is challenging. Indeed, good distributed and collaborative architectures should maintain the desired statistical accuracy of their centralized counterpart, while retaining sufficient flexibility and avoiding communication bottlenecks which may excessively slow down computations. The literature is too vast to permit anything like a fair summary within the confines of a short introduction—the papers by Duchi et al. (2012), Jordan (2013), Zhang et al. (2013), and references therein contain a sample of relevant work. Similarly, the advent of sensor, wireless and peer-to-peer networks in science and technology necessitates the design of distributed and information-exchange algorithms (Boyd et al., 2006; Predd et al., 2009). Such networks are designed to perform inference and prediction tasks for the environments they are sensing. Nonetheless, they are typically characterized by constraints on energy, bandwidth and/or privacy, which limit the sensors’ ability to share data with each other or with a hub for centralized processing. For example, in a hospital network, the aim is to make safer decisions by sharing information between therapeutic services. However, a simple exchange of database entries containing patient details can pose information privacy risks. At the same time, a large percentage of medical data may require exchanging high-resolution images, the centralized processing of which may be computationally prohibitive. Overall, such constraints call for the design of communication-constrained distributed procedures, where each node exchanges information with only a few of its neighbors at each time instance. 
The goal in this setting is to distribute the learning task in a computationally efficient way, and make sure that the statistical performance of the network matches that of the centralized version. The foregoing observations have motivated the development and analysis of many local message-passing algorithms for distributed and collaborative inference, optimization and learning. Roughly speaking, message-passing procedures are those that use only local communication to approximately achieve the same end as global (i.e., centralized) algorithms, which require sending raw data to a central processing facility. Message-passing algorithms are thought to be efficient by virtue of their exploitation of local communication. They have been successfully involved in kernel linear least-squares regression estimation (Predd et al., 2009), support vector machines (Forero et al., 2010), sparse $L_{1}$ regression (Mateos et al., 2010), gradient-type optimization (Tsitsiklis et al., 1986; Bertsekas and Tsitsiklis, 1997), and various online inference and learning tasks (Bianchi et al., 2011a, b, 2013). An important research effort has also been devoted to so-called averaging and consensus problems, where a set of autonomous agents—which may be sensors or nodes of a computer network—compute the average of their opinions in the presence of restricted communication capabilities and try to agree on a collective decision (e.g., Blondel et al., 2005; Olshevsky and Tsitsiklis, 2011). However, despite their rising success and impact in machine learning, little is known regarding the statistical properties of message-passing algorithms. The statistical performance of collaborative computing has so far been studied in terms of consensus (i.e., whether all nodes give the same result), with perhaps mean convergence rates (e.g., Olshevsky and Tsitsiklis, 2011; Duchi et al., 2012; Zhang et al., 2013). 
While it is therefore proved that using a network, even sparse (i.e., with few connections), does not degrade the rate of convergence, the problem of whether it is optimal to do this remains unanswered, including for the most basic statistics. For example, which network properties guarantee collaborative calculation performance equal to that of a hypothetical centralized system? The goal of this article is to give a more precise answer to this fundamental question. In order to present in the clearest way possible the properties such a network must have, we undertake this study for the simplest possible statistic: the mean. In the model we consider, there are a number of computing agents (also known as nodes or processors) that sequentially estimate the mean of a random variable by regularly updating an estimate stored in their memory. Meanwhile, they exchange messages, thus informing each other about the results of their latest computations. Agents that receive messages use them to directly update the value in their memory by forming a convex combination. We focus primarily on the properties that the communication process must satisfy to ensure that the statistical precision of a single processor—that only sees part of the data—is similar to that of an inaccessible centralized intelligence that could tackle the whole data set at once. The literature is surprisingly quiet on this question, which we believe is of fundamental importance if we want to provide concrete tradeoffs between communication constraints and statistical accuracy. This paper makes several important contributions. First, in Section 2 we introduce communication network models and define a performance ratio allowing us to quantify the statistical quality of a network.
In Section 3 we analyze the asymptotic behavior of this performance ratio as the number of data items $t$ received online sequentially per node becomes large, and give precise conditions on communication matrices $A$ so that this ratio is asymptotically optimal. Section 4 goes one step further, connecting the rate of convergence of the ratio with the behavior of the eigenvalues of $A$. In Section 5 we present the remarkable Ramanujan expander graphs and analyze the tradeoff between statistical efficiency and communication complexity for these graphs with a series of simulation studies. Lastly, Section 6 provides several elements for analysis of more complicated asynchronous models with delays. For clarity, proofs are gathered in Section 7. 2 The model Let $X$ be a square-integrable real-valued random variable, with $\mathbb{E}X=\theta$ and $\mbox{Var}(X)=\sigma^{2}$. We consider a set $\{1,\ldots,N\}$ of computing entities $(N\geq 2)$ that collectively participate in the estimation of $\theta$. In this distributed model, agent $i$ sequentially receives an i.i.d. sequence $X^{(i)}_{1},\ldots,X^{(i)}_{t},\ldots,$ distributed as the prototype $X$, and forms, at each time $t$, an estimate of $\theta$. It is assumed throughout that the $X_{t}^{(i)}$ are independent when both $t\geq 1$ and $i\in\{1,\ldots,N\}$ vary. In the absence of communication between agents, the natural estimate held by agent $i$ at time $t$ is the empirical mean $$\bar{X}_{t}^{(i)}=\frac{1}{t}\sum_{k=1}^{t}X_{k}^{(i)}.$$ Equivalently, processor $i$ is initialized with $X^{(i)}_{1}$ and performs its estimation via the iteration $$\bar{X}^{(i)}_{t+1}=\frac{t\bar{X}_{t}^{(i)}+X_{t+1}^{(i)}}{t+1},\quad t\geq 1.$$ Let $\top$ denote transposition and assume that vectors are in column format. 
Letting $\mathbf{X}_{t}=(X_{t}^{(1)},\ldots,X_{t}^{(N)})^{\top}$ and $\bar{\mathbf{X}}_{t}=(\bar{X}_{t}^{(1)},\ldots,\bar{X}_{t}^{(N)})^{\top}$, we see that $$\bar{\mathbf{X}}_{t+1}=\frac{t\bar{\mathbf{X}}_{t}+\mathbf{X}_{t+1}}{t+1},\quad t\geq 1.$$ (2.1) In a more complicated collaborative setting, besides its own measurements and computations, each agent may also receive messages from other processors and combine this information with its own conclusions. At its core, this message-passing process can be modeled by a directed graph $\mathscr{G}=(\mathscr{V},\mathscr{E})$ with vertex set $\mathscr{V}=\{1,\ldots,N\}$ and edge set $\mathscr{E}$. This graph represents the way agents communicate, with an edge from $j$ to $i$ (in that order) if $j$ sends information to $i$. Furthermore, we have an $N\times N$ stochastic matrix $A=(a_{ij})_{1\leq i,j\leq N}$ (i.e., $a_{ij}\geq 0$ and for each $i$, $\sum_{j=1}^{N}a_{ij}=1$) with associated graph $\mathscr{G}$, i.e., $a_{ij}>0$ if and only if $(j,i)\in\mathscr{E}$. The matrix $A$ accounts for the way agents incorporate information during the collaborative process. Denoting by $\boldsymbol{\hat{\theta}}_{t}=(\hat{\theta}^{(1)}_{t},\ldots,\hat{\theta}^{(N)}_{t})^{\top}$ the collection of estimates held by the $N$ agents over time, the computation/combining mechanism is assumed to be as follows: $$\boldsymbol{\hat{\theta}}_{t+1}=\frac{t}{t+1}A\boldsymbol{\hat{\theta}}_{t}+\frac{1}{t+1}\mathbf{X}_{t+1},\quad t\geq 1,$$ with $\boldsymbol{\hat{\theta}}_{1}=(X_{1}^{(1)},\ldots,X_{1}^{(N)})^{\top}$. Thus, each individual estimate $\hat{\theta}^{(i)}_{t+1}$ is a convex combination of the estimates $\hat{\theta}^{(j)}_{t}$ held by the agents over the network at time $t$, augmented by the new observation $X^{(i)}_{t+1}$.
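The computation/combining iteration above is straightforward to simulate. The following sketch (an editorial illustration, not code from the paper) assumes $N=5$ agents, Uniform$[0,1]$ observations (so $\theta=0.5$), and the tridiagonal matrix $A_{2}$ introduced below in (2.3):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 20000

# tridiagonal matrix A_2 of (2.3): corners 2/3, inner diagonal 1/3, off-diagonals 1/3
A = (np.diag([2.0] + [1.0] * (N - 2) + [2.0])
     + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / 3.0

theta_hat = rng.random(N)          # \hat{theta}_1 = (X_1^{(1)}, ..., X_1^{(N)})
for t in range(1, T):
    X_next = rng.random(N)         # each agent receives a fresh Uniform[0,1] draw
    theta_hat = (t * A @ theta_hat + X_next) / (t + 1)

print(theta_hat)                   # all five local estimates are close to theta = 0.5
```

Each agent only ever sees its own data stream and its neighbors' current estimates, yet all five values settle near the true mean.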
The matrix $A$ models the way processors exchange messages and collaborate, ranging from $A=I_{N}$ (the $N\times N$ identity matrix, i.e., no communication) to $A=\mathbf{1}\mathbf{1}^{\top}/N$ (where $\mathbf{1}=(1,\ldots,1)^{\top}$, i.e., full communication). We note in particular that the choice $A=I_{N}$ gives back iteration (2.1) with $\boldsymbol{\hat{\theta}}_{t}=\bar{\mathbf{X}}_{t}$. We also note that, given a graph $\mathscr{G}$, various choices are possible for $A$. Thus, aside from a convenient way to represent a communication channel over which agents can retrieve information from each other, the matrix $A$ can be seen as a “tuning parameter” on $\mathscr{G}$ to improve the statistical performance of $\boldsymbol{\hat{\theta}}_{t}$, as we shall see later. Important examples for $A$ include the tridiagonal choices $$A_{1}=\frac{1}{2}\begin{pmatrix}1&1&&&\\ 1&0&1&&\\ &\ddots&\ddots&\ddots&\\ &&1&0&1\\ &&&1&1\end{pmatrix}$$ (2.2) and $$A_{2}=\frac{1}{3}\begin{pmatrix}2&1&&&\\ 1&1&1&&\\ &\ddots&\ddots&\ddots&\\ &&1&1&1\\ &&&1&2\end{pmatrix}$$ (2.3) (unmarked entries are zero). It is easy to verify that for all $t\geq 1$, $$\boldsymbol{\hat{\theta}}_{t}=\frac{1}{t}\sum_{k=0}^{t-1}A^{k}\mathbf{X}_{t-k}.$$ (2.4) Thus, denoting by $\|\cdot\|$ the Euclidean norm (for vectors and matrices), we may write, for all $t\geq 1$, $$\mathbb{E}\|\boldsymbol{\hat{\theta}}_{t}-\theta\mathbf{1}\|^{2}=\frac{1}{t^{2}}\mathbb{E}\bigg{\|}\sum_{k=0}^{t-1}A^{k}(\mathbf{X}_{t-k}-\theta\mathbf{1})\bigg{\|}^{2}$$ (since $A^{k}$ is a stochastic matrix, so that $A^{k}\theta\mathbf{1}=\theta\mathbf{1}$) $$=\frac{1}{t^{2}}\sum_{k=1}^{t}\mathbb{E}\left\|A^{t-k}(\mathbf{X}_{k}-\theta\mathbf{1})\right\|^{2},$$ by independence of $\mathbf{X}_{1},\ldots,\mathbf{X}_{t}$.
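Identity (2.4) is easy to check numerically against the defining recursion; the small sketch below (an editorial check, using an arbitrary stochastic matrix, for which the identity holds just as well) compares the two:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 4, 50

A = rng.random((N, N))                 # any stochastic matrix works for this identity
A /= A.sum(axis=1, keepdims=True)

X = rng.random((T, N))                 # X[t-1] plays the role of the vector X_t

theta = X[0].copy()                    # \hat{theta}_1 = X_1
for t in range(1, T):                  # recursion: theta_{t+1} = (t A theta_t + X_{t+1})/(t+1)
    theta = (t * A @ theta + X[t]) / (t + 1)

# closed form (2.4): theta_T = (1/T) sum_{k=0}^{T-1} A^k X_{T-k}
closed = sum(np.linalg.matrix_power(A, k) @ X[T - 1 - k] for k in range(T)) / T

assert np.allclose(theta, closed)
print("closed form (2.4) matches the recursion")
```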
It follows that $$\mathbb{E}\|\boldsymbol{\hat{\theta}}_{t}-\theta\mathbf{1}\|^{2}\leq\mathbb{E}\|\mathbf{X}_{1}-\theta\mathbf{1}\|^{2}\times\frac{1}{t^{2}}\sum_{k=0}^{t-1}\|A^{k}\|^{2}\leq\mathbb{E}\|\mathbf{X}_{1}-\theta\mathbf{1}\|^{2}\times\frac{N}{t}.$$ In the last inequality, we used the fact that $A^{k}$ is a stochastic matrix and thus $\|A^{k}\|^{2}\leq N$ for all $k\geq 0$. We can merely conclude that $\mathbb{E}\|\boldsymbol{\hat{\theta}}_{t}-\theta\mathbf{1}\|^{2}\to 0$ as $t\to\infty$ (mean-squared error consistency), and so $\hat{\theta}_{t}^{(i)}\to\theta$ in probability for each $i\in\{1,\ldots,N\}$. Put differently, the agents asymptotically agree on the (true) value of the parameter, independently of the choice of the (stochastic) matrix $A$—this property is often called consensus in the distributed optimization literature (see, e.g., Bertsekas and Tsitsiklis, 1997). The consensus property, although interesting, does not say anything about the positive (or negative) impact of the graph on the comparative performances of estimates with respect to a centralized version. To clarify this remark, assume that there exists a centralized intelligence that could tackle all data $X_{1}^{(1)},\ldots,X_{t}^{(1)},\ldots,X_{1}^{(N)},\ldots,X_{t}^{(N)}$ at time $t$, and take advantage of these samples to assess the value of the parameter $\theta$. In this ideal framework, the natural estimate of $\theta$ is the global empirical mean $$\bar{\mathbb{X}}_{Nt}=\frac{1}{Nt}\sum_{i=1}^{N}\sum_{k=1}^{t}X_{k}^{(i)},$$ which is clearly the best we can hope for with the data at hand. However, this estimate is to be considered as an unattainable “gold standard” (or oracle), insofar as it uses the whole $(N\times t)$-sample. In other words, its evaluation requires sending all examples to a centralized processing facility, which is precisely what we want to avoid.
Thus, a natural question arises: can the message-passing process be tapped to ensure that the individual estimates $\hat{\theta}_{t}^{(i)}$ achieve statistical accuracy “close” to that of the gold standard $\bar{\mathbb{X}}_{Nt}$? Figure 1 illustrates this question. In the trials shown, i.i.d. uniform random variables on $[0,1]$ are delivered online to $N=5$ nodes, one to each at each time $t$. With message-passing (here, $A=A_{2}$), each node aggregates the new data point with data it has seen previously and messages received from its nearest neighbors in the network. We see that all five nodes’ updates converge to the mean 0.5 with a performance comparable to that of the (unseen) global estimate $\bar{\mathbb{X}}_{Nt}$. In contrast, in the absence of message-passing ($A=I_{5}$), individual nodes’ estimates still converge to 0.5, but at a slower rate. To deal with this question of statistical accuracy satisfactorily, we first need a criterion to compare the performance of $\boldsymbol{\hat{\theta}}_{t}$ with that of $\bar{\mathbb{X}}_{Nt}$. Perhaps the most natural one is the following ratio, which depends upon the matrix $A$: $$\tau_{t}(A)=\frac{\mathbb{E}\left\|(\bar{\mathbb{X}}_{Nt}-\theta)\mathbf{1}\right\|^{2}}{\mathbb{E}\|\boldsymbol{\hat{\theta}}_{t}-\theta\mathbf{1}\|^{2}},\quad t\geq 1.$$ The closer this ratio is to 1, the more statistically efficient the collaborative algorithm, in the sense that its performance compares favorably to that of the centralized gold standard. In the remainder of the paper, we call $\tau_{t}(A)$ the performance ratio at time $t$. Of particular interest in our approach is the stochastic matrix $A$, which plays a crucial role in the analysis. Roughly, a good choice for $A$ is one for which $\tau_{t}(A)$ is not too far from $1$, while ensuring that communication over the network is not prohibitively expensive.
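The performance ratio can be evaluated exactly, without Monte Carlo. Since the coordinates of each $\mathbf{X}_{k}$ are i.i.d. with variance $\sigma^{2}$, the mean-squared-error display above reduces term by term via $\mathbb{E}\|A^{k}(\mathbf{X}-\theta\mathbf{1})\|^{2}=\sigma^{2}\|A^{k}\|^{2}$ (squared Frobenius norm), while $\mathbb{E}\|(\bar{\mathbb{X}}_{Nt}-\theta)\mathbf{1}\|^{2}=\sigma^{2}/t$; hence $\tau_{t}(A)=t/\sum_{k=0}^{t-1}\|A^{k}\|^{2}$, with $\sigma^{2}$ cancelling. A sketch of this computation (our rewriting of the formula, not one displayed in the paper):

```python
import numpy as np

def perf_ratio(A, t):
    """tau_t(A) = t / sum_{k=0}^{t-1} ||A^k||_F^2 (sigma^2 cancels)."""
    total, Ak = 0.0, np.eye(A.shape[0])
    for _ in range(t):
        total += (Ak ** 2).sum()       # squared Frobenius norm of A^k
        Ak = A @ Ak
    return t / total

N = 5
A2 = (np.diag([2.0] + [1.0] * (N - 2) + [2.0])
      + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / 3.0

print(perf_ratio(np.eye(N), 100))      # exactly 1/N = 0.2: no communication
print(perf_ratio(A2, 100))             # roughly 0.93: close to the gold standard
```

The no-communication case $A=I_{N}$ gives exactly $1/N$, matching the lower bound of Proposition 3.1 below, whereas the sparse tridiagonal matrix $A_{2}$ already gets close to 1.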
Although there are several ways to measure the “complexity” of the message-passing process, we have in mind a setting where the communication load is well-balanced between agents, in the sense that no node should play a dominant role. To formalize this idea, we define the communication-complexity index $\mathscr{C}(A)$ as the maximal indegree over the nodes of the graph $\mathscr{G}$ associated with $A$, i.e., the maximal number of edges pointing to a node in $\mathscr{G}$ (by convention, self-loops are counted twice when $\mathscr{G}$ is undirected). Essentially, $A$ is communication-efficient when $\mathscr{C}(A)$ is small with respect to $N$ or, more generally, when $\mathscr{C}(A)=\mbox{O}(1)$ as $N$ becomes large. To provide some context, $\mathscr{C}(A)$ measures in a certain sense the “local” aspect of message exchanges induced by $A$. We have in mind node connection set-ups where $\mathscr{C}(A)$ is small, perhaps due to energy or bandwidth constraints in the system’s architecture, or when for privacy reasons data must not be sent to a central node. Indeed, a large $\mathscr{C}(A)$ roughly means that one or several nodes play centralized roles—precisely what we are trying to avoid. Furthermore, the decentralized networks we are interested in can be seen as being more autonomous than high-$\mathscr{C}(A)$ ones, in the sense that having few network connections means fewer things that can potentially break, as well as improved robustness due to the fact that the loss of one node does not lead to destruction of the whole system.
As examples, the matrices $A_{1}$ and $A_{2}$ defined earlier have $\mathscr{C}(A_{1})=3$ and $\mathscr{C}(A_{2})=4$, respectively, while the stochastic matrix $A_{3}$ below has $\mathscr{C}(A_{3})=N+1$: $$A_{3}=\frac{1}{N}\begin{pmatrix}1&1&1&\cdots&1\\ 1&N-1&&&\\ 1&&N-1&&\\ \vdots&&&\ddots&\\ 1&&&&N-1\end{pmatrix}.$$ (2.5) Thus, from a network complexity point of view, $A_{1}$ and $A_{2}$ are preferable to $A_{3}$, where node 1 has the flavor of a central command center. Now, having defined $\tau_{t}(A)$ and $\mathscr{C}(A)$, it is natural to suspect that there will be some kind of tradeoff between implementing a low-complexity message-passing algorithm (i.e., $\mathscr{C}(A)$ small) and achieving good asymptotic performance (i.e., $\tau_{t}(A)\approx 1$ for large $t$). Our main goal in the next few sections is to probe this intuition by analyzing the asymptotic behavior of $\tau_{t}(A)$ as $t\to\infty$ under various assumptions on $A$. We start by proving that $\tau_{t}(A)\leq 1$ for all $t\geq 1$, and give precise conditions on the matrix $A$ under which $\tau_{t}(A)\to 1$. Thus, thanks to the benefit of inter-agent communication, the statistical accuracy of individual estimates may be asymptotically comparable to that of the gold standard, despite the fact that none of the agents in the network have access to all of the data. Indeed, as we shall see, this stunning result is possible even for low-$\mathscr{C}(A)$ matrices. The take-home message here is that the communication process, once cleverly designed, may “boost” the individual estimates, even in the presence of severe communication constraints. We also provide an asymptotic development of $\tau_{t}(A)$, which offers valuable information on the optimal way to design the communication network in terms of the eigenvalues of $A$.
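To make the index concrete, a small helper (the name `comm_complexity` is ours; it implements the self-loop convention above for symmetric matrices) can be evaluated on $A_{1}$, $A_{2}$ and $A_{3}$, here with $N=8$:

```python
import numpy as np

def comm_complexity(A):
    """C(A): maximal in-degree over the nodes of the graph of A
    (self-loops counted twice; A assumed symmetric, i.e. undirected)."""
    nz = A > 0
    return int((nz.sum(axis=1) + nz.diagonal()).max())

N = 8
A1 = (np.diag([1.0] + [0.0] * (N - 2) + [1.0])
      + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / 2.0
A2 = (np.diag([2.0] + [1.0] * (N - 2) + [2.0])
      + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / 3.0
A3 = np.diag([0.0] + [float(N - 1)] * (N - 1))
A3[0, :] = 1.0                       # node 1 listens to (and is heard by) everyone
A3[:, 0] = 1.0
A3 /= N

print(comm_complexity(A1), comm_complexity(A2), comm_complexity(A3))  # 3 4 9
```

As expected, $\mathscr{C}(A_{1})=3$ and $\mathscr{C}(A_{2})=4$ regardless of $N$, while $\mathscr{C}(A_{3})=N+1$ grows with the network.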
3 Convergence of the performance ratio Recall that a stochastic square matrix $A=(a_{ij})_{1\leq i,j\leq N}$ is irreducible if for every pair of indices $i$ and $j$, there exists a nonnegative integer $k$ such that $(A^{k})_{ij}$ is not equal to 0. The matrix is said to be reducible if it is not irreducible. Proposition 3.1. We have $\frac{1}{N}\leq\tau_{t}(A)\leq 1$ for all $t\geq 1$. In addition, if $A$ is reducible, then $$\tau_{t}(A)\leq 1-\frac{1}{N+1},\quad t\geq 1.$$ It is apparent from the proof of the proposition (all proofs are found in Section 7) that the lower bound $1/N$ for $\tau_{t}(A)$ is achieved by taking $A=I_{N}$, which is clearly the worst choice in terms of communication. This proposition also shows that the irreducibility of $A$ is a necessary condition for the collaborative algorithm to be statistically efficient, for otherwise there exists $\varepsilon\in(0,1)$ such that $\tau_{t}(A)\leq 1-\varepsilon$ for all $t\geq 1$. We recall from the theory of Markov chains (e.g., Grimmett and Stirzaker, 2001) that for a fixed agent $i\in\{1,\ldots,N\}$, the period of $i$ is the greatest common divisor of all positive integers $k$ such that $(A^{k})_{ii}>0$. When $A$ is irreducible, the period of every state is the same and is called the period of A. The following lemma describes the asymptotic behavior of $\tau_{t}(A)$ as $t$ tends to infinity. Lemma 3.1. Assume that $A$ is irreducible, and let $d$ be its period. 
Then there exist projectors $Q_{1},\ldots,Q_{d}$ such that $$\tau_{t}(A)\to\frac{1}{\sum_{\ell=1}^{d}\|Q_{\ell}\|^{2}}\quad\mbox{as }t\to\infty.$$ The projectors $Q_{1},\ldots,Q_{d}$ in Lemma 3.1 originate from the decomposition $$A^{k}=\sum_{\ell=1}^{d}\lambda_{\ell}^{k}Q_{\ell}+\sum_{\gamma\in\Gamma}\gamma^{k}Q_{\gamma}(k),$$ where $\lambda_{1}=1,\ldots,\lambda_{d}$ are the (distinct) eigenvalues of $A$ of unit modulus, $\Gamma$ the set of eigenvalues of $A$ of modulus strictly smaller than 1, and $Q_{\gamma}(k)$ certain $N\times N$ matrices (see Theorem 7.1 in the proofs section). In particular, we see that $\tau_{t}(A)\to 1$ as $t\to\infty$ if and only if $\sum_{\ell=1}^{d}\|Q_{\ell}\|^{2}=1$. It turns out that this condition is satisfied if and only if $A$ is irreducible, aperiodic (i.e., $d=1$), and bistochastic, i.e., $\sum_{i=1}^{N}a_{ij}=\sum_{j=1}^{N}a_{ij}=1$ for all $(i,j)\in\{1,\ldots,N\}^{2}$. This important result is encapsulated in the next theorem. Theorem 3.1. We have $\tau_{t}(A)\to 1$ as $t\to\infty$ if and only if $A$ is irreducible, aperiodic, and bistochastic. Theorem 3.1 offers necessary and sufficient conditions for the communication matrix $A$ to be asymptotically statistically efficient. Put differently, under the conditions of the theorem, the message-passing process conveys sufficient information to local computations to make individual estimates as accurate as the gold standard for large $t$. In the context of multi-agent coordination, an example of such a communication network is the so-called (time-invariant) equal neighbor model (Tsitsiklis et al., 1986; Olshevsky and Tsitsiklis, 2011), in which $$a_{ij}=\left\{\begin{array}[]{ll}1/{|N^{(i)}|}&\mbox{if $j\in N^{(i)}$}\\ 0&\mbox{otherwise},\end{array}\right.$$ where $$N^{(i)}=\big{\{}j\in\{1,\ldots,N\}:a_{ij}>0\big{\}}$$ is the set of agents whose value is taken into account by $i$, and $|N^{(i)}|$ its cardinality.
Clearly, the communication matrix $A$ is stochastic, and also bistochastic as soon as $A$ is symmetric (bidirectional model). Assuming in addition that the directed graph $\mathscr{G}$ associated with $A$ is strongly connected means that $A$ is irreducible. Moreover, if $a_{ii}>0$ for some $i\in\{1,\ldots,N\}$, then $A$ is also aperiodic, so the conditions of Theorem 3.1 are fulfilled. It is interesting to note that there exist low-$\mathscr{C}(A)$ matrices that meet the requirements of Theorem 3.1. This is for instance the case of matrices $A_{1}$ and $A_{2}$ in (2.2) and (2.3), which are irreducible, aperiodic and bistochastic, and satisfy $\mathscr{C}(A)\leq 4$. Also note that the matrix $A_{3}$ in (2.5), though irreducible, aperiodic and bistochastic, should be avoided because $\mathscr{C}(A_{3})=N+1$. We stress that the irreducibility and aperiodicity conditions are inherent properties of the graph $\mathscr{G}$, not $A$, insofar as these conditions do not depend upon the actual values of the nonzero entries of $A$. This is different for the bistochasticity condition, which requires knowledge of the coefficients of $A$. In fact, as observed by Sinkhorn and Knopp (1967), it is not always possible to associate such a bistochastic matrix with a given directed graph $\mathscr{G}$. To be more precise, consider $G=(g_{ij})_{1\leq i,j\leq N}$, the transpose of the adjacency matrix of the graph $\mathscr{G}$—that is, $g_{ij}\in\{0,1\}$ and $g_{ij}=1\Leftrightarrow(j,i)\in\mathscr{E}$. Then $G$ is said to have total support if, for every positive element $g_{ij}$, there exists a permutation $\sigma$ of $\{1,\ldots,N\}$ such that $j=\sigma(i)$ and $\prod_{k=1}^{N}g_{k\sigma(k)}>0$. The main theorem of Sinkhorn and Knopp (1967) asserts that there exists a bistochastic matrix $A$ of the form $A=D_{1}GD_{2}$, where $D_{1}$ and $D_{2}$ are $N\times N$ diagonal matrices with positive diagonals, if and only if $G$ has total support. 
A bistochastic matrix $A$ of this form can be induced from $G$ by the Sinkhorn-Knopp algorithm, which generates a sequence of matrices by alternately normalizing their rows and columns. It is known that the convergence of the algorithm is linear, and upper bounds have been given for its rate of convergence (e.g., Knight, 2008). Nevertheless, if for some reason we face a situation where it is impossible to associate a bistochastic matrix with the graph $\mathscr{G}$, Proposition 3.2 below shows that it is still possible to obtain information about the performance ratio, provided $A$ is irreducible and aperiodic. Proposition 3.2. Assume that $A$ is irreducible and aperiodic. Then $$\tau_{t}(A)\to\frac{1}{N\|\boldsymbol{\mu}\|^{2}}\quad\mbox{as }t\to\infty,$$ where $\boldsymbol{\mu}$ is the stationary distribution of $A$. To illustrate this result, take $N=2$ and consider the graph $\mathscr{G}$ with (symmetric) adjacency matrix $\mathbf{1}\mathbf{1}^{\top}$ (i.e., full communication). Various stochastic matrices may be associated with $\mathscr{G}$, each with a certain statistical performance. For $\alpha>1$ a given parameter, we may choose for example $$H_{\alpha}=\frac{1}{\alpha}\begin{pmatrix}1&\alpha-1\\ 1&\alpha-1\end{pmatrix}.$$ When $\alpha=2$, we have $\tau_{t}(H_{2})\to 1$ by Theorem 3.1. More generally, using Proposition 3.2, it is an easy exercise to prove that, as $t\to\infty$, $$\tau_{t}(H_{\alpha})\to\frac{\alpha^{2}}{2+2(\alpha-1)^{2}}.$$ We see that the statistical performance of the local estimates deteriorates as $\alpha$ becomes large, for in this case $\tau_{t}(H_{\alpha})$ gets closer and closer to $1/2$. This toy model exemplifies the role the stochastic matrix plays as a “tuning parameter” to improve the performance of the distributed estimate.
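Returning to the Sinkhorn-Knopp normalization mentioned above, a minimal sketch (ours; a fixed iteration count stands in for a proper convergence test) applied to the adjacency matrix, with self-loops, of a path graph, which has total support:

```python
import numpy as np

def sinkhorn_knopp(G, n_iter=5000):
    """Alternately normalize the rows and columns of G (Sinkhorn-Knopp)."""
    A = G.astype(float).copy()
    for _ in range(n_iter):
        A /= A.sum(axis=1, keepdims=True)   # make rows sum to 1
        A /= A.sum(axis=0, keepdims=True)   # make columns sum to 1
    return A

# path graph on N nodes with self-loops: tridiagonal 0/1 matrix, total support
N = 6
G = np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
A = sinkhorn_knopp(G)
print(A.sum(axis=0), A.sum(axis=1))         # both near all-ones: A is bistochastic
```

The limit has the form $D_{1}GD_{2}$ with positive diagonal matrices $D_{1},D_{2}$, as guaranteed by the Sinkhorn-Knopp theorem, and its sparsity pattern is that of $G$.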
4 Convergence rates Theorem 3.1 gives precise conditions ensuring $\tau_{t}(A)=1+\mbox{o}(1)$, but does not say anything about the rate (i.e., the behavior of the second-order term) at which this convergence occurs. It turns out that a much more informative limit may be obtained at the price of the mild additional assumption that the stochastic matrix $A$ is symmetric (and hence bistochastic). Theorem 4.1. Assume that $A$ is irreducible, aperiodic, and symmetric. Let $1>\gamma_{2}\geq\cdots\geq\gamma_{N}>-1$ be the eigenvalues of $A$ different from $1$. Then $$\tau_{t}(A)=\frac{1}{1+\frac{1}{t}\sum_{\ell=2}^{N}\frac{1-\gamma_{\ell}^{2t}}{1-\gamma_{\ell}^{2}}}.$$ In addition, setting $$\mathscr{S}(A)=\sum_{\ell=2}^{N}\frac{1}{1-\gamma_{\ell}^{2}}\quad\mbox{and}\quad\Gamma(A)=\max_{2\leq\ell\leq N}|\gamma_{\ell}|,$$ we have, for all $t\geq 1$, $$1-\frac{\mathscr{S}(A)}{t}\leq\tau_{t}(A)\leq 1-\frac{\mathscr{S}(A)}{t}+\Gamma^{2t}(A)\frac{\mathscr{S}(A)}{t}+\Big{(}\frac{\mathscr{S}(A)}{t}\Big{)}^{2}.$$ Clearly, we thus have $$t\big{(}1-\tau_{t}(A)\big{)}\to\mathscr{S}(A)\quad\mbox{as }t\to\infty.$$ The take-home message is that the smaller the coefficient $\mathscr{S}(A)$, the better the matrix $A$ performs from a statistical point of view. In this respect, we note that $\mathscr{S}(A)\geq N-1$ (uniformly over the set of stochastic, irreducible, aperiodic, and symmetric matrices). Consider the full-communication matrix $$A_{0}=\frac{1}{N}\mathbf{1}\mathbf{1}^{\top},$$ (4.1) which models a saturated communication network in which each agent shares its information with all others. The associated communication topology, which has $\mathscr{C}(A_{0})=N+1$, is roughly equivalent to a centralized algorithm and, as such, is considered inefficient from a computational point of view. On the other hand, intuitively, the amount of statistical information propagating through the network is large, so $\mathscr{S}(A_{0})$ should be small.
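The closed-form expression of Theorem 4.1 can be checked numerically against the direct formula $\tau_{t}(A)=t/\sum_{k=0}^{t-1}\|A^{k}\|^{2}$ from the proof of Proposition 3.1. A small sketch, assuming numpy is available and taking an arbitrary $3\times 3$ symmetric, irreducible, aperiodic, bistochastic matrix:

```python
import numpy as np

A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])   # symmetric, irreducible, aperiodic, bistochastic
t = 50

# Direct computation: tau_t(A) = t / sum_{k<t} ||A^k||_F^2.
tau_direct = t / sum(np.linalg.norm(np.linalg.matrix_power(A, k))**2
                     for k in range(t))

# Theorem 4.1: tau_t(A) = 1 / (1 + (1/t) * sum_l (1 - gamma_l^{2t}) / (1 - gamma_l^2)).
gammas = np.sort(np.linalg.eigvalsh(A))[:-1]   # eigenvalues different from 1
tau_thm = 1.0 / (1.0 + (1.0 / t) * np.sum((1 - gammas**(2 * t)) / (1 - gammas**2)))
S_A = float(np.sum(1.0 / (1.0 - gammas**2)))   # the coefficient S(A)
```

The two values agree to machine precision, and both sit inside the bracket $[1-\mathscr{S}(A)/t,\,1]$ of the theorem.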
Indeed, it is easy to see that in this case, $\gamma_{\ell}=0$ for all $\ell\in\{2,\ldots,N\}$ and $\mathscr{S}(A_{0})=N-1$. Therefore, although complex in terms of communication, $A_{0}$ is statistically optimal. For a comparative study of statistical performance and communication complexity, let us consider the sparser graph associated with the tridiagonal matrix $A_{1}$ defined in (2.2). With this choice, $\gamma_{\ell}=\cos\frac{(\ell-1)\pi}{N}$ (Fiedler, 1972), so that $$\mathscr{S}(A_{1})=\sum_{\ell=1}^{N-1}\frac{1}{1-\cos^{2}{\frac{\ell\pi}{N}}}=\frac{N^{2}}{6}+\mbox{O}(N)\quad\mbox{as $N\to\infty$}.$$ (4.2) Thus, we lose a power of $N$ but now have the lower communication complexity $\mathscr{C}(A_{1})=3$. Let us now consider the tridiagonal matrix $A_{2}$ defined in (2.3). Noticing that $3A_{2}=2A_{1}+I_{N}$, we deduce that for the matrix $A_{2}$, $\gamma_{\ell}=\frac{1}{3}+\frac{2}{3}\cos\frac{(\ell-1)\pi}{N}$, $2\leq\ell\leq N$. Thus, as $N\to\infty$, $$\mathscr{S}(A_{2})=\frac{N^{2}}{9}+\mbox{O}(N).$$ (4.3) By comparing (4.2) and (4.3), we can conclude that the matrices $A_{1}$ and $A_{2}$, which are both low-$\mathscr{C}(A)$, are also nearly equivalent from the point of view of statistical efficiency. $A_{2}$ is nevertheless preferable to $A_{1}$, which has a larger constant in front of the $N^{2}$ term. This slight difference may be due to the fact that most of the diagonal elements of $A_{1}$ are zero, so that agents $i\in\{2,\ldots,N-1\}$ do not integrate their current value in the next iteration, as happens for $A_{2}$. Furthermore, for large $N$, the performances of $A_{1}$ and $A_{2}$ are expected to deteriorate dramatically in comparison with that of $A_{0}$, since $\mathscr{S}(A_{1})$ and $\mathscr{S}(A_{2})$ are proportional to $N^{2}$, while $\mathscr{S}(A_{0})$ is proportional to $N$. Figure 2 shows the evolution of $\tau_{t}(A)$ for $N$ fixed and $t$ increasing for the matrices $A=A_{0}$, $A_{1}$, $A_{2}$ as well as the identity $I_{N}$.
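The quantities compared above can be reproduced in a few lines. The sketch below, which assumes numpy is available, takes $A_{1}$ to be the symmetric tridiagonal averaging matrix suggested by (2.2) and sets $A_{2}=(2A_{1}+I_{N})/3$, consistent with the relation $3A_{2}=2A_{1}+I_{N}$ used above (these concrete forms are assumptions made for the illustration):

```python
import numpy as np

def S(A):
    """S(A): sum of 1/(1 - gamma^2) over the eigenvalues gamma of A other than 1."""
    g = np.sort(np.linalg.eigvalsh(A))[:-1]   # drop the eigenvalue 1
    return float(np.sum(1.0 / (1.0 - g**2)))

N = 50
A0 = np.full((N, N), 1.0 / N)                                   # full communication, (4.1)
A1 = np.diag(np.full(N - 1, 0.5), 1) + np.diag(np.full(N - 1, 0.5), -1)
A1[0, 0] = A1[N - 1, N - 1] = 0.5                               # assumed tridiagonal form of (2.2)
A2 = (2 * A1 + np.eye(N)) / 3                                   # so that 3*A2 = 2*A1 + I_N
```

As expected, $\mathscr{S}(A_{0})=N-1$ exactly, while $\mathscr{S}(A_{2})<\mathscr{S}(A_{1})$, both growing like $N^{2}$.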
As expected, we see convergence of $\tau_{t}(A_{i})$ to $1$, with degraded performance as the number of agents $N$ increases. Also, we see that the lack of message-passing for $I_{N}$ makes it statistically inefficient, with constant $\tau_{t}(I_{N})=1/N$ for all $t$. The discussion and plots above highlight the crucial influence of $\mathscr{S}(A)$ on the performance of the communication network. Indeed, Theorem 4.1 shows that the optimal order for $\mathscr{S}(A)$ is $N$, and that this scaling is achieved by the computationally inefficient choice $A_{0}$ (see (4.1)). Thus, a natural question to ask is whether there exist communication networks that have $\mathscr{S}(A)$ proportional to $N$ and, simultaneously, $\mathscr{C}(A)$ constant or small with respect to $N$. These two conditions, which are in a sense contradictory, require that the absolute values of the non-trivial eigenvalues $\gamma_{\ell}$ stay far from 1, while the maximal indegree of the graph $\mathscr{G}$ remains moderate. It turns out that these requirements are satisfied by so-called Ramanujan graphs, which are presented in the next section. 5 Ramanujan graphs In this section, we consider undirected graphs $\mathscr{G}=(\mathscr{V},\mathscr{E})$ that are also $d$-regular, in the sense that all vertices have the same degree $d$; that is, each vertex is incident to exactly $d$ edges. Recall that in this definition, self-loops are counted twice and multiple edges are allowed. However, in what follows, we restrict ourselves to graphs without self-loops or multiple edges. In this setting, the natural (bistochastic) communication matrix $A$ associated with $\mathscr{G}$ is $A=\frac{1}{d}G$, where $G=(g_{ij})_{1\leq i,j\leq N}$ is the adjacency matrix of $\mathscr{G}$ ($g_{ij}\in\{0,1\}$ and $g_{ij}=1\Leftrightarrow(i,j)\in\mathscr{E}$). Note that $\mathscr{C}(A)=d$. The matrix $G$ is symmetric and we let $d=\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{N}\geq-d$ be its (real) eigenvalues.
Similarly, we let $1=\gamma_{1}\geq\gamma_{2}\geq\cdots\geq\gamma_{N}\geq-1$ be the eigenvalues of $A$, with the straightforward correspondence $\gamma_{i}=\mu_{i}/d$. We note that $A$ is irreducible (or, equivalently, that $\mathscr{G}$ is connected) if and only if $d>\mu_{2}$ (see, e.g., Shlomo et al., 2006, Section 2.3). In addition, $A$ is aperiodic as soon as $\mu_{N}>-d$. According to the Alon-Boppana theorem (Nilli, 1991), one has, for every $d$-regular graph, $$\mu_{2}\geq 2\sqrt{d-1}-\mbox{o}_{N}(1),$$ where the $\mbox{o}_{N}(1)$ term is a quantity that tends to zero for every fixed $d$ as $N\to\infty$. Moreover, a $d$-regular graph $\mathscr{G}$ is called Ramanujan if $$\max\big{(}|\mu_{\ell}|:\mu_{\ell}<d\big{)}\leq 2\sqrt{d-1}.$$ In view of the above, a Ramanujan graph is optimal, at least as far as the spectral gap measure of expansion is concerned. Ramanujan graphs fall into the category of so-called expander graphs, which have the apparently contradictory features of being both highly connected and at the same time sparse (for a review, see Shlomo et al., 2006). Although the existence of Ramanujan graphs of any degree larger than or equal to $3$ was recently established by Marcus et al. (2015), their explicit construction remains difficult to use in practice.
However, a conjecture by Alon (1986), proved by Friedman (2008) (see also Bordenave, 2015), asserts that most $d$-regular graphs are Ramanujan, in the sense that for every $\varepsilon>0$, $$\mathbb{P}\Big{(}\max\big{(}|\mu_{2}|,|\mu_{N}|\big{)}\geq 2\sqrt{d-1}+\varepsilon\Big{)}\to 0\quad\mbox{as }N\to\infty,$$ or equivalently, in terms of the eigenvalues of $A$, $$\mathbb{P}\Big{(}\max\big{(}|\gamma_{2}|,|\gamma_{N}|\big{)}\geq\frac{2\sqrt{d-1}}{d}+\varepsilon\Big{)}\to 0\quad\mbox{as }N\to\infty.$$ In both results, the limit is taken along any sequence of values of $N$ going to infinity such that $Nd$ is even, and the probability is with respect to random graphs uniformly sampled in the family of $d$-regular graphs with vertex set $\mathscr{V}=\{1,\ldots,N\}$. In order to generate a random irreducible, aperiodic $d$-regular Ramanujan graph, we can first generate a random $d$-regular graph using an improved version of the standard pairing algorithm, proposed by Steger and Wormald (1999). We retain it if it passes the tests of being irreducible, aperiodic and Ramanujan as described above. Otherwise, we continue to generate $d$-regular graphs until all these conditions are satisfied. Figure 3 gives an example of a $3$-regular Ramanujan graph with $N=16$ vertices, generated in this way. Now, given an irreducible and aperiodic communication matrix $A$ associated with a $d$-regular Ramanujan graph $\mathscr{G}$, we have, whenever $d\geq 3$, $$\mathscr{S}(A)\leq\frac{N-1}{1-\frac{4(d-1)}{d^{2}}}.$$ Thus, recalling that $\mathscr{S}(A)\geq N-1$, we see that $\mathscr{S}(A)$ scales optimally as $N$ while having $\mathscr{C}(A)=d$ (fixed). This remarkable superefficiency property can be compared with the full-communication matrix $A_{0}$, which has $\mathscr{S}(A_{0})=N-1$ but inadmissible complexity $\mathscr{C}(A_{0})=N+1$. The statistical efficiency of these graphs is further highlighted in Figure 4.
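The generate-and-test procedure just described can be sketched as follows, using a basic configuration-model sampler in place of the Steger-Wormald algorithm (a simplification made only for brevity; function names are illustrative) and numpy for the spectral checks:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_regular_adjacency(N, d):
    """Pair up d half-edges per vertex (configuration model) and reject
    pairings that contain self-loops or multiple edges."""
    while True:
        stubs = np.repeat(np.arange(N), d)
        rng.shuffle(stubs)
        G = np.zeros((N, N))
        np.add.at(G, (stubs[0::2], stubs[1::2]), 1)
        np.add.at(G, (stubs[1::2], stubs[0::2]), 1)
        if not np.diag(G).any() and G.max() <= 1:
            return G

def is_ramanujan(G, d):
    mu = np.sort(np.linalg.eigvalsh(G))          # mu_N <= ... <= mu_1 = d
    irreducible = mu[-2] < d - 1e-9              # G connected
    aperiodic = mu[0] > -d + 1e-9
    return (irreducible and aperiodic
            and max(abs(mu[0]), abs(mu[-2])) <= 2 * np.sqrt(d - 1) + 1e-9)

N, d = 16, 3
G = random_regular_adjacency(N, d)
while not is_ramanujan(G, d):                    # regenerate until all tests pass
    G = random_regular_adjacency(N, d)
A = G / d                                        # bistochastic communication matrix
```

With these parameters the rejection loop typically terminates after a handful of attempts, since most $3$-regular graphs are Ramanujan.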
Figure 4 shows results for 3- and 5-regular Ramanujan-type matrices ($A_{3}$ and $A_{5}$) as well as the previous results for the non-Ramanujan-type matrices $A_{0}$, $A_{1}$ and $A_{2}$ (see Figure 2). We see that $A_{3}$ is already close to the statistical performance of $A_{0}$, the saturated network, and for all intents and purposes $A_{5}$ is essentially as good as $A_{0}$, even when there are $N=1000$ nodes; i.e., the statistical performance of the $5$-regular Ramanujan graph is barely distinguishable from that of the totally connected graph! Nevertheless, we must not forget that the possibility of building such efficient networks in real-world situations will ultimately depend on the specific application, and may not always be available. Next, assuming that the Ramanujan-type matrix $A$ is irreducible and aperiodic, it is apparent that there is a compromise to be made between the communication complexity of the algorithm (as measured by the degree index $\mathscr{C}(A)=d$) and its statistical performance (as measured by the coefficient $\mathscr{S}(A)$). Clearly, the two are in conflict. This raises a question: is it possible to explore the range of statistical performances $\mathscr{S}(A)$ by varying the communication complexity between $d=3$ and $d=N$? The answer is affirmative, as shown in the following simulation exercise. We fix $N=200$ and then, for each $d=3,\ldots,N$: $(i)$ Generate a matrix $A_{d}$ associated with a $d$-regular Ramanujan graph as before.
$(ii)$ Compute the eigenvalues $\gamma_{2}^{(d)},\ldots,\gamma_{N}^{(d)}$ of the matrix $A_{d}$ different from $1$ and evaluate the sum $$\mathscr{S}(A_{d})=\sum_{\ell=2}^{N}\frac{1}{1-\big{(}\gamma_{\ell}^{(d)}\big{)}^{2}}.$$ $(iii)$ Plot $\mathscr{S}(A_{d})$ and $\beta\mathscr{C}(A_{d})=\beta d$ as well as the penalized sums $\mathscr{S}(A_{d})+\beta\mathscr{C}(A_{d})$ for $\beta\in\{1/2,1,2,4\}$, where $\beta$ represents an explicit cost incurred when increasing the number of connections between nodes, for example a monetary cost for adding new network connections. Results are shown in Figure 5, where $d^{\star}$ refers to the $d$ for which the penalized sum $\mathscr{S}(A_{d})+\beta\mathscr{C}(A_{d})$ is minimized. We observe that $\mathscr{S}(A_{d})$ is decreasing whereas $\mathscr{C}(A_{d})$ increases linearly, so that the tradeoff between statistical efficiency and communication complexity amounts to minimizing their penalized sum. We see that the optimal $d^{\star}$, and thus the number of node connections, decreases as the cost of adding new ones increases. Next, let us investigate the tradeoffs involved in the case where we have a large but fixed total number $T$ of data to be streamed to $N$ nodes, each receiving one new data value from time $t=1$ to time $t=T/N$. In this context, the natural question to ask is how many nodes we should choose, and how much communication we should allow between them, in order to get “very good” results for a “low” cost. Here a low cost comes from limiting both the number of nodes and the number of connections between them. In the same set-up for $A_{d}$ defined above, one way to look at this is to ask, for each $N$, what is the smallest $d\in\{3,\ldots,N\}$, and therefore the smallest communication cost $\mathscr{C}(A_{d})=d$, for which the performance ratio $\tau_{t}(A_{d})$ is at least $0.99$ after receiving all the data, i.e., when $t=T/N$?
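At small scale, the bookkeeping of steps $(i)$–$(iii)$ can be illustrated as follows. For simplicity the sketch uses $d$-regular circulant graphs rather than Ramanujan graphs (a substitution made only to keep the example deterministic) and a single value of $\beta$, assuming numpy is available:

```python
import numpy as np

def circulant_regular(N, half):
    """d-regular circulant graph with d = 2*half: vertex i is linked to
    i±1, ..., i±half (indices modulo N)."""
    A = np.zeros((N, N))
    idx = np.arange(N)
    for s in range(1, half + 1):
        A[idx, (idx + s) % N] = 1
        A[idx, (idx - s) % N] = 1
    return A / (2 * half)

def S(A):
    g = np.sort(np.linalg.eigvalsh(A))[:-1]      # eigenvalues different from 1
    return float(np.sum(1.0 / (1.0 - g**2)))

N, beta = 51, 2.0
ds = [2 * h for h in range(1, 13)]               # candidate degrees d = 2, 4, ..., 24
Ss = {d: S(circulant_regular(N, d // 2)) for d in ds}
penalized = {d: Ss[d] + beta * d for d in ds}    # S(A_d) + beta * C(A_d)
d_star = min(penalized, key=penalized.get)       # degree minimizing the penalized sum
```

The same pattern as in Figure 5 emerges: $\mathscr{S}(A_{d})$ shrinks as $d$ grows while the communication term $\beta d$ rises, and the penalized sum selects an intermediate $d^{\star}$.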
Then, as there is also a cost associated with increasing $N$, minimizing $\mathscr{C}(A_{d^{\star}})/N$ (where $d^{\star}$ is this smallest $d$ chosen) should help us choose the number of nodes $N$ and the amount of connection $\mathscr{C}(A_{d^{\star}})$ between them. The result of this is shown in Figure 6 for $T=100$ million data points. The minimum is found at $(N,d^{\star})=(710,3)$, suggesting that with 100 million data points, one can get excellent performance results ($\tau_{t}(A_{d^{\star}})\geq 0.99$) for a low cost with around 700 nodes, each connected only to three other nodes! Increasing $N$ further raises the cost necessary to obtain the same performance, both due to the price of adding more nodes, as well as requiring more connections between them: $d^{\star}$ must increase to 4, 5, and so on. 6 Asynchronous models The models considered so far assume that messages from one agent to another are immediately delivered. However, a distributed environment may be subject to communication delays, for instance when some processors compute faster than others or when latency and finite bandwidth issues perturb message transmission. In the presence of such communication delays, it is conceivable that an agent will end up averaging its own value with an outdated value from another processor. Situations of this type fall within the framework of distributed asynchronous computation (Tsitsiklis et al., 1986; Bertsekas and Tsitsiklis, 1997). In the present section, we have in mind a model where agents do not have to wait at predetermined moments for predetermined messages to become available. We thus allow some agents to compute faster and execute more iterations than others and allow communication delays to be substantial. Communication delays are incorporated into our model as follows. For $B$ a nonnegative integer, we assume that the last instant before $t$ where agent $j$ sent a message to agent $i$ is $t-B_{ij}$, where $B_{ij}\in\{0,\ldots,B\}$. 
Put differently, recalling that $\hat{\theta}_{t}^{(i)}$ is the estimate held by agent $i$ at time $t$, we have $$\hat{\theta}_{t+1}^{(i)}=\frac{1}{t+1}\sum_{j=1}^{N}a_{ij}(t-B_{ij})\hat{\theta}^{(j)}_{t-B_{ij}}+\frac{1}{t+1}X_{t+1}^{(i)},\quad t\geq 1.$$ (6.1) Thus, at time $t$, when agent $i$ uses the value of another agent $j$, this value is not necessarily the most recent one $\hat{\theta}_{t}^{(j)}$, but rather an outdated one $\hat{\theta}_{t-B_{ij}}^{(j)}$, where $B_{ij}$ represents the communication delay. The time instants $t-B_{ij}$ are deterministic and, in any case, $0\leq B_{ij}\leq B$, i.e., we assume that delays are bounded. Notice that some of the values $t-B_{ij}$ in (6.1) may be negative; in this case, by convention, we set $\hat{\theta}^{(j)}_{t-B_{ij}}=0$. Our goal is to establish a counterpart to Theorem 3.1 in the presence of communication delays. As usual, we set $\boldsymbol{\hat{\theta}}_{t}=(\hat{\theta}_{t}^{(1)},\ldots,\hat{\theta}^{(N)}_{t})^{\top}$. Let $\kappa(t)$ be the smallest $\ell$ such that for all $(k_{0},\ldots,k_{\ell})\in\{1,\ldots,N\}^{\ell+1}$ satisfying $\prod_{j=1}^{\ell}a_{k_{j-1}k_{j}}>0,$ we have $$t-\ell-\sum_{j=1}^{\ell}B_{k_{j-1}k_{j}}\leq B.$$ Observe that $t-\ell-\sum_{j=1}^{\ell}B_{k_{j-1}k_{j}}$ is the last time before $t$ when a message was sent from agent $k_{0}$ to agent $k_{\ell}$ via $k_{1},\ldots,k_{\ell-1}$. Accordingly, $\kappa(t)$ is nothing but the smallest number of transitions needed to return to a time instant earlier than $B$, whatever the path. We note that $\kappa(t)$ is roughly of order $t$, since $$\frac{1}{B+1}\leq\liminf_{t\to\infty}\frac{\kappa(t)}{t}\leq\limsup_{t\to\infty}\frac{\kappa(t)}{t}\leq 1.$$ From now on, it is assumed that $A=A_{1}$, i.e., the irreducible, aperiodic, and symmetric matrix defined in (2.2).
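Recursion (6.1) is straightforward to implement directly, and with all delays set to zero it must reduce to the synchronous estimate $\boldsymbol{\hat{\theta}}_{t}=\frac{1}{t}\sum_{k=0}^{t-1}A^{k}\mathbf{X}_{t-k}$, which provides a useful sanity check. A sketch assuming numpy is available (the matrix, sample distribution and horizon are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 4, 40
A = np.full((N, N), 1.0 / N)               # any stochastic matrix works here
X = rng.normal(loc=2.0, size=(T + 1, N))   # X[t, i] = sample received by agent i at time t
Bmat = np.zeros((N, N), dtype=int)         # delays B_ij; zero = synchronous special case

def run(A, X, Bmat, T):
    """Iterate (6.1): (t+1)*theta[t+1, i] = sum_j a_ij*(t-B_ij)*theta[t-B_ij, j]
    + X[t+1, i], with theta[s] = 0 for s <= 0 by convention."""
    N = A.shape[0]
    theta = np.zeros((T + 1, N))
    theta[1] = X[1]
    for t in range(1, T):
        for i in range(N):
            s = sum(A[i, j] * max(t - Bmat[i, j], 0) * theta[max(t - Bmat[i, j], 0), j]
                    for j in range(N))
            theta[t + 1, i] = (s + X[t + 1, i]) / (t + 1)
    return theta

theta = run(A, X, Bmat, T)
# With zero delays, this coincides with theta_T = (1/T) * sum_k A^k X_{T-k}.
direct = sum(np.linalg.matrix_power(A, k) @ X[T - k] for k in range(T)) / T
```

Passing a nonzero `Bmat` runs the genuinely asynchronous scheme, whose rescaled estimates $\frac{t}{\kappa(t)}\boldsymbol{\hat{\theta}}_{t}$ are the object of Theorem 6.1 below.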
Besides its simplicity, this choice is motivated by the fact that $A_{1}$ is communication-efficient while its associated performance obeys $$\tau_{t}(A)\approx 1-\frac{N^{2}}{6t}$$ for large $t$ and $N$. The main result of the section now follows. Theorem 6.1. Assume that $X$ is bounded and let $A=A_{1}$ be defined as in (2.2). Then, as $t\to\infty$, $$\mathbb{E}\bigg{\|}\frac{t}{\kappa(t)}\boldsymbol{\hat{\theta}}_{t}-\theta\mathbf{1}\bigg{\|}^{2}=\mbox{O}\bigg{(}\frac{1}{t}\bigg{)}.$$ The advantages one hopes to gain from asynchronism are twofold. First, a reduction of the synchronization penalty and a potential speed advantage over synchronous algorithms, perhaps at the expense of higher communication complexity. Second, a greater implementation flexibility and tolerance to system failure and uncertainty. On the other hand, the powerful result of Theorem 6.1 comes at the price of assumptions on the transmission network, which essentially demand that the communication delays $B_{ij}$ be time-independent. In fact, the introduction of delays considerably complicates the consistency analysis of $\tau_{t}(A)$, even for the simple case of the empirical mean, because it makes the analysis of the variance of the estimates much more involved. 7 Proofs We start this section by recalling the following important theorem, whose proof can be found for example in Foata and Fuchs (2004, Theorems 6.8.3 and 6.8.4). Here and elsewhere, $A$ stands for the stochastic communication matrix. Theorem 7.1. Let $\lambda_{1},\ldots,\lambda_{d}$ be the eigenvalues of $A$ of unit modulus (with $\lambda_{1}=1$) and $\Gamma$ be the set of eigenvalues of $A$ of modulus strictly smaller than 1.
$(i)$ There exist projectors $Q_{1},\ldots,Q_{d}$ such that, for all $k\geq N$, $$A^{k}=\sum_{\ell=1}^{d}\lambda_{\ell}^{k}Q_{\ell}+\sum_{\gamma\in\Gamma}\gamma^{k}Q_{\gamma}(k),$$ where the matrices $\{Q_{\gamma}(k):k\geq N,\gamma\in\Gamma\}$ satisfy $Q_{\gamma}(k)Q_{\gamma^{\prime}}(k^{\prime})=Q_{\gamma}(k+k^{\prime})$ if $\gamma=\gamma^{\prime}$, and $0$ otherwise. In addition, for all $\gamma\in\Gamma$, $\lim_{k\to\infty}\gamma^{k}Q_{\gamma}(k)=0$. $(ii)$ The sequence $(A^{k})_{k\geq 0}$ converges in the Cesàro sense to $Q_{1}$, i.e., $$\frac{1}{t}\sum_{k=0}^{t}A^{k}\to Q_{1}\quad\mbox{as $t\to\infty$}.$$ 7.1 Proof of Proposition 3.1 According to (2.4), since $A^{k}$ is a stochastic matrix, we have $$\boldsymbol{\hat{\theta}}_{t}-\theta\mathbf{1}=\frac{1}{t}\sum_{k=0}^{t-1}A^{k}(\mathbf{X}_{t-k}-\theta\mathbf{1}).$$ Therefore, it may be assumed, without loss of generality, that $\theta=0$. Thus, $$\tau_{t}(A)=\frac{\mathbb{E}\left\|\bar{\mathbb{X}}_{Nt}\mathbf{1}\right\|^{2}}{\mathbb{E}\|\boldsymbol{\hat{\theta}}_{t}\|^{2}}.$$ Next, let $A^{k}=(a_{ij}^{(k)})_{1\leq i,j\leq N}$. Then, for each $i\in\{1,\ldots,N\}$, $$\hat{\theta}_{t}^{(i)}=\frac{1}{t}\sum_{k=0}^{t-1}\sum_{j=1}^{N}a_{ij}^{(k)}X_{t-k}^{(j)},\quad t\geq 1.$$ By independence of the samples, $$\mathbb{E}\big{(}\hat{\theta}_{t}^{(i)}\big{)}^{2}=\frac{\sigma^{2}}{t^{2}}\sum_{k=0}^{t-1}\sum_{j=1}^{N}\big{(}a_{ij}^{(k)}\big{)}^{2}.$$ Upon noting that $\mathbb{E}(\bar{\mathbb{X}}_{Nt})^{2}=\frac{\sigma^{2}}{Nt}$, we get $$\displaystyle\tau_{t}(A)$$ $$\displaystyle=\frac{N\mathbb{E}\big{(}\bar{\mathbb{X}}_{Nt}\big{)}^{2}}{\mathbb{E}\big{(}\hat{\theta}_{t}^{(1)}\big{)}^{2}+\cdots+\mathbb{E}\big{(}\hat{\theta}_{t}^{(N)}\big{)}^{2}}$$ $$\displaystyle=\frac{t}{\sum_{k=0}^{t-1}\|A^{k}\|^{2}}.$$ Since each $A^{k}$ is a stochastic matrix, $\|A^{k}\|^{2}\leq N$ and, by the Cauchy-Schwarz inequality, $\|A^{k}\|\geq 1$.
Thus, $\frac{1}{N}\leq\tau_{t}(A)\leq 1$, the lower bound being achieved when $A$ is the identity matrix. Let us now assume that $A$ is reducible, and let $C\subsetneq\{1,\ldots,N\}$ be a recurrence class. Arguing as above, we obtain that for all $i\in C$, $$\mathbb{E}\big{(}\hat{\theta}^{(i)}_{t}\big{)}^{2}=\frac{\sigma^{2}}{t^{2}}\sum_{k=0}^{t-1}\sum_{j=1}^{N}\big{(}a^{(k)}_{ij}\big{)}^{2}\geq\frac{\sigma^{2}}{t^{2}}\sum_{k=0}^{t-1}\sum_{j\in C}\big{(}a^{(k)}_{ij}\big{)}^{2}.$$ Since $C$ is a recurrence class, the restriction of $A$ to entries in $C$ is a stochastic matrix as well. Thus, setting $N_{1}=|C|$, by the Cauchy-Schwarz inequality, $$\mathbb{E}\big{(}\hat{\theta}^{(i)}_{t}\big{)}^{2}\geq\left\{\begin{array}{ll}\frac{\sigma^{2}}{tN_{1}}&\mbox{ if $i\in C$}\\ \frac{\sigma^{2}}{tN}&\mbox{ otherwise.}\end{array}\right.$$ To conclude, $$\displaystyle\tau_{t}(A)$$ $$\displaystyle=\frac{\sigma^{2}/t}{\sum_{i\in C}\mathbb{E}\big{(}\hat{\theta}^{(i)}_{t}\big{)}^{2}+\sum_{i\notin C}\mathbb{E}\big{(}\hat{\theta}^{(i)}_{t}\big{)}^{2}}$$ $$\displaystyle\leq\frac{1}{1+(N-N_{1})/N}$$ $$\displaystyle\leq\frac{N}{N+1},$$ since $N-N_{1}\geq 1$. 7.2 Proof of Lemma 3.1 As in the previous proof, we assume that $\theta=0$.
Recall that $$\boldsymbol{\hat{\theta}}_{t}=\frac{1}{t}\sum_{k=0}^{t-1}A^{k}\mathbf{X}_{t-k},\quad t\geq 1.$$ Thus, for all $t\geq 1$, $$\displaystyle\mathbb{E}\|\boldsymbol{\hat{\theta}}_{t}\|^{2}$$ $$\displaystyle=\frac{1}{t^{2}}\mathbb{E}\bigg{\|}\sum_{k=0}^{t-1}A^{k}\mathbf{X}_{t-k}\bigg{\|}^{2}$$ $$\displaystyle=\frac{1}{t^{2}}\sum_{k=0}^{t-1}\mathbb{E}\|A^{k}\mathbf{X}_{t-k}\|^{2}$$ $$\displaystyle\quad\mbox{(by independence of $\mathbf{X}_{1},\ldots,\mathbf{X}_{t}$)}$$ $$\displaystyle=\frac{1}{t^{2}}\mathbb{E}\mathbf{X}_{1}^{\top}\bigg{(}\sum_{k=0}^{t-1}(A^{k})^{\top}A^{k}\bigg{)}\mathbf{X}_{1}.$$ Denote by $\lambda_{1}=1,\ldots,\lambda_{d}$ the eigenvalues of $A$ of modulus 1, and let $\Gamma$ be the set of eigenvalues $\gamma$ of $A$ of modulus strictly smaller than 1. According to Theorem 7.1, there exist projectors $Q_{1},\ldots,Q_{d}$ and matrices $Q_{\gamma}(k)$ such that for all $k\geq N$, $$A^{k}=\sum_{\ell=1}^{d}\lambda_{\ell}^{k}Q_{\ell}+\sum_{\gamma\in\Gamma}\gamma^{k}Q_{\gamma}(k).$$ Therefore, $$\displaystyle\sum_{k=0}^{t-1}(A^{k})^{\top}A^{k}$$ $$\displaystyle=\sum_{k=0}^{t-1}(\bar{A}^{k})^{\top}A^{k}$$ $$\displaystyle=\sum_{k=0}^{t-1}\bigg{(}\sum_{\ell=1}^{d}\bar{\lambda}_{\ell}^{k}\bar{Q}_{\ell}+\sum_{\gamma\in\Gamma}\bar{\gamma}^{k}\bar{Q}_{\gamma}(k)\bigg{)}^{\top}\bigg{(}\sum_{j=1}^{d}\lambda_{j}^{k}Q_{j}+\sum_{\gamma\in\Gamma}\gamma^{k}Q_{\gamma}(k)\bigg{)}$$ $$\displaystyle=\sum_{k=0}^{t-1}\sum_{\ell,j=1}^{d}\bar{\lambda}_{\ell}^{k}\lambda_{j}^{k}\bar{Q}_{\ell}^{\top}Q_{j}+\mbox{o}(t).$$ Here, we have used Cesàro’s lemma combined with the fact that for any $\gamma\in\Gamma$, $\lim_{k\to\infty}\gamma^{k}Q_{\gamma}(k)=0$ (Theorem 7.1). Since $A$ is irreducible, according to the Perron-Frobenius theorem (e.g., Grimmett and Stirzaker, 2001, page 240), we have that $\lambda_{\ell}=e^{\frac{2\pi i(\ell-1)}{d}}$, $1\leq\ell\leq d$.
Accordingly, $$\bar{\lambda}_{\ell}\lambda_{j}=e^{\frac{2\pi i(j-\ell)}{d}}=1\Leftrightarrow j=\ell.$$ Thus, $$\sum_{k=0}^{t-1}(A^{k})^{\top}A^{k}=t\sum_{\ell=1}^{d}\bar{Q}_{\ell}^{\top}Q_{\ell}+\mbox{O}(1)+\mbox{o}(t).$$ Letting $Q=\sum_{\ell=1}^{d}\bar{Q}_{\ell}^{\top}Q_{\ell}$, we obtain $$\displaystyle t\mathbb{E}\|\boldsymbol{\hat{\theta}}_{t}\|^{2}$$ $$\displaystyle=\mathbb{E}\mathbf{X}_{1}^{\top}Q\mathbf{X}_{1}+\mathbb{E}\mathbf{X}_{1}^{\top}\bigg{(}\frac{1}{t}\sum_{k=0}^{t-1}(A^{k})^{\top}A^{k}-Q\bigg{)}\mathbf{X}_{1}$$ (7.1) $$\displaystyle=\mathbb{E}\mathbf{X}_{1}^{\top}Q\mathbf{X}_{1}+\mbox{o}(1)$$ $$\displaystyle=\sum_{\ell=1}^{d}\mathbb{E}\|Q_{\ell}\mathbf{X}_{1}\|^{2}+\mbox{o}(1).$$ Denoting by $Q_{\ell,ij}$ the $(i,j)$-entry of $Q_{\ell}$, we conclude $$\displaystyle t\mathbb{E}\|\boldsymbol{\hat{\theta}}_{t}\|^{2}$$ $$\displaystyle=\sum_{\ell=1}^{d}\mathbb{E}\sum_{i=1}^{N}\bigg{(}\sum_{j=1}^{N}Q_{\ell,ij}X_{1}^{(j)}\bigg{)}^{2}+\mbox{o}(1)$$ $$\displaystyle=\sigma^{2}\sum_{\ell=1}^{d}\sum_{i,j=1}^{N}Q^{2}_{\ell,ij}+\mbox{o}(1)$$ $$\displaystyle\quad\mbox{(by independence of $X_{1}^{(1)},\ldots,X_{1}^{(N)}$)}$$ $$\displaystyle=\sigma^{2}\sum_{\ell=1}^{d}\|Q_{\ell}\|^{2}+\mbox{o}(1).$$ Lastly, recalling that $\mathbb{E}\|\bar{\mathbb{X}}_{Nt}\mathbf{1}\|^{2}=\frac{\sigma^{2}}{t}$, we obtain $$\tau_{t}(A)=\frac{1}{\sum_{\ell=1}^{d}\|Q_{\ell}\|^{2}+\mbox{o}(1)}=\frac{1}{\sum_{\ell=1}^{d}\|Q_{\ell}\|^{2}}+\mbox{o}(1).$$ 7.3 Proof of Theorem 3.1 Sufficiency. Assume that $A$ is irreducible, aperiodic, and bistochastic. The first two conditions imply that $1$ is the unique eigenvalue of $A$ of unit modulus. Therefore, according to Lemma 3.1, we only need to prove that the projector $Q_{1}$ satisfies $\|Q_{1}\|=1$. Since $A$ is bistochastic, its stationary distribution is the uniform distribution on $\{1,\ldots,N\}$.
Moreover, since $A$ is irreducible and aperiodic, we have, as $k\to\infty$, $$A^{k}\to\frac{1}{N}\begin{pmatrix}1&1&\ldots&1\\ \vdots&\vdots&\vdots&\vdots\\ 1&1&\ldots&1\end{pmatrix}.$$ By comparing this limit with that of the second statement of Theorem 7.1, we conclude by Cesàro’s lemma that $$Q_{1}=\frac{1}{N}\begin{pmatrix}1&1&\ldots&1\\ \vdots&\vdots&\vdots&\vdots\\ 1&1&\ldots&1\end{pmatrix}.$$ This implies in particular that $\|Q_{1}\|=1$. Necessity. Assume that $\tau_{t}(A)$ tends to 1 as $t\to\infty$. According to Proposition 3.1, $A$ is irreducible. Thus, by Lemma 3.1, we have $\sum_{\ell=1}^{d}\|Q_{\ell}\|^{2}=1$. Observe, since each $Q_{\ell}$ is a projector, that $\|Q_{\ell}\|\geq 1$. Therefore, the identity $\sum_{\ell=1}^{d}\|Q_{\ell}\|^{2}=1$ implies $d=1$ and $\|Q_{1}\|=1$. We conclude that $A$ is aperiodic. Then, since $A$ is irreducible and aperiodic, we have, as $k\to\infty$, $$A^{k}\to\begin{pmatrix}\boldsymbol{\mu}\\ \vdots\\ \boldsymbol{\mu}\end{pmatrix},$$ where $\boldsymbol{\mu}$ is the stationary distribution of $A$, represented as a row vector. Comparing once again this limit with the second statement of Theorem 7.1, we see that $$Q_{1}=\begin{pmatrix}\boldsymbol{\mu}\\ \vdots\\ \boldsymbol{\mu}\end{pmatrix}.$$ Thus, $\|Q_{1}\|^{2}=N\|{\boldsymbol{\mu}}\|^{2}=1$. In particular, letting $\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{N})$, we have $$N\sum_{i=1}^{N}\mu_{i}^{2}=\sum_{i=1}^{N}\mu_{i}.$$ This is an equality case in the Cauchy-Schwarz inequality, from which we deduce that $\boldsymbol{\mu}$ is the uniform distribution on $\{1,\ldots,N\}$. Since $\boldsymbol{\mu}$ is the stationary distribution of $A$, this implies that $A$ is bistochastic. 7.4 Proof of Proposition 3.2 If $A$ is irreducible and aperiodic, then by Lemma 3.1, $\tau_{t}(A)\to\frac{1}{\|Q_{1}\|^{2}}$ as $t\to\infty$. 
But, as $k\to\infty$, $$A^{k}\to\begin{pmatrix}\boldsymbol{\mu}\\ \vdots\\ \boldsymbol{\mu}\end{pmatrix},$$ where the stationary distribution $\boldsymbol{\mu}$ of $A$ is represented as a row vector. By the second statement of Theorem 7.1, we conclude that $\|Q_{1}\|^{2}=N\|\boldsymbol{\mu}\|^{2}$. 7.5 Proof of Theorem 4.1 Without loss of generality, assume that $\theta=0$. Since $A$ is irreducible and aperiodic, the matrix $Q$ in the proof of Lemma 3.1 is $Q=Q_{1}^{\top}Q_{1}$. Moreover, since $A$ is also bistochastic, we have already seen that as $k\to\infty$, $$A^{k}\to\frac{1}{N}\begin{pmatrix}1&1&\ldots&1\\ \vdots&\vdots&\vdots&\vdots\\ 1&1&\ldots&1\end{pmatrix}.$$ (7.2) However, by the second statement of Theorem 7.1, the above matrix is equal to $Q_{1}$. Thus, the projector $Q_{1}$ is symmetric, which implies $Q=Q_{1}$. Next, we deduce from (7.1) that $$\displaystyle\tau_{t}(A)$$ $$\displaystyle=\frac{\sigma^{2}}{\mathbb{E}\mathbf{X}_{1}^{\top}Q\mathbf{X}_{1}+\mathbb{E}\mathbf{X}_{1}^{\top}\big{(}\frac{1}{t}\sum_{k=0}^{t-1}(A^{k})^{\top}A^{k}-Q\big{)}\mathbf{X}_{1}}$$ $$\displaystyle=\frac{\sigma^{2}}{\sigma^{2}+\mathbb{E}\mathbf{X}_{1}^{\top}\big{(}\frac{1}{t}\sum_{k=0}^{t-1}A^{2k}-Q\big{)}\mathbf{X}_{1}},$$ (7.3) by symmetry of $A$ and the fact that $\mathbb{E}\mathbf{X}_{1}^{\top}Q\mathbf{X}_{1}=\sigma^{2}$. The symmetric matrix $A$ can be put into the form $$A=UDU^{\top},$$ where $U$ is a unitary matrix with real entries (so, $U^{\top}=U^{-1}$) and $D=\mbox{diag}(1,\gamma_{2},\ldots,\gamma_{N})$, with $1>\gamma_{2}\geq\cdots\geq\gamma_{N}>-1$.
Therefore, as $t\to\infty$, $$\frac{1}{t}\sum_{k=0}^{t-1}A^{2k}=U\bigg{(}\frac{1}{t}\sum_{k=0}^{t-1}D^{2k}\bigg{)}U^{\top}\to U\begin{pmatrix}1&0&\ldots&0\\ 0&0&\ldots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\ldots&0\end{pmatrix}U^{\top}.$$ However, by (7.2) and Cesàro’s lemma, $$\frac{1}{t}\sum_{k=0}^{t-1}A^{2k}\to Q\quad\mbox{as $t\to\infty$}.$$ It follows that $Q=UMU^{\top}$, where $$M=\begin{pmatrix}1&0&\ldots&0\\ 0&0&\ldots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\ldots&0\end{pmatrix}.$$ Thus, $$\displaystyle\frac{1}{t}\sum_{k=0}^{t-1}A^{2k}-Q$$ $$\displaystyle=U\bigg{(}\frac{1}{t}\sum_{k=0}^{t-1}D^{2k}-M\bigg{)}U^{\top}$$ $$\displaystyle=U\bigg{(}\frac{1}{t}\sum_{k=0}^{t-1}\mbox{diag}\big{(}0,\gamma_{2}^{2k},\ldots,\gamma_{N}^{2k}\big{)}\bigg{)}U^{\top}$$ $$\displaystyle=U\mbox{diag}\bigg{(}0,\frac{1}{t}\frac{1-\gamma_{2}^{2t}}{1-\gamma_{2}^{2}},\ldots,\frac{1}{t}\frac{1-\gamma_{N}^{2t}}{1-\gamma_{N}^{2}}\bigg{)}U^{\top}.$$ Next, set $$\alpha_{\ell}=\frac{1}{t}\frac{1-\gamma_{\ell}^{2t}}{1-\gamma_{\ell}^{2}},\quad 2\leq\ell\leq N,$$ and let $U=(u_{ij})_{1\leq i,j\leq N}$.
With this notation, the $(i,j)$-entry of the matrix $\frac{1}{t}\sum_{k=0}^{t-1}A^{2k}-Q$ is $$\sum_{\ell=2}^{N}u_{i\ell}\alpha_{\ell}u_{j\ell}.$$ Hence, $$\mathbf{X}_{1}^{\top}\bigg{(}\frac{1}{t}\sum_{k=0}^{t-1}A^{2k}-Q\bigg{)}\mathbf{X}_{1}=\sum_{i=1}^{N}X_{1}^{(i)}\sum_{j=1}^{N}\bigg{(}\sum_{\ell=2}^{N}u_{i\ell}\alpha_{\ell}u_{j\ell}\bigg{)}X_{1}^{(j)}.$$ Thus, $$\displaystyle\mathbb{E}\mathbf{X}_{1}^{\top}\bigg{(}\frac{1}{t}\sum_{k=0}^{t-1}A^{2k}-Q\bigg{)}\mathbf{X}_{1}$$ $$\displaystyle=\sigma^{2}\sum_{i=1}^{N}\sum_{\ell=2}^{N}u_{i\ell}\alpha_{\ell}u_{i\ell}$$ $$\displaystyle=\sigma^{2}\sum_{i=1}^{N}\sum_{\ell=2}^{N}\alpha_{\ell}u_{i\ell}^{2}$$ $$\displaystyle=\sigma^{2}\sum_{\ell=2}^{N}\alpha_{\ell}$$ $$\displaystyle=\frac{\sigma^{2}}{t}\sum_{\ell=2}^{N}\frac{1-\gamma_{\ell}^{2t}}{1-\gamma_{\ell}^{2}}.$$ We conclude from (7.3) that $$\tau_{t}(A)=\frac{1}{1+\frac{1}{t}\sum_{\ell=2}^{N}\frac{1-\gamma_{\ell}^{2t}}{1-\gamma_{\ell}^{2}}}.$$ This shows the first statement of the theorem. Using the inequality $\frac{1}{1+x}\geq 1-x$, valid for all $x\geq 0$, we have $$\displaystyle\tau_{t}(A)$$ $$\displaystyle\geq 1-\frac{1}{t}\sum_{\ell=2}^{N}\frac{1-\gamma_{\ell}^{2t}}{1-\gamma_{\ell}^{2}}$$ $$\displaystyle\geq 1-\frac{\mathscr{S}(A)}{t}.$$ Finally, invoking the inequality $\frac{1}{1+x}\leq 1-x+x^{2}$, valid for all $x\geq 0$, we conclude $$\displaystyle\tau_{t}(A)$$ $$\displaystyle\leq 1-\frac{1}{t}\sum_{\ell=2}^{N}\frac{1-\gamma_{\ell}^{2t}}{1-\gamma_{\ell}^{2}}+\bigg{(}\frac{1}{t}\sum_{\ell=2}^{N}\frac{1-\gamma_{\ell}^{2t}}{1-\gamma_{\ell}^{2}}\bigg{)}^{2}$$ $$\displaystyle\leq 1-\frac{\mathscr{S}(A)}{t}+\Gamma^{2t}(A)\frac{\mathscr{S}(A)}{t}+\Big{(}\frac{\mathscr{S}(A)}{t}\Big{)}^{2}.$$ 7.6 Proof of Theorem 6.1 From now on, we fix $k_{0}\in\{1,\ldots,N\}$ and let $Z_{t}^{(i)}=t\hat{\theta}_{t}^{(i)}$ for any $i\in\{1,\ldots,N\}$.
Thus, for all $t\geq 1$, $$Z_{t}^{(k_{0})}=\sum_{k=1}^{N}a_{k_{0}k}Z_{t-B_{k_{0}k}-1}^{(k)}+X_{t}^{(k_{0})},$$ and $$Z_{t}^{(k_{0})}=\sum_{k_{1},k_{2}=1}^{N}a_{k_{0}k_{1}}a_{k_{1}k_{2}}Z_{t-B_{k_{0}k_{1}}-B_{k_{1}k_{2}}-2}^{(k_{2})}+\sum_{k_{1}=1}^{N}a_{k_{0}k_{1}}X_{t-B_{k_{0}k_{1}}-1}^{(k_{1})}+X_{t}^{(k_{0})}.$$ (7.4) Our first task is to iterate this formula. To do so, we need additional notation. For $\ell$ a positive integer and $k\in\{1,\ldots,N\}$, let $\underline{K}^{\ell}(k)$ be the set of vectors in $\{1,\ldots,N\}^{\ell+1}$ of the form $(k_{0},k_{1},\ldots,k_{\ell-1},k)$ such that $w(\underline{K}^{\ell}(k))>0$, where $$w\big{(}\underline{K}^{\ell}(k)\big{)}=a_{k_{0}k_{1}}a_{k_{1}k_{2}}\ldots a_{k_{\ell-2}k_{\ell-1}}a_{k_{\ell-1}k}.$$ In particular, by our choice of $A$, we have $w(\underline{K}^{\ell}(k))=2^{-\ell}$ for any $k$. Next, we set $$\Delta\big{(}\underline{K}^{\ell}(k)\big{)}=\ell+B_{k_{0}k_{1}}+B_{k_{1}k_{2}}+\cdots+B_{k_{\ell-2}k_{\ell-1}}+B_{k_{\ell-1}k}.$$ When $\ell=0$, then by convention $\underline{K}^{0}(k)=(k_{0})$, $w(\underline{K}^{0}(k))=1$ if $k=k_{0}$ and $0$ otherwise, and $\Delta(\underline{K}^{0}(k))=0$. We are now ready to iterate (7.4). To do so, observe that $$\displaystyle Z_{t}^{(k_{0})}$$ $$\displaystyle=\sum_{k=1}^{N}\sum_{\underline{K}^{\kappa(t)}(k)}w\big{(}\underline{K}^{\kappa(t)}(k)\big{)}Z^{(k)}_{t-\Delta(\underline{K}^{\kappa(t)}(k))}$$ $$\displaystyle\quad+\sum_{\ell=0}^{\kappa(t)-1}\sum_{k=1}^{N}\sum_{\underline{K}^{\ell}(k)}w\big{(}\underline{K}^{\ell}(k)\big{)}X^{(k)}_{t-\Delta(\underline{K}^{\ell}(k))}$$ $$\displaystyle\stackrel{{\scriptstyle\mbox{\tiny def}}}{{=}}R_{t}^{1}+R_{t}^{2}.$$ (7.5) By the definition of $\kappa(t)$, for all $k\in\{1,\ldots,N\}$, $t-\Delta(\underline{K}^{\kappa(t)}(k))\leq B$.
Since $X$ is bounded, we deduce that there exists $C>0$ such that $$|R_{t}^{1}|\leq C\sum_{k=1}^{N}\sum_{\underline{K}^{\kappa(t)}(k)}w\big(\underline{K}^{\kappa(t)}(k)\big).$$ This implies that $|R_{t}^{1}|\leq C$. To see this, note that $A^{\kappa(t)}$ is a stochastic matrix and that for all $k\in\{1,\ldots,N\}$, $$\sum_{\underline{K}^{\kappa(t)}(k)}w\big(\underline{K}^{\kappa(t)}(k)\big)=(A^{\kappa(t)})_{k_{0}k}.$$ The analysis of the term $R_{t}^{2}$ is more delicate. The difficulty arises from the fact that this term is not a sum of independent random variables, and therefore its components must be grouped. Since each $B_{ij}$ is smaller than $B$ and $\Delta(\underline{K}^{\ell}(k))=x$ implies $x\geq\ell$, we obtain $$R_{t}^{2}=\sum_{\ell=0}^{\kappa(t)-1}\sum_{k=1}^{N}\sum_{x=0}^{(B+1)\ell}\sum_{\underline{K}^{\ell}(k):\Delta(\underline{K}^{\ell}(k))=x}w\big(\underline{K}^{\ell}(k)\big)\,X_{t-x}^{(k)}=\sum_{x=0}^{(B+1)(\kappa(t)-1)}\sum_{k=1}^{N}\sum_{\ell=\lfloor x/(B+1)\rfloor+1}^{x}\sum_{\underline{K}^{\ell}(k):\Delta(\underline{K}^{\ell}(k))=x}w\big{(}\underline{K}^{\ell}(k)\big{)}\,X_{t-x}^{(k)}$$ ($\lfloor\cdot\rfloor$ is the floor function). By independence of the $X_{j}^{(i)}$, we get $${\rm Var}(R_{t}^{2})=\sigma^{2}\sum_{x=0}^{(B+1)(\kappa(t)-1)}\sum_{k=1}^{N}\bigg(\sum_{\ell=\lfloor x/(B+1)\rfloor+1}^{x}\sum_{\underline{K}^{\ell}(k):\Delta(\underline{K}^{\ell}(k))=x}w\big(\underline{K}^{\ell}(k)\big)\bigg)^{2}.$$ Recalling that $w(\underline{K}^{\ell}(k))=2^{-\ell}$, we obtain $${\rm Var}(R_{t}^{2})=\sigma^{2}\sum_{x=0}^{(B+1)(\kappa(t)-1)}\sum_{k=1}^{N}\bigg(\sum_{\ell=\lfloor x/(B+1)\rfloor+1}^{x}\frac{1}{2^{\ell}}\,\Big|\underline{K}^{\ell}(k):\Delta\big(\underline{K}^{\ell}(k)\big)=x\Big|\bigg)^{2}.$$ Next, consider the Markov chain $(Y_{n})_{n\geq 0}$ with transition matrix $A$ such that $Y_{0}=k_{0}$.
Observe that $$\mathbb{P}\Big(Y_{\ell}=k,\sum_{j=1}^{\ell}B_{Y_{j-1}Y_{j}}=x-\ell\Big)=\frac{1}{2^{\ell}}\,\Big|\underline{K}^{\ell}(k):\Delta\big(\underline{K}^{\ell}(k)\big)=x\Big|.$$ Moreover, for fixed $x$, the events $$\bigg\{\sum_{j=1}^{\ell}B_{Y_{j-1}Y_{j}}=x-\ell\bigg\},\quad\Big\lfloor\frac{x}{B+1}\Big\rfloor+1\leq\ell\leq x,$$ are disjoint since the $B_{ij}$ are nonnegative. Thus, $$\sum_{\ell=\lfloor x/(B+1)\rfloor+1}^{x}\frac{1}{2^{\ell}}\,\Big|\underline{K}^{\ell}(k):\Delta\big(\underline{K}^{\ell}(k)\big)=x\Big|\leq 1,$$ and so, $${\rm Var}(R_{t}^{2})\leq\sigma^{2}\sum_{x=0}^{(B+1)(\kappa(t)-1)}\sum_{k=1}^{N}1=\sigma^{2}N\big((B+1)\kappa(t)-B\big).$$ (7.6) The expectation of $R_{t}^{2}$ is easier to compute. Indeed, since each $A^{\ell}$ is a stochastic matrix, $$\mathbb{E}R_{t}^{2}=\theta\sum_{\ell=0}^{\kappa(t)-1}\sum_{k=1}^{N}\sum_{\underline{K}^{\ell}(k)}w\big(\underline{K}^{\ell}(k)\big)=\theta\sum_{\ell=0}^{\kappa(t)-1}\sum_{k=1}^{N}(A^{\ell})_{k_{0}k}=\theta\kappa(t).$$ Combining (7.5), (7.6), and the fact that $|R_{t}^{1}|\leq C$, we obtain $$\mathbb{E}\bigg(\frac{t}{\kappa(t)}\hat{\theta}_{t}^{(k_{0})}-\theta\bigg)^{2}=\mathbb{E}\bigg(\frac{R_{t}^{1}}{\kappa(t)}+\frac{R_{t}^{2}}{\kappa(t)}-\theta\bigg)^{2}=\mathbb{E}\bigg(\frac{R_{t}^{2}-\mathbb{E}R_{t}^{2}}{\kappa(t)}+\frac{R_{t}^{1}}{\kappa(t)}\bigg)^{2}=\mathrm{O}\bigg(\frac{1}{\kappa(t)}\bigg).$$ The result follows from the identity $1/\kappa(t)=\mathrm{O}(1/t)$.
Hide & Seek: Privacy-Preserving Rebalancing on Payment Channel Networks

Zeta Avarikioti (IST Austria), Krzysztof Pietrzak (IST Austria), Iosif Salem (Faculty of Computer Science, University of Vienna, Austria), Stefan Schmid (Faculty of Computer Science, University of Vienna, Austria), Samarth Tiwari (Centrum Wiskunde & Informatica, Amsterdam, The Netherlands), and Michelle Yeo (IST Austria)

Emails: {zetavar, krzysztof.pietrzak, michelle.yeo}@ist.ac.at; {iosif.salem, stefan_schmid}@univie.ac.at; [email protected]

Funding: K. Pietrzak was supported by the Vienna Cybersecurity and Privacy Research Center (ViSP), funded by the Vienna business agency (Wirtschaftsagentur), 2020-2023. S. Schmid was supported partially by the Austrian Science Fund (FWF) project ”Design Framework for Self-Driving Networks” (ADVISE), I 4800-N, 2020-2023, and by ViSP. S. Tiwari was supported partially by ERC Starting Grant QIP–805241 and by ViSP.

Abstract. Payment channels effectively move the transaction load off-chain, thereby addressing the inherent scalability problem most cryptocurrencies face. A major drawback of payment channels is the need to “top up” funds on-chain when a channel is depleted. Rebalancing was proposed to alleviate this issue: parties with depleting channels move their funds along a cycle to replenish their channels off-chain. Protocols for rebalancing so far either produce only local solutions or compromise privacy. In this work, we present an opt-in rebalancing protocol that is both private and globally optimal, meaning our protocol maximizes the total amount of rebalanced funds. We study rebalancing from the framework of linear programming.
To obtain full privacy guarantees, we leverage multi-party computation in solving the linear program, which is executed by selected participants to maintain efficiency. Finally, we efficiently decompose the rebalancing solution into incentive-compatible cycles which conserve user balances when executed atomically.

Keywords: Payment Channel Networks · Privacy · Rebalancing

1 Introduction

Cryptocurrencies are increasingly growing as an alternative payment method. By replacing a central trusted authority (e.g., a bank) with a decentralised ledger, i.e., a blockchain, mutually distrusting users now have the means to achieve consensus over transactions. However, achieving consensus on the blockchain is notoriously inefficient. Bitcoin, for instance, can support at most 7 transactions per second on average [24]. This severely limits the scalability of blockchain solutions to everyday situations. Payment channel networks (PCNs) aim to increase the efficiency and scalability of blockchains while maintaining the benefits of security and decentralisation. PCNs operate on top of blockchains, introducing Layer 2 – the blockchain itself being Layer 1. As the name suggests, a PCN consists of several payment channels between pairs of users who wish to transact with each other. Users connected indirectly through a path of channels may route transactions through the network. To open a payment channel, two users create a funding transaction where they lock funds on-chain to be used only in this payment channel. Thereafter, each transaction on the payment channel is simply an exchange of a signed message that depicts the current balances between the two users; it does not involve the blockchain at all. This can go on indefinitely until the users go back to the blockchain to close the channel.
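As an illustration of the channel mechanics just described, the following minimal sketch models off-chain balance updates. It is a simplification under stated assumptions: the class and method names are hypothetical, and real channels involve signed commitment transactions and on-chain contracts that are omitted here.

```python
# Minimal sketch (hypothetical, simplified) of off-chain payment channel updates:
# two users lock funds on-chain once, then exchange signed state updates locally.

class PaymentChannel:
    def __init__(self, funds_u, funds_v):
        # funding transaction: both deposits are locked on-chain
        self.balance_u = funds_u
        self.balance_v = funds_v
        self.version = 0          # each signed update increments the state version

    def pay(self, from_u, amount):
        """Off-chain payment: shift `amount` from one side to the other."""
        if from_u:
            if amount > self.balance_u:
                raise ValueError("insufficient balance")
            self.balance_u -= amount
            self.balance_v += amount
        else:
            if amount > self.balance_v:
                raise ValueError("insufficient balance")
            self.balance_v -= amount
            self.balance_u += amount
        self.version += 1

    def close(self):
        """Closing settles the latest state on-chain; total funds are conserved."""
        return self.balance_u, self.balance_v
```

Note that `pay` only mutates local state: this mirrors the point above that intermediate transactions never touch the blockchain, while `close` is the single settlement step that does.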
Closing a channel optimistically requires a single on-chain transaction, and in the worst case a small constant number of transactions (e.g., in Lightning, closing a payment channel costs at most two on-chain transactions). Thus, with at most three blockchain transactions, any pair of users can in theory make an arbitrary number of costless transactions with each other. A major drawback of payment channels is that users cannot simply “top up” their balance in the channel off-chain once it is depleted. Instead, they have to go on-chain to refund the payment channel. A solution to extend the lifetime of payment channels is rebalancing, which updates payment channels with the crucial condition that the overall balance of each node is unchanged. Although it is not possible to shift funds from one payment channel to another off-chain, the effect of rebalancing is precisely that: funds from well-funded payment channels transfer to depleted ones. There are two predominant approaches to rebalancing. The first involves a local search of rebalancing cycles (i.e., transactions of a fixed amount that begin and end with the same user) initiated by a single user. This is the current rebalancing approach in the Lightning Network [1]. The second approach (introduced in [17]) is global instead of local: nodes looking to rebalance specify a maximum amount of rebalancing flow along each of their channels, and the rebalancing transactions are determined by a global evaluation of the state of the network. A drawback of single-user cycle finding is that it overlooks other rebalancing requests across the network, leading to local solutions. Figure 1 illustrates one such consequence, which we call the “cancelling out” effect. Suppose a user Charlie wants to move 10 coins from his channel with Bob to his channel with Alice. If Charlie utilises the cycle-finding approach, he will only manage to rebalance 6 coins, as depicted in the graph on the right.
The channel between Bob and Alice would be ignored because of the lack of sufficient balance on Bob’s end. In a globally optimal solution, however, the entire rebalancing in the graph on the left can be executed, as it takes into account that transactions in both directions can be above the capacity of a channel, as long as they “cancel out” and the resulting transaction is within the capacity. Furthermore, even after finding rebalancing cycles, users must check whether the other users on the cycle are willing to forward the rebalancing transaction amount. This could lead to a prolonged and laborious search for cycles with willing participants. Lastly, this approach requires users to have global knowledge of the network topology, which can be unrealistic in terms of storage as the size of the network increases. The second approach does not suffer from local limitations such as the cancelling-out effect, and theoretically achieves the globally optimal rebalancing. Revive [17] implemented this method by assigning a random delegate, either a trusted external third party or someone from the set of participants, to receive channel constraints and solve a linear program that models rebalancing. This, however, is a serious privacy loophole, since the delegate now has information on the concerned payment channels. Moreover, the delegate has control over the rebalancing output; for instance, the delegate may compute the rebalancing transactions in a malicious or suboptimal way, favouring some transactions over others. Although the authors proposed a method for any participant to challenge the rebalancing transactions, the process is lengthy and requires giving the challenger access to the balances of all participants. In this work, we present Hide & Seek, the first opt-in rebalancing protocol that is both private and achieves a globally optimal rebalancing. Each party that is interested in rebalancing specifies the maximum amount to be forwarded in each of the party’s channels.
We employ selected delegates that receive the maximum amounts per channel, and calculate and share with each party the exact amount to be moved in each channel. We formulate our problem as a linear program and set our objective function to maximize the total amount of funds to be rebalanced in the network. Our protocol does not involve transaction fees. Furthermore, we leverage multi-party computation to obtain a fully private solution. Specifically, the participants in Hide & Seek only learn the information they would have learned if a trusted third party computed the optimal rebalancing and returned to each participant the amount to be moved along each of their channels. No sensitive information such as the channel balances is leaked. Finally, we guarantee that the rebalancing can be securely and efficiently executed. We propose a simple way to decompose the optimal rebalancing circulation into a set of transaction cycles. As a result, the transactions of each cycle are easy to execute atomically using HTLCs. We note that atomicity is limited to each rebalancing cycle, thereby increasing the protocol’s robustness; any cycle can be executed successfully regardless of the success of other cycles. We highlight the advantages of our approach in Table 1.

1.0.1 Our contributions. We introduce Hide & Seek, the first opt-in privacy-preserving and globally optimal rebalancing protocol that can be implemented in a secure and efficient manner. We acknowledge and discuss its limitations in terms of efficiency. We suggest several practical speed-ups for the deployment of our solution, and outline possible extensions.

2 Preliminaries

2.1 Payment channel networks

Users $u,v$ can open a payment channel between each other by locking some of their funds to be used only in this channel: if $u$ locks $a$ units and $v$ locks $b$ units, the state of the channel from $u$ to $v$ is modeled as a real number $\text{balance}(u,v)\in[-b,a]$, initialized as $0$.
The capacity of the channel refers to the sum $a+b$ of these funds. Once the channel is created, both users can send each other money by updating the channel balances in favour of the other party, as long as the state remains within the interval $[-b,a]$. Users who are not directly connected by a channel in a PCN can still transact with each other if they are connected by a path of payment channels. The users along the transaction path which are not the sender or receiver typically charge a fee for forwarding the transaction that depends on the transaction amount. For a transaction to be successful, the sender first has to send enough money to cover both the desired payment amount and all the fees charged by each user on the payment path. That is, suppose user $s$ wants to send $x$ coins to user $r$ along a payment path $p=\{(s,u_{1}),...,(u_{k},r)\}$. Then $s$ must send $x+\sum_{i=1}^{k}\text{fee}(u_{i})$. Secondly, the balance of each user along the path must be large enough to forward the payment amount together with fees: for each user $u_{i}$ on $p$, $\text{balance}(u_{i},u_{i+1})\geq x+\sum_{j=i+1}^{k}\text{fee}(u_{j})$. Although the channel capacities are typically public information, the individual balances on each end are private; so senders typically have to try different payment paths until one of them succeeds. A desired guarantee for routing a payment through a path in a PCN is atomicity, i.e., for all users along the path, either all of them update their balances or none of the balances in the path get updated. This is enforced in the Lightning Network using HTLCs [16]. An HTLC ($HTLC(u,v,x,h,t)$) is a smart contract between any two users $u$ and $v$ that locks some amount of coins $x$ using a hash output $h$ and a timelock $t$. To get the locked funds, $v$ has to produce the preimage $r$ to the hash $h=H(r)$ within time $t$, upon which the locked funds will be released to $v$.
If $v$ cannot do so within the time limit, $u$ can claim the locked funds. Payment-path atomicity is enforced using HTLCs for each channel on the path with the same hash value (determined using a secret chosen by the receiver on the path), but with decreasing timelock values from sender to receiver to guarantee the security of funds.

2.2 Network flows

Consider a directed graph $G=(V,E)$ and the associated $\lvert{E}\rvert$-dimensional Euclidean space of non-negative flows along each edge. A circulation is a flow $\mathbf{f}=(f(u,v))_{(u,v)\in E}$ such that the net flow through each vertex is zero: $\sum\limits_{v\in V}f(u,v)=\sum\limits_{v\in V}f(v,u),\ \forall u\in V$. Two circulations $\mathbf{f}_{1},\mathbf{f}_{2}$ can be added to get yet another circulation: $\mathbf{f}_{1}+\mathbf{f}_{2}=(f_{1}(u,v)+f_{2}(u,v))_{(u,v)\in E}$. A cycle is a sequence of vertices $v_{1},v_{2},\ldots,v_{k}$ such that $(v_{i},v_{i+1})\in E,\ \forall 1\leq i\leq k-1$, and $(v_{k},v_{1})\in E$ as well. We may equivalently refer to this cycle as $(e_{1},e_{2},\ldots,e_{k})$, where $e_{i}=(v_{i},v_{i+1}),\ \forall 1\leq i\leq k-1$, and $e_{k}=(v_{k},v_{1})$. We call $k$ the length of this cycle. A cycle flow $\mathbf{f}$ of weight $w$ on cycle $C$ is a circulation where $f(e)=w,\ \forall e\in C$, and $f(e)=0$ otherwise. A standard result of network flow theory is that any circulation may be expressed as a sum of at most $\lvert{E}\rvert$ cycle flows. We refer the reader to the textbook of Ahuja, Magnanti, and Orlin [3] for a detailed treatment.

3 Protocol Overview and Model

3.1 System model

Payment network topology. We model the PCN as a graph $\tilde{G}=(\tilde{V},\tilde{E})$, with a vertex for each node and an edge between $u$ and $v$ if there is a payment channel between them. Let $V\subset\tilde{V}$ be the users in the PCN that are interested in rebalancing and let $G=(V,E)$ be the subgraph of $\tilde{G}$ induced by $V$. We denote $\lvert{V}\rvert=n$, $\lvert{E}\rvert=m$.
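The circulation and cycle-flow notions from Section 2.2 translate directly into code. The following sketch is our own illustration (function names are hypothetical); it represents a flow as a dictionary from directed edges to non-negative amounts:

```python
# Illustrative sketch of the network-flow notions above: a flow maps directed
# edges (u, v) to non-negative amounts; a circulation has zero net flow at
# every vertex; a cycle flow puts weight w on each edge of a cycle and 0
# elsewhere; and the sum of two circulations is again a circulation.

from collections import defaultdict

def is_circulation(flow):
    net = defaultdict(float)
    for (u, v), f in flow.items():
        net[u] -= f     # outgoing flow
        net[v] += f     # incoming flow
    return all(abs(x) < 1e-9 for x in net.values())

def cycle_flow(cycle, w):
    """Cycle given as a vertex sequence [v1, ..., vk]; the last edge wraps."""
    return {(cycle[i], cycle[(i + 1) % len(cycle)]): w for i in range(len(cycle))}

def add_flows(f1, f2):
    out = defaultdict(float)
    for f in (f1, f2):
        for e, x in f.items():
            out[e] += x
    return dict(out)
```

For example, `add_flows(cycle_flow(["b", "c", "d"], 1), cycle_flow(["a", "b", "c"], 2))` is again a circulation, matching the closure-under-addition property stated above.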
We assume each user $u$ has only local knowledge of the PCN topology, i.e., only knows the capacities and balances on the edges incident to $u$.

Cryptographic assumptions. We assume the existence of secure communication channels, hash functions, and signatures. We follow [11] and assume the concept of an arithmetic black box for MPC, $\mathcal{F}_{ABB}$, in particular with functionalities like secret sharing, storage, retrieval, addition, multiplication, and comparisons.

Blockchain & network model. We assume a synchronous network, i.e., there is a known bound on the message delay. We further assume the underlying blockchain satisfies persistence and liveness as defined in [14].

3.2 Protocol overview

In a nutshell, our proposed protocol Hide & Seek consists of two phases: an exploration phase and an execution phase. The goal of the exploration phase is to discover rebalancing cycles privately and efficiently, while the goal of the execution phase is to guarantee that the rebalancing transactions are executed in a secure manner. At the same time, we want to maximise the efficacy of our protocol, that is, we want as many rebalancing cycles to go through as possible.

Exploration phase. The exploration phase first formulates the rebalancing problem as a linear program. Then we randomly select $k$ delegates out of the participants to perform an MPC protocol to jointly solve the linear program. Next, any set of participants that wish to participate in the rebalancing protocol prepare the shared inputs to the delegates. The output of the exploration phase is a rebalancing circulation.

Execution phase. We first efficiently decompose the rebalancing circulation output of the exploration phase into a set of cycle flows. These cycle flows have the property that they are sign-consistent, i.e., they are consistent with the direction of the flows in the rebalancing circulation.
This makes executing these cycles incentive-compatible, as no user would have to execute transactions which violate their specified rebalancing capacity and direction along channels. Once this is done, we enforce atomicity of these cycles by creating an HTLC for each cycle, which ensures that either all transactions along the cycle go through or none at all.

3.3 Desired properties & threat model

In general, we assume a computationally bounded adversary, i.e., one that runs in probabilistic polynomial time. The properties Hide & Seek should guarantee are the following:

1. Balance conservation (security): The total balance of each node, which is the sum of the node’s balances on each incident channel, must remain the same before and after Hide & Seek, even when all other participants are corrupted by the adversary.

2. Privacy: The information revealed during Hide & Seek should not exceed the minimum required to execute rebalancing: (a) the participants must only learn the transaction amounts for each of their payment channels; (b) the delegates of the MPC should not be able to determine private financial information of the participants. Both (a) and (b) should hold as long as at least one of the delegates is not corrupted.

3. Optimality (completeness): Assuming every participant is honest, the result should be optimal in that no other rebalancing yields a greater total change over all payment channels.

4 The Hide & Seek Protocol

4.1 Exploration phase

4.1.1 Linear programming for rebalancing. The practical problem of rebalancing has many facets, including keeping participants’ financial information private and facilitating coordination. We overlook these considerations momentarily to present the underlying optimization problem of rebalancing. For a payment channel between $u$ and $v$, the users would like to move the state $\text{balance}(u,v)$ towards a desired state $\text{balance}^{*}(u,v)$.
If $\text{balance}^{*}(u,v)>\text{balance}(u,v)$, then rebalancing would involve $u$ transferring funds to $v$, and we model this as a directed edge from $u$ to $v$ with capacity $m(u,v):=\text{balance}^{*}(u,v)-\text{balance}(u,v)$. If $\text{balance}^{*}(u,v)<\text{balance}(u,v)$, then there is a directed edge from $v$ to $u$ with capacity $m(v,u):=\text{balance}(u,v)-\text{balance}^{*}(u,v)$. Thus the graph $G$ is transformed into a directed weighted graph. The capacities $m(u,v)$ represent the most flow that can occur through each channel during rebalancing. If $m(u,v)=0$, the edge from $u$ to $v$ is either non-existent or, equivalently, a zero-capacity edge. We also enforce that if $m(u,v)>0$, then necessarily $m(v,u)=0$. Let us denote a potential rebalancing by $\mathbf{f}\in\mathbb{R}^{\lvert{E}\rvert}$ on this directed graph, where $f(u,v)$ denotes the flow from $u$ to $v$. Since rebalancing should not result in a net financial gain or loss for any participant, we require $\mathbf{f}$ to be a circulation. Recall that this means the net flow through each vertex is zero: $$\sum\limits_{v:(u,v)\in E}f(u,v)=\sum\limits_{v:(v,u)\in E}f(v,u).$$ Not only must the flows be non-negative, but they must also satisfy the capacity constraints as specified by participants: $$0\leq f(u,v)\leq m(u,v).$$ Thus, the set of valid rebalancings is a polytope in $m$-dimensional Euclidean space defined by $n+2m$ linear constraints: $n$ zero-flow constraints, one for each vertex, and $m$ pairs of flow-capacity constraints, one pair for each edge. We wish to compute a rebalancing that maximizes the linear objective $\sum\limits_{(u,v)\in E}f(u,v)$. We call the linear program so specified the rebalancing problem. This choice of objective function amounts to maximizing the total change in each payment channel’s balance towards its desired state.

4.1.2 Solving the rebalancing problem.
One can apply any linear programming algorithm of preference to solve the rebalancing problem, such as any from the family of simplex methods. In fact, the rebalancing problem can be reduced to the min-cost flow problem, a specialization of linear programming which can be solved more easily. For instance, the min-cost flow problem admits a strongly polynomial algorithm, whereas the corresponding question for linear programming is a major open problem in the field. Appendix 0.A illustrates how the rebalancing problem is equivalent to a min-cost flow problem with the same number of vertices and edges. Henceforth, we refer to the rebalancing problem as a min-cost flow problem.

4.1.3 Delegate selection and multi-party computation. Delegate selection can be done using a simple version of cryptographic sortition, as in [15]. Each of the $k$ delegates involved in the MPC gets $n+2m$ inputs, which are shares of each of the $n$ participants’ zero-flow constraints and the $2m$ rebalancing-capacity constraints along the $m$ directed edges (for each edge we have two constraints: one which specifies the maximum rebalancing flow in one direction, and another which specifies that the flow has to be $0$ in the other direction). The objective function is also shared and given as an input to the delegates. The delegates jointly compute the optimal solution to the rebalancing LP and each delegate outputs a share of the final flow on each edge at the end of the protocol.

4.2 Execution phase

4.2.1 Cycle decomposition. The exploration phase concludes with a solution to the rebalancing linear program obtained through multi-party computation. This solution $\mathbf{f}^{*}$ is in fact encoded as shared secrets and, as observed in [29], one can process the solution further before returning it to individual participants. Instead of directly sending each $f^{*}(u,v)$ to $u$, we decompose the circulation into a sum of cycle flows.
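To make the decomposition step concrete, here is a hedged sketch in the spirit of the paper's cycle-decomposition algorithm (not its exact pseudocode; it uses a simple forward walk rather than an explicit depth-first search): repeatedly follow edges with positive residual flow until a vertex repeats, peel off that cycle at its bottleneck weight, and continue until no flow remains.

```python
# Sketch (illustrative, our own variant) of cycle decomposition: the input must
# be a circulation (zero net flow at every vertex), which guarantees the
# forward walk below always has an outgoing edge to follow.

def decompose_circulation(flow):
    residual = {e: f for e, f in flow.items() if f > 0}   # work on a copy
    cycles = []
    while residual:
        # start a walk at any vertex that still has outgoing flow
        start = next(iter(residual))[0]
        path, seen = [start], {start: 0}
        while True:
            u = path[-1]
            v = next(v2 for (u2, v2) in residual if u2 == u)
            if v in seen:                 # vertex repeats: cycle found
                cyc = path[seen[v]:]
                break
            seen[v] = len(path)
            path.append(v)
        edges = [(cyc[i], cyc[(i + 1) % len(cyc)]) for i in range(len(cyc))]
        w = min(residual[e] for e in edges)   # bottleneck weight of the cycle
        cycles.append((cyc, w))
        for e in edges:                       # subtract the cycle flow
            residual[e] -= w
            if residual[e] == 0:
                del residual[e]               # at least one edge vanishes
    return cycles
```

Each iteration removes at least one edge, matching the bound that a circulation decomposes into at most $\lvert E\rvert$ cycle flows.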
This makes the execution of rebalancing via HTLCs easier; instead of the entire network committing their funds to a large atomic rebalancing transaction, each cycle only requires coordination between the nodes constituting the cycle. As mentioned earlier, each circulation can be expressed as a sum of cycle flows. We briefly describe a standard algorithm to compute this decomposition efficiently. Algorithm 1 uses depth-first search as a subroutine to detect cycles and then induce cycle flows on them. Figure 2 depicts a circulation and its decomposition into cycle flows.

4.2.2 HTLC commitments per cycle. Given such a decomposition, we need to enforce atomicity of each cycle flow by creating an HTLC for each cycle $c$ in the set. This can be done by first selecting a user in each cycle at random to initiate the cycle. This user has to choose a random secret $r_{c}$ from some domain $\mathcal{X}$ and create a hash of the secret, $h_{c}=H(r_{c})$. The timelock for the initiator of the cycle and the next user is set equal to the length of the cycle. The transaction amount to send along each cycle is the weight of the cycle, $w_{c}$. Every subsequent user in the cycle decrements the timelock value by 1 and looks up the next user in the cycle they should create an HTLC with (determined by the vertex order in the cycle). They then create an HTLC with that user with the decremented timelock value (lines 5-7 in Algorithm 2). Finally, we note that Algorithm 1 and Algorithm 2 can be computed privately using MPC. To prevent any two users on a cycle $c$ from sharing their hash $h_{c}$ with each other and thus finding out they are in the same cycle, one can use MAPPCN [30] to preserve user anonymity.

5 Analysis

5.0.1 Desired Properties. An execution of Hide & Seek satisfies the desired properties as stated in Section 3.3. Let us study each of the properties in order.

Balance Conservation.
Suppose there is a node $v$ that enjoys a net financial gain through the execution of Hide & Seek under a malicious adversary. Hide & Seek specifies a set of cycle flows that the nodes may execute, and $v$ must have participated in some subset of these. Note that by the atomicity of cycle flows ensured through HTLCs, it is not possible for a cycle to be executed partially (even when parties act maliciously). If $v$’s balance increased, there must be at least one cycle flow with net positive flow through $v$. But this contradicts the definition of cycle flows, since they must satisfy zero net flow through each node: $$\sum\limits_{(u,v)\in E}f(u,v)=\sum\limits_{(v,u)\in E}f(v,u)\quad\forall v\in V.$$

Privacy. The sensitive data used in the exploration phase of Hide & Seek remains private as long as at least one delegate of the MPC is honest (a guarantee inherited from the MPC). In the execution phase, users do not know the other users in their cycle except their predecessors and successors, as we use MAPPCN to preserve user anonymity.

Optimality. Assuming the delegates compute the solution correctly, the circulation returned by the min-cost flow algorithm maximizes the total flow over all edges. Under the same assumption, the cycle decomposition algorithm results in an equivalent (and thus also optimal) set of cycle flows.

5.0.2 Efficiency. We break the analysis of the efficiency of Hide & Seek into three parts: (1) solving the rebalancing problem, (2) cycle decomposition, and (3) MPC.

Solving the rebalancing problem. Solving the underlying min-cost flow problem is the most computationally intensive part of Hide & Seek. Fortunately, we can leverage the vast body of algorithms for this problem, asymptotically optimal in different parameter regimes. The complexity of these algorithms is analyzed in terms of $n$, $m$, the largest capacity $U$, and the largest cost $W$ of an edge. We may presently ignore the term $W$, as each edge has identical cost $1$.
For the parameter regime of rebalancing, we recommend the double scaling algorithm of Ahuja, Goldberg, Orlin and Tarjan, which computes the optimal solution in time $O(nm\log nW\log\log U)$ ([4]). An alternative is to use a network simplex algorithm. This family of algorithms is excellent in practice, although theoretical analysis of their effectiveness is an active area of research in optimization. Simplex algorithms are also remarkably simple, and for this reason they have been recommended in Toft’s framework for privately solving linear programs ([29]), despite their somewhat poorer theoretical guarantees. We recommend the network simplex algorithm of Orlin ([22]) for the rebalancing problem, which terminates in at most $O(nm\log n)$ pivots. Generally, the amortized cost per pivot is $O(n)$, but Orlin presents a modification with total runtime $O(nm\log n\log nW)$. Cycle decomposition. If the rebalancing circulation obtained by solving the min-cost flow problem contains $n^{\prime}$ vertices and $m^{\prime}$ edges, then the cycle decomposition algorithm as detailed in Algorithm 1 terminates in $O(n^{\prime}m^{\prime})$ time, which is $O(nm)$ at worst: every loop iteration removes at least one edge, and each iteration visits at most $n^{\prime}$ vertices before finding a cycle. The pre-processing of $G$ to obtain the subgraph induced by the circulation takes $O(n+m)$ time. Also note that the timelocks used in the execution of a cycle flow are bounded by the length of the cycle. MPC. Although MPC implementations of optimization algorithms incur a penalty in speed, there are multiple ways to speed up the implementation of Hide & Seek. Firstly, the rebalancing problem, much like many other min-cost flow problems, satisfies the Hoffman-Gale conditions: the optimal solution, along with the vertices of the polytope, is guaranteed to be integral. This means the MPC can be performed over faster integer arithmetic rather than slower floating point arithmetic.
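Because the optimum is integral, amounts can be encoded as small fixed-width integers. A sketch of such an encoding follows; the 1024-satoshi granularity and 20-bit width are the illustrative Bitcoin figures used in this section, and the helper name is our own:

```python
GRANULARITY = 2 ** 10                    # rebalance in multiples of 1024 satoshis
BITS = 20                                # bits per MPC variable
MAX_SATOSHIS = GRANULARITY * 2 ** BITS   # largest representable amount: 2**30

def to_units(satoshis):
    """Quantize an amount to the 20-bit integer the MPC operates on,
    rounding down to the granularity."""
    units = satoshis // GRANULARITY
    if units >= 2 ** BITS:
        raise ValueError("amount exceeds the representable range")
    return units

# 2**30 satoshis is roughly 10 BTC (1 BTC = 10**8 satoshis)
```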
Hide & Seek can be implemented even faster by reducing the number of bits per variable. This quantity is governed by the maximum capacity per edge as well as the granularity of rebalancing, so that the number of bits required depends on the specific cryptocurrency. For instance, an implementation of Hide & Seek for Bitcoin with just $20$ bits per variable may restrict all quantities to multiples of $2^{10}=1024$ satoshis up to $2^{30}$ satoshis, which is approximately $10$ bitcoins. The number of delegates chosen to compute the MPC also contributes to the communication cost during rounds, and here we note that Hide & Seek does not place any limitations on this number. In fact, it can be as low as two delegates as long as one of them is honest. Finally, the efficiency of our protocol inherently depends on the MPC primitives used. This is a wide and active area of research, with many recent developments in efficient MPC primitives [10, 5, 9, 8]. 6 Limitations and Extensions In this section, we identify the limitations of our protocol and discuss possible extensions. 6.0.1 Rational participants. Participants in financial networks such as PCNs typically act selfishly, aiming to increase their financial gain. As a result, an interesting future study is the security of our scheme under rational participants. In the execution phase, the cycle decomposition ensures that participants always gain from executing a cycle because the cycles are sign-consistent. Nevertheless, HTLCs have been proven vulnerable to attacks where participants collude and act for-profit [20]. Regarding the exploration phase, it has been shown that when participants are rational (with respect to privacy), MPC is possible using randomized mechanisms with constant expected running time [2]. 6.0.2 Weighted LP. The linear program of the rebalancing problem currently maximizes the total flow summed over all edges of the network.
This is but an approximation of the practical objective, since in practice, flows through distinct edges are not necessarily equally important. A more accurate model of rebalancing involves modifying the objective function from $\sum_{(u,v)\in E}f(u,v)$ to $\sum_{(u,v)\in E}w(u,v)f(u,v)$ for non-negative integral weights $w(u,v)$ supplied by $u$ via secret sharing. Let $W$ be the maximum possible weight that participants may specify. This slight modification greatly enlarges the expressive power of participants, as they can now express local preferences of one cycle over another. For instance, suppose a user $u$ with one outgoing edge $e_{0}$ and three incoming edges $e_{1},e_{2},e_{3}$ wishes to rebalance $e_{0}$ urgently. $u$ considers rebalancing along $e_{1}$ favorable but not urgent, is indifferent to rebalancing along $e_{2}$, and does not permit any flow through $e_{3}$. Knowing that outgoing flow through $e_{0}$ must be balanced by equal incoming flow, $u$ may assign a weight $w(e_{2})=0$ to allow for flow through $e_{2}$ and then $e_{0}$ in order to rebalance $e_{0}$. This edge preference can be expressed by the weights $$w(e_{0})=W,\qquad w(e_{1})=1,\qquad w(e_{2})=0,$$ and by not including $e_{3}$ in the protocol at all. The desired properties of Hide & Seek continue to hold after this modification. In terms of efficiency, the double scaling algorithm that we use runs in $O(nm\log nW\log\log U)$ time rather than $O(nm\log n\log\log U)$ [4]. The major drawback of this modification is game theoretic: although incorporating preferences is straightforward when users faithfully follow the protocol, it breaks under the assumption of rational participants. In particular, misreporting the weight of every edge as the maximum $W$ is a dominant strategy, since that assigns the highest possible weight to every cycle that a user is part of. This collapses the modification back to the original case of maximizing $\sum\limits_{(u,v)\in E}f(u,v)$.
An improved design of this mechanism, such as a clever budgeting of weights, could circumvent this problem, manage individual users’ incentives, and let the weighted LP extension be used practically. 6.0.3 Optimality with corrupted participants. Participants’ sensitive financial data, such as the existence of a payment channel and its capacity for rebalancing, is not verified in the protocol, nor does our threat model consider falsification of this data with respect to optimality. Unfortunately, this lack of verification can prevent any rebalancing from occurring: an adversary with knowledge of the payment channel network can falsify edge data so that each cycle passes through one of their edges. The adversary can then refuse to participate in the execution phase and prevent others from rebalancing, even when cycle flows between honest parties exist. To defend against such an adversary, we propose that parties submit zero-knowledge proofs of validity along with their edge constraint data. Although one cannot force participants to participate in rebalancing cycles, this modification certainly increases the success rate of rebalancing cycles in Hide & Seek even under an active adversary. 7 Related Work Rebalancing PCNs. There are several payment channel primitives proposed in the literature [28, 24, 12, 6, 7, 21]. Regardless of the primitive, a challenge all PCNs share is how to route transactions in the PCN while maintaining balanced channels for as long as possible. Classic routing studies in PCNs like SilentWhispers [19], SpeedyMurmurs [26], and others [25] ignore that channels may be slowly depleting. A promising approach to avoid channel depletion and prolong the network availability for transaction routing is to maintain balanced channels or occasionally perform rebalancing. But transaction routing is a challenging task on its own because the channel balances remain secret for privacy purposes [17, 27, 31], let alone with the added goal of avoiding channel depletion.
Khalil and Gervais introduce the first channel rebalancing protocol, called Revive [17]. They formulate the problem as an LP, similarly to our work. Then, a delegate is elected to solve the LP and return the solution to the rebalancing participants. Although our work lies close to Revive, it also differs in several aspects. First, while Revive treats rebalancing as a generic LP, Hide & Seek employs faster and more specialized min-cost flow algorithms. Second, Revive relies on a single delegate to compute the optimal rebalancing, which leaks private information about balances to the delegate. In contrast, Hide & Seek uses MPC to achieve full privacy guarantees. Since Hide & Seek uses MPC, the speeds of the two protocols cannot be compared directly; we nevertheless expect Revive to also benefit from using our min-cost flow framework. Finally, atomic execution of the rebalancing transactions in Revive requires the transaction language of the underlying blockchain to be Turing-complete, and thus it is not suitable for Bitcoin. Hide & Seek avoids this issue by first decomposing the optimal rebalancing into cycles, and then executing these cycles atomically using HTLCs. The cycle decomposition in Hide & Seek also ensures that, as long as a channel is not part of every cycle, some rebalancing can still occur even if individual HTLCs on that channel fail. From a practical perspective, rebalancing in the Lightning Network currently utilises a brute-force search for rebalancing cycles with sufficient capacity. An automated approach based on an imbalance measure was proposed in [23]. Unlike Hide & Seek, these methods do not leverage other rebalancing requests to find the globally optimal rebalancing. These methods also require nodes to have global knowledge of the network, whereas nodes in Hide & Seek only need local knowledge of the PCN. Recently, some works have introduced routing protocols that attempt to maintain balanced channels.
In particular, Spider [27] is a payment routing algorithm that maximizes the throughput while maintaining the original channel balances, without, however, providing rebalancing. Li et al. [18] propose to extend the lifetime of payment channels by estimating payment demand, and using this estimate to decide on the initial balance of channels. Engelshoven and Roos [13], on the other hand, leverage routing fees to incentivize the balanced use of payment channels. All these works are orthogonal and complementary to ours, as we introduce an opt-in rebalancing protocol. Network flows and MPC. The general problem of solving network flow problems via multi-party computation is considered in the comprehensive PhD thesis of Aly [5], where various privacy-preserving implementations of combinatorial optimization problems are presented. The author acknowledges that the cost of privacy is very high even for the simplest of problems: roughly speaking, the MPC implementations must iterate for the theoretical worst-case number of iterations to maintain privacy. For the practical problem of rebalancing, though, we choose not to implement these extra iterations: we believe that suboptimal rebalancing is better than no rebalancing, and recommend terminating the min-cost flow computation prematurely if needed. Both scaling algorithms and network simplex algorithms monotonically generate better solutions in each iteration, leaving the participants with a feasible solution if they stop early. 8 Conclusion and Future Work In this work we study the rebalancing problem for PCNs. We present Hide & Seek, a secure and private opt-in rebalancing protocol that finds the globally optimal rebalancing. Hide & Seek achieves better efficiency by reducing the rebalancing problem to a min-cost flow problem.
Hide & Seek also achieves better robustness by decomposing the solution into cycles and executing each cycle atomically, as opposed to executing the entire solution atomically. An interesting direction for future work is to consider the transaction aggregation problem, which is similar to rebalancing but without the balance conservation property (for instance Alice’s balance is not conserved if Alice wants to pay Bob 2 coins for a coffee). The main difficulty with transaction aggregation comes from the constraint that transactions may not be executed partially. In other words, where the optimization underlying rebalancing is a linear program (solvable in polynomial time), the problem underlying transaction aggregation is an integer program (NP-complete in general). References [1] Rebalance plugin. https://github.com/lightningd/plugins/tree/master/rebalance [2] Abraham, I., Dolev, D., Gonen, R., Halpern, J.Y.: Distributed computing meets game theory: robust mechanisms for rational secret sharing and multiparty computation. In: PODC (2006), https://doi.org/10.1145/1146381.1146393 [3] Ahuja, R., Magnanti, T., Orlin, J.: Network flows - theory, algorithms and applications (1993) [4] Ahuja, R., Goldberg, A., Orlin, J., Tarjan, R.: Finding minimum-cost flows by double scaling. Math. Program. 53, 243–266 (02 1992). https://doi.org/10.1007/BF01585705 [5] Aly, A.: Network flow problems with secure multiparty computation (2015) [6] Avarikioti, Z., Kogias, E.K., Wattenhofer, R., Zindros, D.: Brick: Asynchronous incentive-compatible payment channels. In: FC (2021), https://fc21.ifca.ai/papers/168.pdf [7] Avarikioti, Z., Litos, O.S.T., Wattenhofer, R.: Cerberus channels: Incentivizing watchtowers for bitcoin. In: FC (2020), 10.1007/978-3-030-51280-4_19 [8] Baum, C., Orsini, E., Scholl, P., Soria-vazquez, E.: Efficient constant-round mpc with identifiable abort and public verifiability. 
Springer-Verlag (2020) [9] Catrina, O., de Hoogh, S.: Secure multiparty linear programming using fixed-point arithmetic. In: Gritzalis, D., Preneel, B., Theoharidou, M. (eds.) Computer Security – ESORICS 2010. pp. 134–150. Springer Berlin Heidelberg, Berlin, Heidelberg (2010) [10] Cramer, R., Fehr, S., Ishai, Y., Kushilevitz, E.: Efficient multi-party computation over rings. In: Biham, E. (ed.) Advances in Cryptology - EUROCRYPT 2003, International Conference on the Theory and Applications of Cryptographic Techniques, Warsaw, Poland, May 4-8, 2003, Proceedings. Lecture Notes in Computer Science, vol. 2656, pp. 596–613. Springer (2003). https://doi.org/10.1007/3-540-39200-9_37, https://doi.org/10.1007/3-540-39200-9_37 [11] Damgård, I., Nielsen, J.B.: Universally composable efficient multiparty computation from threshold homomorphic encryption. In: Boneh, D. (ed.) Advances in Cryptology - CRYPTO 2003, 23rd Annual International Cryptology Conference, Santa Barbara, California, USA, August 17-21, 2003, Proceedings. Lecture Notes in Computer Science, vol. 2729, pp. 247–264. Springer (2003). https://doi.org/10.1007/978-3-540-45146-4_15, https://doi.org/10.1007/978-3-540-45146-4_15 [12] Decker, C., Wattenhofer, R.: A fast and scalable payment network with bitcoin duplex micropayment channels. In: Stabilization, Safety, and Security of Distributed Systems (2015), 10.1007/978-3-319-21741-3_1 [13] van Engelshoven, Y., Roos, S.: The merchant: Avoiding payment channel depletion through incentives. CoRR abs/2012.10280 (2020), https://arxiv.org/abs/2012.10280 [14] Garay, J., Kiayias, A., Leonardos, N.: The bitcoin backbone protocol: Analysis and applications. In: Eurocrypt (2015), 10.1007/978-3-662-46803-6_10 [15] Gilad, Y., Hemo, R., Micali, S., Vlachos, G., Zeldovich, N.: Algorand: Scaling byzantine agreements for cryptocurrencies. In: SOSP (2017), 10.1145/3132747.3132757 [16] Joseph Poon, T.D.: The bitcoin lightning network: Scalable off-chain instant payments. Tech. 
rep., https://lightning.network/lightning-network-paper.pdf [17] Khalil, R., Gervais, A.: Revive: Rebalancing off-blockchain payment networks. In: CCS (2017), 10.1145/3133956.3134033 [18] Li, P., Miyazaki, T., Zhou, W.: Secure balance planning of off-blockchain payment channel networks. In: IEEE INFOCOM 2020 - IEEE Conference on Computer Communications. pp. 1728–1737 (2020). https://doi.org/10.1109/INFOCOM41043.2020.9155375 [19] Malavolta, G., Moreno-Sanchez, P., Kate, A., Maffei, M.: Silentwhispers: Enforcing security and privacy in decentralized credit networks. In: NDSS (2017), 10.14722/ndss.2017.23448 [20] Malavolta, G., Moreno-Sanchez, P., Schneidewind, C., Kate, A., Maffei, M.: Anonymous multi-hop locks for blockchain scalability and interoperability. In: NDSS (2019), 10.14722/ndss.2019.23330 [21] Miller, A., Bentov, I., Bakshi, S., Kumaresan, R., McCorry, P.: Sprites and state channels: Payment networks that go faster than lightning. In: FC (2019), 10.1007/978-3-030-32101-7_30 [22] Orlin, J.: A polynomial time primal network simplex algorithm for minimum cost flows. Math. Prog. 78, 109–129 (01 1996). https://doi.org/10.1007/BF02614365 [23] Pickhardt, R., Nowostawski, M.: Imbalance measure and proactive channel rebalancing algorithm for the lightning network. In: IEEE International Conference on Blockchain and Cryptocurrency, ICBC 2020, Toronto, ON, Canada, May 2-6, 2020. pp. 1–5. IEEE (2020). https://doi.org/10.1109/ICBC48266.2020.9169456, https://doi.org/10.1109/ICBC48266.2020.9169456 [24] Poon, J., Dryja, T.: The bitcoin lightning network: Scalable off-chain instant payments. https://lightning.network/lightning-network-paper.pdf (2015) [25] Prihodko, P., Zhigulin, S., Sahno, M., Ostrovskiy, A., Osuntokun, O.: Flare: An approach to routing in lightning network. shorturl.at/adrHP (2016) [26] Roos, S., Moreno-Sanchez, P., Kate, A., Goldberg, I.: Settling payments fast and private: Efficient decentralized routing for path-based transactions. 
arXiv preprint arXiv:1709.05748 (2017) [27] Sivaraman, V., Venkatakrishnan, S.B., Ruan, K., Negi, P., Yang, L., Mittal, R., Fanti, G., Alizadeh, M.: High throughput cryptocurrency routing in payment channel networks. In: 17th $\{$USENIX$\}$ Symposium on Networked Systems Design and Implementation ($\{$NSDI$\}$ 20). pp. 777–796 (2020) [28] Spilman, J.: Anti dos for tx replacement. https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2013-April/002433.html, accessed: 2020-11-22 [29] Toft, T.: Solving linear programs using multiparty computation. In: Dingledine, R., Golle, P. (eds.) Financial Cryptography and Data Security, 13th International Conference, FC 2009, Accra Beach, Barbados, February 23-26, 2009. Revised Selected Papers. Lecture Notes in Computer Science, vol. 5628, pp. 90–107. Springer (2009). https://doi.org/10.1007/978-3-642-03549-4_6, https://doi.org/10.1007/978-3-642-03549-4_6 [30] Tripathy, S., Mohanty, S.K.: Mappcn: Multi-hop anonymous and privacy-preserving payment channel network. In: International Conference on Financial Cryptography and Data Security. pp. 481–495. Springer (2020) [31] Yu, R., Xue, G., Kilari, V.T., Yang, D., Tang, J.: Coinexpress: A fast payment routing mechanism in blockchain-based payment channel networks. In: 2018 27th International Conference on Computer Communication and Networks (ICCCN). pp. 1–9. IEEE (2018) Appendix 0.A Reduction of The Rebalancing Problem to min-cost Flow Recall that the rebalancing problem consists of finding a circulation on a directed graph with maximum flow while also satisfying the capacity constraints. The related well-studied problem of min-cost circulation provides a cost to each edge as well as lower and upper bounds on the flow through each edge. Rebalancing can thus be seen as a circulation problem with negative costs with flow bounds given by $0$ and capacity $m(u,v)$. Below, we provide a short reduction to the more fundamental min-cost flow problem on the same graph. 
The reduction is a simple change of variables: define $\mathbf{f}^{\prime}\in\mathbb{R}^{m}$ as $f^{\prime}(v,u):=m(u,v)-f(u,v)$. Consider the reversed graph $G^{\prime}=(V,E^{\prime})$ where all directed edges from $G$ are reversed $E^{\prime}=\{{e^{\prime}=(v,u):(u,v)\in E}\}$. Rebalancing on $G$ is equivalent to a min-cost flow problem on $G^{\prime}$. The constraints $0\leq f(u,v)\leq m(u,v)$ transform into $0\leq f^{\prime}(v,u)\leq m^{\prime}(v,u)=m(u,v)$. Finally, the zero flow constraints from the rebalancing problem $$\sum\limits_{(u,v)\in E}f(u,v)-\sum\limits_{(v,u)\in E}f(v,u)=0$$ transform into $$\sum\limits_{(v,u)\in E^{\prime}}m(u,v)-f^{\prime}(v,u)-\sum\limits_{(u,v)\in E^{\prime}}m(v,u)-f^{\prime}(u,v)=0,$$ or, $$\sum\limits_{(u,v)\in E^{\prime}}f^{\prime}(u,v)-\sum\limits_{(v,u)\in E^{\prime}}f^{\prime}(v,u)=\sum\limits_{(u,v)\in E}m(u,v)-\sum\limits_{(v,u)\in E}m(v,u)$$ In other words, the sources and sinks can be defined by whether $\sum\limits_{(u,v)\in E}m(u,v)-\sum\limits_{(v,u)\in E}m(v,u)$ is positive or negative. By a standard technique, we can further reduce the problem to that containing a single source and single sink by appending so-called “super-source and super-sink” to $G^{\prime}$. Finally, we need to specify the cost to complete the problem description: if the objective of rebalancing is to maximise $\sum\limits_{(u,v)\in E}c(u,v)f(u,v)$ then we specify the min-cost flow problem to minimize $\sum\limits_{(u,v)\in E^{\prime}}c(v,u)f^{\prime}(u,v)$. In this way, not only are the feasible regions of both problems equivalent by the described change of variables, but so are the optimum solutions.
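A small numeric sketch of this change of variables (the toy graph and function names are ours) confirms that the transformed flow respects the new bounds and that each node's excess in $G^{\prime}$ equals its in-capacity minus out-capacity in $G$:

```python
def reduce_to_min_cost_flow(capacity, flow):
    """Apply the change of variables f'(v, u) = m(u, v) - f(u, v) on the
    reversed graph G', and return (f', per-node excess in G').
    A positive excess marks a source, a negative excess a sink."""
    fprime = {(v, u): capacity[(u, v)] - flow[(u, v)] for (u, v) in capacity}
    excess = {}
    for (u, v), fp in fprime.items():
        excess[u] = excess.get(u, 0) + fp   # outgoing flow in G'
        excess[v] = excess.get(v, 0) - fp   # incoming flow in G'
    return fprime, excess
```

Running it on any feasible circulation shows the transformed problem is a standard min-cost flow instance with the sources and sinks determined purely by the capacities, as derived above.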
Search for non-Poissonian behavior in nuclear $\beta$ decay Giorgio Concas${}^{1,2,}$[*] and Marcello Lissia${}^{3,1,}$[†] ${}^{1}$Dipartimento di Scienze Fisiche, Università di Cagliari, via Ospedale 72, I-09124 Cagliari, Italy ${}^{2}$Istituto Nazionale per la Fisica della Materia, via Ospedale 72, I-09124 Cagliari, Italy ${}^{3}$Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, via Negri 18, I-09127 Cagliari, Italy (November 1996) Abstract We performed two independent counting experiments on a $\beta$-emitting source of ${}^{151}_{62}\text{Sm}$ by measuring the $\gamma$ photon emitted in a fraction of the decays. For counting times ranging from $10^{-3}$ to $5.12\times 10^{4}$ seconds, our measurements show no evidence of deviations from Poissonian behavior and, in particular, no sign of $1/f$ noise. These measurements put strong limits on non-Poissonian components of the fluctuations for the subset of decays accompanied by a $\gamma$, and corresponding limits for the total number of $\beta$ decays. In particular, the magnitude of a hypothetical flicker floor is strongly bounded also for the $\beta$ decay. This result further constrains theories predicting anomalous fluctuations in nuclear decays. PACS numbers: 05.40.+j, 02.50.-r, 23.90.+w Preprint: To appear in Phys. Rev. E 55 (1997), nucl-ex/9612007, INFNCA-TH9613 I Introduction The statistics of the radioactive decay of heavy nuclei have been the subject of much experimental and theoretical work in the past decade. This broad interest was stimulated by the conjecture that, owing to intrinsic fluctuations of the decay rate, the counting statistics could depart from simple Poissonian behavior [3, 4, 5, 6, 7, 8, 9]. The experimental results are often conflicting, even for the same kind of source.
On the one hand, there exist investigations both on $\alpha$ (${}^{241}_{95}\text{Am}$ [10, 11, 12, 13] and ${}^{210}_{84}\text{Po}$ [14]) and $\beta$ decays (${}^{137}_{55}\text{Cs}$ [15]) that confirm the Poissonian nature of these processes. On the other hand, several experiments carried out both with $\alpha$ (${}^{241}_{95}\text{Am}$, ${}^{239}_{94}\text{Pu}$, and ${}^{244}_{96}\text{Cm}$ [16, 17, 18, 19]) and with $\beta$ sources (${}^{204}_{81}\text{Tl}$ [20], ${}^{90}_{39}\text{Y}$ [21] and ${}^{90}_{38}$Sr-${}^{90}_{39}$Y [22]) find that the counting variance, for long counting periods, exceeds the Poissonian value by more than one order of magnitude. This anomalously large variance has been taken as experimental evidence that the power spectrum of the decay-rate fluctuations has a contribution that grows as the inverse of the frequency $f$ at low frequencies, in addition to the usual frequency-independent Poissonian component. Several mechanisms have been proposed as possible sources of this $1/f$ noise: quantum self-interference between the wave packets of the emitted particles [5, 6], solid-angle fluctuations and random rearrangements within the source [8], and spatial $1/f$ noise in the detector [19]. As a matter of fact, the interpretation of the decay experiments reporting a variance in excess of the Poisson value is still an open problem [8]. In previous work [23, 24] we considered the decay statistics of a $\gamma$ source (${}^{119m}_{50}\text{Sn}$). In that case, we measured that, for counting periods $T$ longer than one hour, the variance of the decay rate significantly deviated from the Poissonian prediction. However, that behavior could be fully explained by taking into account the time dependence of the statistics [25], without resorting to any exotic effect [23, 24]. The aim of this article is to extend our experimental study to a different nucleus, ${}^{151}_{62}\text{Sm}$, which undergoes $\beta$ decay.
There are in fact theoretical claims [8] that deviations from Poissonian statistics could be caused by self-interference of the emitted particles and that these deviations should be present only in $\beta$ decays and not in $\gamma$ or $\alpha$ decays. II Experimental setup The mean lifetime of ${}^{151}_{62}\text{Sm}$ is ($130\pm 12$) years [26]. While most of the nuclei directly $\beta$ decay into the ground state of ${}^{151}_{63}\text{Eu}$, a small fraction (0.91%) $\beta$-decays into an excited state of energy $E_{\text{exc}}=(21.532\pm 0.068)$ keV, which then decays to the ground state (mean lifetime: $1.38\times 10^{-8}$ seconds). In 3.45% of cases, this second fast transition produces a $21.532$ keV $\gamma$ photon: our apparatus has been set up to detect this photon. In summary, our measurement selects a fraction $\xi$ of the total decays ($\xi=(3.14\pm 0.22)\times 10^{-4}$): those decays that go through the two-step process, $\beta$ emission followed by a $21.532$ keV photon [26]. We shall discuss later and in the Appendix why and to what extent our results on the statistics of the $\gamma$’s also carry information on the total statistics of the $\beta$ decays. The source is a crystal of $\text{SmF}_{3}$ containing ${}^{151}_{62}\text{Sm}$ nuclei (activity: 3.7 GBq) shaped as a thin disk (diameter: 14 mm) with an aluminium cap. The aluminium cap, which closes the source, filters out the $\beta$ particles. In the two experiments, which we denote A and B, we used the same source at different distances (about 15 and 7 cm, respectively) from the detector in order to change the count rate: while Poissonian statistics depend only on the total number of counts (rate $\times$ time), deviations from the standard case (and/or systematic errors) could in principle depend also on the rate (see Ref. [23] for one such example), and it is better to have the possibility of performing this kind of check.
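Incidentally, the quoted fraction $\xi$ is simply the product of the two branching ratios given above; a quick check:

```python
# 0.91% of decays feed the 21.532 keV excited state; 3.45% of those
# de-excitations emit the photon we detect.
beta_to_excited = 0.0091
photon_fraction = 0.0345
xi = beta_to_excited * photon_fraction   # fraction of all decays we count
# consistent with the quoted xi = (3.14 +/- 0.22) x 10^-4
```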
In each experiment the photons were detected by a disk-shaped crystal of NaI(Tl) (diameter: 5 cm) integrally mounted on a photomultiplier tube (PMT): we used a crystal 1 mm thick in experiment A, and a crystal 2 mm thick in experiment B. In both experiments, the output signal from the PMT, after being amplified and shaped to a Gaussian pulse, passed through a single-channel analyzer, which selected pulses corresponding to an energy window from 2 to 53 keV. The pulse-shaping time constant was $0.5\mu$s and the time resolution of the single-channel analyzer $0.6\mu$s. The dead time of the entire system was about $2.5\mu$s. The energy window was preliminarily set by means of a multi-channel analyzer module; we have verified that no appreciable drift of the window occurred during the experiments, which lasted 76 days (A) and 19 days (B). We verified that the stability of the energy window and of the voltage of the power supply were sufficient to keep systematic variations of the counting rate below 0.01%, therefore below the statistical fluctuations we measured: only for the longest measurements (total counts of the order of $10^{8}$) the fluctuations-to-signal ratio was as low as $10^{-4}$ ($1/\sqrt{10^{8}}$). Counting was executed by a programmable multi-channel scaler (MCS) module interfaced to an IBM PC, which provides for control and data storage. In experiment A (B), a set of 40 (38) values of the counting period $T$ was preliminarily defined in the control program with $T$ ranging from $T_{\text{min}}=T_{1}=10^{-3}$s to $T_{\text{max}}=T_{40}=2^{9}\times 100$s ($T_{\text{max}}=T_{38}=2^{7}\times 100$s). For each value of $T$ the MCS module counted the events occurring in each of 64 consecutive periods of length $T$. Count data were saved on hard disk for further off-line analysis. At the end of experiment A (B), data were available as 40 (38) sequences of 64 counts $M^{T}_{k}$ ($k=1,\ldots,64$), one for each of the prefixed values of $T$. 
III Results and discussion We analyzed the data by computing the average count and the Allan variance (see Refs. [7, 27] and references therein) as a function of the time interval $T$. All our results originate from a single uninterrupted run (for each experiment), have been averaged over the same number (64) of consecutive intervals, and are statistically independent (each count has been used only once). First we verified that the count rate during each experiment had no drifts that could bias the Allan variance; in particular, the slow exponential decay of the source could not affect the Allan variance at the low count rates we operated at [23]. Therefore, it is consistent to treat the average rate as constant. We measured this average rate $$m=\sum_{T}\frac{1}{T}\frac{1}{64}\sum_{k=1}^{64}M^{T}_{k}\quad,$$ (1) finding $m=(5.3687\pm 0.00025)\times 10^{3}$ count/s in experiment A and $m=(2.4262\pm 0.00022)\times 10^{4}$ count/s in experiment B. Since the rate is constant, the average count for an interval of length $T$ $$\overline{M}(T)\equiv\frac{1}{64}\sum_{k=1}^{64}M^{T}_{k}$$ (2) has an expectation value proportional to $T$: $\langle\overline{M}(T)\rangle=mT$. There are two reasons for using the Allan variance, which we estimate with an average over 63 consecutive measurements $$A(T)\equiv\frac{1}{2\times 63}\sum_{k=1}^{63}[M^{T}_{k}-M^{T}_{k+1}]^{2}\,,$$ (3) instead of the usual variance. The first and most important reason is that $A(T)$ is finite even when the power spectrum grows as $1/f$ at low frequencies: when non-Poissonian fluctuations might be present, the Allan variance is then a common choice. We recall that the power spectrum of Poissonian fluctuations is independent of frequency, $S(f)=2mT$; since counts are uncorrelated, the Allan and the usual variance have the same expectation (use $\langle M^{T}_{j}M^{T}_{k}\rangle\propto\delta_{jk}$ and Eq. (3)), namely, the average count: $\langle A(T)\rangle=\langle\overline{M}(T)\rangle=mT$.
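Equations (2) and (3) translate directly into code; a minimal sketch (the function names are ours):

```python
def mean_count(counts):
    """Average count over the sequence, as in Eq. (2)."""
    return sum(counts) / len(counts)

def allan_variance(counts):
    """Allan variance estimated from consecutive pairs, as in Eq. (3):
    A = sum_k (M_k - M_{k+1})^2 / (2 * (N - 1)) for N counts."""
    n = len(counts)
    return sum((counts[k] - counts[k + 1]) ** 2
               for k in range(n - 1)) / (2 * (n - 1))
```

For Poissonian counts both statistics share the expectation value $mT$, as noted above.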
However, if the fluctuations have a power spectrum $S(f)=C/f$ ($C$ is a constant independent of $f$), i.e., if $1/f$ noise is present, the expectation value of $A$ is $\langle A(T)\rangle=C\ln 4\,T^{2}$ (note the different power of $T$ compared to Poissonian fluctuations), while the usual variance is infinite [7]. A second advantage of using the Allan variance is that it is less sensitive to drifts of the count rate: the correction is independent of the number of intervals (64) and not proportional to it; see the Appendix of Ref. [23]. Before discussing our results, we wish to comment on our choice of observing the channel of the decay characterized by the emission of a 21.532 keV photon. A more detailed discussion can be found in the Appendix. We made this choice because, with our present equipment, we can better control the stability of our measurements when detecting photons than when detecting electrons. However, since one of the motivations of our experiment was to study fluctuations in a $\beta$ decay, it is natural to ask to what extent the statistics of the $\gamma$ emission reflects the statistics of the $\beta$ decay. The time delay of the emission is so small (mean lifetime of the excited state: $1.38\times 10^{-8}$ seconds) compared to the time intervals of interest that its effect is negligible. Yet one might worry that the fluctuations of the small branching ratio (the fraction of decays that on average emit the $\gamma$ is only $\xi=0.000314$) might overwhelm any exotic effect of the original decay.
The explicit calculation reported in the Appendix shows that: (1) an upper bound on the flicker floor in the statistics of the $\gamma$’s implies an equal bound on the flicker floor in the statistics of the parent decay; (2) an upper bound on the ratio of $1/f$ noise to Poissonian noise in the statistics of the $\gamma$’s implies a corresponding bound for the statistics of the parent decay weaker by a factor $1/\xi$; (3) upper bounds on less singular, e.g. frequency-independent, deviations from Poissonian behavior in the statistics of the $\gamma$’s imply corresponding bounds on the parent decay: these bounds on the statistics of the $\beta$ decay are also weaker by a factor $1/\xi\approx 3000$. We report in Figs. 1 and 2 the ratio $R(T)\equiv A(T)/M^{2}(T)$ (reduced Allan variance) versus the inverse of the number of counts $1/M(T)$ for experiments A and B, respectively. Both experiments show that $R(T)$ depends linearly on $1/M(T)=1/(mT)$ with unit slope in the range of $T$ considered. The data fit the Poisson prediction $R(T)=M/M^{2}=1/M\propto 1/T$ very well; this prediction is also reported in Figs. 1 and 2 as a solid line. A fit to the data yields $M(T)\times R(T)=0.99\pm 0.02$. By contrast, a power spectrum $S(f)=C/f$ would yield $R(T)=(C\ln 4\,T^{2})/(mT)^{2}=C\ln 4/m^{2}$. Therefore, if we suppose that both a Poissonian and a $1/f$ contribution are present, when $T$ is large enough the Poissonian contribution becomes negligible and $R(T)$ goes to a constant ($F\equiv C\ln 4/m^{2}$): this constant $F$ is usually called the flicker floor. We measured values of $R(T)$ as low as $6\times 10^{-9}$ ($3\times 10^{-9}$) in experiment A (B) at $T_{\text{max}}=5.12\times 10^{4}$ s ($T_{\text{max}}=1.28\times 10^{4}$ s) without seeing deviations from Poissonian behavior and, in particular, no sign of the curve levelling off to a constant at large $T$. 
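The way a flicker floor would reveal itself in the plots can be sketched with the two model curves just described (a toy illustration; the value of $C$ is chosen to sit exactly at the quoted upper limit, it is not a measured quantity):

```python
from math import log

def reduced_allan_poisson(m, T):
    """Pure Poissonian prediction: R(T) = 1/(m*T)."""
    return 1.0 / (m * T)

def reduced_allan_with_flicker(m, T, C):
    """Poisson plus 1/f noise: <A(T)> = m*T + C*ln(4)*T**2, hence
    R(T) = 1/(m*T) + C*ln(4)/m**2, i.e. a constant flicker floor F."""
    return 1.0 / (m * T) + C * log(4) / m ** 2

m = 2.4262e4             # count rate of experiment B, counts/s
F = 3e-9                 # flicker floor at the quoted upper limit
C = F * m ** 2 / log(4)  # corresponding 1/f amplitude

# R(T) falls as 1/T until the 1/f term takes over and R levels off at F.
for T in (1e2, 1e3, 1e4):
    print(T, reduced_allan_with_flicker(m, T, C))
```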
Therefore, we conclude that, if a flicker floor is present, $F<3\times 10^{-9}$; as discussed in the Appendix, this limit is also valid for the $\beta$ decay. If we express the power spectrum as the sum of the Poissonian component plus a hypothetical $1/f$ component, $S(f)=2mT+C/f$, then in the range of frequencies accessible to our experiments ($f>1/T_{\text{max}}$) the limit on the flicker floor implies an upper limit on the ratio of the strength of the $1/f$ contribution ($C/f$) relative to the Poissonian one ($2mT$), i.e., a limit on the ratio $(C/f)/(2mT_{\text{max}})<C/(2m)$: $(C/f)/(mT_{\text{max}})<1\times 10^{-5}$ ($C/m<2.5\times 10^{-5}$). These limits on the strength of the $1/f$ noise are valid for the channel of the decay with $\gamma$ emission; for the total $\beta$ decays the limit is weaker (see Appendix): $(C_{\beta}/f)/(m_{\beta}T_{\text{max}})<3\times 10^{-2}$. The model of quantum $1/f$ noise proposed by Handel predicts $F=8\alpha\zeta\ln 2\,(\Delta v/c)^{2}/(3\pi)$ for $\beta$ decays, see Eq. (3.6) of Ref. [7] and references therein; here $\alpha\approx 1/137$ is the fine structure constant, $0<\zeta<1$ is a coherence factor, and $\Delta v/c$ is the velocity change of the particles in the emission process relative to the speed of light $c$: if $K_{\beta}$ is the kinetic energy of the electron, $(\Delta v/c)^{2}=1-[1+K_{\beta}/(mc^{2})]^{-2}$. Since we did not measure the electron energy, our data are averaged over the entire electron-energy spectrum. Therefore, we can only give an estimate of the limit on the coherence factor by using the average electron energy, $\langle K_{\beta}\rangle=13.96$ keV. The fact that we do not see any flicker floor implies, in the context of Handel’s model, that the coherence factor $\zeta$ must be smaller than about $10^{-5}$. Our limit should be compared to the recent positive determinations of $\zeta$ in the range $5.2\times 10^{-3}<\zeta<8.3\times 10^{-3}$, which Gopala et al. 
[22] have made, albeit in different $\beta$ decays: ${}^{90}_{38}$Sr, ${}^{90}_{39}$Y and ${}^{204}_{81}$Tl. We have no explanation for why $\zeta$ should be more than two orders of magnitude larger in those decays than our upper limit. IV Conclusions We have measured the counting rate of secondary $\gamma$ rays from a $\beta$ source of ${}^{151}_{62}\text{Sm}$ for counting periods ranging from $10^{-3}$ to $5.12\times 10^{4}$ seconds, and studied the fluctuations of the rate by means of the Allan variance. • We have found no evidence of deviations from Poissonian behavior down to a ratio of fluctuations to signal as low as $5\times 10^{-5}$. • The ratio between a hypothetical $1/f$ component of the power spectrum and the usual Poissonian contribution must be less than $1\times 10^{-5}$ at the longest time interval (lowest frequency) that we have measured ($T_{\text{max}}=5.12\times 10^{4}$ s). • We found no evidence of a flicker floor. The upper bound on a hypothetical flicker floor is $3\times 10^{-9}$; this limit is also valid for the statistics of the total $\beta$ decays. • If our upper bound on the flicker floor is interpreted in the context of Handel’s theory of $1/f$ noise, which predicts coherent interference effects of the emitted charged particle, the coherence factor $\zeta$ for this decay must be less than about $1\times 10^{-5}$: this number is more than two orders of magnitude smaller than the one that has been recently proposed in the literature, albeit for different decays [22]. Acknowledgements. We gratefully acknowledge stimulating and encouraging discussions with R. Boscaino. This work has been partially supported by M.U.R.S.T. (Italian Ministry of University and Scientific and Technological Research). 
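The coherence-factor bound quoted in the conclusions follows directly from Handel's formula with the numbers given in the text; a short numerical check (our arithmetic):

```python
from math import log, pi

alpha = 1 / 137.036   # fine structure constant
me_c2 = 511.0         # electron rest energy, keV
K_beta = 13.96        # average beta kinetic energy, keV
F_limit = 3e-9        # upper bound on the flicker floor

# (dv/c)^2 = 1 - [1 + K/(m c^2)]^(-2), evaluated at the average energy
dv_c2 = 1.0 - (1.0 + K_beta / me_c2) ** -2

# Handel: F = 8*alpha*zeta*ln(2)*(dv/c)^2 / (3*pi)  =>  zeta < F / coeff
coeff = 8 * alpha * log(2) * dv_c2 / (3 * pi)
zeta_limit = F_limit / coeff
print(dv_c2, zeta_limit)  # zeta comes out a little above 1e-5
```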
In this Appendix, we discuss the relation between fluctuations of the number of total decays (in our case, the total number of $\beta$ decays) and fluctuations of the number of decays in a subchannel (in our case, the fraction of $\beta$ decays that produce a $\gamma$ photon with energy 21.532 keV). The main results of this Appendix are summarized by Eqs. (11) and (12). We consider only the effect of the fluctuations of the branching ratio and not the effect of the time delay between the first and second decay, which in general has the effect of a low-pass filter [11], since this second decay (the $\gamma$ emission) is practically instantaneous for the case under study (mean lifetime 13.8 ns, compared to a time resolution of the order of $\mu$s and to the shortest time interval considered, 1 ms). In the following we shall use the symbol $M$ when referring to the number of detected $\gamma$’s and the symbol $N$ when referring to the corresponding (total) number of $\beta$ decays. We shall also use the subscript $\gamma$ ($\beta$) when referring to the partial daughter statistics (total parent statistics). Let us define two kinds of averages: $\langle\cdots\rangle_{N}$, the average over the $\gamma$-count distribution keeping the number of $\beta$ counts $N$ fixed, and $\langle\!\langle\cdots\rangle\!\rangle$, the average over the $\beta$-count distribution. Using the symbol $\xi$ for the branching ratio, we can write $$\displaystyle\langle M\rangle_{N}$$ $$\displaystyle=$$ $$\displaystyle\xi N$$ (4) $$\displaystyle\overline{M}\equiv\langle\!\langle\langle M\rangle_{N}\rangle\!\rangle$$ $$\displaystyle=$$ $$\displaystyle\xi\overline{N}\quad,$$ (5) where for simplicity we use the symbol $\overline{M}$ to indicate the number of $\gamma$ counts averaged twice, over both the $\gamma$ and $\beta$ distributions, and, at the same time, the symbol $\overline{N}$ to indicate the average $\beta$ counts (over the $\beta$ distribution). 
Since our experiments do not show any deviation from Poissonian behavior, we can readily put limits on non-Poissonian components of the $\gamma$ counts. The implications for the total $\beta$ decay can be assessed by assuming that the $\gamma$ distribution at fixed number of $\beta$ decays $N$ is standard (binomial and frequency independent) and by considering the effect of fluctuations of $N$: $$\langle(M_{i}-\langle M\rangle_{N_{i}})(M_{j}-\langle M\rangle_{N_{j}})\rangle_{N_{i}N_{j}}=\delta_{ij}\,\xi(1-\xi)N_{i}\approx\delta_{ij}\langle M\rangle_{N_{i}}\quad,$$ (6) where the indices $i$ and $j$ indicate different counting intervals and the last approximation holds because $\xi\ll 1$. We first consider the average at fixed $N_{i}$ and $N_{i+1}$ of $(M_{i}-M_{i+1})^{2}$, which by adding and subtracting $\langle M\rangle_{N_{i}}=\xi N_{i}$ and $\langle M\rangle_{N_{i+1}}=\xi N_{i+1}$ can be written as $$\displaystyle\langle(M_{i}-M_{i+1})^{2}\rangle_{N_{i}N_{i+1}}$$ $$\displaystyle=$$ $$\displaystyle\langle\,\bigl[(M_{i}-\langle M\rangle_{N_{i}})-(M_{i+1}-\langle M\rangle_{N_{i+1}})+\xi(N_{i}-N_{i+1})\bigr]^{2}\,\rangle_{N_{i}N_{i+1}}$$ $$\displaystyle=$$ $$\displaystyle\Bigl\{\langle(M_{i}-\langle M\rangle_{N_{i}})^{2}\rangle_{N_{i}}+\langle(M_{i+1}-\langle M\rangle_{N_{i+1}})^{2}\rangle_{N_{i+1}}+\xi^{2}(N_{i}-N_{i+1})^{2}$$ $$\displaystyle-2\langle(M_{i}-\langle M\rangle_{N_{i}})(M_{i+1}-\langle M\rangle_{N_{i+1}})\rangle_{N_{i}N_{i+1}}$$ $$\displaystyle+2\xi(N_{i}-N_{i+1})\,\bigl[\langle(M_{i}-\langle M\rangle_{N_{i}})\rangle_{N_{i}}-\langle(M_{i+1}-\langle M\rangle_{N_{i+1}})\rangle_{N_{i+1}}\bigr]\Bigr\}$$ $$\displaystyle=$$ $$\displaystyle\xi(1-\xi)(N_{i}+N_{i+1})+\xi^{2}(N_{i}-N_{i+1})^{2}\quad,$$ (9) where we have applied Eq. (6) to the first and second lines of the expansion, while the third line is identically zero. 
If we now divide the above result by two and average it over the $\beta$ distribution, we find (considering that for a stationary process $\langle\!\langle N_{i}\rangle\!\rangle=\overline{N}$ independently of $i$): $$\frac{1}{2}\langle\!\langle\langle(M_{i}-M_{i+1})^{2}\rangle_{N_{i}N_{i+1}}\rangle\!\rangle=\xi(1-\xi)\overline{N}+\xi^{2}\frac{1}{2}\langle\!\langle(N_{i}-N_{i+1})^{2}\rangle\!\rangle\quad.$$ (10) The left-hand side of Eq. (10) is the expectation value of the Allan variance of the $\gamma$ counts, which we measure with the statistic defined in Eq. (3), while the last factor on the right-hand side is the expectation value of the Allan variance of the total $\beta$-decay counts. If we define $A_{\gamma}$ ($A_{\beta}$) as the Allan variance and $R_{\gamma}\equiv A_{\gamma}/\overline{M}^{2}$ ($R_{\beta}\equiv A_{\beta}/\overline{N}^{2}$) as the relative Allan variance of the $\gamma$ ($\beta$) counts, Eq. (10) becomes $$\displaystyle A_{\gamma}$$ $$\displaystyle=$$ $$\displaystyle(1-\xi)\overline{M}+\xi^{2}A_{\beta}$$ (11) $$\displaystyle R_{\gamma}$$ $$\displaystyle=$$ $$\displaystyle\frac{(1-\xi)}{\overline{M}}+R_{\beta}\,,$$ (12) which constitute the main result of this Appendix. In the following we analyze the consequences of Eq. (12) for our experimental study. Flicker floor If the $\beta$ decay has a $1/f$ component that produces a flicker floor $F_{\beta}$ in the relative Allan variance, i.e., $R_{\beta}=1/\overline{N}+F_{\beta}$, the fact that no deviation of $R_{\gamma}$ from $1/\overline{M}$ has been observed for $R_{\gamma}$ as low as $3\times 10^{-9}$ implies not only that $F_{\gamma}<3\times 10^{-9}$, but also that $F_{\beta}<3\times 10^{-9}$, where we have used Eq. (12) dropping $\xi\approx 3\times 10^{-4}$ compared to 1: $(1-\xi)\approx 1$ and $1/\overline{M}+1/\overline{N}\approx 1/\overline{M}$. 
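Equation (12) makes the special role of the flicker floor explicit: a floor in the β statistics passes to the γ statistics without any 1/ξ penalty. A small numerical check (illustrative value for $\overline{M}$):

```python
def reduced_allan_gamma(R_beta, M_bar, xi):
    """Eq. (12): R_gamma = (1 - xi)/M_bar + R_beta,
    with M_bar = xi * N_bar the mean gamma count."""
    return (1.0 - xi) / M_bar + R_beta

xi = 3.14e-4        # branching ratio of the gamma channel
M_bar = 1e9         # mean gamma count (illustrative)
N_bar = M_bar / xi  # corresponding mean number of beta decays
F = 3e-9            # hypothetical flicker floor in the beta statistics

# R_beta = 1/N_bar + F; the deviation of R_gamma from the pure
# Poissonian value 1/M_bar is then exactly F.
R_gamma = reduced_allan_gamma(1.0 / N_bar + F, M_bar, xi)
excess = R_gamma - 1.0 / M_bar
print(excess)  # equals F up to rounding
```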
$1/f$ noise If we are interested in the ratio of the $1/f$ contribution ($C_{\beta}/f$) relative to the Poissonian one ($2m_{\beta}T$), we should recall that the rate of the $\beta$ decay is $m_{\beta}=m_{\gamma}/\xi$ and that the constant $C$ is related to the flicker floor by $C=F\,m^{2}/\ln 4$. Then this ratio for the $\beta$ decay can be related to the same ratio for the $\gamma$ decay by using the fact that the limit on $F_{\beta}$ and the one on $F_{\gamma}$ are equal: $(C_{\beta}/f)/(2m_{\beta}T_{\text{max}})=(F_{\beta}m_{\beta}/f)/(2T_{\text{max}}\ln 4)=(1/\xi)\times(F_{\gamma}m_{\gamma}/f)/(2T_{\text{max}}\ln 4)$. We lose a factor $1/\xi\approx 3000$ going from the upper bound on the ratio of the $1/f$ contribution relative to the Poissonian one for the $\gamma$ statistics ($1\times 10^{-5}$) to the upper bound on the same ratio for the $\beta$ statistics ($3\times 10^{-2}$). Frequency-independent non-Poissonian component If instead we suppose that the $\beta$ decay has a frequency-independent deviation from Poissonian statistics, i.e., $R_{\beta}=\kappa/\overline{N}=\kappa\xi/\overline{M}$, the fact that no deviation of $R_{\gamma}=(1-\xi+\kappa\xi)/\overline{M}$ from $1/\overline{M}$ has been observed ($\overline{M}\times R_{\gamma}=0.99\pm 0.02$) implies also that $(1-\kappa)\xi=0.01\pm 0.02$ and, consequently, that $\kappa=-30\pm 60$. In conclusion, we have shown in this Appendix that a measurement of a process ($\beta$ decay) by selecting a subprocess (detecting the $\gamma$ emitted in a fraction of the decays) whose branching ratio $\xi$ is itself a statistical variable corresponds, as might have been expected, to the use of a detector with efficiency not greater than $\xi$. Therefore, we lose a factor $1/\xi$ in most limits on dimensionless quantities when passing from the statistics of the subprocess to the total statistics of the entire process. 
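The error propagation behind the quoted $\kappa=-30\pm 60$ is elementary; spelled out (our arithmetic, with the numbers from the text):

```python
xi = 3.14e-4  # branching ratio

# Measured: M_bar * R_gamma = 1 - xi + kappa*xi = 0.99 +/- 0.02,
# hence (1 - kappa)*xi = 0.01 +/- 0.02 and kappa = 1 - (0.01 +/- 0.02)/xi.
kappa_central = 1.0 - 0.01 / xi
kappa_sigma = 0.02 / xi
print(kappa_central, kappa_sigma)  # about -31 and 64, i.e. -30 +/- 60 rounded
```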
However, there exist quantities, such as the flicker floor, that can be determined from the partial statistics without losing any sensitivity. The reason for this different behavior lies in how strongly the noise under study depends on the number of events $N$ compared to the usual $\sqrt{N}$ dependence. References [*] Electronic address: [email protected] [†] Electronic address: [email protected] [3] P. H. Handel, Phys. Rev. Lett. 34, 1492 (1975). Title: $1/f$ Noise – An “Infrared Phenomenon”. [4] P. H. Handel, Phys. Rev. Lett. 34, 1495 (1975). Title: Nature of $1/f$ Noise. [5] P. H. Handel, Phys. Rev. A 22, 745 (1980). Title: Quantum approach to $1/f$ noise. [6] C. M. Van Vliet, P. H. Handel, and A. van der Ziel, Physica A 108, 511 (1981). Title: Superstatistical emission noise. [7] C. M. Van Vliet and P. H. Handel, Physica A 113, 261 (1982). Title: A new transform theorem for stochastic processes with special application to counting statistics. [8] C. M. Van Vliet, Solid State Electronics 34, 1 (1991). Title: A survey of results and future prospects on quantum $1/f$ noise and $1/f$ noise in general. [9] F. N. Hooge, in Noise in Physical Systems and 1/f Noise, edited by V. Bareikis and R. Katilius (World Scientific, Singapore, 1995), p. 8, and other articles in the book. Title: $1/f$ noise in semiconductor materials. [10] W. V. Prestwich, T. J. Kennett, and G. T. Pepper, Phys. Rev. A 34, 5132 (1986). Title: Search for $1/f$ fluctuation in $\alpha$ decay. [11] W. V. Prestwich, T. J. Kennett and G. T. Pepper, Can. J. Phys. 66, 100 (1988). Title: Comment on: Flicker noise fluctuations in $\alpha$-radioactive decay. [12] G. T. Pepper, T. J. Kennett, and W. V. Prestwich, Can. J. Phys. 67, 468 (1989). Title: A re-investigation of the possibility of $1/f$ noise fluctuations in $\alpha$ decay. [13] T. J. Kennett and W. V. Prestwich, Phys. Rev. A 40, 4630 (1989). Title: Limit on the existence of $1/f$ noise in $\alpha$ decay. [14] M. A. Azhar and K. Gopala, Phys. Rev. 
A 39, 5311 (1989). Title: Search for $1/f$ fluctuations in $\alpha$ decay of ${}^{210}$Po. [15] K. Gopala and M. A. Azhar, Phys. Rev. A 37, 2173 (1988). Title: Search for $1/f$ fluctuations in $\gamma$ decay of ${}^{137}$Cs. [16] J. Gong, C. M. Van Vliet, W. H. Ellis, G. Bosman and P. H. Handel, in Noise in Physical Systems and 1/f Noise, edited by G. L. M. Savelli and J. P. Nougier (Elsevier, New York, 1983), p. 381. Title: $1/f$ noise fluctuations in $\alpha$-particle radioactive decay of ${}^{241}$Am. [17] G. S. Kousik, C. M. Van Vliet, G. Bosman, W. H. Ellis and E. E. Carrol, in Noise in Physical Systems and 1/f Noise, edited by A. D’Amico and P. Mazzetti (Elsevier, New York, 1986), p. 469. Title: $1/f$ noise in alpha-particle decay of ${}^{239}$Pu, ${}^{241}$Am and ${}^{244}$Cm. [18] G. S. Kousik et al., Can. J. Phys. 65, 365 (1987). Title: Flicker-noise fluctuations in $\alpha$-radioactive decay. [19] V. D. Rusov et al., Nucl. Tracks Radiat. Meas. 20, 305 (1992). Title: Observation of spatial $1/f$ noise in experimental detection of ${}^{239}$Pu $\alpha$-particles by solid state nuclear track detector. [20] M. A. Azhar and K. Gopala, Phys. Rev. A 39, 4137 (1989). Title: $1/f$ noise in $\beta^{-}$ decay of ${}^{204}$Tl. [21] M. A. Azhar and K. Gopala, Phys. Rev. A 44, 1044 (1991). Title: $1/f$ fluctuations in $\beta^{-}$ decay of ${}^{90}$Y. [22] K. Gopala, M. A. Azhar and Swamy, Phys. Rev. E 50, 2588 (1994). Title: $1/f$ noise in the $\beta$ decay of ${}^{90}$Sr-${}^{90}$Y. [23] R. Boscaino, G. Concas, M. Lissia, and S. Serci, Phys. Rev. E 49, 333 (1994). Title: Fluctuations in radioactive decays. I. Nonequilibrium effects and noise. [24] R. Boscaino, G. Concas, M. Lissia, and S. Serci, Phys. Rev. E 49, 341 (1994). Title: Fluctuations in radioactive decays. II. Experimental results. [25] M. C. Teich and H. C. Card, Opt. Lett. 4, 146 (1979). Title: Photocounting distribution for exponentially decaying sources. [26] R. B. 
Firestone, Table of Isotopes, 8th edition (John Wiley & Sons, New York, 1996). [27] W. V. Prestwich, T. J. Kennett, and F. W. Kus, Can. J. Phys. 69, 1405 (1991). Title: The statistical properties of Allan variance.
${}^{11}$Be($\beta$p), a quasi-free neutron decay? K. Riisager [email protected] O. Forstner M.J.G. Borge J.A. Briz M. Carmona-Gallardo L.M. Fraile H.O.U. Fynbo T. Giles A. Gottberg A. Heinz J.G. Johansen111Present address: Institut für Kernphysik, Technische Universität Darmstadt, D–64289 Darmstadt, Germany B. Jonson J. Kurcewicz M.V. Lund T. Nilsson G. Nyman E. Rapisarda P. Steier O. Tengblad R. Thies S.R. Winkler Department of Physics and Astronomy, Aarhus University, DK–8000, Aarhus C, Denmark Faculty of Physics, University of Vienna, Währinger Strasse 17, A–1090 Wien, Austria Stefan-Meyer-Institut für subatomare Physik, Austrian Academy of Sciences, A–1090 Wien, Austria ISOLDE, PH Department, CERN, CH–1211 Geneve 23, Switzerland Instituto de Estructura de la Materia, CSIC, E-28006 Madrid, Spain Grupo de Física Nuclear, Universidad Complutense de Madrid, CEI Moncloa, E-28040 Madrid, Spain EN Department, CERN, CH–1211 Geneve 23, Switzerland Fundamental Fysik, Chalmers Tekniska Högskola, SE–41296 Göteborg, Sweden Abstract We have observed $\beta^{-}$-delayed proton emission from the neutron-rich nucleus ${}^{11}$Be by analysing a sample collected at the ISOLDE facility at CERN with accelerator mass spectrometry (AMS). With a branching ratio of $(8.4\pm 0.6)\cdot 10^{-6}$ the strength of this decay mode, as measured by the $B_{GT}$-value, is unexpectedly high. The result is discussed within a simple single-particle model and could be interpreted as a quasi-free decay of the ${}^{11}$Be halo neutron into a single-proton state. 
keywords: beta decay, halo nucleus, ${}^{11}$Be 1 Introduction Beta-minus decay and proton emission take a nucleus in almost opposite directions on the nuclear chart, so $\beta^{-}$-delayed proton emission (where beta decay feeds excited states that subsequently emit a proton) is forbidden in all but a few nuclei, where it is heavily suppressed, as the available energy is $Q_{\beta p}=782\,\mathrm{keV}-S_{n}$ [Jon01], where $S_{n}$ is the neutron separation energy of the nucleus. We describe here an experiment to detect this decay mode from the one-neutron halo nucleus ${}^{11}$Be, which is believed to be the most favourable case [Bay11, Bor13]: the single-particle behaviour of halo nuclei [Jen04, Tan13, Rii13] may favour this decay mode, and the half-life is relatively long because the normal beta decay of ${}^{11}$Be is hindered, a level inversion giving it a $1/2^{+}$ ground state rather than a $1/2^{-}$. Beta-delayed particle emission is in general a prominent decay mode for nuclei close to the dripline; see [Pfu12, Bla08] for recent reviews. The energetically open channels for ${}^{11}$Be are $\beta\alpha$, $\beta$t, $\beta$p and $\beta$n, with corresponding $Q$-values [mas12] of $2845.2\pm 0.2$ keV, $285.7\pm 0.2$ keV, $280.7\pm 0.3$ keV and $55.1\pm 0.5$ keV. The low decay energy implies that the branching ratio for beta-delayed proton emission is low; typical estimates are slightly above $10^{-8}$ [Bor13]. To detect the process experimentally, it is therefore essential to keep contaminants at a very low level. The $\beta$p decay mode may be expected preferentially in one-neutron halo nuclei, partly due to the requirement of low neutron separation energy, partly due to the more pronounced single-particle behaviour of halo nuclei. Two-neutron halo nuclei are in a similar way candidates for beta-delayed deuteron emission, which has so far been observed only in the nuclei ${}^{6}$He and ${}^{11}$Li [Pfu12, Nil00]. 
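As a small aside, the relation $Q_{\beta p}=782\,\mathrm{keV}-S_{n}$ (the 782 keV being the neutron–hydrogen mass difference) can be turned around to read off the neutron separation energy of ${}^{11}$Be from the quoted $Q$-value:

```python
# Q-values quoted in the text for the open channels of 11Be, in keV
Q = {"beta-alpha": 2845.2, "beta-t": 285.7, "beta-p": 280.7, "beta-n": 55.1}

# Q_beta_p = 782 keV - S_n  =>  S_n = 782 keV - Q_beta_p
S_n = 782.0 - Q["beta-p"]
print(S_n)  # about 501.3 keV, the small neutron separation energy of the halo
```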
For ${}^{11}$Li the decay has a branching ratio of order $10^{-4}$, the low value again caused by a small energy window, whereas cancellation effects reduce the branching ratio for ${}^{6}$He down to the $10^{-6}$ level. It may be more useful to consider the standard measure for the strength of a decay, the reduced matrix element squared $B_{GT}$, which is found from the relation [Pfu12] $$ft=\frac{K}{g_{V}^{2}B_{F}+g_{A}^{2}B_{GT}}$$ (1) where $f$ is the beta-decay phase space, $K/g_{V}^{2}=6144.2\pm 1.6$ s and $g_{A}/g_{V}=-1.2694\pm 0.0028$. Converting the observed spectra for beta-delayed deuteron emission from the two-neutron halo nuclei ${}^{6}$He [Ant02] and ${}^{11}$Li [Raa08] gives total $B_{GT}$ values within the observed energy range of about 0.0016 and 0.75, respectively. (Note, however, that the ${}^{6}$He decay to the ${}^{6}$Li ground state has been described as an effective di-neutron to deuteron decay; it is a super-allowed transition with a $B_{GT}$ of 4.7. This may be a reflection of a general trend for super-allowed decays to occur in very neutron-rich nuclei [Bor91].) For comparison, the sum of $B_{GT}$ for all currently known transitions in the ${}^{11}$Be decay is 0.27. 2 The experiment 2.1 General remarks The radioactive ${}^{11}$Be nuclei were produced at the ISOLDE facility at CERN. Searching for protons with a kinetic energy of a few hundred keV and a relative intensity of $10^{-8}$ is challenging in a radioactive beam environment, so we instead detect the decay product, ${}^{10}$Be, with a half-life of $1.5\cdot 10^{6}$ y, which exists only in minute quantities on earth. To reach the needed sensitivity we must employ state-of-the-art AMS. It is also crucial to limit the amount of contaminants in the samples, so sample collection took place at ISOLDE’s high-resolution mass separator. 
The resolution from the magnetic separation stage is supplemented by the electrostatic beam transport at ISOLDE, similar to, but at lower resolution than, the separation stages in AMS facilities. A first attempt was made in 2001 and the results were published recently [Bor13]. The signal was not sufficiently strong to be clearly separated from background and gave a $\beta$p branching ratio of $(2.5\pm 2.5)\cdot 10^{-6}$, significantly above the published theoretical expectations. Owing to improvements both in the production of ${}^{11}$Be and in the AMS detection of ${}^{10}$Be, a new collection was performed in December 2012 and resulted in three samples. 2.2 Sample collection The ${}^{11}$Be activity was produced by bombarding a UC target with 1.4 GeV protons. The products were ionized in a laser ion source, which provided element selectivity, mass separated in the ISOLDE high-resolution separator, and guided through several collimators to the collection point, where they were implanted at 60 keV in a small copper plate (15$\times$20$\times$2 mm). A high-purity coaxial Ge-detector placed 40 cm downstream behind a lead shielding monitored the collection rate. The Ge-detector was energy- and efficiency-calibrated with standard sources of ${}^{60}$Co, ${}^{152}$Eu and ${}^{228}$Th. The main lines in the $\gamma$ spectrum recorded during ${}^{11}$Be collection are the 2124 keV line from the decay of ${}^{11}$Be and the 511 keV line from positron annihilation. The overall efficiency at 2124 keV is found to be $(2.0\pm 0.2)\cdot 10^{-5}$. A second line from the decay, at 2895 keV, was also used to check the overall amount of ${}^{11}$Be. 
The two determinations gave about the same precision, the one from the 2124 keV line being dominated by systematic uncertainties in the efficiency and the one from the 2895 keV line being dominated by statistical uncertainties. They were internally consistent, leading to a final value for the amount of collected ${}^{11}$Be in the main sample (S1) of $(1.447\pm 0.055)\cdot 10^{12}$ atoms. This includes a correction for dead time of 2.8%, determined from the ratio of accepted to total number of triggers. As cross-checks two other samples were collected: sample S2 at the mass position of ${}^{11}$Li (0.02 mass units heavier than ${}^{11}$Be), where an upper limit of $3\cdot 10^{6}$ could be determined for the number of atoms collected (corresponding to a ${}^{11}$Li yield below 625/s, which is reasonable), and, for one second only, sample S3 at the ${}^{10}$Be mass position, where an estimate of the current of 3.5 pA (uncertain by a factor of two) converts into $2.2\cdot 10^{7}$ atoms. According to SRIM calculations [SRIM], about 6% of all Be ions implanted in Cu at 60 keV energy will backscatter out of the sample. Most of the backscattered ions are expected to remain close to the sample, so $\gamma$-rays from their decays will be seen as well, although the decay products are not retained in the sample. This gives a correction which we estimate to be $4\pm 4$%. 2.3 Accelerator mass spectrometry The ${}^{10}$Be accelerator mass spectrometry (AMS) measurements were performed at the Vienna Environmental Research Accelerator (VERA) at the University of Vienna. VERA is a dedicated AMS facility based on an NEC 3 MV pelletron tandem accelerator. A new scheme for ${}^{10}$Be using a passive foil absorber in front of a gas ionization chamber detector was employed. In this way the detection efficiency for ${}^{10}$Be atoms is increased significantly. According to TRIM simulations [SRIM], the maximum implantation depth of ${}^{11}$Be in our copper plate catcher was below 1 $\mu$m. 
To reduce the amount of material to be dissolved, only the surface layer of each irradiated copper plate was leached in nitric acid. A second leaching was performed to verify the blank level of the irradiated copper plate. The second leaching of sample S3 did not produce enough BeO for a measurement. For samples S1 and S2 the values of the second leachings were consistent with a blank sample. This shows that the material was sitting in the surface layer, as expected for an implanted sample, and was not due to a bulk contamination. An amount of 359 $\mu$g (uncertainty 3%) of ${}^{9}$Be carrier was added to the solution to reach a ${}^{10}$Be/${}^{9}$Be isotopic ratio in the range of 10${}^{-16}$–10${}^{-11}$. In the next step the solution was treated with ammonium hydroxide to precipitate the beryllium as beryllium hydroxide (Be(OH)${}_{2}$). The dissolved copper remains in the solution in this step. The beryllium hydroxide was dried by heating in an oven at 900${}^{\circ}$C for at least 8 hours, forming beryllium oxide (BeO). The BeO powder was mixed 1:1 with high-purity copper powder, pressed into sample holders, and mounted together with standard and blank material in an MC-SNICS type cesium sputter ion source. The blank is pure phenakite material directly pressed into a sample holder. A separate sample, S-blank, went through the chemistry preparation to check for the amount of ${}^{10}$Be introduced during the chemical sample preparation. BeO${}^{-}$ was extracted from the ion source and stripped in the terminal of the tandem accelerator to Be${}^{2+}$, resulting in a total ion energy of 2.4 MeV. After further mass separation by a sector magnetic analyzer and an electrostatic analyzer, the remaining particles are sent to a gas ionization chamber detector with a two-split anode for particle identification. A silicon nitride foil stack serving as a passive absorber was installed in front of the detector. 
This foil stack prevents the isobaric background ${}^{10}$B from entering the detector: the energy loss of boron in the foil stack is slightly larger than that of beryllium. By selecting the right foil thickness and carefully tuning the particle energy, the boron ions are stopped in the foil stack whereas the beryllium ions can enter the detector. The final results are given in table 1. The amount of atoms in sample S3 agrees with the estimate from the implantation current. The number for sample S2 is consistent with the lack of observed $\gamma$-rays from the decay of ${}^{11}$Li. The number for sample S1, the ${}^{11}$Be sample, is $(1.170\pm 0.047)\cdot 10^{7}$. 2.4 Possible contaminants Contamination of our sample might arise from tails of the neighbouring activities ${}^{10}$Be or ${}^{11}$Li, whose decay also produces ${}^{10}$Be. Both possibilities are ruled out by the low recorded number of atoms for the ${}^{11}$Li sample (S2). The ISOLDE mass separator profile was found by measuring the beta activity as the mass settings were changed around the nominal ${}^{11}$Be mass, see figure 1. The release function of this specific target and ion source combination was measured first, which made it possible to combine measurements with different collection times relative to proton impact on target. In this way the sensitivity was increased and the activity could be followed down to the $10^{-5}$ level, which occurred at a mass difference of 0.05 mass units. The only remaining way for ${}^{10}$Be to appear at the ${}^{11}$Be position is as the molecule ${}^{10}$Be${}^{1}$H, but this molecule is unlikely to be formed in the target and to survive through the laser ion source, since its ionization energy of 8.22 eV [Bub07] is much higher than its dissociation energy of 3.26 eV. 
Nevertheless, we have re-checked the data from an earlier experiment on ${}^{12}$Be [Dig05] and were able to put limits on the amount of ${}^{11}$Be${}^{1}$H (from the $\beta\alpha$ branch) that would correspond in our current case to a ${}^{10}$Be${}^{1}$H intensity less than $2\cdot 10^{-6}$ of ${}^{11}$Be. Our conditions should be better, partly due to higher laser ionization power, partly due to the beam passing through a gas-filled RFQ cooler, both effects that would enhance molecular break-up. We therefore conclude that we have observed the ${}^{11}$Be($\beta$p) decay via detection of the final nucleus ${}^{10}$Be. The observed intensity converts to a branching ratio of $(8.4\pm 0.6)\cdot 10^{-6}$. 3 Discussion The experimentally found branching ratio is surprisingly large, but consistent with the outcome of the first experiment. If the strength in ${}^{11}$Be($\beta$p) were as broadly distributed as in ${}^{11}$Li($\beta$d), we would expect the $B_{GT}$ within the Q-window to be less than 0.1, which would not be sufficient to explain the decay rate. We therefore turn to a simple model for the decay along the lines of the direct decay calculations in [Bay11, Zhu95]; details of the calculations are reported elsewhere [Rii14]. The basic assumption is that the beta decay proceeds as an essentially detached decay of the halo neutron into a proton. The initial and final state wavefunctions are calculated as single-particle states in square-well or Woods-Saxon potentials, with the final state spectrum discretized by imposing a large confining radius at 1000 fm. The overlap of the wavefunctions gives the beta strength $B_{GT}$ and the decay rate is found from equation (1). The final total branching ratio for beta-delayed proton emission depends strongly on the strength of the potential between the final-state proton and ${}^{10}$Be. 
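The quoted branching ratio can be reconstructed from the two atom numbers and the backscattering correction of Sect. 2.2 (our reconstruction of the arithmetic; the text quotes only the final corrected value):

```python
N_10Be = 1.170e7   # 10Be atoms found in sample S1 by AMS
N_11Be = 1.447e12  # 11Be atoms collected in S1, from gamma counting
loss = 0.04        # estimated fraction of implanted ions backscattered out

# Decay products of backscattered ions are not retained in the sample,
# so the raw ratio underestimates the branching; divide by (1 - loss).
b_raw = N_10Be / N_11Be
b = b_raw / (1.0 - loss)
print(b)  # about 8.4e-6, matching the quoted (8.4 +/- 0.6) x 10^-6
```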
For most potential strengths the branching ratio will indeed be a few times $10^{-8}$, as in other calculations, but in a limited range the beta strength will be concentrated within the Q-window. Effectively, in this range the proton formed in the decay interacts strongly with the remaining ${}^{10}$Be and forms a resonance-like structure; as a consequence it emerges with a quite well defined energy. The branching ratios obtained for this set of parameters are shown in figure 2 as a function of the energy of the resonance. The simple model neglects isospin. The lowest $T=3/2$ states are situated slightly more than 1 MeV above the $Q_{\beta p}$ window. They are members of isospin multiplets that include the ${}^{11}$Be ground state and first excited state neutron halos. The data indicate [Jon98] that the intermediate states ($|T_{z}|$ of 1/2) in these multiplets have good total isospin rather than a composition with just one proton (or neutron) plus core. We therefore expect that realistic final state wave functions in our case, with $T=1/2$, should also have good isospin. Standard isospin coupling then predicts that the state should be proton plus ${}^{10}$Be with weight 2/3 and neutron plus ${}^{10}$B(T=1) with weight 1/3. Our calculated decay probabilities must therefore be corrected by a factor 2/3. A further reduction factor of about 0.7 is due to the initial ${}^{11}$Be wavefunction containing several configurations [Sch12]. The overall scaling factor on the theory, included in figure 2, is therefore about 0.5. Could this be an established resonance in ${}^{11}$B? The known states [TUNL11] in this region mainly couple to the $\alpha$-particle channel (with partial widths around 100 keV) and only one, a state at $11450\pm 17$ keV, may have a spin-parity that allows emission of an s-wave proton; the others will have angular momentum barriers that will suppress proton emission. 
Decays through levels that have other sizeable decay channels ($\alpha$ emission or, for very narrow levels, $\gamma$ emission; in principle triton emission could also occur) would therefore only contribute to the proton channel with probability $\Gamma_{p}/\Gamma_{tot}$. Since $\Gamma_{\gamma}$ for the $1/2^{+}$, $T=3/2$ state at 12.55 MeV (which apart from isospin should be similar in structure to our state) is about 10 eV [TUNL11], and even a small admixture into our state of other $1/2^{+}$ levels is likely to give a $\Gamma_{\alpha}$ of at least the same magnitude, we shall assume the width for other decay channels, $\Gamma_{b}=\Gamma_{\gamma}+\Gamma_{\alpha}$, to be larger than 0.01 keV. To take these effects into account, calculations were also made within the R-matrix approach [Bar88], but in a simplified version where, e.g., other decay channels are approximated as having a constant width $\Gamma_{\alpha}$ over the energy window; see [Rii14] for details. Converting the decay rate into a differential branching ratio gives the following expression: $$\frac{\mathrm{d}b}{\mathrm{d}E}=t_{1/2}\frac{g_{A}^{2}}{K}\frac{B_{GT}\,\Gamma_{p}/2\pi}{(E_{res}-E)^{2}+\Gamma_{tot}^{2}/4}\,f(Q-E)\,,$$ (2) where $\Gamma_{tot}=\Gamma_{b}+\Gamma_{p}$, $\Gamma_{p}=2P\gamma^{2}$, $P$ is the standard (energy-dependent) penetrability factor and $\gamma^{2}$ the maximal reduced width. Integration over the Q-window gives the total branching ratios shown in figure 2 as a function of resonance position $E_{res}$ for different values of $\Gamma_{b}$. The branching ratios agree well with the ones from the simple model. It is clear that all known levels are too wide to fit and that a $\Gamma_{b}$ above 0.01 keV gives a lower limit on the $B_{GT}$ of about 0.3, with an upper limit given by the theoretical maximum of 3. 4 Conclusion We have observed beta-delayed proton emission for the first time in a neutron-rich nucleus.
The unexpectedly high decay rate can only be understood within current theory if the decay proceeds through a new single-particle resonance in ${}^{11}$B that is strongly fed in beta decay. The $B_{GT}$-value could be as high as that of free-neutron decay. A natural interpretation would be peripheral beta decay of the halo neutron in ${}^{11}$Be into a single-proton state. This appears to be a simpler process than the $\beta$d decays of the two-neutron halo nuclei ${}^{6}$He and ${}^{11}$Li. Although the halo structure must be important for the $\beta$p decay mode, the large value of $B_{GT}$ may be related to the large values found in other (non-halo) near-dripline nuclei [Bor91] and may point to a more widespread change of beta-decay patterns, at least in light nuclei, in line with some predictions [Sag93]. Acknowledgements We thank the ISOLDE group for the successful operation of the HRS separator at very high resolution and Aksel Jensen for discussions on the theoretical interpretation. We acknowledge support from the European Union Seventh Framework Programme through ENSAR (contract no. 262010), from the Austrian Science Fund (FWF) P22164-N20, from the Spanish MINECO through projects FPA2010-17142 and FPA2012-32443, and from CPAN Consolider CSD-2007-00042. References (1) B. Jonson and K. Riisager, Nucl. Phys. A 693 (2001) 77. (2) D. Baye and E.M. Tursonov, Phys. Lett. B 696 (2011) 464. (3) M.J.G. Borge et al., J. Phys. G 40 (2013) 035109. (4) A.S. Jensen, K. Riisager, D.V. Fedorov and E. Garrido, Rev. Mod. Phys. 76 (2004) 215. (5) I. Tanihata, H. Savajols and R. Kanungo, Prog. Part. Nucl. Phys. 68 (2013) 215. (6) K. Riisager, Physica Scripta T152 (2013) 014001. (7) M. Pfützner, M. Karny, L.V. Grigorenko and K. Riisager, Rev. Mod. Phys. 84 (2012) 567. (8) B. Blank and M.J.G. Borge, Prog. Part. Nucl. Phys. 60 (2008) 403. (9) M. Wang et al., Chinese Phys. C 36 (2012) 1603. (10) T. Nilsson, G. Nyman and K. Riisager, Hyperfine Int. 129 (2000) 67. (11) D. Anthony et al., Phys. Rev.
C 65 (2002) 034310. (12) R. Raabe et al., Phys. Rev. Lett. 101 (2008) 212501. (13) M.J.G. Borge et al., Z. Phys. A 340 (1991) 255. (14) J.F. Ziegler, Particle interactions with matter. http://www.srim.org (Jan. 20, 2014) (15) S. Bubin and L. Adamowicz, J. Chem. Phys. 126 (2007) 214305. (16) C.Aa. Diget et al., Nucl. Phys. A 760 (2005) 3. (17) M.V. Zhukov, B.V. Danilin, L.V. Grigorenko and J.S. Vaagen, Phys. Rev. C 52 (1995) 2461. (18) K. Riisager, submitted to Nucl. Phys. A (2014), arXiv:1312.0479 (19) B. Jonson and K. Riisager, Phil. Trans. R. Soc. Lond. A 356 (1998) 2063. (20) K.T. Schmitt et al., Phys. Rev. Lett. 108 (2012) 192701. (21) J.H. Kelley et al., Nucl. Phys. A 880 (2012) 88. (22) F.C. Barker and E.K. Warburton, Nucl. Phys. A 487 (1988) 269. (23) H. Sagawa, I. Hamamoto and M. Ishihara, Phys. Lett. B 303 (1993) 215.
Liouville/Toda central charges from M5-branes Luis F. Alday${}^{\heartsuit}$, Francesco Benini${}^{\diamondsuit}$, Yuji Tachikawa${}^{\heartsuit}$ alday,[email protected], [email protected] ${}^{\heartsuit}$ School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA ${}^{\diamondsuit}$ Department of Physics, Princeton University, Princeton, NJ 08544, USA Abstract: We show that the central charge of the Liouville and ADE Toda theories can be reproduced by equivariantly integrating the anomaly eight-form of the corresponding six-dimensional $\mathcal{N}=(0,2)$ theories, which describe the low-energy dynamics of M5-branes. ††preprint: PUTP-2314 1 Introduction $\mathcal{N}=2$ supersymmetric field theories in four dimensions are very rich, both from the physical and mathematical points of view. Recently, it was observed in [1] that many $\mathcal{N}=2$ theories can be understood in a unified manner by realizing them as a compactification of six-dimensional $\mathcal{N}=(0,2)$ theories on a Riemann surface. Furthermore, it was noted in [2] that Nekrasov’s partition function [3] of such theories (with $\mathrm{SU}(2)$ gauge groups) computes the conformal blocks of the Virasoro algebra. It was also noted that the partition function on $S^{4}$, as given by [4], coincides with the corresponding correlation function of the Liouville theory. Soon this 2d-4d correspondence was extended in [5, 6] to the case of $\mathrm{SU}(N)$ gauge groups where the Liouville theory generalizes to the $A_{N-1}$ Toda theory.111Note that the Liouville theory is equivalent to the $A_{1}$ Toda theory. Given that these 4d theories are engineered from theories on M5-branes, one would like to understand the above correspondence in terms of string/M-theory. A step in this direction was made in [7, 8]. 
Hinted at by the results of [5] and [9], an interesting observation was made in [8]: the anomaly eight-form of the 6d $\mathcal{N}=(0,2)$ theory of type $A_{N-1}$ and the central charge of the Toda theory of the same type have similar structures: $$\displaystyle I_{8}[A_{N-1}]$$ $$\displaystyle=(N-1)I_{8}(1)+N(N^{2}-1)\frac{p_{2}(NW)}{24}\;,$$ (1.1) $$\displaystyle c_{\text{Toda}}[A_{N-1}]$$ $$\displaystyle=(N-1)+N(N^{2}-1)Q^{2}\;.$$ (1.2) In this short note, we show that (1.2) with the correct value for $Q$, namely $Q^{2}=(\epsilon_{1}+\epsilon_{2})^{2}/(\epsilon_{1}\epsilon_{2})$, arises from (1.1) if we consider the compactification of the 6d $(0,2)$ theory on $\mathbb{R}^{4}$ with equivariant parameters $\epsilon_{1,2}$. Furthermore, we will see that this relation works for arbitrary theories of type $A$, $D$ and $E$. 2 Computation The anomaly eight-form of one M5-brane [10] is $$I_{8}(1)=\frac{1}{48}\left[p_{2}(NW)-p_{2}(TW)+\frac{1}{4}\bigl{(}p_{1}(TW)-p_{1}(NW)\bigr{)}^{2}\right]\;,$$ (2.1) where $NW$ and $TW$ stand for the normal and the tangent bundles of the worldvolume $W$, respectively, and $p_{k}$ denotes the $k$-th Pontryagin class. Using this, the anomaly of the $\mathcal{N}=(0,2)$ theory of type $G$ ($G=A_{n},D_{n},E_{n}$) can be written as [11, 12, 13]222For the $E$-type $\mathcal{N}=(0,2)$ theory, this formula is only conjectural and there has been no independent check, to our knowledge. We assume the correctness of the formula. $$I_{8}[G]=r_{G}I_{8}(1)+d_{G}h_{G}\frac{p_{2}(NW)}{24}.$$ (2.2) Here $r_{G}$, $d_{G}$ and $h_{G}$ are the rank, the dimension, and the Coxeter number of the Lie algebra of type $G$, respectively. They are tabulated in Table 1. Now, we wrap the $(0,2)$-theory of type $G$ on a four-manifold $X_{4}$. The 11d theory lives on $$\Sigma\times X_{4}\times\mathbb{R}^{5},$$ (2.3) where $\Sigma$ is the worldsheet of the resulting 2d theory. We take $X_{4}$ to be Euclidean and $\Sigma$ to be Lorentzian.
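The group-theory data entering (1.1) and (2.2) can be checked mechanically. The sketch below verifies that the standard rank, dimension and Coxeter number of $A_{N-1}=\mathfrak{su}(N)$ reproduce the coefficients $N-1$ and $N(N^{2}-1)$, and tabulates $d_{G}h_{G}$ for a few $D$- and $E$-type algebras; the numerical values are the textbook Lie-algebra data assumed here, standing in for Table 1.

```python
import sympy as sp

N = sp.symbols('N', positive=True)

# standard data for A_{N-1} = su(N)
r_G = N - 1        # rank
d_G = N**2 - 1     # dimension
h_G = N            # Coxeter number

# the coefficient of p_2(NW)/24 in (1.1) and of Q^2 in (1.2) is d_G * h_G
assert sp.expand(d_G * h_G - N * (N**2 - 1)) == 0

# (rank, dimension, Coxeter number) for some D- and E-type algebras
ade = {'D4': (4, 28, 6), 'D5': (5, 45, 8),
       'E6': (6, 78, 12), 'E7': (7, 133, 18), 'E8': (8, 248, 30)}
for name, (r, d, h) in ade.items():
    print(name, 'r_G =', r, ' d_G h_G =', d * h)
```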
The supercharges decompose as: $$\mathbf{4}_{+}\times\mathbf{4}\;\to\;\Big{(}\frac{1}{2},2,1,2,\frac{1}{2}\Big{)}+\Big{(}\frac{1}{2},2,1,2,-\frac{1}{2}\Big{)}+\Big{(}-\frac{1}{2},1,2,2,\frac{1}{2}\Big{)}+\Big{(}-\frac{1}{2},1,2,2,-\frac{1}{2}\Big{)}\;,$$ (2.4) where we listed the representation contents under the decomposition $$\mathrm{SO}(5,1)\times\mathrm{SO}(5)\;\to\;\mathrm{SO}(1,1)\times\mathrm{SU}(2)_{l}\times\mathrm{SU}(2)_{r}\times\mathrm{SO}(3)\times\mathrm{SO}(2)\;.$$ (2.5) Here we have decomposed $\mathrm{SO}(4)\simeq\mathrm{SU}(2)_{l}\times\mathrm{SU}(2)_{r}$ and $\mathrm{SO}(5)\supset\mathrm{SO}(3)\times\mathrm{SO}(2)$. The symplectic Majorana condition acts on each factor separately. Let us twist $\mathbb{R}^{5}$ over $X_{4}$ so that a fraction of the supersymmetry remains. We embed the spin connection of the $\mathrm{SU}(2)_{r}$ factor into the $\mathrm{SO}(3)$ factor, that is $$\mathrm{SU}(2)_{r}\;\to\;\text{diagonal part of }\big{[}\mathrm{SU}(2)_{r}\times\mathrm{SO}(3)\big{]}\;.$$ (2.6) Note that the $\mathrm{SO}(3)$ factor is the standard $\mathrm{SU}(2)_{R}$ symmetry of the four-dimensional theory if we think of the setup as the compactification of the six-dimensional theory on $\Sigma$, giving an $\mathcal{N}=2$ theory on $X_{4}$. Therefore this twist is the one used by [14]. After the twist, we get the symmetry group $\mathrm{SO}(1,1)\times\mathrm{SU}(2)_{l}\times\mathrm{SU}(2)_{r}\times\mathrm{SO}(2)$ and supercharges $$\Big{(}\frac{1}{2},2,2,\frac{1}{2}\Big{)}+\Big{(}\frac{1}{2},2,2,-\frac{1}{2}\Big{)}+\Big{(}-\frac{1}{2},1,1+3,\frac{1}{2}\Big{)}+\Big{(}-\frac{1}{2},1,1+3,-\frac{1}{2}\Big{)}\;.$$ (2.7) The preserved supercharges (scalars under $\mathrm{SU}(2)_{l}\times\mathrm{SU}(2)_{r}$) form a two-dimensional $\mathcal{N}=(0,2)$ superalgebra, with $\mathrm{U}(1)$ R-symmetry.333This twist is different from the one obtained by wrapping M5-branes on a holomorphic 4-cycle in a Calabi-Yau threefold [15].
Let us exploit this 2d $\mathcal{N}=(0,2)$ superalgebra. We take the right-movers to be the supersymmetric side. It is known that the anomaly polynomial and the central charges are related via $$I_{4}=\frac{c_{R}}{6}c_{1}(F)^{2}+\frac{c_{L}-c_{R}}{24}p_{1}(T\Sigma),$$ (2.8) where $F$ is the external $\mathrm{U}(1)$ bundle which couples to the $\mathrm{U}(1)_{R}$ symmetry. Let us check this formula against free multiplets. The anomaly polynomial of a right-moving complex Weyl fermion with charge $q$ is $$I_{4}=\mathop{\mathrm{ch}}(qF)\hat{A}(T\Sigma)\Bigm{|}_{4}=\frac{q^{2}}{2}c_{1}(F)^{2}-\frac{p_{1}(T\Sigma)}{24}\;.$$ (2.9) The right-moving chiral multiplet has one complex boson, whose anomaly is the same as that of two neutral Weyl fermions and one Weyl fermion with charge $1$. In total, $I_{4}=c_{1}(F)^{2}/2-p_{1}(T\Sigma)/8$ with $(c_{L},c_{R})=(0,3)$. On the other hand, the left-moving free real boson has $I_{4}=p_{1}(T\Sigma)/24$ with $(c_{L},c_{R})=(1,0)$. Both cases agree with (2.8). Now let us determine $I_{4}$ of the compactified theory by integrating $I_{8}$ over $X_{4}$. Let us assign the Chern roots as follows: $\pm t$ for the tangent bundle of $\Sigma$; $\pm\lambda_{1}$, $\pm\lambda_{2}$ for the tangent bundle of $X_{4}$; and $\pm n_{1}$, $\pm n_{2}$, 0 for the normal bundle. We include the $\mathrm{U}(1)$ R-symmetry through $$n_{1}\;\to\;2c_{1}(F)\;,$$ (2.10) and the twisting (2.6) introduces $$n_{2}\;\to\;\lambda_{1}+\lambda_{2}\;.$$ (2.11) Note that the doublet of $\mathrm{SU}(2)_{r}$ has the Chern roots $\pm(\lambda_{1}+\lambda_{2})/2$. $(n_{2},0,-n_{2})$ should then be the Chern roots of the triplet, resulting in (2.11). Then we evaluate the anomaly polynomial. Notice that $\lambda_{1}$ and $\lambda_{2}$ will be integrated over $X_{4}$. Since the 2d spacetime effectively behaves as four dimensional inside the anomaly polynomial, forms whose degree along $T\Sigma$ is higher than four automatically vanish.
We get: $$\displaystyle I_{4}=\Big{[}\frac{r_{G}+2d_{G}h_{G}}{12}\int\big{(}\lambda_{1}^{2}+\lambda_{2}^{2}\big{)}+\frac{(3r_{G}+4d_{G}h_{G})}{12}\int\lambda_{1}\lambda_{2}\Big{]}c_{1}(F)^{2}\\ \displaystyle-\Big{[}\frac{r_{G}}{48}\int\big{(}\lambda_{1}^{2}+\lambda_{2}^{2}\big{)}+\frac{r_{G}}{48}\int\lambda_{1}\lambda_{2}\Big{]}p_{1}(T\Sigma)\;.$$ (2.12) Translating to $c_{L,R}$ using (2.8), we find $$\displaystyle c_{R}$$ $$\displaystyle=\frac{1}{2}\big{(}P_{1}(X_{4})+3\chi(X_{4})\big{)}r_{G}+\big{(}P_{1}(X_{4})+2\chi(X_{4})\big{)}d_{G}h_{G}\;,$$ (2.13) $$\displaystyle c_{L}$$ $$\displaystyle=\chi(X_{4})r_{G}+\big{(}P_{1}(X_{4})+2\chi(X_{4})\big{)}d_{G}h_{G}\;.$$ Here, $\chi(X_{4})=\int_{X_{4}}e(X_{4})$ is the Euler number of $X_{4}$, and $P_{1}(X_{4})=\int_{X_{4}}p_{1}(X_{4})$ is the integrated first Pontryagin class, which is three times the signature of $X_{4}$. For example, let us wrap one M5-brane on $X_{4}=K3$, in which case there is effectively no twisting. We start from $I_{8}(1)$ instead of $I_{8}[G]$, which effectively means using $r_{G}=1$ and $d_{G}h_{G}=0$ in (2.13). Using $P_{1}(K3)=-48$ and $\chi(K3)=24$, we obtain $$c_{L}=24,\qquad c_{R}=12$$ (2.14) which are the values for the heterotic string, as they should be. The case we are most interested in is $X_{4}=\mathbb{R}^{4}$, considering the characteristic classes in the equivariant sense444Equivariant cohomology is a cohomology theory which also captures the action of a group on a space. For simplicity we only consider the abelian case $\mathrm{U}(1)^{n}$. Consider the space of differential forms on $M$ valued in polynomials of the formal parameters $\epsilon_{a}$ ($a=1,\ldots,n$), and consider the deformed differential $D_{\epsilon}=d+\epsilon_{a}\iota_{k^{a}}$. Here $\iota$ is the interior product and $k^{a}$ is the Killing vector of the $a$-th $\mathrm{U}(1)$. Then $D_{\epsilon}{}^{2}=\epsilon_{a}\pounds_{k_{a}}$, where $\pounds_{k_{a}}$ is the Lie derivative by $k_{a}$.
We define the equivariant cohomology $H_{\mathrm{U}(1)^{n}}(M)$ to be the cohomology of $D_{\epsilon}$ on the space of differential forms invariant under $\mathrm{U}(1)^{n}$. Note that the formal parameters $\epsilon_{a}$ have degree $2$. Equivariant characteristic classes are elements of the equivariant cohomology. For example, consider $\mathbb{C}$ acted on by $\mathrm{U}(1)$ which rotates the phase, and let the equivariant parameter be $\epsilon$. The Chern class $c_{1}(T\mathbb{C})$ in the standard sense is of course trivial, but the equivariant Chern class is given by $c_{1}(T\mathbb{C})=\epsilon$. For more details, see e.g. [16]. We take the action of $\mathrm{U}(1)^{2}$ to rotate two orthogonal two-planes in $\mathbb{R}^{4}$, and call the equivariant parameters $\epsilon_{1,2}$ respectively. The Chern classes of the two two-planes are $\epsilon_{1,2}$. Thus we have $p_{1}(T\mathbb{R}^{4})=\epsilon_{1}^{2}+\epsilon_{2}^{2}$ and $e(T\mathbb{R}^{4})=\epsilon_{1}\epsilon_{2}$. We then use the localization formula, in the case where the fixed points are isolated: $$\int_{M}\alpha=\sum_{p}\frac{\alpha|_{p}}{e(N_{p})}\;.$$ The summation is over the fixed points $p$, and $e(N_{p})$ is the equivariant Euler class of the normal bundle of $p$ inside $M$. In our case the only fixed point is the origin.
Therefore we have $$P_{1}(\mathbb{R}^{4})=\frac{\epsilon_{1}^{2}+\epsilon_{2}^{2}}{\epsilon_{1}\epsilon_{2}},\qquad\chi(\mathbb{R}^{4})=1.$$ (2.15) Applying (2.13), we find $$\displaystyle c_{R}$$ $$\displaystyle=\frac{\epsilon_{1}^{2}+3\epsilon_{1}\epsilon_{2}+\epsilon_{2}^{2}}{2\epsilon_{1}\epsilon_{2}}r_{G}+\frac{(\epsilon_{1}+\epsilon_{2})^{2}}{\epsilon_{1}\epsilon_{2}}d_{G}h_{G}\;,$$ (2.16) $$\displaystyle c_{L}$$ $$\displaystyle=r_{G}+\frac{(\epsilon_{1}+\epsilon_{2})^{2}}{\epsilon_{1}\epsilon_{2}}d_{G}h_{G}\;.$$ Upon the identification $\epsilon_{1}/\epsilon_{2}=b^{2}$ advocated in [2], $c_{L}$ perfectly agrees with the central charge of the conformal Toda theory of type $G$ [17]: $$c_{\text{Toda}}[G]=r_{G}+\left(b+\frac{1}{b}\right)^{2}d_{G}h_{G}\;.$$ (2.17) 3 Discussion A couple of comments are in order. First, recall that in the construction of [1] the $\mathcal{N}=2$ theories are obtained by wrapping M5-branes on $\mathbb{R}^{4}\times\Sigma$, with a suitable twist on $\Sigma$ which preserves one half of the supersymmetry. So far, we have not taken this twist into account. When we perform it, the right-moving sector, which was the supersymmetric part, becomes topological and so $c_{R}\to 0$, while $c_{L}$ is untouched and agrees with the central charge of the Liouville/Toda theories. This is consistent with the fact that Nekrasov’s partition function computes the chiral half of the Liouville/Toda correlation functions. Second, notice that Nekrasov’s partition function was computed after introducing an equivariant deformation of $\mathbb{R}^{4}$ by a $\mathrm{U}(1)^{2}$ action with parameters $\epsilon_{1,2}$.
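The chain of substitutions above, from (2.13) through (2.15) to the Toda central charge (2.17), can be verified symbolically; a minimal sketch, with `dh` standing for the product $d_{G}h_{G}$:

```python
import sympy as sp

e1, e2, b, rG, dh = sp.symbols('epsilon_1 epsilon_2 b r_G dh', positive=True)

def central_charges(P1, chi):
    """Evaluate eq. (2.13) for given integrated P_1 and Euler number chi."""
    cR = sp.Rational(1, 2) * (P1 + 3 * chi) * rG + (P1 + 2 * chi) * dh
    cL = chi * rG + (P1 + 2 * chi) * dh
    return cL, cR

# equivariant R^4, eq. (2.15): P_1 = (e1^2 + e2^2)/(e1 e2), chi = 1
cL, cR = central_charges((e1**2 + e2**2) / (e1 * e2), 1)

# with epsilon_1/epsilon_2 = b^2, c_L reproduces the Toda central charge (2.17)
toda = rG + (b + 1 / b)**2 * dh
assert sp.simplify(cL.subs(e1, b**2 * e2) - toda) == 0

# sanity check: one M5-brane on K3 (r_G = 1, dh = 0) gives (c_L, c_R) = (24, 12)
cL_K3, cR_K3 = central_charges(-48, 24)
assert cL_K3.subs({rG: 1, dh: 0}) == 24
assert cR_K3.subs({rG: 1, dh: 0}) == 12
```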
More precisely, the symmetry of the 4d theory is $$\mathrm{SO}(4)\times\mathrm{SU}(2)_{R}\simeq\mathrm{SU}(2)_{l}\times\mathrm{SU}(2)_{r}\times\mathrm{SU}(2)_{R}.$$ (3.1) The topological theory has a modified Lorentz group $$\mathrm{SO}(4)^{\prime}\simeq\mathrm{SU}(2)_{l}\times\mathrm{SU}(2)_{r^{\prime}}\;,$$ (3.2) where $\mathrm{SU}(2)_{r^{\prime}}$ is the diagonal subgroup of $\mathrm{SU}(2)_{r}\times\mathrm{SU}(2)_{R}$. The $\mathrm{U}(1)^{2}$ used in the equivariant deformation is the Cartan subgroup of this modified $\mathrm{SO}(4)^{\prime}$. This motivated our choice in (2.6). In view of this, it is also reasonable to evaluate the anomaly polynomial in the same equivariant sense.555Note that Nekrasov’s partition function itself can be computed as an equivariant integral over the instanton moduli space. It would be nice to have a better understanding of this point. Acknowledgments It is a pleasure to thank G. Bonelli, J. Maldacena, N. Seiberg, A. Tanzini, H. Verlinde, B. Wecht and E. Witten for helpful discussions. LFA is supported in part by the DOE grant DE-FG02-90ER40542. FB is supported by the DOE grant DE-FG02-91ER40671. YT is supported in part by the NSF grant PHY-0503584, and by the Marvin L. Goldberger membership at the Institute for Advanced Study. Appendix A Central charges of Sicilian gauge theories of type $A$, $D$, $E$ In [9] the central charges $a$ and $c$ of the 4d superconformal Sicilian theories of $A$ type (obtained by wrapping M5-branes on a genus-$g$ Riemann surface), both in the $\mathcal{N}=2$ and $\mathcal{N}=1$ cases, were computed from the 6d anomaly polynomial. We observe that from (2.2) the computation can be performed for the whole ADE series. Let us start with the $\mathcal{N}=2$ case. Using the same Chern roots as in section 2, the line bundle of the $\mathcal{N}=1$ R-symmetry is incorporated by: $n_{1}\to n_{1}+\frac{2}{3}c_{1}(F)$, $n_{2}\to n_{2}+\frac{4}{3}c_{1}(F)$. $\mathcal{N}=2$ SUSY requires $n_{1}+t=0$, $n_{2}=0$.
The integral over the Riemann surface is $\int_{\Sigma}t=2-2g$. The 4d ’t Hooft anomalies of $U(1)_{R}$ are read from the formula: $$I_{6}=\frac{\operatorname{tr}R^{3}}{6}\,c_{1}(F)^{3}-\frac{\operatorname{tr}R}{24}\,c_{1}(F)p_{1}(T_{4})\;.$$ (A.1) Comparing this with the integral of $I_{8}$, we get: $$\operatorname{tr}R^{3}=\frac{2}{27}(g-1)(13r_{G}+16d_{G}h_{G})\qquad\qquad\operatorname{tr}R=\frac{2}{3}(g-1)r_{G}\;.$$ (A.2) Using the standard relations between $a$, $c$ and $\operatorname{tr}R$, $\operatorname{tr}R^{3}$, we get: $$a=(g-1)\frac{5r_{G}+8d_{G}h_{G}}{24}\qquad\qquad\qquad c=(g-1)\frac{r_{G}+2d_{G}h_{G}}{6}\;.$$ (A.3) This agrees with [18] for the $A$ series, and with [19] for the $D$ series. Similar formulas can be obtained in the $\mathcal{N}=1$ case. The R-symmetry bundle is given by $n_{1}\to n_{1}+c_{1}(F)$ and $n_{2}\to n_{2}+c_{1}(F)$, while $\mathcal{N}=1$ SUSY requires $n_{1}+n_{2}+t=0$. We get: $$a=(g-1)\,\frac{6r_{G}+9d_{G}h_{G}}{32}\qquad\qquad c=(g-1)\,\frac{4r_{G}+9d_{G}h_{G}}{32}\;.$$ (A.4) References [1] D. Gaiotto, “${\mathcal{N}}\!=2$ Dualities,” arXiv:0904.2715 [hep-th]. [2] L. F. Alday, D. Gaiotto, and Y. Tachikawa, “Liouville Correlation Functions from Four-Dimensional Gauge Theories,” arXiv:0906.3219 [hep-th]. [3] N. A. Nekrasov, “Seiberg-Witten Prepotential from Instanton Counting,” Adv. Theor. Math. Phys. 7 (2004) 831–864, arXiv:hep-th/0206161. [4] V. Pestun, “Localization of Gauge Theory on a Four-Sphere and Supersymmetric Wilson Loops,” arXiv:0712.2824 [hep-th]. [5] N. Wyllard, “$A_{N-1}$ Conformal Toda Field Theory Correlation Functions from Conformal ${\mathcal{N}}\!=2$ $SU(N)$ Quiver Gauge Theories,” arXiv:0907.2189 [hep-th]. [6] A. Mironov and A. Morozov, “On AGT Relation in the Case of U(3),” arXiv:0908.2569 [hep-th]. [7] R. Dijkgraaf and C. Vafa, “Toda Theories, Matrix Models, Topological Strings, and ${\mathcal{N}}\!=2$ Gauge Systems,” arXiv:0909.2453 [hep-th]. [8] G. Bonelli and A.
Tanzini, “Hitchin Systems, ${\mathcal{N}}\!=2$ Gauge Theories and W-Gravity,” arXiv:0909.4031 [hep-th]. [9] F. Benini, Y. Tachikawa, and B. Wecht, “Sicilian Gauge Theories and ${\mathcal{N}}\!=1$ Dualities,” arXiv:0909.1327 [hep-th]. [10] E. Witten, “Five-Brane Effective Action in M-Theory,” J. Geom. Phys. 22 (1997) 103–133, arXiv:hep-th/9610234. [11] J. A. Harvey, R. Minasian, and G. W. Moore, “Non-Abelian Tensor-Multiplet Anomalies,” JHEP 09 (1998) 004, arXiv:hep-th/9808060. [12] K. A. Intriligator, “Anomaly Matching and a Hopf-Wess-Zumino Term in 6D, $N$ = (2,0) Field Theories,” Nucl. Phys. B581 (2000) 257–273, arXiv:hep-th/0001205. [13] P. Yi, “Anomaly of (2,0) Theories,” Phys. Rev. D64 (2001) 106006, arXiv:hep-th/0106165. [14] E. Witten, “Topological Quantum Field Theory,” Commun. Math. Phys. 117 (1988) 353. [15] J. M. Maldacena, A. Strominger, and E. Witten, “Black Hole Entropy in M-Theory,” JHEP 12 (1997) 002, arXiv:hep-th/9711053. [16] M. Libine, “Lecture notes on equivariant cohomology,” arXiv:0709.3615 [math]. [17] T. J. Hollowood and P. Mansfield, “Quantum Group Structure of Quantum Toda Conformal Field Theories. 1,” Nucl. Phys. B330 (1990) 720. [18] D. Gaiotto and J. Maldacena, “The Gravity Duals of ${\mathcal{N}}\!=2$ Superconformal Field Theories,” arXiv:0904.4466 [hep-th]. [19] Y. Tachikawa, “Six-Dimensional $D_{N}$ Theory and Four-Dimensional SO-USp Quivers,” arXiv:0905.4074 [hep-th].
Effective Forces in Thermal Amorphous Solids with Generic Interactions Giorgio Parisi Dipartimento di Fisica, Sapienza Università di Roma, INFN, Sezione di Roma I, IPFC – CNR, Piazzale Aldo Moro 2, I-00185 Roma, Italy    Itamar Procaccia Department of Chemical Physics, the Weizmann Institute of Science, Rehovot 76100, Israel    Carmel Shor Department of Chemical Physics, the Weizmann Institute of Science, Rehovot 76100, Israel    Jacques Zylberg Department of Chemical Physics, the Weizmann Institute of Science, Rehovot 76100, Israel Abstract In thermal glasses at temperatures sufficiently far below the glass transition, the constituent particles are trapped in their cages for long enough that their time-averaged positions can be determined before diffusion and structural relaxation take place. The effective forces are those that hold these average positions in place. In numerical simulations the effective forces ${\bm{F}}_{ij}$ between any pair of particles can be measured as a time average of the bare forces ${\bm{f}}_{ij}({\bm{r}}_{ij}(t))$. In general, even if the bare forces come from two-body interactions, the thermal dynamics dress the effective forces so that they contain many-body interactions. Here we develop the effective theory for systems with generic interactions, in which the effective forces are derivable from an effective potential and in turn give rise to an effective Hessian whose eigenvalues are all positive when the system is stable. In this Letter we offer analytic expressions for the effective theory, and demonstrate the usefulness and the predictive power of the approach. Introduction: In the last decade or two there has been great progress in understanding athermal amorphous solids at temperature $T=0$ Malandro and Lacks (1998); Maloney and Lemaître (2004); Tanguy et al. (2006); Lerner and Procaccia (2009); Karmakar et al. (2010a); Hentschel et al. (2011).
This progress was facilitated by the fact that particles’ positions $\{{\bm{r}}_{i}\}_{i=1}^{N}$ are frozen at $T=0$, so that knowledge of the microscopic (bare) forces is sufficient to develop a theory of the response of the materials to external mechanical or magnetic strains. The Hessian matrix, whose eigenvalues are semi-positive at $T=0$, supplies important information, leading to an athermal theory that provides a good understanding of the density of states, of plastic events, and of the failure mechanisms of amorphous solids. Considerable progress was also achieved in understanding magnetic amorphous solids and cross effects between mechanical and magnetic responses Dasgupta et al. (2013); Dubey et al. (2015); Hentschel et al. (2016). These techniques fail, however, at finite temperature, since the particle positions $\{{\bm{r}}_{i}(t)\}_{i=1}^{N}$ fluctuate in time and inter-particle forces become dressed by dynamical effects. The Hessian matrix of a configuration at any given time $t$ contains negative eigenvalues and cannot be used to study stability and instabilities. The effective forces in thermal systems are determined by the momentum transferred when particles interact. Even when the bare forces are binary, the effective forces contain ternary, quaternary and higher-order terms Gendelman et al. (2016); Parisi et al. (2018). In order to lift the methods that were so useful at $T=0$ to finite temperatures, one needs a new idea: in thermal systems the particle positions are indeed not stationary, but in glasses with large relaxation times one can determine the time-averaged positions before the onset of diffusion and long before the glass relaxes to thermodynamic equilibrium Mezard and Parisi (2009). The time-averaged positions are trivially stationary in time, and we refer to such states as “thermal mechanical equilibria” Dubey et al. (2016). In such states one can determine the renormalized force-laws that hold these average positions stable.
These renormalized force-laws will define an effective Hamiltonian and an effective Hessian matrix, offering an entirely new way to explore the stability, the responses to external strains and stresses, and the failure mechanisms of glassy materials at finite temperatures. General Theory: To develop the general effective theory, consider a generic glass former composed of $N$ particles in a volume $V$. The system is endowed with a bare potential $U(\{{\bm{r}}_{i}(t)\})$. It is customary to assume that the potential is a sum of binary interaction terms $\phi\left({\bm{r}}_{ij}(t)\right)$, $$U(\{{\bm{r}}_{i}(t)\})=\sum_{<ij>}\phi\left({\bm{r}}_{ij}(t)\right)\ ,\quad{\bm{r}}_{ij}\equiv{\bm{r}}_{j}-{\bm{r}}_{i}\ ,$$ (1) where the symbol $<ij>$ denotes summation over interacting pairs only. In this paper we consider bare potentials whose range exceeds the average inter-particle distance. Lennard-Jones interactions are an example, but hard spheres or even soft spheres are excluded, for reasons that will become clear soon. Such longer-range interactions are referred to as “generic”. The total bare force on a single particle and the bare inter-particle forces are $$f^{\alpha}_{i}(\{{\bm{r}}_{k}(t)\})\equiv-\frac{\partial U(\{{\bm{r}}_{k}(t)\})}{\partial r_{i}^{\alpha}}\ ,\quad f_{ij}^{\alpha}\left({\bm{r}}_{ij}(t)\right)=-\frac{\partial\phi\left({\bm{r}}_{ij}\right)}{\partial r_{ij}^{\alpha}}\ .$$ (2) Note that Greek superscripts will be reserved below for Cartesian components of vectors and matrices. Lastly, one can also define the bare Hessian matrix as $$H_{ij}^{\alpha\beta}\equiv\frac{\partial f^{\alpha}_{i}}{\partial r_{j}^{\beta}}\equiv-\frac{\partial^{2}U}{\partial r_{i}^{\alpha}\partial r_{j}^{\beta}}\ .$$ (3) As said above, this Hessian matrix differs from its athermal counterpart in having negative eigenvalues. The effective theory will cure this disadvantage.
When the amorphous solid under study allows the calculation of the average positions of the particles within an interval of time $[0,\tau]$ such that each particle is only fluctuating within its own cage, we define the following time-stationary averages: The mean position of the $i$th particle ${\bm{R}}_{i}$ is defined as $${\bm{R}}_{i}\equiv\frac{1}{\tau}\int_{0}^{\tau}dt~{}{\bm{r}}_{i}(t)\ ,\quad{\bm{R}}_{ij}\equiv{\bm{R}}_{i}-{\bm{R}}_{j}\ .$$ (4) The mean force on the particle $i$ is defined as $${\bm{F}}_{i}\equiv\frac{1}{\tau}\int_{0}^{\tau}dt~{}{\bm{f}}_{i}(\{{\bm{r}}_{k}(t)\}_{k=1}^{N})\ .$$ (5) The mean force between particles $i$ and $j$ is defined as $${\bm{F}}_{ij}\equiv\frac{1}{\tau}\int_{0}^{\tau}dt~{}{\bm{f}}_{ij}\left({\bm{r}}_{ij}(t)\right)\ .$$ (6) We note that $${\bm{F}}_{i}=\sum_{j}{\bm{F}}_{ij}=0\quad\text{in thermal mechanical equilibrium.}$$ (7) More importantly and less trivially, we stress that although ${\bm{f}}_{ij}$ is only a function of ${\bm{r}}_{ij}$, the effective force ${\bm{F}}_{ij}$ is not binary, and in principle it can contain many-body interactions. Theory for generic potentials. For generic potentials we will write $${\bm{r}}_{ij}(t)={\bm{R}}_{ij}+{\bm{u}}_{ij}(t)\ ,$$ (8) and expand the quantities of interest to any desired order in $u_{ij}$. We will examine the efficacy of this approach with Lennard-Jones glasses below. Thus, for example, we will use Eq. (8) in Eq. (6). Denoting objects expanded to a desired order in $u_{ij}$ with a hat, we find $$\hat{F}^{\eta}_{ij}=f_{ij}^{\eta}({\bm{R}}_{ij})+\frac{1}{2}\frac{\partial^{2}f_{ij}^{\eta}}{\partial r_{ij}^{\alpha}\partial r_{ij}^{\beta}}\Big{|}_{{\bm{R}}_{ij}}\langle u_{ij}^{\alpha}(t)u_{ij}^{\beta}(t)\rangle+\cdots\ ,$$ (9) where repeated indices are summed upon, angular brackets denote a time average and “$\cdots$” represents terms of higher order in $u_{ij}$, if such terms are deemed necessary. We note that the linear term in $u_{ij}$ vanishes upon time averaging.
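The quality of truncating (9) at second order is easy to illustrate in a one-dimensional toy setting: average the bare Lennard-Jones force over Gaussian cage fluctuations and compare with the expansion. The force law, mean separation and fluctuation amplitude below are illustrative choices, not the parameters of the simulations reported later.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(r):
    """bare 1d Lennard-Jones force, f = -phi', with phi = 4(r^-12 - r^-6)"""
    return 48 / r**13 - 24 / r**7

def d2f(r):
    """second derivative of the bare force"""
    return 48 * 13 * 14 / r**15 - 24 * 7 * 8 / r**9

R, sig = 1.2, 0.02                  # mean separation, cage-fluctuation amplitude
u = rng.normal(0.0, sig, 2_000_000) # Gaussian cage fluctuations

exact = f(R + u).mean()                  # brute-force time average, as in eq. (6)
approx = f(R) + 0.5 * d2f(R) * sig**2    # eq. (9) truncated at second order

assert abs(exact - f(R)) > 0.05     # the thermal dressing is not negligible...
assert abs(exact - approx) < 0.01   # ...and the second-order formula captures it
```

The same comparison in the full 2d model, with $\langle u^{\alpha}u^{\beta}\rangle$ measured from the trajectories, is what Fig. 1 reports.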
The derivatives are evaluated at the mean vector distance. It becomes clear now why this approach is not applicable to hard or soft spheres: typically the separations $R_{ij}$ exceed the range of interaction, and the derivatives employed in Eq. (9) do not exist. Similarly, applying the same approximation to the definition of $\phi_{ij}$, we find that $$\hat{\Phi}_{ij}=\phi(R_{ij})+\frac{1}{2}\frac{\partial^{2}\phi}{\partial r_{ij}^{\alpha}\partial r_{ij}^{\beta}}\Big{|}_{{\bm{R}}_{ij}}\langle u_{ij}^{\alpha}(t)u_{ij}^{\beta}(t)\rangle+\cdots\ .$$ (10) We note that the force between particles $i$ and $j$ in the present approximation is not a function of $R_{ij}$ only, since the cage fluctuations $\langle u_{ij}^{\alpha}u_{ij}^{\beta}\rangle$ depend on all the particles. This is where the many-body interactions implicitly enter the present approximation. Finally, we note that the effective inter-particle force is derivable from the effective potential, $$\hat{F}^{\eta}_{ij}=-\frac{\partial\hat{\Phi}_{ij}}{\partial r_{ij}^{\eta}}\Big{|}_{{\bm{R}}_{ij}}\ ,$$ (11) if we adopt the convention that the cage fluctuations are taken as input numbers in Eq. (10) and are not derived. Similarly, with the same rule, the effective Hessian is obtained by a second derivative of Eq. (10) or as a first derivative of Eq. (11). Below we ascertain that the effective forces computed this way sum up to zero, $\sum_{j}\hat{{\bm{F}}}_{ij}=0$, and that the effective Hessian has no negative eigenvalues, as required. Testing in Lennard-Jones models: Before proceeding we should test the quality of the truncated expansion in standard models of glass formers. For the numerical experiments we employ a generic glass former in two dimensions in the form of the Kob-Andersen binary mixture Kob and Andersen (1995); Brüning et al. (2009) of Lennard-Jones bare interactions cut off at $r_{\rm co}$ with four smooth derivatives.
The potentials have the analytic form $$\phi\!\left(\frac{r_{ij}}{\lambda_{ij}}\right)=\begin{cases}\varepsilon\left[\left(\frac{\lambda_{ij}}{r_{ij}}\right)^{12}-\left(\frac{\lambda_{ij}}{r_{ij}}\right)^{6}+\displaystyle{\sum_{\ell=0}^{6}}c_{\ell}\left(\frac{r_{ij}}{\lambda_{ij}}\right)^{\ell}\right],&\frac{r_{ij}}{\lambda_{ij}}<\frac{r_{\rm co}}{\lambda}\\ 0,&\frac{r_{ij}}{\lambda_{ij}}\geq\frac{r_{\rm co}}{\lambda}\end{cases}$$ (12) The unit of length $\lambda$ is set to be the interaction length scale of two small particles. We solve for the coefficients $c_{\ell}$ such that the potential has its minimum $\phi=-\varepsilon$ at $r_{\rm min}/\lambda_{ij}=2^{1/6}$ and vanishes with four continuous derivatives at $r_{\rm co}/\lambda_{ij}=2.5$. In these units the Boltzmann constant is $k_{B}=1$. The simulations are performed in an NVT ensemble with density $\rho=1.162$ using a modified Berendsen thermostat which couples a constant number of particles to the bath, regardless of the system size Karmakar et al. (2010b). In this method the temperature is defined by the average kinetic energy per particle, and a constant temperature is achieved by velocity re-scaling. Using samples with $N=500$ particles we first equilibrate a liquid at $T=0.5$ (in the usual LJ units) and then cool the system slowly at $\dot{T}=10^{-6}$ down to $T=10^{-6}$. Lastly, the samples are heated up at the same rate to the desired temperatures $T$, at which the averaged positions and moments of displacement are calculated. Using this model we determined the effective inter-particle forces ${\bm{F}}_{ij}$ using the time average Eq. (6) and the approximation Eq. (9). In Fig. 1 we compare one against the other for four different temperatures, $T=0.01$, $T=0.05$, $T=0.1$, and $T=0.15$ in Lennard-Jones units.
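The seven coefficients $c_{\ell}$ are fixed by seven linear conditions: the value $-\varepsilon$ and zero slope at $r_{\rm min}/\lambda_{ij}=2^{1/6}$, plus vanishing of the potential and its first four derivatives at $r_{\rm co}/\lambda_{ij}=2.5$. A minimal sketch of that linear solve, in units $\varepsilon=1$ and with our own small Gaussian-elimination helper (an illustration, not the authors' code):

```python
def dpow(n, k, x):
    """k-th derivative of x**n, evaluated at x > 0."""
    c = 1.0
    for i in range(k):
        c *= (n - i)
    return c * x ** (n - k)

def solve(A, b):
    """Gaussian elimination with partial pivoting; A and b are modified in place."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

x_min, x_co = 2.0 ** (1.0 / 6.0), 2.5      # distances in units of lambda_ij
bare = lambda k, x: dpow(-12, k, x) - dpow(-6, k, x)   # bare LJ part and derivatives

# Five smoothness conditions at the cutoff, then depth and slope at the minimum:
A = [[dpow(l, k, x_co) for l in range(7)] for k in range(5)]
rhs = [-bare(k, x_co) for k in range(5)]
A.append([x_min ** l for l in range(7)]);        rhs.append(-1.0 - bare(0, x_min))
A.append([dpow(l, 1, x_min) for l in range(7)]); rhs.append(-bare(1, x_min))
c = solve(A, rhs)

def phi(x):
    """Smoothed LJ potential of Eq. (12) with eps = 1."""
    return bare(0, x) + sum(c[l] * x ** l for l in range(7)) if x < x_co else 0.0
```

The resulting $\phi$ reaches $-1$ at $x_{\rm min}$ with zero slope and joins zero smoothly at the cutoff.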
Computing the Pearson correlation coefficients for these data we find the values $R^{2}=0.999$, $0.992$, $0.979$, and $0.965$, respectively. Obviously the error grows with the temperature, indicating that at higher temperatures one will need corrections beyond the second order. Notwithstanding, we consider these results as excellent support for the approach and proceed now to test its power in determining the mechanical properties of the thermal amorphous solid. The shear modulus: To demonstrate the usefulness of the approach we turn now to the computation of the shear modulus. Before going to the thermal case, we recall that at athermal conditions ($T=0$) the shear modulus has the exact representation: $$\mu(T=0)=\frac{\partial\sigma_{xy}}{\partial\epsilon_{xy}}-\sum\limits_{i,\alpha}\sum\limits_{j,\beta}\Xi_{i}^{\alpha}(H_{ij}^{\alpha\beta})^{-1}\Xi_{j}^{\beta}\ ,\quad\Xi_{i}^{\alpha}\equiv\frac{\partial\sigma_{xy}}{\partial r_{i}^{\alpha}}\ ,$$ (13) where the first term is the Born approximation. We note that here and below the Hessian matrix has zero eigenvalues due to Goldstone modes, and these should be removed before inversion. In the thermal case Lutsko (1989); Barrat (2006); Yoshino (2012) the Born term is corrected by stress fluctuations: $$\mu=\left<\frac{\partial\sigma_{xy}}{\partial\epsilon_{xy}}\right>-\frac{V}{k_{B}T}\left(\langle\sigma_{xy}^{2}\rangle-\langle\sigma_{xy}\rangle^{2}\right)+2\rho k_{B}T\ .$$ (14) As before, we determine an “effective shear modulus” $\hat{\mu}$ by using the stationary positions ${\bm{R}}_{i}$ and the cage fluctuations ${\bm{u}}_{i}$.
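Eq. (14) is straightforward to evaluate from a sampled trajectory once the instantaneous shear stress and Born term are recorded. A sketch of the estimator, exercised on synthetic Gaussian samples (the numbers are illustrative and not taken from the paper):

```python
import random

def shear_modulus(born_samples, stress_samples, V, rho, kT):
    """Thermal shear modulus from Eq. (14):
    mu = <d sigma_xy / d eps_xy> - (V / kT) * var(sigma_xy) + 2 * rho * kT."""
    n = len(stress_samples)
    born = sum(born_samples) / n
    mean = sum(stress_samples) / n
    var = sum((s - mean) ** 2 for s in stress_samples) / n
    return born - (V / kT) * var + 2.0 * rho * kT

# Synthetic per-step data standing in for an MD trajectory (made-up values):
random.seed(0)
V, rho, kT = 430.0, 1.162, 0.05
stress = [random.gauss(0.1, 0.003) for _ in range(10_000)]
born = [random.gauss(18.0, 0.5) for _ in range(10_000)]
mu = shear_modulus(born, stress, V, rho, kT)
```

For a fluctuation-free stress series the variance term drops out and the estimator reduces to the Born average plus the kinetic term $2\rho k_{B}T$.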
We begin by expressing the stress $\sigma\equiv\sigma_{xy}$: $$\sigma(\{{\bm{r}}_{i}(t)\})=\sigma(\{{\bm{R}}_{i}\})+\sum\limits_{i,\alpha}\frac{\partial\sigma}{\partial r_{i}^{\alpha}}\Big{|}_{\{{\bm{R}}_{i}\}}u_{i}^{\alpha}+\frac{1}{2}\sum\limits_{i,\alpha}\sum\limits_{j,\beta}\frac{\partial^{2}\sigma}{\partial r_{i}^{\alpha}\partial r_{j}^{\beta}}\Big{|}_{\{{\bm{R}}_{i}\}}u_{i}^{\alpha}u_{j}^{\beta}+\dots\>\>.$$ (15) The first term in Eq. (14) is the Born term, and using Eq. (10) it can be expanded again in the cage fluctuations: $$\hat{\mu}_{Born}=\frac{1}{V}\frac{\partial^{2}U}{\partial\epsilon_{xy}^{2}}\Big{|}_{\{{\bm{R}}_{i}\}}+\frac{1}{2V}\sum_{<ij>}\Big{[}\frac{\partial^{2}}{\partial r_{ij}^{\alpha}\partial r_{ij}^{\beta}}\frac{\partial^{2}U}{\partial\epsilon_{xy}^{2}}\Big{|}_{{\bm{R}}_{ij}}\langle u_{ij}^{\alpha}u_{ij}^{\beta}\rangle\Big{]}+\ldots\ .$$ (16) The second term in Eq. (14) contains the second moment $\langle\sigma^{2}_{xy}\rangle$.
When we expand this object we need to take into consideration the first-order fluctuation $u_{i}^{\alpha}$: $$\big{[}\sigma(\{{\bm{r}}_{i}(t)\})\big{]}^{2}=\big{[}\sigma(\{{\bm{R}}_{i}\})\big{]}^{2}+2\sigma(\{{\bm{R}}_{i}\})\sum\limits_{i,\alpha}\frac{\partial\sigma}{\partial r_{i}^{\alpha}}\Big{|}_{\{{\bm{R}}_{i}\}}u_{i}^{\alpha}+\sigma(\{{\bm{R}}_{i}\})\sum\limits_{i,\alpha}\sum\limits_{j,\beta}\frac{\partial^{2}\sigma}{\partial r_{i}^{\alpha}\partial r_{j}^{\beta}}\Big{|}_{\{{\bm{R}}_{i}\}}u_{i}^{\alpha}u_{j}^{\beta}+\sum\limits_{i,\alpha}\frac{\partial\sigma}{\partial r_{i}^{\alpha}}\Big{|}_{\{{\bm{R}}_{i}\}}\sum\limits_{j,\beta}\frac{\partial\sigma}{\partial r_{j}^{\beta}}\Big{|}_{\{{\bm{R}}_{i}\}}u_{i}^{\alpha}u_{j}^{\beta}+\dots\ .$$ (17) For the average of the second moment $\langle\sigma^{2}\rangle$ the linear contribution in $u_{i}^{\alpha}$ vanishes, and we are left with: $$\big{<}\big{[}\sigma(\{{\bm{r}}_{i}(t)\})\big{]}^{2}\big{>}=\big{[}\sigma(\{{\bm{R}}_{i}\})\big{]}^{2}+\sigma(\{{\bm{R}}_{i}\})\sum\limits_{i,\alpha}\sum\limits_{j,\beta}\frac{\partial^{2}\sigma}{\partial r_{i}^{\alpha}\partial r_{j}^{\beta}}\Big{|}_{\{{\bm{R}}_{i}\}}\left<u_{i}^{\alpha}u_{j}^{\beta}\right>+\sum\limits_{i,\alpha}\frac{\partial\sigma}{\partial r_{i}^{\alpha}}\Big{|}_{\{{\bm{R}}_{i}\}}\sum\limits_{j,\beta}\frac{\partial\sigma}{\partial r_{j}^{\beta}}\Big{|}_{\{{\bm{R}}_{i}\}}\left<u_{i}^{\alpha}u_{j}^{\beta}\right>+\dots\ .$$ (18) For $\langle\sigma\rangle^{2}$, the $\langle u_{i}^{\alpha}\rangle$ terms have already vanished before squaring, so we get: $$\big{<}\big{[}\sigma(\{{\bm{r}}_{i}(t)\})\big{]}\big{>}^{2}=\big{[}\sigma(\{{\bm{R}}_{i}\})\big{]}^{2}+\sigma(\{{\bm{R}}_{i}\})\sum\limits_{i,\alpha}\sum\limits_{j,\beta}\frac{\partial^{2}\sigma}{\partial r_{i}^{\alpha}\partial r_{j}^{\beta}}\Big{|}_{\{{\bm{R}}_{i}\}}\left<u_{i}^{\alpha}u_{j}^{\beta}\right>+\dots\ .$$ (19) Subtracting the last two equations we get the desired expression for the Taylor expansion of the fluctuation term in Eq. (14): $$\hat{\mu}_{F}=\frac{V}{k_{B}T}\sum\limits_{i,\alpha}\sum\limits_{j,\beta}\frac{\partial\sigma}{\partial r_{i}^{\alpha}}\Big{|}_{\{{\bm{R}}_{i}\}}\left<u_{i}^{\alpha}u_{j}^{\beta}\right>\frac{\partial\sigma}{\partial r_{j}^{\beta}}\Big{|}_{\{{\bm{R}}_{i}\}}+\dots\ .$$ (20) Note that in this case the cage fluctuations are represented by the correlation matrix $C_{ij}^{\alpha\beta}=\langle u_{i}^{\alpha}u_{j}^{\beta}\rangle=\langle(r^{\alpha}_{i}-R^{\alpha}_{i})(r_{j}^{\beta}-R^{\beta}_{j})\rangle$, and not by the “pair fluctuations” $\langle u_{ij}^{\alpha}u_{ij}^{\beta}\rangle$ as in Eq. (9) and Eq. (10) above. We expect this correlation to be proportional to $V^{-1}$ due to the central limit theorem, cancelling the explicit volume factor on the RHS of Eq. (20). Recalling that the correlation function $C_{ij}^{\alpha\beta}$ is intimately related to the inverse (bare) Hessian Henkes et al. (2012), i.e. that $${\bm{C}}=k_{B}T{\bm{H}}^{-1}\ ,$$ (21) we appreciate the apparent structural relationship between Eqs. (20) and (13). Finally, using the expansion of $\hat{\mu}_{Born}$ above and Eq. (20), we can write: $$\hat{\mu}=\hat{\mu}_{Born}-\hat{\mu}_{F}.$$ (22) A comparison between the results of computing the shear modulus via Eq. (22) and Eq. (14) is now called for. For each temperature $T\in[0.001..0.100]$ some 100 samples were prepared and quenched separately from $T=0.5$ to $T=10^{-6}$ with a quench rate of $\dot{T}=10^{-6}$. Next, each sample was heated up to the desired temperature; the average positions ${\bm{R}}_{i}$ and the cage-fluctuation correlations were calculated by averaging over $5\times 10^{5}$ MD steps. All together, each point in Fig. 2 was averaged over $50-100$ samples.
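The structural relationship between Eqs. (20) and (13) can be made concrete: inserting $C=k_{B}T\,H^{-1}$ from Eq. (21) into the fluctuation term of Eq. (20), the factor $k_{B}T$ cancels and one recovers the athermal correction $\Xi^{T}H^{-1}\Xi$ of Eq. (13), volume normalization aside. A toy $2\times 2$ check with made-up numbers:

```python
def inv2(H):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def quad(v, M, w):
    """Quadratic form v^T M w for 2-vectors."""
    return sum(v[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

H = [[3.0, 0.5], [0.5, 2.0]]      # toy positive-definite "Hessian" (made-up numbers)
Xi = [0.7, -1.1]                  # toy stress gradient, Xi_i = d sigma / d r_i
kT = 0.05

C = [[kT * x for x in row] for row in inv2(H)]   # cage correlations, Eq. (21)

athermal = quad(Xi, inv2(H), Xi)                 # non-Born term of Eq. (13)
thermal = quad(Xi, C, Xi) / kT                   # fluctuation term of Eq. (20), per volume
```

The two quadratic forms coincide to machine precision, which is the content of the remark in the text.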
The results of the comparison are shown in Fig. 2. The line drawn is just a guide for the eye. Concluding remarks: The results obtained for the effective forces and for the shear modulus open up a new path for studying the physics of amorphous solids at finite temperatures. After all, when we discuss glasses at low temperatures, the average spatial structure is quite stable for a long time, oblivious of the thermal agitation of particles within their cages. Do we really care about the details of this thermal agitation? The answer is yes and no. Yes, because this motion dresses up the interactions between the particles, and the bare forces no longer provide a proper description of some important properties. Momentum transfer is taking place, and this fact has consequences. On the other hand, the average positions of the particles within their cages offer a skeleton for the theory of thermal amorphous solids in much the same way as the frozen positions at zero temperature. What we need to learn is how to take the pertinent information into account for devising a good theory Stoessel and Wolynes (1984); Hall and Wolynes (2008). The present results indicate that, at least in glasses where the average distance between particles is within the range of the bare interactions, we can reach a theory by expanding objects around the average positions. In this paper we stopped at the first correction, limiting ourselves to low temperatures. We note that this is NOT a harmonic approximation of the bare potential; the theory presented above calls for higher derivatives of the bare potential, up to fourth order already: for example, the RHS of Eq. (9) employs a third-order derivative of the potential, and the expansion of $\hat{\mu}_{Born}$ a fourth-order one. Higher-order truncations necessitated by higher temperatures will require more smooth derivatives of the bare potential.
We reiterate that the present approach will fail for hard or soft spheres, and also for some inverse-power-law models where the exponent is too large. In such models the mean distance between particles will exceed the range of interaction, and we do not have the required derivatives of the bare potential computed at the mean separations. One important question remaining for future research is how to build, on the present ideas, a theory of instabilities and mechanical failure in thermal glasses. In athermal conditions the eigenvalues of the Hessian matrix provided enormous insights into plastic responses, the density of states and shear-banding instabilities. If we accept the view that the random thermal motions within cages should be averaged over, then the effective Hessian introduced above should be studied for the purpose of providing a similar understanding in thermal systems. This and other related issues will be studied in the near future. Acknowledgements. We thank George Hentschel for some very helpful discussions at the initiation of this research. This work was supported in part by the Israel Science Foundation (Israel Singapore Program), the US-Israel Binational Science Foundation and the Joint Laboratory on “Advanced and Innovative Materials” - Universita’ di Roma “La Sapienza” - WIS. References Malandro and Lacks (1998) D. L. Malandro and D. J. Lacks, Phys. Rev. Lett. 81, 5576 (1998). Maloney and Lemaître (2004) C. Maloney and A. Lemaître, Phys. Rev. Lett. 93, 195501 (2004). Tanguy et al. (2006) A. Tanguy, F. Leonforte, and J. L. Barrat, Euro. Phys. J. E 20, 355 (2006). Lerner and Procaccia (2009) E. Lerner and I. Procaccia, Phys. Rev. E 79, 066109 (2009). Karmakar et al. (2010a) S. Karmakar, A. Lemaître, E. Lerner, and I. Procaccia, Phys. Rev. Lett. 104, 215502 (2010a). Hentschel et al. (2011) H. G. E. Hentschel, S. Karmakar, E. Lerner, and I. Procaccia, Phys. Rev. E 83, 061101 (2011). Dasgupta et al. (2013) R. Dasgupta, H. G. E. Hentschel, I.
Procaccia, and B. Sen Gupta, EPL (Europhysics Letters) 104, 47003 (2013). Dubey et al. (2015) A. K. Dubey, H. G. E. Hentschel, P. K. Jaiswal, C. Mondal, I. Procaccia, and B. Sen Gupta, EPL (Europhysics Letters) 112, 17011 (2015). Hentschel et al. (2016) H. G. E. Hentschel, I. Procaccia, and B. Sen Gupta, Phys. Rev. E 93, 033004 (2016). Gendelman et al. (2016) O. Gendelman, E. Lerner, Y. G. Pollack, I. Procaccia, C. Rainone, and B. Riechers, Phys. Rev. E 94, 051001 (2016). Parisi et al. (2018) G. Parisi, Y. G. Pollack, I. Procaccia, C. Rainone, and M. Singh, Phys. Rev. E 97, 063003 (2018). Mezard and Parisi (2009) M. Mezard and G. Parisi, “Glasses and replicas,” (2009), arXiv:0910.2838v1. Dubey et al. (2016) A. K. Dubey, I. Procaccia, C. A. Shor, and M. Singh, Phys. Rev. Lett. 116, 085502 (2016). Kob and Andersen (1995) W. Kob and H. C. Andersen, Phys. Rev. E 52, 4134 (1995), 9505118. Brüning et al. (2009) R. Brüning, D. A. St-Onge, S. Patterson, and W. Kob, J. Phys.: Cond. Matt. 21, 035117 (2009). Karmakar et al. (2010b) S. Karmakar, E. Lerner, I. Procaccia, and J. Zylberg, Phys. Rev. E 82, 031301 (2010b), arXiv:1006.3737. Lutsko (1989) J. F. Lutsko, Journal of Applied Physics 65, 2991 (1989). Barrat (2006) J.-L. Barrat, in Computer Simulations in Condensed Matter Systems: From Materials to Chemical Biology Volume 2 (Springer, 2006) pp. 287–307. Yoshino (2012) H. Yoshino, J. Chem. Phys. 136, 214108 (2012). Henkes et al. (2012) S. Henkes, C. Brito, and O. Dauchot, Soft Matter 8, 6092 (2012). Stoessel and Wolynes (1984) J. P. Stoessel and P. G. Wolynes, J. Chem. Phys. 80, 4502 (1984). Hall and Wolynes (2008) R. W. Hall and P. G. Wolynes, J. Phys. Chem. B 112, 301 (2008).
Spin and interaction effects on charge distribution and currents in one-dimensional conductors and rings within the Hartree-Fock approximation Avraham Cohen${}^{1}$, Klaus Richter${}^{2}$, and Richard Berkovits${}^{1}$ ${}^{1}$The Minerva Center for the Physics of Mesoscopics, Fractals and Neural Networks, Department of Physics, Bar-Ilan University, 52900 Ramat-Gan, Israel ${}^{2}$Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Strasse 38, 01187 Dresden, Germany (January 10, 2021) Abstract Using the self-consistent Hartree-Fock approximation for electrons with spin at zero temperature, we study the effect of the electronic interactions on the charge distribution in a one-dimensional continuous ring containing a single $\delta$ scatterer. We reestablish that the interaction suppresses the decay of the Friedel oscillations. Based on this result, we show that in an infinite one-dimensional conductor containing a weak scatterer, the current is totally suppressed because of a gap opened at the Fermi energy. In a canonical ensemble of continuous rings containing many scatterers, the interactions enhance the average and the typical persistent current. PACS numbers: 72.10.Fk, 73.20.Dx The effects of electronic interactions on characteristic properties, such as charge fluctuations, persistent currents (PC’s) and the conductance of electronic systems, are very rich and interesting.[1] They strongly depend on the strength and range of the interactions,[2, 3, 4] on the dimensionality of the system, and on whether the space is discrete or continuous.[5, 6] Approximate calculations, like Hartree-Fock, introduce a great deal of simplification, but at the same time many effects may be washed out. However, approximate calculations may be used to shed more light on specific problems, while keeping in mind their limitations.
In this work we consider $e$-$e$ interactions within the self-consistent Hartree-Fock approximation (SCHFA) for electrons with spin at zero temperature. For simplicity we assume an equal number of electrons of opposite spin states. Our aim is to study numerically the interaction effects on the charge distribution and the currents in continuous one-dimensional (1D) isolated rings and open conductors containing a single $\delta$ scatterer,[4, 7, 8, 9, 10] as well as on the PC’s in rings containing many scatterers.[11, 12] Even within the Hartree-Fock approximation we recover the bosonization[4] and the density-matrix renormalization-group result:[10] We show that for a single scatterer in a ring the repulsive electronic interaction suppresses the decay of the charge oscillations. Based on this we show, as a central result, that for an open conductor with a weak scatterer the electronic conduction at the Fermi energy vanishes because of Bragg reflection coexisting with a gap at the Fermi energy. The zero conduction of the interacting system was obtained in Refs.7, 8, 9 by exact and by renormalization group calculations. Within the first iteration of the SCHFA, it was shown[8, 9] that an attempt to explain this result by a scattering perturbation series is inadequate because of logarithmic divergences of the transmission amplitude at the Fermi energy in all orders of the series. Although the dissipative conductance of the infinite conductor is suppressed by the interactions, the PC in a ring is not. This is because the conductance depends on the properties of the levels close to the Fermi energy but the PC is a thermodynamic property that depends on the response of all occupied levels.[11] Moreover, we show that once many scatterers are considered, the interactions not only do not suppress the PC, but even enhance it. 
We write the HF equation for electrons in a ring of radius $R$ with angular coordinate $\theta$ and energy units $\hbar^{2}/m_{e}R^{2}=1$ (we drop the background term) as $$\displaystyle-\left[{1\over 2}{\partial^{2}\over\partial\theta^{2}}+V_{\rm dis% }(\theta)+{R\over r_{0}}\int_{0}^{2\pi}{\sum_{l^{\prime}=1}^{Ne}|\psi_{l^{% \prime}}(\theta^{\prime})|^{2}\over\sqrt{(\theta-\theta^{\prime})^{2}+\epsilon% ^{2}}}d{\theta^{\prime}}\right]\psi_{l}(\theta)$$ $$\displaystyle-\delta_{s_{l^{\prime}},s_{l}}{R\over r_{0}}\int_{0}^{2\pi}{\sum_% {l^{\prime}=1}^{Ne}\Psi_{l^{\prime}}^{*}(\theta^{\prime})\Psi_{l^{\prime}}(% \theta)\over\sqrt{(\theta-\theta^{\prime})^{2}+\epsilon^{2}}}\psi_{l}(\theta^{% \prime})d\theta^{\prime}=E\psi_{l}(\theta)\,.$$ (1) The twisted boundary condition $\psi(\theta+2\pi)=\psi(\theta)\exp(i2\pi\phi/\phi_{0})$ accounts for a flux $\phi$ threading the ring. $\phi_{0}\equiv hc/e$ is the flux quantum. $V_{\rm dis}(\theta)$ is the disorder potential which may include a single or many scatterers. The first (second) integral term is the Hartree (Fock) term. The electronic wave functions $\Psi_{l}(\theta)\equiv\psi_{l}(\theta)\exp{(-i\theta\phi/\phi_{0})}$ in the Fock term are $2\pi$ periodic for any value of flux. $l$ enumerates the energy levels together with the spin state $s_{l}$. $N_{e}$ is the total number of electrons in the ring. The cutoff $\epsilon^{2}$ allows (as in quasi 1D) using the 3D Coulomb law[5] and makes the integrations finite. The square of the distance between the particles is defined[13] by $(\theta-\theta^{\prime})^{2}\equiv\min[|\theta-\theta^{\prime}|^{2},(2\pi-|% \theta-\theta^{\prime}|)^{2}]$. In Eq. (1), $r_{0}\equiv{\bf\varepsilon}\hbar^{2}/m_{e}e^{2}$ denotes the Bohr radius with dielectric constant ${\bf\varepsilon}$ (to be distinguished from the cutoff $\epsilon$). We define the coefficient $g\equiv{R/r_{0}}$ to be the interaction strength. 
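The regularized kernel and the periodic squared distance entering Eq. (1) are easy to discretize. The sketch below (an illustrative grid and density, not the paper's solver) evaluates the Hartree-type integral as a sum over grid points on the ring:

```python
import math

def ring_dist2(t1, t2):
    """Periodic squared angular distance, as defined below Eq. (1)."""
    d = abs(t1 - t2)
    return min(d, 2.0 * math.pi - d) ** 2

def kernel(t1, t2, eps):
    """Softened quasi-1D Coulomb kernel 1 / sqrt((theta - theta')^2 + eps^2)."""
    return 1.0 / math.sqrt(ring_dist2(t1, t2) + eps * eps)

def hartree(theta, density, grid, eps):
    """Discretized Hartree integral: sum_j rho(theta_j) K(theta, theta_j) dtheta."""
    dtheta = 2.0 * math.pi / len(grid)
    return sum(density[j] * kernel(theta, grid[j], eps)
               for j in range(len(grid))) * dtheta

n = 256
grid = [2.0 * math.pi * j / n for j in range(n)]
rho = [1.0 / (2.0 * math.pi)] * n     # uniform density normalized to one electron
V0 = hartree(0.0, rho, grid, eps=0.05)
```

For a uniform density the resulting potential is independent of $\theta$, as it must be on a homogeneous ring; the cutoff $\epsilon$ keeps the on-site term finite.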
$g\sim 1$ corresponds to semiconductors.[14] Because the sum $\sum_{l^{\prime}=1}^{Ne}\Psi_{l^{\prime}}^{*}(\theta^{\prime})\Psi_{l^{\prime}% }(\theta)$ represents almost a closure relation we replace, as discussed in Refs.14 and 15, the integrodifferential equation (Eq. (1)) by an ordinary Schrödinger equation that we solve self–consistently: $$\displaystyle\bigg{[}-{1\over 2}{\partial^{2}\over\partial\theta^{2}}+V_{\rm dis% }(\theta)+gV_{\rm eff}(\theta)\bigg{]}\psi_{l}(\theta)=E\psi_{l}(\theta).$$ (2) Here $V_{\rm eff}(\theta)$ is given by $$\displaystyle\int_{0}^{2\pi}{\sum_{l^{\prime}=1}^{Ne}|\psi_{l^{\prime}}(\theta% ^{\prime})|^{2}-\delta_{s_{l^{\prime}},s_{l}}{\rm{\rm Re}}\{\Psi_{l^{\prime}}^% {*}(\theta^{\prime})\Psi_{l^{\prime}}(\theta)\}\over\sqrt{(\theta-\theta^{% \prime})^{2}+\epsilon^{2}}}d\theta^{\prime}$$ where Re stands for real part. The spin degree of freedom is very important. For spinless electrons the interaction effect is weak because the Fock and Hartree terms tend to cancel each other due to opposite signs and similar absolute values. Taking into account the spin degree of freedom, the Hartree term is twice as large as the exchange term. Then the former dominates $V_{\rm eff}$ and enhances screening; therefore we expect the interaction effects to be stronger for electrons with spin. This explains the importance of considering spin[16] in order to understand disordered interacting systems. We begin by studying the interaction effect on the charge oscillations in a ring with a single scatterer, $$V_{\rm dis}(\theta)=\lambda\delta(\theta).$$ (3) For a strong scatterer, $\lambda\geq E_{f}$ ($E_{f}$ is the Fermi energy), the interaction effect on the decay of the charge oscillations is weak and may even be neglected because the scatterer is dominating. 
For a weak scatterer, $\lambda\ll E_{f}$, at the level of the SCHFA we recover the numerical result of Ref. 10 based on the density-matrix renormalization group: With increasing repulsive interaction $g$ the decay of the Friedel oscillations is suppressed (indicating also the reliability of our SCHFA). Figure 1 depicts the decay rate for the strongest interaction for which the SCHFA still converges. As Fig. 2 shows, the effective potential tends to be periodic with half a Fermi wavelength periodicity. Both (direct and exchange) terms tend to have this periodicity, which is independent of the interaction strength. This behavior holds for a larger number of electrons on a ring at a given constant charge density. The above results may be used to study the effect on the charge oscillations and on the conduction in the case of a single weak scatterer $\hat{\lambda}\delta(x)$ embedded in an infinite 1D conductor ($x$ is the spatial coordinate). For noninteracting electrons the orthogonal wave functions, with a given spin state, are[8, 9] $$\phi_{k}^{(1)}(x)=\left\{\begin{array}[]{lr}e^{ikx}+r(k,\lambda)e^{-ikx},&\;x<0\\ t(k,\lambda)e^{ikx},&\;x>0,\end{array}\right.$$ (4) $$\phi_{k}^{(2)}(x)=\left\{\begin{array}[]{lr}t^{\prime}(k,\lambda)e^{-ikx},&\;x<0\\ e^{-ikx}+r^{\prime}(k,\lambda)e^{ikx},&\;x>0,\end{array}\right.$$ (5) with $k>0$ and $\lambda\equiv\hat{\lambda}/(\hbar^{2}/m_{e})$ having units of inverse length. $r(k,\lambda)={-i\lambda/(k+i\lambda)}$ and $t(k,\lambda)={k/(k+i\lambda)}.$ Because of time-reversal symmetry and the symmetry of the potential under coordinate inversion, $t^{\prime}=t$ and $r^{\prime}\equiv-(r/t)^{*}t=r$.
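The stated amplitudes can be checked in a few lines: flux conservation $|r|^{2}+|t|^{2}=1$ and the symmetry relation $r^{\prime}=-(r/t)^{*}t=r$ follow directly (the values of $k$ and $\lambda$ below are illustrative):

```python
def amplitudes(k, lam):
    """Reflection and transmission amplitudes for a 1D delta scatterer,
    r = -i*lam/(k + i*lam), t = k/(k + i*lam)."""
    r = -1j * lam / (k + 1j * lam)
    t = k / (k + 1j * lam)
    return r, t

k, lam = 1.3, 0.4                         # illustrative wave number and strength
r, t = amplitudes(k, lam)
prob = abs(r) ** 2 + abs(t) ** 2          # flux conservation: |r|^2 + |t|^2 = 1
r_prime = -(r / t).conjugate() * t        # symmetry relation r' = -(r/t)* t
```

Both identities hold to machine precision for any $k>0$ and real $\lambda$.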
The fluctuating density per spin is $$\Delta\rho(x)=2\int_{0}^{K_{f}}{-\lambda^{2}\cos 2kx+\lambda k\;\sin 2k|x|\over k^{2}+\lambda^{2}}dk=-2\lambda e^{2\lambda|x|}{\rm Im}\{E_{1}(-iz)\}.$$ (6) ${\rm Im}$ takes the imaginary part of the exponential integral $E_{1}$, $z\equiv 2|x|(K_{f}+i\lambda)$, and $K_{f}$ is the Fermi wave vector. $\Delta\rho(0)=-2\lambda\tan^{-1}(K_{f}/\lambda)$ is a minimum. For $K_{f}x>1$, the asymptotic expansion of $E_{1}$ implies $$\Delta\rho(x)=-{\lambda(K_{f}\cos 2K_{f}x+\lambda\sin 2K_{f}|x|)\over|x|(\lambda^{2}+K_{f}^{2})}.$$ (7) Using $r\equiv|r|e^{i\eta}$ and $|r_{f}|={|\lambda|/\sqrt{K_{f}^{2}+\lambda^{2}}}$, $\sin\eta_{f}={-K_{f}/\sqrt{K_{f}^{2}+\lambda^{2}}}$, one finds[8, 9] $$\Delta\rho(x)={|r_{f}|\sin(2K_{f}|x|+\eta_{f})\over|x|}.$$ (8) For the SCHFA the initial $V_{\rm eff}$ is calculated using the wave functions of noninteracting electrons. The charge fluctuations define the Hartree potential $$V_{{}_{H}}(x)=g_{s}\int_{0}^{\infty}\bigg{[}{1\over|x+x^{\prime}|}+{1\over|x-x^{\prime}|}\bigg{]}\Delta\rho(x^{\prime})dx^{\prime},$$ (9) where $g_{s}=1\;(2)$ for electrons without (with) spin. Our approximate Fock potential is $$V_{{}_{F}}(x)=-\int_{-\infty}^{+\infty}{\int_{0}^{K_{f}}\sum_{i=1}^{2}{\rm Re}\{\phi_{k}^{(i)*}(x^{\prime})\phi_{k}^{(i)}(x)\}dk\over|x-x^{\prime}|}dx^{\prime}=-\int_{0}^{\infty}\bigg{[}{1\over|x+x^{\prime}|}+{1\over|x-x^{\prime}|}\bigg{]}\Delta\rho(x+x^{\prime})dx^{\prime}.$$ (10) Clearly, $V_{\rm eff}=V_{{}_{H}}+V_{{}_{F}}$ is a function of $|x|$, and will change during the iterations until self-consistency is reached. $V_{\rm eff}$ is small due to a weak coupling constant ($g\sim 1$).
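The asymptotic form Eq. (7) can be checked against a direct numerical evaluation of the $k$-integral in Eq. (6). The sketch below (trapezoidal quadrature with illustrative parameters) compares the two for $K_{f}x\gg 1$, where they agree up to the neglected higher-order $1/(K_{f}x)$ corrections:

```python
import math

def drho_integral(x, Kf, lam, n=4000):
    """Eq. (6): 2 * int_0^Kf (-lam^2 cos 2kx + lam*k sin 2kx)/(k^2 + lam^2) dk,
    by the trapezoidal rule (x > 0 assumed)."""
    h = Kf / n
    def f(k):
        return (-lam ** 2 * math.cos(2 * k * x)
                + lam * k * math.sin(2 * k * x)) / (k * k + lam * lam)
    s = 0.5 * (f(0.0) + f(Kf)) + sum(f(i * h) for i in range(1, n))
    return 2.0 * s * h

def drho_asymptotic(x, Kf, lam):
    """Eq. (7), the endpoint (k = Kf) contribution, valid for Kf*x > 1."""
    return -lam * (Kf * math.cos(2 * Kf * x)
                   + lam * math.sin(2 * Kf * x)) / (x * (lam ** 2 + Kf ** 2))

Kf, lam = 1.0, 0.3                              # illustrative parameters
xs = [15.0 + 0.1 * i for i in range(41)]
err = max(abs(drho_integral(x, Kf, lam) - drho_asymptotic(x, Kf, lam)) for x in xs)
scale = max(abs(drho_asymptotic(x, Kf, lam)) for x in xs)
```

Over this window the deviation stays a small fraction of the oscillation envelope $|r_{f}|/|x|$, consistent with the stated range of validity $K_{f}x>1$.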
At this point we invoke an approximate self-consistency by adopting a suppression[4, 10] of the decay of the Friedel oscillations, as was demonstrated above to be valid in the SCHFA. We substitute by hand the limit[4] $\delta=0$ in $$\Delta\rho(x)={|r_{f}|\sin(2K_{f}|x|+\eta_{f})\over|x|^{\delta}}$$ (11) for Eqs. (9) and (10), assuming that this yields a $V_{\rm eff}$ close to that from the SCHFA. To carry out the integration [in Eqs. (9) and (10)], we use a cutoff that allows contributions only from $|x-x^{\prime}|\geq\epsilon.$ This cutoff is equivalent to that used in Eq. (1). For $K_{f}x\gg 1$, we then obtain, up to an additive constant, $$\displaystyle V_{eff}(x)=U[g_{s}\sin(2K_{f}x+\eta_{f})-\sin(4K_{f}x+\eta_{f})],$$ (12) where $U\equiv-2|r_{f}|{\bf c}_{i}(2K_{f}\epsilon)$, and ${\bf c}_{i}$ is the cosine integral. Equation (12) shows that $V_{\rm eff}$ has two periodicities: ${\lambda_{f}/2}$ from the direct potential ($\lambda_{f}\equiv{2\pi/K_{f}}$), and ${\lambda_{f}/4}$ from the exchange potential. The overall periodicity is given by the larger period. The electrons at the Fermi energy exactly obey the Bragg condition[17] for total reflection, i.e., $$2{\lambda_{f}\over 2}\sin{\pi\over 2}=n\lambda_{f}\,.$$ (13) All the states with $|k|<K_{f}$ remain practically unaffected by the weak and periodic $V_{\rm eff}$. Note that consistently with Eq. (13) there is a gap[17] of order $U$ at the Fermi energy. Thus the current vanishes at the Fermi energy. For a ring with a weak scatterer the interaction will not destroy the PC even if the current at the Fermi energy (assuming a large ring) is totally suppressed by the periodic effective potential. This follows from the fact that all occupied levels contribute to the PC, except at $E_{f}$, where Eq. (13) is assumed to be relevant. In the following we will consider the general case of a large number of scatterers in a ring. Figure 2 already shows the importance of screening for a single scatterer. 
This indicates that screening is of particular relevance for the case of many random scatterers: $$V_{\rm dis}(\theta)=\sum_{j=1}^{N_{s}}\lambda_{j}\delta(\theta-\theta_{j}).$$ (14) Here the location and strength of the $j$th scatterer are uniformly distributed in $(0,2\pi)$ and $(-\Lambda,\Lambda)$, respectively. $N_{s}$ is the total number of scatterers in a ring. For the numerics we use $\Lambda=14$ (in scaled units). The characteristic features of disordered noninteracting samples were, e.g., discussed by Imry and Shiren.[18] For noninteracting electrons, the localization length[14] at $E_{f}=200$ is $\xi\sim{\pi/2}.$ This should reduce the average current in open conductors by a factor $\sim 1/50$. The average sample PC of noninteracting electrons was reduced by factor $\sim 1/40$ which is slightly greater than predicted for open conductors. The typical sample PC, $\sqrt{\langle I^{2}\rangle}$, was reduced by factor $\sim 1/10$ which indicates the importance of a statistical study. The fixed total number of electrons in a ring was $32\pm 4$. Figure 3 shows the interaction effect on the sample PC statistics for an interaction coupling constant $g=1$. The interaction reduces the peak, centered at zero, while broadening the distribution. Furthermore, the distribution gains more weight at negative values of the PC, which indicates a diamagnetic tendency. We found that the interaction enhances the typical PC (by factor $\sim$2); the average PC is neither enhanced nor suppressed. Figure 4 shows that for increasing interaction, $g=1.75$, the average PC is also enhanced by factor $\sim$2. Figures 3 and 4 both show a clear tendency of the interaction to enhance the PC for electrons with spin. For spinless electrons the PC was found[14] to be rather unaffected by interaction. This shows an essential difference between models of electrons with or without spin. 
In addition, a clear difference between tight-binding models and continuous models[19] becomes apparent: In the former it was concluded,[20] using exact diagonalization and the SCHFA, that switching on the $e$-$e$ interaction in the regime of moderate disorder further suppresses the PC because of the Mott transition.[21] In continuous models this transition appears to be irrelevant, since the continuous models correspond to tight-binding models at very low fillings.[16, 7] In conclusion, using the SCHFA in one-dimension, we showed the tendency of the electronic interaction to build up nondecaying charge oscillations in a ring containing a single weak scatterer. Adopting this result for an infinite conductor implies a periodic effective potential. The electronic conduction was shown to vanish, because of Bragg reflection that coexists with a gap at the Fermi energy. This shows that, even in the HF limit the influence of the interactions on the Friedel oscillations and on conduction in one-dimension, calculated by exact and renormalization methods, may be reproduced. In rings the PC is not suppressed by the interaction. It is even enhanced in the case of many moderate scatterers due to screening. To demonstrate these effects, we considered the spin degree of freedom and used continuous conductors and rings. A. C. would like to thank A. Auerbach, D. Bar-Moshe, and B. Shapiro for valuable discussions, and A. Heinrich for his interest in this work. A. C. and K. R. would like to thank U. Eckern, P. Schwab and P. Schmitteckert for valuable comments and criticism. References [1] For reviews see: Mesoscopic Phenomena in Solids, edited by B. L. Altshuler, P. A. Lee, and R. A. Webb (North-Holland, Amsterdam, 1990); Exactly Solvable Models of Strongly Correlated Electrons, edited by V. A. Korepin and F. H. L. Eßler (World Scientific, Amsterdam, 1994); K. Richter, D. Ullmo, and R. A. Jalabert, Phys. Rep. 276, 1 (1996). [2] W. Apel and T. M. Rice, Phys. Rev. B 26, 7063 (1982). 
[3] M. Fabrizio, A. O. Gogolin and S. Scheidl, Phys. Rev. Lett. 72, 2235 (1994); Y. Oreg and A. M. Finkelstein, ibid. 76, 4230 (1997). [4] R. Egger and H. Grabert, Phys. Rev. Lett. 75, 3505 (1995). [5] H. Schulz, Phys. Rev. Lett. 71, 1864 (1993). [6] E. H. Lieb and F. Y. Wu, Phys. Rev. Lett. 20, 1445 (1968). [7] C. L. Kane and M. P. A. Fisher, Phys. Rev. Lett. 68, 1220 (1992); Phys. Rev. B 46, 15233 (1992). [8] K. A. Matveev, D. Yue, and L. I. Glazman, Phys. Rev. Lett. 71, 3351 (1993); D. Yue, L. I. Glazman, and K. A. Matveev, Phys. Rev. B 49, 1966 (1994). [9] M. P. A. Fisher and L. I. Glazman, cond-mat/9610037 (unpublished). [10] P. Schmitteckert and U. Eckern, Phys. Rev. B 53, 15397 (1996). [11] R. Berkovits and Y. Avishai, Phys. Rev. Lett. 76, 291 (1996). [12] N. Byers and C. N. Yang, Phys. Rev. Lett. 7, 46 (1961); L. P. Levy, G. Dolan, J. Dunsmuir, and H. Bouchiat, ibid. 64, 2074 (1990); V. Chandrasekhar. R. A. Webb, M. J. Brady, M. B. Ketchen, W. J. Gallagher, and A. Kleinsasser, ibid. 67, 3578 (1991); D. Mailly, C. Chapelier, and A. Benoit, ibid. 70, 2020 (1993). [13] This definition of the distance is close to $2\;\sin({|\theta-\theta^{\prime}|/2})$ and does not introduce any qualitative changes. Also, the small cutoff has a negligible influence on the numerical results. [14] A. Cohen, R. Berkovits, and A. Heinrich, Int. J. Mod. Phys. B. 11, 1845 (1997). [15] Quantum mechanics II, by Rubin H. Landau (Wiley, New York, 1990), p.194. [16] T. Giamarchi and B. S. Shastry, Phys. Rev. B. 51, 10915 (1995); M. Kamal, Z. H. Musslimani, and A. Auerbach, J. Phys. France I 5, 1487 (1995). [17] Introduction to Solid State Physics by C. Kittel (Wiley, New York, 1996). [18] Y. Imry and N. S. Shiren, Phys. Rev. B 33, 7992 (1986). [19] A. Müller-Groeling, H. A. Weidenmüller, and C. H. Lewnkopf, Europhys. Lett. 22, 193 (1993); A. Müller-Groeling and H. A. Weidenmüller, Phys. Rev. B 49, 4752 (1994). [20] M. Abraham and R. Berkovits, Phys. Rev. Lett. 70, 1509 (1993); G. 
Bouzerar, D. Poilblanc, and G. Montambaux, Phys. Rev. B 49, 8258 (1994); H. Kato and D. Yoshioka, ibid. 50, 4943 (1994). [21] N. Mott, Proc. R. Soc. London, Ser. A 382, 1 (1982).
ITFA-2008-19 Statistical Predictions From Anarchic Field Theory Landscapes Vijay Balasubramanian111e-mail: [email protected]${}^{,ab}$, Jan de Boer222e-mail: [email protected]${}^{,c}$, and Asad Naqvi333e-mail: [email protected]${}^{,de}$, ${}^{a}$ David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, PA 19104, USA ${}^{b}$ School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA ${}^{c}$ Instituut voor Theoretische Fysica, Valckenierstraat 65, 1018XE Amsterdam, The Netherlands ${}^{d}$ Department of Physics, Swansea University, Singleton Park, Swansea, SA2 8PP, UK ${}^{e}$ Department of Physics, Lahore University of Management and Sciences, Lahore, Pakistan Consistent coupling of effective field theories with a quantum theory of gravity appears to require bounds on the rank of the gauge group and the amount of matter. We consider landscapes of field theories subject to such boundedness constraints. We argue that appropriately “coarse-grained” aspects of the randomly chosen field theory in such landscapes, such as the fraction of gauge groups with ranks in a given range, can be statistically predictable. To illustrate our point we show how the uniform measures on simple classes of ${\cal N}=1$ quiver gauge theories localize in the vicinity of theories with certain typical structures. Generically, this approach would predict a high energy theory with very many gauge factors, with the high rank factors largely decoupled from the low rank factors if we require asymptotic freedom for the latter.
Contents 1 Introduction 2 The set of field theories 2.1 Interesting classes of quiver gauge theories 2.2 Averages and typicality 2.3 Choice of measure 3 Typicality in Toy Landscapes 3.1 Theories without matter: coarse graining and typicality 3.2 Cyclic, chiral quivers 3.2.1 The canonical ensemble breaks down 3.2.2 Microcanonical analysis 4 Thinking about the general quiver 4.1 Implementing anomaly cancellation 4.2 Dealing with discrete quiver symmetries: an example 4.3 Towards dynamics 4.3.1 Four node, asymptotically free quivers 4.3.2 General quiver with unequal gauge groups 5 Conclusion 1 Introduction It is commonly supposed that the huge numbers of vacua that can arise from different compactifications of string theory [2, 4] imply a complete loss of predictability of low energy physics. If this is the case, string theory merely constrains the possible dynamics rather than determining the precise complement of forces and matter. Every string theory leads to some effective field theory at a high scale $\Lambda$, taken to be, say, an order of magnitude below the string scale. Predictions for low energy physics have to be made in terms of this effective field theory. Thus, the landscape of string theory vacua leads to a landscape of effective field theories at the scale $\Lambda$. Here we ask if constraints of finiteness imposed on this landscape via its origin in string theory might be sufficient to lead to a degree of predictability, at least in some statistical sense. Previous authors have discussed how continuous parameters can scan in a random landscape of effective field theories [6, 8, 10, 12, 14, 16, 18], and there has been some study of the gauge groups and matter content attainable from specific string theoretic scenarios [20, 22, 24, 25, 26, 27]. For example, [26] and [27] discuss the distribution of gauge groups arising in intersecting brane models on torus orientifolds.
We will impose the weakest of the constraints arising from string theory – namely that it should be possible to couple the effective field theory consistently to a quantum theory of gravity. It has been argued [28, 29, 30] that such consistency with string theory requires that the rank of the gauge group and the number of matter fields be bounded from above.111A possible bound on the number of matter species in theories containing gravity was originally discussed by Bekenstein [31]. Since we will not impose any constraints based on rules arising from symmetry or dynamics on the measure, we will call this an “anarchic” landscape, echoing the terminology of [12]. Thus we will study simple anarchic landscapes of field theories bounded in this way, and illustrate how statistics can lead to characteristic predictions for the low energy physics. These predictions are strongest for appropriately coarse-grained attributes of a theory that possess the property of typicality in such landscapes – i.e. they are overwhelmingly likely to lie close to certain typical values. An example of such a typical property will be the fraction of gauge groups with ranks lying within some range. We will illustrate and develop our thinking using some simple examples. 2 The set of field theories A natural, large class of field theories to consider is the set of quiver gauge theories. For simplicity, we will restrict attention to ${\cal N}=1$ supersymmetric quiver gauge theories where the gauge group is a product of unitary groups, $$G=\prod_{i=1}^{L}U(N_{i}).$$ (1) In addition, there will be $A_{ii}$ hypermultiplets transforming in the adjoint of $U(N_{i})$, and $A_{ij}$ hypermultiplets transforming in the $(\mathbf{N}_{i},\bar{\mathbf{N}}_{j})$ of $U(N_{i})\times U(N_{j})$. The nonnegative integer matrix $$A_{ij}\geq 0,\quad i,j=1\ldots L$$ (2) describes the number of arrows from site $i$ to site $j$ in the quiver.
Of course, to specify the full gauge theory we also need to describe the Kähler potential for the hypermultiplets, the gauge kinetic terms, the superpotential and possibly Fayet-Iliopoulos terms. We will postpone a discussion of these quantities for now and will in this paper only discuss the matter and gauge group content of the ${\cal N}=1$ theory. Gauge theories of quiver type are ubiquitous in string theory, and this is the main motivation to restrict attention to this class. Bifundamentals tend to appear in string theory because strings have two endpoints only. A typical setup to engineer ${\cal N}=1$ quiver theories is to consider D6-branes wrapping 3-cycles inside a Calabi-Yau manifold in type IIA string theory, in which case the number of bifundamentals is related to the intersection number of the 3-cycles. By including orientifolds, we can also easily engineer quiver gauge theories with $SO$ and $Sp$ gauge factors, but we will postpone a study of these theories to another occasion. Our goal will thus be to study random $U(N)$ quiver gauge theories. Before looking at some concrete examples, we are first going to make some general remarks on possible further restrictions on the set of gauge theories, on the choice of measure on the space of theories, and the kinds of properties we might predict.
However, from a physical point of view it seems rather peculiar to allow forbidden theories in an ensemble, as it is not at all clear that properties of the set of anomaly free theories are properly reproduced by the full set of random quiver gauge theories. Hence we will for the most part restrict to field theories which obey (3). 2. Asymptotically free theories. Another natural constraint we can impose is that the theories we consider are asymptotically free, which makes them well-defined in the UV. Asymptotic freedom is less compelling than anomaly cancellation, as the set of random quiver theories may well represent a set of low-energy effective field theories obtained e.g. in string theory. Gauge group factors that are IR free and strongly coupled in the UV will typically start to act as global symmetries at sufficiently low energies and will not directly lead to contradictions. The condition for asymptotic freedom is that for all $i$, $$A_{ii}N_{i}+\sum_{j\neq i}(A_{ij}+A_{ji})N_{j}<3N_{i}.$$ (4) This tends to constrain the $A_{ij}$ to not be very large but to be of order unity instead. 3. Purely chiral theories. If we imagine our field theory to be some effective field theory at a high scale $M$, assume there are no other dimensionful parameters around, and write down the most general superpotential, it will contain many mass terms with masses of order $M$. At energies below $M$, it makes sense to integrate out all massive fields with masses of order $M$. The remaining gauge theory will have fewer fields and will no longer allow for mass terms: all fields that can be integrated out have been removed. The remaining set of purely chiral theories with $$A_{ii}=0,\qquad A_{ij}=0\,\,\,{\rm or}\,\,\,A_{ji}=0\,\,\,{\rm for}\,\,i\neq j$$ (5) are therefore a natural starting point for viewing random quivers as low-energy effective field theories. Such chiral theories allow for general cubic superpotentials at the marginal level. 
Higher order terms are suppressed by a mass scale in the Lagrangian, although some quartic superpotentials can become marginal in the infrared. 4. Equal rank theories. In order to simplify the analysis, we could take the ranks of all the gauge groups to be fixed and equal. For such theories both the anomaly cancellation constraint as well as the asymptotic freedom constraint are much easier to implement. However, we do not have an obvious physical motivation that would prefer these theories, so they are mainly helpful to develop intuition for the more general case. 2.2 Averages and typicality Given a set of gauge theories with a suitable measure on them, we can compute expectation values of various quantities, such as the average rank of a gauge group, the average number of matter fields, etc. Though averages are useful to compute, they are especially interesting when they also represent the typical value of a quantity. Typicality is a notion that applies when a thermodynamic limit can be taken, i.e. when some parameter $N$ controlling the size of the ensemble can be taken to infinity. Then, a quantity enjoys the property of typicality if its probability distribution becomes more and more narrowly peaked around its expectation value as $N\to\infty$: $$\lim_{N\rightarrow\infty}\frac{\langle{\cal O}^{2}\rangle-\langle{\cal O}\rangle^{2}}{\langle{\cal O}\rangle^{2}}=0.$$ (6) In other words, quantities that are typical are equal to their ensemble averages with probability one in the limit $N\to\infty$222This criterion is not very useful when $\langle{\cal O}\rangle=0$. More generally, we should normalize the operator ${\cal O}$ in such a way that the range of values it can take is independent of $N$ and then require that the variance vanishes in the large $N$ limit.. Familiar examples of typical operators are statistical mechanical quantities such as pressure and free energy.
Also, we note that for a single occupation number in a standard Boltzmann distribution, with $$\langle N\rangle=\frac{\sum_{k\geq 0}ke^{-\beta k}}{\sum_{k\geq 0}e^{-\beta k}}=\frac{e^{-\beta}}{1-e^{-\beta}},\qquad\langle N^{2}\rangle=\frac{\sum_{k\geq 0}k^{2}e^{-\beta k}}{\sum_{k\geq 0}e^{-\beta k}}=\frac{e^{-\beta}(1+e^{-\beta})}{(1-e^{-\beta})^{2}}$$ (7) the variance to mean squared ratio appearing in (6) equals $e^{\beta}$. In other words, a microscopic quantity like a single occupation number will not be typical. Observables that achieve typicality are inevitably coarse-grained – e.g. the number of Boltzmann particles with energies between $c/\beta$ and $(c+\epsilon)/\beta$ for constants $c$ and $\epsilon$ will be typical. In studying the statistics of effective field theories we should be interested in finding appropriately “coarse-grained” structures that are typical. 2.3 Choice of measure In order to define and discuss averages and typicality for random quiver gauge theories, we need to define a suitable measure on this space. One could imagine that dynamics gives rise to an interesting and complicated measure. For example, one could imagine weighing field theories by the dimension or even the size of the cohomology of their respective moduli spaces, with the close connection between quiver gauge theories and D-brane moduli spaces in mind. As another simple example of how dynamics can affect the measure, if we suppose that dynamical effects can give the matter fields any expectation value, then generically all the gauge groups will be broken to $U(1)$ and analysis of the distribution of gauge factors is moot. However, in ${\cal N}=1$ theories of the kind we study, the potential for the matter fields typically develops isolated minima and the gauge group is broken to a product of Abelian and non-Abelian factors (for instance, a cubic superpotential for an adjoint superfield classically breaks $U(N)\rightarrow U(p)\times U(N-p)$ for some $p$).
Classically, in the context of Calabi-Yau compactification, one imagines that the manifold has some set of distinct but intersecting cycles and the non-abelian factors in the gauge theory are related to the number of branes wrapped on each cycle. Then strong gauge dynamics might break these gauge factors further. For the present we will ignore such dynamical issues and use a uniform measure subject to various constraints of boundedness. Since we are ignoring rules arising from the underlying dynamics, we will call our measures “anarchic”. Finally, in the context of e.g. string landscape discussions, one might want to associate various kinds of Bayesian measures to different types of field theories. For example, to correctly make statistical predictions for the UV field theory, given our hypothetical bound on the matter and gauge groups, we should strictly speaking condition our probability distribution on the known facts about infrared physics. From this perspective, we actually want the uniform measure on a bounded space of gauge theories that, when run to the infrared, contain the standard model as a sector. Conditioning in this way is well beyond our ability at present, and so we will simply investigate the uniform measure on bounded spaces of quiver gauge theories, to study whether and how typicality occurs. Experience in statistical physics has shown that directly computing averages and variances over bounded configuration spaces can be difficult. Thus, to simplify analysis we can try to use a grand canonical ensemble to constrain the total rank and the total number of matter fields.
This involves summing over theories with arbitrary ranks and amounts of matter while including in the measure a Boltzmann factor for the rank of the gauge group, and a separate Boltzmann factor for the total number of matter fields $$\rho\sim\exp(-\beta\sum_{i}N_{i}-\lambda\sum_{ij}A_{ij}N_{i}N_{j}).$$ (8) One could also include Boltzmann factors for, e.g., the total number of nodes, the total number of gauge bosons, etc., but for our purposes (8) will be sufficient to illustrate the main ideas. Such an approach only works if the ensemble of theories does not grow exponentially fast in the total rank and number of matter fields. If such exponential growth occurs, the Boltzmann weight does not fall quickly enough for the microcanonical ensemble to be well approximated by the canonical ensemble. We will see that the space of theories typically grows too fast with the number of fields to permit use of the canonical approach to make statistical predictions from a bounded landscape of effective field theories. 3 Typicality in Toy Landscapes 3.1 Theories without matter: coarse graining and typicality As an example of our approach, consider a landscape of field theories with no matter, where the rank of the gauge group is equal to a large number $N$. For simplicity, let the gauge group be a product of unitary factors $$G=\prod_{i=1}^{L}U(N_{i})\,.$$ (9) Then the rank of $G$ is $\sum_{i}N_{i}=N$; thus the $N_{i}$ form an integer partition of $N$. 
To study the distribution of gauge factors in an anarchic landscape of such field theories, we can construct the canonical partition function $$Z=\sum_{\{r_{k}\}}e^{-\beta\sum_{k}k\,r_{k}-\alpha\sum_{k}r_{k}}=\prod_{k}{1\over 1-e^{-\beta\,k-\alpha}}\equiv\prod_{k}{1\over 1-u\,q^{k}}$$ (10) Here $r_{k}$ is the number of gauge factors of rank $k$, $\beta$ is a Lagrange multiplier constraining the total rank to be $N$ and $\alpha$ is a Lagrange multiplier that can be used to constrain the number of gauge factors; sometimes it is more convenient to work with $q=e^{-\beta}$ and $u=e^{-\alpha}$ instead. In writing this we have used a measure that treats the ordering of gauge factors as irrelevant. So, for example, $U(2)\times U(3)\times U(2)$ is the same as $U(3)\times U(2)\times U(2)$ and so on. In such a measure, all $U(N_{i})$ factors are treated as identical, and not distinguished from each other by parameters like their gauge couplings. This measure will be modified if the gauge theory is realized by wrapping D-branes on cycles of a Calabi-Yau because in that case the locations of branes and the sizes of the cycles will allow us to distinguish between many different configurations that lead to the same gauge group. Nevertheless, the present measure is interesting from a purely field theoretic point of view, i.e. if one is simply counting field theories, and is illustrative. To fix $\beta$ and $\alpha$ we require that $$N=\sum_{j=1}^{\infty}{j\,u\,q^{j}\over 1-u\,q^{j}}~{}~{}~{}~{};~{}~{}~{}~{}L=\sum_{j}{u\,q^{j}\over 1-u\,q^{j}}\,,$$ (11) where $N$ is the total rank and $L$ is the total number of gauge factors. We will take $$u\sim O(1)~{}~{}~{}~{};~{}~{}~{}~{}\beta\sim{1\over\sqrt{N}}\,$$ (12) which, we will see later, implies $L\sim\sqrt{N}$.
Then from (10) it is easy to show that: $$\langle r_{j}\rangle={u\,q^{j}\over 1-u\,q^{j}}~{}~{}~{};~{}~{}~{}{\rm Var}(r_{j})={u\,q^{j}\over(1-u\,q^{j})^{2}}={\langle r_{j}\rangle\over 1-u\,q^{j}}\,.$$ (13) The variance to mean squared ratio is $${{\rm Var}(r_{j})\over\langle r_{j}\rangle^{2}}={1\over u\,q^{j}}=e^{\beta j+\alpha}\geq e^{\alpha}\geq O(1)\,.$$ (14) To get the last inequality we simply used $\alpha,\beta>0$. Thus we see that in a universe with such anarchic landscapes, the number of gauge factors $r_{j}$ with rank $j$ is not typical in the sense defined in (6) and thus cannot be predicted with confidence. However, we could ask whether there are any more coarse grained structures in such landscapes which are more predictable. For example, consider the number of gauge factors whose ranks lie between $c\sqrt{N}$ and $(c+\epsilon)\sqrt{N}$ where $c$ and $\epsilon$ are both $O(1)$: $$\langle R(c,\epsilon)\rangle\approx\int_{c\sqrt{N}}^{(c+\epsilon)\sqrt{N}}dj\,\langle r_{j}\rangle={1\over\beta}\ln\left[{1-u\,e^{-(c+\epsilon)\sqrt{N}\beta}\over 1-u\,e^{-c\sqrt{N}\beta}}\right]\,,$$ (15) where we approximated the sum as an integral. The variance of this coarse-grained variable is $${\rm Var}(R(c,\epsilon))=\int_{c\sqrt{N}}^{(c+\epsilon)\sqrt{N}}dj\,{\rm Var}(r_{j})={u\over\beta}\left[{e^{-c\sqrt{N}\beta}-e^{-(c+\epsilon)\sqrt{N}\beta}\over(1-u\,e^{-c\sqrt{N}\beta})(1-u\,e^{-(c+\epsilon)\sqrt{N}\beta})}\right]\,,$$ (16) where we used the fact that in this canonical ensemble the $r_{j}$ are statistically independent variables. Thus, for $\beta\sim 1/\sqrt{N}$ (12), $$\langle R(c,\epsilon)\rangle\sim O(\sqrt{N})~{}~{}~{}~{};~{}~{}~{}~{}{\rm Var}(R(c,\epsilon))\sim O(\sqrt{N})~{}~{}~{}~{}\Longrightarrow~{}~{}~{}~{}{{\rm Var}(R(c,\epsilon))\over\langle R(c,\epsilon)\rangle^{2}}\sim O(1/\sqrt{N})\,.$$ (17) This means that the variance to mean squared ratio vanishes in the large $N$ limit indicating that $R(c,\epsilon)$ is a typical variable.
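These scalings are easy to confirm numerically. The sketch below is an illustrative check, not part of the original analysis: it solves the first constraint in (11) for $\beta$ at fixed $u=1$ by bisection (the bracket and cutoff values are arbitrary choices), then evaluates the mean and variance of the coarse-grained count from (13). The variance to mean squared ratio is seen to fall off roughly as $1/\sqrt{N}$, as in (17):

```python
import math

def solve_beta(N, u=1.0):
    # Bisect on beta so that the first constraint in eq. (11) holds:
    # sum_j j*u*q^j / (1 - u*q^j) = N, with q = exp(-beta).
    def total_rank(beta):
        q, s, j = math.exp(-beta), 0.0, 1
        while True:
            t = u * q ** j
            if t < 1e-15:
                return s
            s += j * t / (1.0 - t)
            j += 1
    lo, hi = 1e-3, 10.0          # bracket; total_rank decreases with beta
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if total_rank(mid) > N:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def coarse_ratio(N, c=0.5, eps=0.5, u=1.0):
    # Var(R)/<R>^2 for the number of factors with rank in
    # [c*sqrt(N), (c+eps)*sqrt(N)), using the moments in eq. (13).
    q = math.exp(-solve_beta(N, u))
    js = range(int(c * math.sqrt(N)), int((c + eps) * math.sqrt(N)))
    mean = sum(u * q ** j / (1 - u * q ** j) for j in js)
    var = sum(u * q ** j / (1 - u * q ** j) ** 2 for j in js)
    return var / mean ** 2

r_small, r_large = coarse_ratio(10_000), coarse_ratio(1_000_000)
# Increasing N by a factor of 100 should shrink the ratio by roughly
# sqrt(100) = 10, consistent with eq. (17).
print(r_small, r_large)
```

Because the summand depends on $j$ only through $\beta j$, doubling $\sqrt{N}$ simply doubles the number of terms in the window at fixed summand values, which is exactly the central-limit mechanism described in the text.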
Thus, in such anarchic landscapes, the number of gauge factors with ranks between $c\sqrt{N}$ and $(c+\epsilon)\sqrt{N}$ can be predicted with high confidence. Also, approximating the second equation in (11) as an integral, the total number of gauge factors turns out to be $$L=-{\ln(1-u)\over u\beta}\sim O(\sqrt{N})\,.$$ (18) By the above variance analysis this number will also be typical. Thus, in such anarchic landscapes, the total number of gauge factors is highly predictable. These results follow essentially because the unordered partitions of a large integer enjoy a sort of central limit theorem – representing such partitions by a Young diagram, one can show that in the large $N$ limit, the boundary of an appropriately rescaled diagram approaches a limit shape encoded by the $\langle r_{j}\rangle$ computed above [32]. 3.2 Cyclic, chiral quivers Above we saw how suitably coarse-grained aspects of the structure of a randomly chosen field theory in a bounded landscape might be statistically predictable. The next step is to add matter to the theory to see how this changes the analysis. As we discussed, we must insist that matter is added in an anomaly-free way, and implementing this constraint is one of the main difficulties in studying statistical ensembles of quiver gauge theories. Thus, to make a beginning, we will study cyclic, chiral quiver gauge theories for which anomaly cancellation is very easy to implement. In cyclic quivers, each gauge group is connected to the next one by bifundamentals, with the circle being completed when the last group connects to the first one. Taking the $i$th group around the circle to be $U(N_{i})$, the constraint on the total rank will be $\sum_{i}N_{i}=N$. So, as in the example without matter, the numbers $N_{i}$ form a partition of $N$. Anomaly cancellation requires that each gauge group have as many fundamentals as antifundamentals.
It is easy to show that the minimal solution to the anomaly cancellation constraints is that the number of bifundamentals between $U(N_{i})$ and $U(N_{i+1})$ is $$A_{i(i+1)}=C^{-1}\cdot\prod_{l\neq i,(i+1)}N_{l}~{}~{}~{}~{};~{}~{}~{}~{}C={\rm GCD}(\{\prod_{l\neq i,(i+1)}N_{l}\})$$ (19) All other solutions to the anomaly cancellation equations are integer multiples of (19). We will examine an ensemble in which the matter fields in the gauge theory are presumed to satisfy (19) in such a way that the total number of fields comes as close as possible to some bound $K$. Thus for this setup the matter fields are uniquely chosen once the gauge groups are selected. (More generally, we could have imagined an ensemble where the number of matter fields was allowed to vary, in which one would need to sum over multiples of $A_{i(i+1)}$ subject to a bound. This is difficult to do here since the GCD of the products of integer subsets appearing in the denominator of (19) is presumably sporadically behaved.) One key difference from the matter-free case is that the order in which the gauge groups appear on the ring of the quiver is important. In general, different orderings will lead to different quiver gauge theories, except when the permutations correspond to symmetries of the quiver, such as the cyclic permutations of the nodes, or cyclic permutations combined with one reflection. These are just elements of the dihedral group corresponding to the symmetries of a regular polygon with vertices corresponding to the nodes of the quivers. Additional symmetries will arise if some of the $N_{i}$ are equal and we will treat the exchange of groups with identical ranks as giving the same theory.
This sort of measure would arise if we imagined our field theory landscape as arising from D-branes on a Calabi-Yau in which all the cycles give rise to gauge theories with the same coupling, which could for example happen if we resolved an $A_{k}$ singularity in such a way that all two-cycles had equal size. 3.2.1 The canonical ensemble breaks down We will first try to analyze the statistics of cyclic, chiral quivers in a canonical ensemble. All along, as motivated above, we will assume that the gauge groups uniquely fix the matter content. Let $r_{k}$ be the number of times the group $U(k)$ appears. Then, the total rank $N$, and the number of gauge factors $L$, are $$N=\sum_{k}kr_{k}~{}~{}~{}~{};~{}~{}~{}~{}L=\sum_{k}r_{k}\,.$$ (20) We want to compute the partition function of this ensemble of ordered partitions of $N$: $$Z=\sum_{\{r_{k}\}}{1\over 2}\Bigl{(}\sum_{k}r_{k}-1\Bigr{)}!~{}e^{-\beta\sum_{k}kr_{k}-\alpha\sum_{k}r_{k}}\prod_{k}{1\over r_{k}!}\,.$$ (21) The combinatorial factor that appears here is simply the number of ways we can choose $r_{1},r_{2},\ldots$ gauge group factors out of $\sum_{k}r_{k}$, divided by $2(\sum_{k}r_{k})$ to account for the cyclic and reflection symmetry of the quiver333This counting actually ignores certain accidental symmetries that can arise in the structure of the quiver. For example, in a cyclic quiver in which the gauge groups $U(N_{1})$ and $U(N_{2})$ alternate, only one cyclic permutation gives a different configuration for the quiver. The fully correct counting can be derived using Polya theory – we are simply using the leading terms of that analysis, which is sufficient for our purposes..
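The minimal anomaly-free matter content (19) is easy to make concrete. The sketch below (the ranks chosen are an arbitrary illustrative example) computes the multiplicities $A_{i(i+1)}$ for a cyclic quiver and then verifies node by node that fundamentals balance antifundamentals:

```python
from math import gcd
from functools import reduce

def minimal_bifundamentals(ranks):
    # Minimal anomaly-free multiplicities A_{i,i+1} for a cyclic quiver,
    # eq. (19): product of all other ranks, divided by the common GCD.
    L = len(ranks)
    prods = []
    for i in range(L):
        j = (i + 1) % L
        p = 1
        for l, n in enumerate(ranks):
            if l not in (i, j):
                p *= n
        prods.append(p)
    C = reduce(gcd, prods)
    return [p // C for p in prods]

def anomaly_free(ranks, A):
    # Node i sees A[i]*N_{i+1} fundamentals and A[i-1]*N_{i-1} antifundamentals;
    # anomaly cancellation requires these to be equal at every node.
    L = len(ranks)
    return all(A[i] * ranks[(i + 1) % L] == A[(i - 1) % L] * ranks[(i - 1) % L]
               for i in range(L))

ranks = [2, 3, 4, 5]          # illustrative choice of gauge group ranks
A = minimal_bifundamentals(ranks)
print(A, anomaly_free(ranks, A))
```

For these ranks the products of complementary ranks are 20, 10, 6, 12 with GCD 2, so the minimal multiplicities come out as 10, 5, 3, 6, and the node-by-node balance holds exactly.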
Rewriting the partition function in terms of the $\Gamma$ function, we obtain $$Z={1\over 2}\sum_{\{r_{k}\}}\Gamma\Bigl{(}\sum_{k}r_{k}\Bigr{)}\prod_{k}~{}{e^{-\beta kr_{k}-\alpha r_{k}}\over r_{k}!}$$ (22) Using the integral representation of the $\Gamma$ function $$\Gamma(z)=\int_{0}^{\infty}dt~{}t^{z-1}e^{-t}$$ (23) the partition function can be rewritten as $$Z={1\over 2}\int_{0}^{\infty}dt~{}{e^{-t}\over t}~{}\sum_{\{r_{k}\}}\prod_{k}{t^{r_{k}}\over r_{k}!}e^{-\beta kr_{k}-\alpha r_{k}}$$ (24) Exchanging the sum and the product, and after some manipulations, we obtain $$Z={1\over 2}\int_{0}^{\infty}dt~{}{e^{-t}\over t}~{}\exp\Bigl{(}{te^{-\alpha}e^{-\beta}\over 1-{e^{-\beta}}}\Bigr{)}\,.$$ (25) This integral is only convergent if $${e^{-\alpha}e^{-\beta}\over 1-{e^{-\beta}}}<1~{}~{}~{}\Rightarrow~{}~{}~{}e^{-\beta}<{1\over 1+e^{-\alpha}}\equiv e^{-\beta_{H}}\,.$$ (26) This implies that there is a limiting $\beta$ below which the partition function is undefined, because the integrand diverges as $t\rightarrow\infty$. There is also always a divergence as $t\rightarrow 0$ which can be regulated by recognizing that the divergence is a constant independent of $\alpha$ and $\beta$. To show this, we define $\gamma={e^{-\alpha}e^{-\beta}\over 1-{e^{-\beta}}}$, and find that $${dZ\over d\gamma}={1\over 2}\int_{0}^{\infty}dt~{}{e^{-(1-\gamma)t}}={1\over 2(1-\gamma)}$$ (27) which implies that, below the limiting temperature and up to an irrelevant additive constant, $$Z=-{1\over 2}\log(1-\gamma)=-{1\over 2}\log(1-{e^{-\alpha}e^{-\beta}\over 1-{e^{-\beta}}})=-{1\over 2}\log(1-{uq\over 1-q})$$ (28) where $u=e^{-\alpha}$ and $q={e^{-\beta}}$. In order to achieve a large total rank, $\beta$ must be tuned close to its limiting value $\beta_{H}$ (26). Then, if, for example, we put $u=1$, the expectation value of the total rank is $$\langle N\rangle=q\frac{\partial}{\partial q}\log Z\sim\frac{-1}{2\epsilon\log(4\epsilon)}$$ (29) where we tuned $q=q_{H}-\epsilon=\frac{1}{2}-\epsilon$ to get a large rank.
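The asymptotic behavior in (29) can be checked numerically by differentiating $\log Z$ with a finite difference (an illustrative check; the step size and the value of $\epsilon$ are arbitrary choices, and any constant prefactor multiplying $Z$ drops out of the logarithmic derivative):

```python
import math

def logZ(q, u=1.0):
    gamma = u * q / (1.0 - q)        # gamma as defined in the text
    # Z is proportional to -log(1 - gamma); an overall constant factor
    # does not affect q * d(log Z)/dq below.
    return math.log(-math.log(1.0 - gamma))

def mean_rank(q, h=1e-8):
    # <N> = q d(log Z)/dq, eq. (29), via a central finite difference.
    return q * (logZ(q + h) - logZ(q - h)) / (2.0 * h)

eps = 1e-4
q = 0.5 - eps                        # tune q close to q_H = 1/2 for u = 1
exact = mean_rank(q)
asym = -1.0 / (2.0 * eps * math.log(4.0 * eps))   # asymptotic form in eq. (29)
print(exact, asym)
```

At $\epsilon=10^{-4}$ the finite-difference value and the asymptotic formula agree to a fraction of a percent, confirming that the rank only becomes large through the logarithmic tuning $q\to q_H$.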
Similarly, in this approximation we can compute $$\langle r_{k}\rangle\sim\left(\frac{1}{2}\right)^{k+1}\frac{-1}{2\epsilon\log(4\epsilon)}\sim\left(\frac{1}{2}\right)^{k+1}\langle N\rangle\,.$$ (30) This is completely different from the matter-free result for the typical partition: for example, on average one quarter of the nodes will be abelian. However, we also find that $${\rm Var}(r_{k})\sim\left(\frac{1}{2}\right)^{2k+2}\frac{-1}{(2\epsilon)^{2}\log(4\epsilon)}\sim-(1+\log(4\epsilon))\,\langle r_{k}\rangle^{2}$$ (31) This is much larger (as $\epsilon\rightarrow 0$) than the expectation value squared. In other words, the number of group factors with a given rank is not typical in the sense of (6). As in the matter-free case, we might wonder if a more coarse-grained question would have a more statistically predictable answer. For example, we might ask how many gauge factors we expect to see within some range of ranks. The mean and variance of such a coarse grained variable can be extracted by summing over the quantities in (30, 31) because the $r_{k}$ are independent random variables in our ensemble. In the central limit theorem, summing $M$ identically distributed random variables reduces their fluctuations because both the mean and the variance are enhanced by a factor of $M$; thus the variance to mean squared ratio is reduced by a factor of $M$. In the matter-free example, something like this happened because, although the $r_{k}$ were not identically distributed, their dependence on $k$ was sufficiently weak to allow the central limit theorem to work. In the present case, the exponential dependence of (30, 31) on the rank $k$ means that this mechanism fails – the mean and the variance remain dominated by the smallest $k$ in the range of the sum. Thus, it would appear that there is no simple statistically predictable quantity in this landscape.
However, this is in fact happening because the canonical ensemble is breaking down and is not a good approximation of the microcanonical ensemble anymore. The canonical ensemble will reproduce the microcanonical ensemble when the growth of the configuration space with the total rank is slow enough so that, when multiplied by an exponential Boltzmann factor, a nicely localized measure results. Here the Gamma function and the exponential in (22) compete on an equal footing and lead to a widely spread out measure in which the rank of the gauge group fluctuates wildly over the ensemble, leading to a very large variance. Indeed, we should expect this sort of behavior to occur generally when studying the statistics of quivers since the number of graphs increases rapidly with the number of nodes. Thus we turn to the microcanonical ensemble in order to implement more accurately our constraint on the total rank. 3.2.2 Microcanonical analysis We consider once more a cyclic quiver and ignore accidental symmetries. The microcanonical partition function for cyclic gauge theories of rank $N$ and $L$ nodes is simply the number of such theories. This is given by the coefficient of $q^{N}$ in $$\frac{1}{2L}\left[q+q^{2}+q^{3}+\ldots\right]^{L}\,.$$ (32) Here the $1/2L$ divides out the cyclic permutations and reflections. We find that $$Z_{L}=\frac{1}{2L}\left(\begin{array}[]{c}N-1\\ N-L\end{array}\right).$$ (33) Summing this over $L$ we can write a partition function which is canonical in the number of nodes and microcanonical in the total rank $N$: $$Z(u)=\sum_{L=1}^{N}u^{L}\,Z_{L}={(1+u)^{N}-1\over 2N}\,.$$ (34) To get the unbiased landscape in which all theories of equal rank have equal weight, we should take $u=1$, but we will consider other values of $u$ as well. The expectation value of $L$ is $$\langle L\rangle=u\partial_{u}\log(Z(u))={u(1+u)^{N-1}\over(1+u)^{N}-1}~{}N\,.$$ (35) For the unbiased ensemble with $u=1$, we get $\langle L\rangle={N\over 2}$ in the large $N$ limit.
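For small $N$ the closed forms (34) and (35) can be verified exactly in rational arithmetic (a consistency check on the counting, which, like the text, ignores accidental symmetries; the values of $N$ and $u$ below are arbitrary):

```python
from fractions import Fraction
from math import comb

def Z_L(N, L):
    # Number of cyclic quivers with total rank N and L nodes, eq. (33):
    # compositions of N into L parts, weighted by 1/(2L) for the dihedral symmetry.
    return Fraction(comb(N - 1, L - 1), 2 * L)

def Z(N, u):
    return sum(u ** L * Z_L(N, L) for L in range(1, N + 1))

N, u = 12, Fraction(3, 2)
lhs = Z(N, u)
rhs = ((1 + u) ** N - 1) / (2 * N)                       # closed form, eq. (34)
mean_L = sum(L * u ** L * Z_L(N, L) for L in range(1, N + 1)) / lhs
closed_L = u * (1 + u) ** (N - 1) * N / ((1 + u) ** N - 1)   # eq. (35)
print(lhs == rhs, mean_L == closed_L)
```

Because everything is a `Fraction`, the comparison is exact rather than approximate, so this checks the combinatorial identity itself and not just its numerical value.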
However, if $u\sim{1\over\sqrt{N}}$, then $\langle L\rangle\sim\sqrt{N}$, and if $u\sim{1\over N}$, then $\langle L\rangle\sim O(1)$. In fact, if $u\sim N^{-a}$, $\langle L\rangle\sim N^{1-a}$. It can be checked that the canonical analysis gives the same expectation values. However, the microcanonical variance in $L$ is $${\rm Var}(L)=\Bigl{(}1-{Nu\over(1+u)^{N}-1}\Bigr{)}{1\over 1+u}\langle L\rangle$$ (36) For the three scalings of $u$, i.e. $u\sim N^{-a}$, the variance in $L$ is some order 1 number times the mean value of $L$, independent of $a$. Thus, when $\langle L\rangle$ is large, the variance to mean squared ratio is small, unlike the canonical analysis. This means that in such landscapes the number of gauge factors is typical in the sense of (6) and is therefore highly predictable. The expectation value for the number of abelian factors is: $$\langle r_{1}\rangle={1\over Z}\sum_{L=1}^{N}{u^{L}\over 2L}\,L\left(\begin{array}[]{c}N-2\\ L-2\end{array}\right)={u^{2}(1+u)^{N-2}\over(1+u)^{N}-1}N$$ (37) When $u=1$, this becomes $\langle r_{1}\rangle={1\over 4}N$ in the large $N$ limit. When $u\sim{1\over\sqrt{N}}$, $\langle r_{1}\rangle\sim O(1)$. And when $u\sim{1\over N}$, $\langle r_{1}\rangle\rightarrow 0$. In fact, for $u\sim{N^{-a}}$, $\langle r_{1}\rangle\sim N^{1-2a}$. It can be checked that these expectation values match the canonical ensemble. However, the variance in $r_{1}$ is much smaller in the microcanonical ensemble.
First we compute $$\langle r_{1}^{2}\rangle={1\over Z}\sum_{L=1}^{N}{u^{L}\over 2L}\left\{L\binom{N-2}{L-2}+L(L-1)\binom{N-3}{L-3}\right\}$$ (38) $$={u^{2}(1+u)^{N-4}\bigl(u(uN+4)+1\bigr)\over(1+u)^{N}-1}\,N\,.$$ (39) Therefore, the ratio of the variance to the mean squared is $${\langle r_{1}^{2}\rangle-\langle r_{1}\rangle^{2}\over\langle r_{1}\rangle^{2}}={1+u\Bigl(4-{Nu\over(1+u)^{N}-1}\Bigr)\over(1+u)^{2}}\times{1\over\langle r_{1}\rangle}\,.$$ (40) The coefficient of $1/\langle r_{1}\rangle$ in this expression is $O(1)$ for $u\sim N^{-a}$ with $0\leq a\leq 1$. Pulling everything together, in the unbiased ensemble with $u=1$, the average number of gauge factors is $N/2$ and the number of abelian factors is $N/4$. Both of these quantities are typical in the sense of (6) and hence highly predictable in this landscape without any coarse-graining. In a biased ensemble with $u\sim 1/\sqrt{N}$, the total number of gauge factors is $O(\sqrt{N})$ and the number of abelian factors is $O(1)$. Since the variance is of the same order as the mean for both quantities, the number of gauge factors is typical and thus predictable, but the number of abelian factors is not. In this case, we expect that a coarse-grained statistic, such as the fraction of gauge groups in a given range, would be more predictable, as in the matter-free case. Higher Ranks To find the expectation value of the occupation number of rank $k$, we can insert a “chemical potential” for that rank. So $$Z(u,\{y_{k}\})=\sum_{L=1}^{N}{u^{L}\over 2L}\left[\sum_{k=1}^{N}q^{k}y_{k}\right]^{L}\Big|_{q^{N}}\,,$$ (41) where the left hand side equals the coefficient of $q^{N}$ in the right hand side.
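As a sanity check, the moments (37)–(39) can be evaluated directly from the weighted sums; a short sketch (we carry the $1/(2L)$ symmetry weight of (33) explicitly, and the function names are ours):

```python
from math import comb

def mean_r1(N, u):
    # eq. (37): expected number of abelian factors
    return u**2 * (1 + u)**(N - 2) / ((1 + u)**N - 1) * N

def mean_r1_sq(N, u):
    # eq. (39): second moment of r_1
    return u**2 * (1 + u)**(N - 4) * (u * (u * N + 4) + 1) / ((1 + u)**N - 1) * N

N, u = 50, 0.7
Z = ((1 + u)**N - 1) / (2 * N)
# defining sums, with the 1/(2L) symmetry weight of eq. (33)
r1 = sum(u**L / (2 * L) * L * comb(N - 2, L - 2) for L in range(2, N + 1)) / Z
r1_sq = (sum(u**L / (2 * L) * L * comb(N - 2, L - 2) for L in range(2, N + 1))
         + sum(u**L / (2 * L) * L * (L - 1) * comb(N - 3, L - 3)
               for L in range(3, N + 1))) / Z
assert abs(r1 - mean_r1(N, u)) < 1e-9 * mean_r1(N, u)
assert abs(r1_sq - mean_r1_sq(N, u)) < 1e-9 * mean_r1_sq(N, u)

# unbiased ensemble: <r1> -> N/4 at large N
assert abs(mean_r1(N, 1.0) / (N / 4) - 1) < 1e-6
```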
The expectation value $\langle r_{k}\rangle$ is given by $$\langle r_{k}\rangle=\partial_{y_{k}}\log\Bigl(Z(u,\{y_{k}\})\Bigr)\Big|_{\{y_{k}\}=1}$$ (42) $$={1\over Z(u)}\sum_{L=1}^{N}{u^{L}\over 2}\binom{N-k-1}{L-2}$$ (43) $$={u^{2}(1+u)^{N-k-1}\over(1+u)^{N}-1}\,N\,.$$ (44) In the unbiased ensemble with $u\sim 1$, $\langle r_{k}\rangle\sim(1/2)^{k+1}N$, as we found in the canonical ensemble. Similarly, the second moment $\langle r_{k}^{2}\rangle$ is $$\langle r_{k}^{2}\rangle={1\over Z}\,y_{k}\partial_{y_{k}}y_{k}\partial_{y_{k}}Z(u,\{y_{k}\})\Big|_{\{y_{k}\}=1}$$ (45) $$={1\over Z}\sum_{L=1}^{N}{u^{L}\over 2}\left\{\binom{N-k-1}{L-2}+(L-1)\binom{N-2k-1}{L-3}\right\}$$ (46) $$={u^{2}(1+u)^{N-2k-2}\Bigl(2u+(N-2k+1)u^{2}+(1+u)^{k+1}\Bigr)\over(1+u)^{N}-1}\,N\,.$$ (47) So the ratio of the variance to the mean squared is $${{\rm Var}(r_{k})\over\langle r_{k}\rangle^{2}}={1\over(1+u)^{k+1}}\left\{(1+u)^{k+1}+u\bigl((1-2k)u+2\bigr)-{Nu^{2}\over(1+u)^{N}-1}\right\}\times{1\over\langle r_{k}\rangle}\,.$$ (48) This is always $O(1)$ times $1/\langle r_{k}\rangle$, and hence the number of gauge groups of a given rank is typical, and therefore highly predictable, if the average is large. Lessons: We are finding that in an anarchic landscape of cyclic quiver gauge theories, the actual number of gauge factors of a given rank is highly predictable. Specifically, the distribution of ranks is exponential, and the low-rank populations are statistically predictable with high confidence.
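The exponential rank distribution can likewise be verified numerically; a sketch under our naming conventions:

```python
from math import comb

def mean_rk(N, u, k):
    # eq. (44): expected number of nodes of rank k
    return u**2 * (1 + u)**(N - k - 1) / ((1 + u)**N - 1) * N

N, u = 60, 1.0
Z = ((1 + u)**N - 1) / (2 * N)
for k in range(1, 6):
    # defining sum behind eqs. (42)-(43); math.comb returns 0 when L-2 > N-k-1
    direct = sum(u**L / 2 * comb(N - k - 1, L - 2) for L in range(2, N + 1)) / Z
    assert abs(direct - mean_rk(N, u, k)) < 1e-9 * mean_rk(N, u, k)
    # exponential rank distribution at u = 1: <r_k> ~ (1/2)**(k+1) * N
    assert abs(mean_rk(N, 1.0, k) / (N / 2**(k + 1)) - 1) < 1e-6
```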
In a biased landscape in which the measure favors having a number of gauge factors that is sufficiently smaller than the total rank, we found that the number of factors with a fixed rank is not typical in general, although the total number of factors can be. In this case, one could test whether an appropriately coarse-grained quantity, like the fraction of gauge groups with ranks in some range, is more predictable. 4 Thinking about the general quiver To extend our analysis to the general quiver gauge theory we could try to compute a partition sum of the form $$Z=\sum_{L}\sum_{N_{i},A_{ij}}\exp\Bigl(-\beta\sum_{i}N_{i}-\lambda\sum_{ij}A_{ij}N_{i}N_{j}\Bigr)\,,$$ (49) where $L$ is the number of nodes of the quiver, $N_{i}$ are the ranks of the gauge groups, and $A_{ij}$ are the numbers of bifundamentals between nodes $i$ and $j$. One difficulty is that this partition sum is canonical and, as we found, it may not implement the constraints on the total rank and the amount of matter very well because of the rapid growth of the space of theories. Secondly, the sum should run only over anomaly-free theories. Thirdly, there are discrete symmetries which tend to lead to vanishing expectation values. In view of this, below we develop some approaches to dealing with the latter two issues. 4.1 Implementing anomaly cancellation A loop basis for anomaly free theories: If all the gauge groups have the same rank, the general anomaly free theory can be constructed by making sure that the bifundamental fields always form closed loops. One can always construct such matter distributions by specifying that each of the possible loops in the quiver has $n_{i}$ fields running around it. Where loops overlap, the matter content will either add or subtract depending on the orientation of the loops (again we are supposing that non-chiral doublets decouple; in addition, we identify a negative $A_{ij}$ with a positive $A_{ji}$ and vice versa).
Any loop in the quiver can be constructed by summation of a basis of independent 3-loops, and it can be shown that this basis has $$N_{L}={(L-1)(L-2)\over 2}$$ (50) elements. For example, consider the case with $L=6$ nodes, i.e., six gauge groups that we label from 1 to 6. Then the following 3-loops form a basis for all loops: (123), (124), (125), (126), (234), (235), (236), (345), (346), (456). The basis has 10 elements, which is equal to $N_{6}=(6-1)(6-2)/2$. We can check that the $N_{L}$ loops provide enough free parameters to parameterize the space of anomaly free theories. To see this, note that the solutions to the anomaly cancellation equations form a vector space of dimension $${L(L-1)\over 2}-(L-1)={(L-1)(L-2)\over 2}=N_{L}\,,$$ (51) where $L(L-1)/2$ is the number of parameters $A_{ij}$, from which we have subtracted the $(L-1)$ anomaly cancellation conditions on $L$ groups. Even when the ranks are unequal, anomaly free theories can be constructed from a basis of 3-loops because (50) and (51) are equal. However, the links of any given 3-loop will have to be populated with a different number of fields, in a way related to the GCDs of the ranks of the three groups appearing in it. For example, suppose one has the three gauge groups $SU(r_{1}\cdot g)\times SU(r_{2}\cdot g)\times SU(r_{3}\cdot g)$, where the $r_{i}$ are a triple of positive integers that do not share a common factor and $g$ is any other positive integer. Then if we take the number of chiral bifundamentals between gauge groups $i$ and $j$ to be $A_{ij}=\epsilon^{ijk}r_{k}$, we get an anomaly free theory.444As a specific example, consider a four-node quiver with gauge group $SU(3)_{1}\times SU(5)_{2}\times SU(7)_{3}\times SU(8)_{4}$. Then we can get an anomaly free theory by making a loop of four with $A_{12}=7\cdot 8$, $A_{23}=3\cdot 8$, $A_{34}=3\cdot 5$, $A_{41}=7\cdot 5$. We can obtain this as a sum of two 3-loops: (124) + (234).
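The counting in (50) and (51), and the $L=6$ basis listed above, can be reproduced in a few lines (the basis convention $(i,\,i{+}1,\,j)$ with $j>i+1$ is read off from the example; the generator below is our own parametrization):

```python
def loop_basis(L):
    # 3-loops (i, i+1, j) with j > i+1, generalizing the L = 6 list
    # (123), (124), ..., (456) given in the text
    return [(i, i + 1, j) for i in range(1, L - 1) for j in range(i + 2, L + 1)]

for L in range(3, 12):
    N_L = (L - 1) * (L - 2) // 2                  # eq. (50)
    assert len(loop_basis(L)) == N_L
    # eq. (51): L(L-1)/2 parameters A_ij minus (L-1) anomaly conditions
    assert L * (L - 1) // 2 - (L - 1) == N_L

assert len(loop_basis(6)) == 10  # matches the ten basis loops listed above
```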
The loop (124) corresponds to $SU(3)_{a}\times SU(5)_{b}\times SU(8)_{c}$ with $A_{ab}=8$, $A_{bc}=3$, $A_{ca}=5$, while the loop (234) corresponds to $SU(5)_{i}\times SU(7)_{j}\times SU(8)_{k}$ with $A_{ij}=8$, $A_{jk}=5$, $A_{ki}=7$. To get the four-loop (1234), we need to cancel the (24) link, which means that we need to add $7\cdot(124)+3\cdot(234)$. This precisely reproduces the four-loop numbers. Another anomaly free theory could be generated by adding (124) to (234). In this case, the fields along the link (24) will not cancel, but by construction the number of fields going into and out of each gauge group will cancel. This way of thinking suggests that one way to do the statistics of anomaly free theories is to first select a basis of anomaly free 3-loops and then do the statistics of populations of these loops, given a bound on the total number of loops. Anomaly free, asymptotically free, chiral, equal rank gauge theories: This set of theories is very easy to analyze, as there can be only five different types of vertices in such quivers (see Fig. LABEL:fivevertex). Therefore the most general quiver arises by combining these five vertices in various combinations. Superficially, the second vertex, with two separate lines coming in and two separate lines going out, allows for the largest amount of combinatorial freedom and will quite likely dominate this set of theories. It would be interesting to explore this class further. Possibly it can be mapped to an existing solvable lattice model in statistical mechanics. Anomaly cancellation for a general quiver by using an extra node: If we drop the constraint of asymptotic freedom, the set of anomaly free, chiral, and equal rank theories is easy to parametrize.
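The footnote's four-node example can be checked explicitly. A sketch (the helper names are ours) that builds $7\cdot(124)+3\cdot(234)$ and verifies anomaly cancellation node by node:

```python
def add_loop(F, i, j, k, ranks, mult=1):
    # populate the 3-loop (i j k): A_ij = N_k, A_jk = N_i, A_ki = N_j
    # (the GCD of the three ranks is 1 here), accumulated in the
    # antisymmetric net-chirality matrix F[i][j] = -F[j][i]
    for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
        F[a][b] += mult * ranks[c]
        F[b][a] -= mult * ranks[c]

def anomaly_free(F, ranks):
    # anomaly at node i: net number of fundamentals, weighted by partner ranks
    return all(sum(F[i][j] * ranks[j] for j in ranks) == 0 for i in ranks)

ranks = {1: 3, 2: 5, 3: 7, 4: 8}   # SU(3) x SU(5) x SU(7) x SU(8)
F = {i: {j: 0 for j in ranks} for i in ranks}
add_loop(F, 1, 2, 4, ranks, mult=7)
add_loop(F, 2, 3, 4, ranks, mult=3)
assert anomaly_free(F, ranks)
assert F[2][4] == 0  # the (24) link cancels, leaving the 4-loop (1234)
assert (F[1][2], F[2][3], F[3][4], F[4][1]) == (56, 24, 15, 35)
```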
It is not difficult to see that if we take any set of edges ${\cal S}$ which together form a connected tree containing all vertices of the quiver, then the anomaly equations uniquely determine the $A_{ij},A_{ji}$ with $(ij)\in{\cal S}$ in terms of the $A_{ij},A_{ji}$ with $(ij)\not\in{\cal S}$. Thus we can simply take an arbitrary set of chiral matter fields for all edges not in ${\cal S}$, after which anomaly cancellation uniquely fixes the remaining links. A simple example of the set ${\cal S}$ is the star-shaped tree consisting of all edges $(1i)$, $i=2\ldots L$. In other words, if we remove one vertex and all its edges, and arbitrarily specify the chiral matter content in the remaining quiver with $L-1$ vertices, this uniquely determines an anomaly free, chiral, equal rank quiver gauge theory with $L$ gauge groups. To illustrate this, consider a four-node quiver. Take $A_{12}=a$, $A_{32}=b$ and $A_{13}=c$.555If we write $A_{12}=a$, we mean that $A_{12}=a$ for $a\geq 0$ and $A_{21}=-a$ for $a\leq 0$. This guarantees that the theory is chiral. Then anomaly cancellation uniquely fixes $$A_{24}=a+b,\quad A_{41}=a+c,\quad A_{43}=b-c\,.$$ (52) This method can be extended to theories where the gauge groups have unequal ranks. First consider an arbitrary chiral quiver with $L-1$ nodes. Let the rank of the group at the $i$th node be $N_{i}$. For anomaly cancellation, the net number of fundamentals minus antifundamentals at each node must be zero. Let $K_{i}$ be the net excess matter (number of fundamentals minus antifundamentals) at each node. Then we can always add an additional $U(1)$ gauge group with $N_{i}\,K_{i}$ bifundamental fields under this $U(1)$ and the $U(N_{i})$ of the $i$th node. This gives an anomaly free theory. The extra node can be non-abelian, but its rank is restricted to be a divisor of the set $\{N_{i}\,K_{i}\}$.
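The completion (52) can be verified for all small values of the free parameters; a sketch using a signed net-chirality matrix (our representation, with a negative $A_{ij}$ meaning a positive $A_{ji}$, as in the footnote):

```python
from itertools import product

def completed_quiver(a, b, c):
    # free links A_12 = a, A_32 = b, A_13 = c, plus the links fixed by
    # anomaly cancellation, eq. (52); stored as a signed net-chirality
    # matrix F[i][j] = -F[j][i] (nodes are 1..4; index 0 is unused)
    F = [[0] * 5 for _ in range(5)]
    links = {(1, 2): a, (3, 2): b, (1, 3): c,
             (2, 4): a + b, (4, 1): a + c, (4, 3): b - c}
    for (i, j), n in links.items():
        F[i][j] += n
        F[j][i] -= n
    return F

# with equal ranks, anomaly freedom is just zero net chirality at each node
for a, b, c in product(range(-3, 4), repeat=3):
    F = completed_quiver(a, b, c)
    assert all(sum(F[i][1:]) == 0 for i in range(1, 5))
```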
In this way, the statistics of general anomaly free quivers on $L$ nodes can be studied by first constructing arbitrary $L-1$ node quivers and then adding an extra node according to the above algorithm. 4.2 Dealing with discrete quiver symmetries: an example From the above, the set of anomaly free, chiral and equal rank theories with four nodes is parametrized by the rank $N$ of the gauge groups and three integers $a,b,c$. The measure (8) becomes $$\rho=\exp\Bigl(-4\beta N-\lambda N^{2}\bigl(|a|+|b|+|c|+|a+b|+|a+c|+|b-c|\bigr)\Bigr)\,.$$ (53) In the remainder, we will fix the value of $N$ and look only at the distribution of $a,b,c$. By symmetry, the expectation values of $a,b,c$ are all zero. This happens because the quiver has a number of discrete symmetries due to which averages vanish. For example, for every chiral quiver there is the anti-chiral quiver in which the orientations of all fields are reversed. Averaging these two will formally give $a=b=c=0$. Similar phenomena will always occur whenever we consider sets of quivers with symmetries. More structure appears once we break the symmetries and look at the average quiver in an ensemble with some symmetry breaking conditions imposed. Suppose, for example, that we impose $a>0$. This leaves a $\mathbb{Z}_{2}$ symmetry that exchanges vertices 3 and 4. Therefore, the expectation value of $A_{34}$ will be zero. Symmetry considerations further show that $$\langle{\textstyle\frac{1}{2}}A_{12}\rangle=\langle A_{23}\rangle=\langle A_{24}\rangle=\langle A_{31}\rangle=\langle A_{41}\rangle\,.$$ (54) Furthermore, each of these expectation values is proportional to $1/\lambda N^{2}$. A boundary condition that completely breaks the symmetry is to impose $a\geq b\geq 0$. We can always achieve this up to a permutation of the vertices, so there is no loss of generality. The analysis of the expectation values of the number of matter fields in this ensemble is more tedious but can still be done explicitly.
To leading order in $\epsilon=\lambda N^{2}$ we obtain666Here, by $\langle A_{ij}\rangle$ we really mean $\langle A_{ij}-A_{ji}\rangle$. $$\langle A_{12}\rangle=\frac{47}{84\epsilon},\quad\langle A_{32}\rangle=\frac{4}{21\epsilon},\quad\langle A_{31}\rangle=\frac{61}{588\epsilon},\quad\langle A_{24}\rangle=\frac{3}{4\epsilon},\quad\langle A_{41}\rangle=\frac{67}{147\epsilon},\quad\langle A_{43}\rangle=\frac{173}{588\epsilon}\,.$$ (55) Thus we see that after modding out the $\mathbb{Z}_{2}$ symmetries of the quiver we are able to find an interesting average quiver. Of course, since there are only four nodes here, we do not expect any notion of statistical typicality. To study whether general large quivers have some typical structure, we would have to proceed as above, by parameterizing the space of anomaly-free theories and then imposing symmetry breaking conditions. 4.3 Towards dynamics While we have been focusing on the structure of those field theories in which anomalies cancel, we should also pay attention to dynamics. Since we are dealing with ${\cal N}=1$ field theories, if $N_{f}>3N_{c}$ for any gauge group then that group is infrared free; if $N_{f}<3N_{c}$ it is asymptotically free; and if $N_{f}=3N_{c}$ the one-loop beta function vanishes. If we distribute fields into a quiver, the bound on the total number of fields will tend to cause the low rank gauge groups to contain more fields. Thus they will tend to be infrared free. What is more, because, as we have seen above, anomaly cancellation involving high rank gauge groups tends to require many fields, a high rank group connected to the rest of the quiver will tend to push groups in the quiver towards infrared freedom. In general, studying RG flow requires us to know the superpotential, or at least to scan statistically over superpotentials. Minimally, we should include all cubic and quartic terms in the superpotential with $O(1)$ coefficients multiplied by the appropriate scale.
(The cubic terms are classically marginal, and some quartic terms are known to become marginal under RG flow.) Doing such a dynamical analysis of general quiver gauge theories is beyond the scope of this paper, but as an initial step towards gaining some experience with how this works we will study some examples without a superpotential. 4.3.1 Four node, asymptotically free quivers First recall that $SU(N)$ gauge theory with $N$ flavors confines at energies below its dynamical scale, while $SU(N)$ theory with $2N$ flavors flows to an interacting conformal fixed point. We will assume that the confining $SU(N)$ theory is on the baryonic branch. We can then naively take a quiver and simply proceed to allow individual gauge factors to confine, Seiberg dualize [33], etc., as their dynamics becomes strong. A cursory analysis of four node, asymptotically free quivers (see some examples with equal ranks $N$ in Fig. LABEL:fournodes, constructed from the vertices in Fig. LABEL:fivevertex) suggests that one will tend to get interacting conformal field theories in which the mesons of the confining factors participate. This suggests that unparticles [34] might be generic in these settings. 4.3.2 General quiver with unequal gauge groups First consider the simple case of a loop of three gauge groups, $SU(N_{1})\times SU(N_{2})\times SU(N_{3})$, which has to cancel anomalies by itself. For example, this can happen if the 3-loop is isolated within a larger quiver. As we discussed, such primitive 3-loops can be used to generate larger anomaly free quiver gauge theories. To cancel anomalies, the $(12),(23),(31)$ links will generically contain $N_{3},N_{1},N_{2}$ bifundamentals, respectively.777The minimal solution to the anomaly cancellation equations is actually that the number of bifundamentals connecting $i$ and $j$ is $N_{k}/{\rm GCD}(\{N_{i},N_{j},N_{k}\})$, as in (19). But generically the GCD will be $1$.
Thus for group $i$ to be asymptotically free one needs $$3N_{i}>N_{j}N_{k}\,,\qquad i\neq j\neq k\,.$$ (56) Taking all $N_{i}>3$ and $N_{1}<N_{2}<N_{3}$, it is clear that $SU(N_{3})$ is the only gauge group that has the possibility of being asymptotically free. So for any anomaly-free, chiral, connected quiver with three nodes with ranks at least 3, either all three groups are infrared free, or only the largest one is asymptotically free, if it has sufficiently large rank. The same argument no longer works for connected quivers with more than three gauge groups, but it is still easy to see that generically high rank gauge groups with links to smaller rank gauge groups have a chance to be asymptotically free, whereas low rank gauge groups connected to higher rank gauge groups tend to be IR free. Now consider three cases for the dynamics of a quiver with unequal gauge groups. 1. The number of fields $K$ is very large. In this case it seems likely that in a randomly chosen field theory all possible links in the quiver will be populated with some multiplicity, although the links between low rank groups will be enhanced. In this circumstance our arguments suggest that the entire theory will be infrared free. 2. The number of fields $K$ is small. Presumably the lowest rank gauge groups will tend to have matter, and the quiver will typically consist of several disconnected smaller clusters that each form a connected quiver gauge theory. The high rank gauge groups with little matter would then confine at the appropriate dynamical scale. 3. For an intermediate number of fields the clusters will percolate, and presumably there is an interesting phase structure here.
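The claim that only the largest of the three groups can satisfy (56) when all ranks exceed 3 is easy to confirm by brute force (the scan bounds below are arbitrary):

```python
# Scan ordered rank triples N1 < N2 < N3 with all ranks > 3: the
# asymptotic freedom condition 3*Ni > Nj*Nk of eq. (56) never holds
# for the two smaller groups.
for N1 in range(4, 15):
    for N2 in range(N1 + 1, 16):
        for N3 in range(N2 + 1, 17):
            assert not 3 * N1 > N2 * N3
            assert not 3 * N2 > N1 * N3

# ...but it can hold for the largest group, e.g. ranks (4, 5, 7):
assert 3 * 7 > 4 * 5
```

The scan reflects the simple estimate in the text: since $N_{1}\geq 4$, one has $N_{1}N_{3}\geq 4N_{3}>3N_{2}$, so the two smaller groups can never be asymptotically free.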
5 Conclusion It is somewhat unsettling to attempt to make statistical predictions for the structure of the theory describing nature because, ever since Galileo, we have been fortunate that observations and symmetries have constrained the possibilities sufficiently to give an essentially unique theory describing the phenomena under investigation. But string theorists are in the unprecedented situation of hoping to make predictions for the fundamental theory up to the Planck scale given observations below the TeV scale, subject only to very general constraints such as consistency and, in particular, a consistent coupling to quantum gravity. In such a situation, the best one can do is to predict the likelihood of possible high energy theories, conditioned on the known facts below the TeV scale, the known constraints, and our best guess regarding the measure on the space of theories. This is literally all that we can know. While this sort of Bayesian approach is unfamiliar in particle physics, it is much less unusual in cosmology, where one does conceive of ensembles of possible universes or ensembles of domains with different low-energy physics in a single universe. We would nevertheless like to emphasize that we do not want to exclude the possibility that consistency requirements plus experimental input will eventually yield an (almost) unique fundamental theory; we are merely entertaining the logical possibility that this will turn out not to be the case. In this paper we have used the uniform measure on specific effective field theory landscapes, but it is not obvious that this is the measure prescribed by string theory. For example, dynamics can play a role in determining the appropriate measure, because there can be transitions between vacua with different properties. Also, renormalization group flows can modify the measure in the infrared as theories flow towards their fixed points.
Given the correct measure, our analysis could be repeated to find the typical predictions. However, because the uniform measure leads to typicality for some coarse-grained properties, an alternative measure would have to concentrate on an exponentially sparse part of the configuration space in order to change the typical predictions of the uniform measure. The general approach to model building suggested by these considerations does not involve the usual desert with a high scale GUT. Instead, it appears that one would statistically expect a plethora of gauge factors, leading to interesting structures at all scales up to the string scale. Amongst these gauge factors there will be some groups with high ranks and others with low ranks. If there is a bound on the total number of matter fields, then statistically the higher rank groups will tend to have fewer fundamentals (since this eats up matter). Thus they will tend towards confinement at a relatively high dynamical scale if all couplings are unified at the string scale. On the other hand, if any group has too much matter, it will tend towards infrared triviality. Thus the low rank groups, if they are to have IR dynamics, will tend to be largely decoupled from the high rank groups. Thus, if we study the statistics of anarchic landscapes of field theories, conditioned on having interesting low energy dynamics, we will tend towards a structure with dynamical low rank groups largely decoupled from a complex, interacting higher rank sector. The explicit examples of toy landscapes that we studied in Sec. 3 do not have very interesting dynamics. The matter-free case confines. The ring quivers that we studied in detail are generically infrared free, since anomaly cancellation imposes the need for lots of matter unless the individual gauge group ranks conspire to make the GCD in (19) large.
Thus we see that conditioning a field theory landscape on having interesting low energy dynamics, along with anomaly cancellation, will be a major constraint, and is likely to significantly modify the measure on the space of theories. It would be amusing if curious number theoretic properties, like the appearance of large GCDs, had to be given more weight. It would also be very interesting to explore other measures; for example, the results in [26, 27] suggest weighting rank $k$ gauge group factors with an extra factor of $1/k^{2}$ compared to the anarchic measures we have been using. Acknowledgments: We have benefited from conversations with Ofer Aharony, Michael Douglas, Gary Gibbons, Florian Gmeiner, Dieter Lüst, Juan Maldacena, Yasunori Nomura, Carlos Nunez, Al Shapere, Tanmoy Vachaspati, Brian Wecht, and Timo Weigand. We are grateful to the organizers of the Sowers Workshop at Virginia Tech where this work was initiated. V.B. thanks DAMTP, Cambridge, and the Physics Department at Swansea, and A.N. thanks the Physics Departments at Penn, the University of Amsterdam and the IAS for hospitality during this project. VB was supported by the DOE under grant DE-FG02-95ER40893, by the NSF collaboration grant OISE-0443607, and as the Helen and Martin Chooljian member of the Institute for Advanced Study. AN is supported by a STFC advanced fellowship. JdB is partially supported by the FOM foundation. Finally, we have enjoyed the atriums of the British Library and the British Museum while this paper was being completed. References [2] R. Bousso and J. Polchinski, “Quantization of four-form fluxes and dynamical neutralization of the cosmological constant,” JHEP 0006, 006 (2000) [arXiv:hep-th/0004134]. [4] S. Kachru, R. Kallosh, A. Linde and S. P. Trivedi, “De Sitter vacua in string theory,” Phys. Rev. D 68, 046005 (2003) [arXiv:hep-th/0301240]. [6] B. Feldstein, L. J. Hall and T. Watari, “Landscape prediction for the Higgs boson and top quark masses,” Phys.
Rev. D 74, 095011 (2006) [arXiv:hep-ph/0608121]; L. J. Hall, M. P. Salem and T. Watari, “Statistical Understanding of Quark and Lepton Masses in Gaussian Landscapes,” Phys. Rev. D 76, 093001 (2007) [arXiv:0707.3446 [hep-ph]]. [8] R. Easther and L. McAllister, “Random matrices and the spectrum of N-flation,” JCAP 0605, 018 (2006) [arXiv:hep-th/0512102]. [10] N. Arkani-Hamed, S. Dimopoulos and S. Kachru, “Predictive landscapes and new physics at a TeV,” arXiv:hep-th/0501082. [12] L. J. Hall, H. Murayama and N. Weiner, “Neutrino mass anarchy,” Phys. Rev. Lett. 84, 2572 (2000) [arXiv:hep-ph/9911341]; N. Haba and H. Murayama, “Anarchy and hierarchy,” Phys. Rev. D 63, 053010 (2001) [arXiv:hep-ph/0009174]; A. de Gouvea and H. Murayama, “Statistical test of anarchy,” Phys. Lett. B 573, 94 (2003) [arXiv:hep-ph/0301050]. [14] A. E. Nelson and M. J. Strassler, “Suppressing flavor anarchy,” JHEP 0009, 030 (2000) [arXiv:hep-ph/0006251]. [16] C. D. Froggatt and H. B. Nielsen, “Hierarchy Of Quark Masses, Cabibbo Angles And CP Violation,” Nucl. Phys. B 147, 277 (1979). [18] G. Gibbons, “Priors”, seminar at the conference The Very Early Universe 25 Years On, December 2007. [20] K. R. Dienes, “Statistics on the heterotic landscape: Gauge groups and cosmological constants of four-dimensional heterotic strings,” Phys. Rev. D 73, 106010 (2006) [arXiv:hep-th/0602286]. [22] M. R. Douglas, “The statistics of string / M theory vacua,” JHEP 0305, 046 (2003) [arXiv:hep-th/0303194]. [24] F. Denef and M. R. Douglas, “Distributions of flux vacua,” JHEP 0405, 072 (2004) [arXiv:hep-th/0404116]; J. P. Conlon and F. Quevedo, “On the explicit construction and statistics of Calabi-Yau flux vacua,” JHEP 0410, 039 (2004) [arXiv:hep-th/0409215]. [25] T. P. T. Dijkstra, L. R. Huiszoon and A. N. Schellekens, “Supersymmetric standard model spectra from RCFT orientifolds,” Nucl. Phys. B 710, 3 (2005) [arXiv:hep-th/0411129]. [26] R. Blumenhagen, F.
Gmeiner, G. Honecker, D. Lust and T. Weigand, “The statistics of supersymmetric D-brane models,” Nucl. Phys.  B 713, 83 (2005) [arXiv:hep-th/0411173];    F. Gmeiner, R. Blumenhagen, G. Honecker, D. Lust and T. Weigand, “One in a billion: MSSM-like D-brane statistics,” JHEP 0601, 004 (2006) [arXiv:hep-th/0510170]. [27] M. R. Douglas and W. Taylor, “The landscape of intersecting brane models,” JHEP 0701, 031 (2007) [arXiv:hep-th/0606109]. [28] C. Vafa, “The string landscape and the swampland,” arXiv:hep-th/0509212 [29] H. Ooguri and C. Vafa, “On the geometry of the string landscape and the swampland,” Nucl. Phys.  B 766, 21 (2007) [arXiv:hep-th/0605264]. [30] N. Arkani-Hamed, L. Motl, A. Nicolis and C. Vafa, “The string landscape, black holes and gravity as the weakest force,” JHEP 0706, 060 (2007) [arXiv:hep-th/0601001]. [31] J. D. Bekenstein, “Black holes and entropy,” Phys. Rev.  D 7, 2333 (1973);     J. D. Bekenstein, “Generalized second law of thermodynamics in black hole physics,” Phys. Rev.  D 9, 3292 (1974). [32] A.M. Vershik, “Statistical mechanics of combinatorial partitions, and their limit configurations,” Funkts. Anal. Prilozh. 30, No.2, 19-30 (1996) (English translation: Funct. Anal. Appl. 30, No.2, 90-105 (1996)). [33] N. Seiberg, “Electric - magnetic duality in supersymmetric nonAbelian gauge theories,” Nucl. Phys.  B 435, 129 (1995) [arXiv:hep-th/9411149]. [34] H. Georgi, “Unparticle Physics,” Phys. Rev. Lett.  98, 221601 (2007) [arXiv:hep-ph/0703260].
Quantum corrected model for plasmonic nanoparticles: A boundary element method implementation Ulrich Hohenester [email protected] Institute of Physics, University of Graz, Universitätsplatz 5, 8010 Graz, Austria (May 14, 2015) Abstract We present a variant of the recently developed quantum corrected model (QCM) for plasmonic nanoparticles [Nature Commun. 3, 825 (2012)] using non-local boundary conditions. The QCM accounts for electron tunneling in narrow gap regions of coupled metallic nanoparticles, leading to the appearance of new charge transfer plasmons. Our approach has the advantages that it emphasizes the non-local nature of tunneling and introduces only contact resistance, but not ohmic losses, through tunneling. Additionally, it can be implemented much more easily in boundary element method (BEM) approaches. We develop the methodology for the QCM using non-local boundary conditions, and present simulation results of our BEM implementation which are in good agreement with those of the original QCM. pacs: 73.20.Mf,78.67.Bf,03.50.De I Introduction Plasmonics makes it possible to manipulate light at the nanoscale and to obtain strong and very confined electromagnetic fields Maier (2007); Atwater (2007); Schuller et al. (2010); Halas (2010); Stockman (2011). This is achieved by binding light to coherent electron charge oscillations at metal-dielectric interfaces, so-called surface plasmons (SPs), sometimes also referred to as surface plasmon polaritons. Recent work has addressed the question under which conditions a classical SP description in terms of a local dielectric function breaks down and quantum-mechanical corrections become mandatory. On the one hand, at sharp edges and corners of metallic nanoparticles there is a spill-out of the electron charge distribution, due to the electron gas pressure, which leads to a nonlocal dielectric response David and Garcia de Abajo (2011); Luo et al. (2013); Mortensen et al.
(2014), causing a blue shift of the SP resonances and a reduction of the achievable field enhancements in comparison to local descriptions Ciraci et al. (2012). On the other hand, for sub-nanometer gaps and sufficiently high field strengths, electrons can tunnel between neighbouring nanoparticles Esteban et al. (2012); David and Garcia de Abajo (2014); Esteban et al. (2015), leading to the emergence of new charge-transfer plasmons Savage et al. (2012). Electron transfer through larger gaps can occur in molecular tunnel junctions Tan et al. (2014). From the theoretical side, such quantum corrections have been modelled by introducing either modified boundary conditions or artificial materials that mimic the quantum behaviour. In Ref. Luo et al., 2013 the authors showed that a non-local dielectric response can be modelled by replacing the non-local metal with a composite material, comprising a thin dielectric layer on top of a metal with local dielectric properties. Similarly, in the quantum-corrected model Esteban et al. (2012, 2015) (QCM) an artificial dielectric material is filled into the gap region, with a conductivity that reproduces the correct tunnel current between two neighbouring nanoparticles. As the tunnel current typically has an exponential dependence on the gap distance Pitarke et al. (1990), non-planar tunneling gaps must be modelled by onion-like shells of materials with different conductivities. Different materials can be easily introduced in volume based simulation approaches, such as finite difference time domain (FDTD) simulations Yee (1966); Taflove and Hagness (2005). In this paper we show how to simulate tunneling effects within a boundary element method (BEM) approach Garcia de Abajo and Howie (2002); Hohenester and Trügler (2012); Hohenester (2014) by introducing modified non-local boundary conditions.
While the consideration of additional materials is computationally cheap in volume based simulations, it becomes computationally very demanding in BEM simulations, since usually a large number of different material layers is needed to resolve the exponential tunnel current dependence. In contrast, the consideration of modified boundary conditions in a QCM variant has virtually no impact on the performance of BEM simulations compared to conventional ones. We will show that both approaches, either the consideration of artificial materials or modified non-local boundary conditions, give similar results. From a conceptual point of view, non-local boundary conditions have the advantage that they emphasize the non-local behaviour of the tunneling process, and tunnel currents do not suffer from ohmic losses but are only governed by contact resistance, a finding known for a long time in the field of mesoscopic electron transport Datta (1997). II Theory Figure 1(a) shows the basic principle of the original QCM Esteban et al. (2012, 2015) (in the following denoted as the volume QCM) using the example of two nanoparticles separated by a small gap of sub-nanometer size. When an electric field $E$ is applied across the gap, a tunnel current $$J_{t}=\sigma_{t}\,E$$ (1) starts to flow, where $\sigma_{t}$ is the tunnel conductivity, which can be obtained either from first principles or from effective model calculations of various degrees of sophistication Esteban et al. (2012); Haus et al. (2014); Kaasbjerg and Nitzan (2015); Esteban et al. (2015). To mimic such tunnel currents, within the quantum corrected model one introduces in the gap region an effective, homogeneous medium $\varepsilon_{2t}$ with a conductivity chosen to yield the correct tunnel current (we adopt the notation of Ref. Garcia de Abajo and Howie, 2002 and denote the dielectric functions in- and outside the nanoparticle with $\varepsilon_{1}$ and $\varepsilon_{2}$, respectively).
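For orientation, the volume QCM's gap medium amounts to adding a conductivity to the local gap dielectric; in Gaussian units with an $e^{-i\omega t}$ convention this is $\varepsilon_{2t}=\varepsilon_{2}+4\pi i\sigma_{t}/\omega$, a standard identification rather than a formula quoted from this paper. A small illustrative sketch (the exponential form and its parameters $\sigma_{0}$, $d_{0}$ are placeholders, not values from the paper):

```python
import math

def sigma_tunnel(d, sigma0=1.0, d0=0.1):
    # illustrative exponential falloff of the tunnel conductivity with
    # gap width d (sigma0 and d0 are made-up parameters)
    return sigma0 * math.exp(-d / d0)

def eps_qcm(eps_gap, sigma_t, omega):
    # local effective gap medium of the volume QCM, Gaussian units,
    # e^{-i omega t} convention: eps_eff = eps_gap + 4*pi*i*sigma_t/omega
    return eps_gap + 4j * math.pi * sigma_t / omega

# with no tunneling, the gap medium reduces to the bare gap dielectric
assert eps_qcm(1.0, 0.0, 1.0) == 1.0
# onion-shell discretization: the conductivity drops steeply with distance,
# which is why many material layers are needed in a BEM treatment
shells = [sigma_tunnel(d) for d in (0.1, 0.2, 0.3)]
assert shells[0] > shells[1] > shells[2]
```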
This approach has a number of advantages: first, it can be easily implemented in volume based simulation approaches, such as FDTD; second, the description in terms of a local current distribution guarantees that charge is conserved, i.e., the charge that leaves one nanoparticle must be transferred via the junction to the other nanoparticle. On the other hand, the approach has a number of conceptual difficulties: the current is subject to ohmic losses, contrary to the purely contact-like resistivity of quantum tunneling; additionally, current is not only induced by electric fields parallel to the nanoparticle connection, such as one would expect for tunnel currents, but also by perpendicular fields. In most cases of interest these are no serious shortcomings, since fields in gap regions practically always point along the nanoparticle connection, and the tunnel junction is typically so narrow that ohmic losses are of only minor importance. We will next rephrase the QCM in terms of modified boundary conditions, which are much better suited for BEM implementations. Our starting point is Gauss’ law integrated over the small pillbox indicated in Fig. 1(b), $$\displaystyle\int\nabla\cdot\bm{D}\,d\tau$$ $$\displaystyle=$$ $$\displaystyle\oint\bm{D}\cdot d\bm{a}=4\pi\int\rho\,d\tau$$ (2) $$\displaystyle=$$ $$\displaystyle\frac{4\pi}{i\omega}\int\nabla\cdot\bm{J}_{t}\,d\tau=-\frac{4\pi i}{\omega}\oint\bm{J}_{t}\cdot d\bm{a}\,,$$ where $d\tau$ and $d\bm{a}$ denote volume and surface integrations, respectively, and we have used the Fourier transformed continuity equation to relate $\rho$ to $\bm{J}_{t}$ (we use Gaussian units throughout). We now make the following ad-hoc assumption for the boundary condition of the normal component of the dielectric displacement $$D_{2a}^{\perp}-D_{1a}^{\perp}=-\frac{4\pi i\sigma_{t}}{\omega}\,\frac{E_{2a}^{\perp}-E_{2b}^{\perp}}{2}\,.$$ (3) Here $a$ and $b$ denote the left and right nanoparticle, respectively.
The last term accounts for the charge transferred from position $\bm{s}_{a}$ to $\bm{s}_{b}$ through quantum tunneling (i.e., the loss or gain of charge in the pillbox over which Gauss’ law is integrated). Similarly to Eq. (1), we assume that the current is proportional to the tunnel conductivity $\sigma_{t}$ and the average of the electric field along the outer surface normal directions $\hat{\bm{n}}_{a,b}$ [as $\hat{\bm{n}}_{a}$ and $\hat{\bm{n}}_{b}$ in the gap region are approximately antiparallel, $E_{2b}^{\perp}$ in Eq. (3) receives a negative sign]. Note that this choice is by no means unique. We could alternatively assume $\bm{J}_{at}=\sigma_{t}(\bm{E}_{2a}+\bm{E}_{2b})/2$ or $\bm{J}_{at}=\sigma_{t}\bm{E}[(\bm{s}_{a}+\bm{s}_{b})/2]$. In all cases charge remains conserved, since the current $\bm{J}_{at}$ leaving particle $a$ at position $\bm{s}_{a}$ is always opposite to the current $\bm{J}_{bt}$ entering particle $b$, and vice versa. However, the consideration of solely normal currents $J_{t}^{\perp}$ has the advantage that only the boundary condition of the dielectric displacement needs to be modified, whereas the boundary condition for the parallel magnetic field remains unaltered because of our neglect of parallel tunnel currents. Eq. (3) is the central result of this work. It replaces the consideration of artificial dielectric materials through an artificial boundary condition. Contrary to the QCM of Esteban et al. Esteban et al. (2012, 2015), our approach describes quantum tunneling as a genuine non-local process and thus does not suffer from ohmic losses in the tunnel junction. It can also be easily extended to molecular tunnel junctions by lumping all details of the microscopic tunneling process into an effective $\sigma_{t}$ value. Regarding the role of normal and parallel electric fields in tunneling, both models are comparably arbitrary but could be further refined.
However, since in narrow gap regions the plasmonic nearfields preferentially point along the interparticle connection, the detailed $E^{\perp}$ and $\bm{E}^{\|}$ behavior of $\sigma_{t}$ is usually completely irrelevant. In Appendix A we show how to modify the BEM approach of Ref. Garcia de Abajo and Howie, 2002 to account for quantum tunneling, and present the working equations that can be implemented within the MNPBEM toolbox Hohenester and Trügler (2012); Hohenester (2014). III Results We start by considering, in accordance with Refs. Esteban et al., 2012, 2015, the case of two spheres with a gap in the sub-nanometer regime. For the dielectric function we take a Drude-type form $\varepsilon(\omega)=\varepsilon_{0}-\omega_{p}^{2}/(\omega^{2}+i\omega\gamma)$ for gold, $\varepsilon_{2}=1$ for the embedding medium, and $$\varepsilon_{2t}(\ell)=1+\frac{4\pi i\sigma_{t}(\ell)}{\omega}\,,\quad\sigma_{t}(\ell)=-\mathfrak{Im}\left[\frac{\omega_{p}^{2}}{\omega^{2}+i\omega\gamma_{p}e^{\ell/\ell_{c}}}\right]$$ (4) for the tunnel material Esteban et al. (2015). Here $\varepsilon_{0}=10$, $\omega_{p}=9.065$ eV, $\gamma_{p}=0.0708$ eV, and $\ell_{c}=0.04$ nm, and we consider only purely imaginary conductivity corrections for the tunnel material. These model parameters provide a good fit to experimental data Johnson and Christy (1972) for photon energies below 2 eV but underestimate dielectric losses above 2 eV, where $d$-band scatterings set in. Nevertheless, in this work we keep the Drude description to facilitate the comparison with Refs. Esteban et al., 2012, 2015. The frequency dependence and details of $\sigma_{t}$ are the subject of ongoing research efforts; the parametrization of Eq. (4) has been motivated by static tunneling calculations including image charge effects, as well as by time-dependent density functional theory calculations Esteban et al. (2015), while related work has employed theory developed for optical-assisted tunneling in the microwave domain Haus et al.
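The parametrization of Eq. (4) is straightforward to evaluate numerically. The sketch below (our own illustration in Python; energies in eV, lengths in nm, parameter values as quoted above; the function names sigma_t and eps_2t are ours) shows how the tunnel conductivity is suppressed with increasing gap distance:

```python
import math

# Parameters of Eq. (4): energies in eV, lengths in nm (values quoted in the text).
wp, gamma_p, ell_c = 9.065, 0.0708, 0.04

def sigma_t(ell, omega):
    """Tunnel conductivity of Eq. (4); the damping grows exponentially with ell."""
    drude = wp**2 / (omega**2 + 1j * omega * gamma_p * math.exp(ell / ell_c))
    return -drude.imag

def eps_2t(ell, omega):
    """Effective gap permittivity 1 + 4*pi*i*sigma_t/omega entering the QCM."""
    return 1 + 4j * math.pi * sigma_t(ell, omega) / omega
```

Because $\sigma_{t}$ is real in this purely imaginary-correction variant, the real part of $\varepsilon_{2t}$ stays at the embedding value, while the conductivity is already strongly suppressed for gaps of a few tenths of a nanometer, which is why many closely spaced material layers are needed to resolve it in the volume QCM.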
(2014) or diagrammatic expansions for the ac conductance through inclusion of higher-order electron-plasmon interactions Kaasbjerg and Nitzan (2015). As the primary goal of this work is the derivation and implementation of a boundary QCM using a suitable $\sigma_{t}$ parametrization, we will here not further elaborate on this point. Fig. 2 compares the extinction cross sections for different gap distances $d_{\rm gap}$, using a single artificial tunnel material in between the two spheres (see inset). The material covers the distance range from $d_{\rm gap}$ to $d_{\rm gap}+0.2$ nm, and the dielectric function $\varepsilon_{2t}(d_{\rm gap}+0.1\,\text{nm})$ is evaluated at the average distance. For the boundary QCM we use the same value for $\varepsilon_{2t}$ and connect boundary elements of the two neighbouring spheres within the same distance range [26]. With this, we are able to compare the volume and boundary QCM directly. As can be seen in the figure, both volume and boundary QCM give practically identical results over the entire range of gap distances where tunneling sets in. Tunneling is evidenced by the disappearance of the lowest plasmon peak around 1.8 eV with decreasing gap distance, and the onset of the charge transfer peak around 0.8 eV. Similarly to the extinction spectra, the field enhancements in the gap region (not shown) computed within the volume and boundary QCM are in almost perfect agreement. It is gratifying to see that the volume and boundary QCM models compare so well. Next, we show in Fig. 3 results for the full QCM simulations for the same setup as in Fig. 2 and for $d_{\rm gap}=0.075$ nm. For the volume QCM we use five layers of artificial materials, covering the distance range from $d_{\rm gap}$ to $d_{\rm gap}+0.2$ nm, and for the boundary QCM we use for $\varepsilon_{2t}(\ell)$ the respective distances $\ell$ between opposite boundary elements.
Note that we use for both spheres the same boundary meshes with a refined discretization at one of the poles [26], and simply flip and displace the spheres to obtain the dimer structure shown in the inset. Again we find good agreement between the volume and boundary QCM, although the volume QCM leads to a more pronounced extinction peak of the charge transfer plasmon. We believe that this is an artefact caused by our BEM implementation of the volume QCM. The BEM approach of García de Abajo and Howie matches electromagnetic potentials at material boundaries in order to solve Maxwell’s equations Garcia de Abajo and Howie (2002); Hohenester and Trügler (2012). In this approach, an external plane wave excitation only excites materials connected with the embedding medium (in the gap region the outermost material is the last layer of artificial tunneling material), and the excitation is then passed to the inner layers through the solution of Maxwell’s equations Garcia de Abajo and Howie (2002). While this typically causes no problems, it becomes computationally demanding for the inhomogeneous tunnel material, which is modelled through closely spaced onion-like layers. In our simulations we had difficulties obtaining fully converged results when increasing the number of layers, probably due to artificial reflections and transmissions of the incoming light at the layer interfaces. When we consider the tunneling materials only in the BEM solutions and (artificially) neglect them in the light excitation (see simulation results with diamond symbols), we obtain perfect agreement for the charge transfer peak between volume and boundary QCM. Also the (minor) differences at higher energies are probably due to implementation problems of the volume QCM within the BEM approach. The squares in Fig. 3 report results of a slight variant of the boundary QCM.
Here we do not connect opposite boundary elements (as one can only do for flipped nanoparticles) but connect the closest boundary elements of the two nanoparticles. Apparently, such an approach also works for nanoparticle arrangements with a lower degree of symmetry. As one infers from a comparison of the boundary QCM1 and QCM2 results, these two approaches are in perfect agreement. As a final example, in Fig. 4 we show results for a symmetric trimer structure consisting of three spheres, demonstrating that simulations of more complicated nanoparticles and nanoparticle arrangements can be easily performed with our BEM approach. For the trimer structure we again observe the appearance of the charge transfer plasmon peak. Due to the triangular symmetry, the extinction cross sections do not depend on the polarization of the incoming light (propagating perpendicularly to the trimer plane). IV Summary and conclusions To summarize, we have presented a variant of the quantum corrected model (QCM) where tunneling is accounted for by the consideration of non-local boundary conditions. This approach has the advantage that it emphasizes the non-local nature of tunneling and does not introduce artificial ohmic tunnel losses. We have developed the methodology for implementing the boundary QCM within a boundary element method (BEM) approach, and have presented simulation results which have compared well with results of the original volume QCM. Minor differences between the two approaches have been attributed to intrinsic difficulties of our BEM scheme to properly implement a volume QCM. We believe that the volume and boundary QCM are closely related, but the availability of a different approach might be beneficial for conceptual reasons as well as for BEM implementations. Our approach might prove particularly useful for molecular tunnel junctions with larger gap sizes. 
Supplementing the QCM with non-local effects in the metal dielectric function, again through modified boundary conditions, should also be relatively straightforward. Future work will additionally address the possibilities to compute the tunnel conductivities through ab-initio calculations and to pass the pertinent tunnel parameters to classical electrodynamic simulations including quantum corrections. Acknowledgments This work has been supported in part by the Austrian Science Fund FWF under the SFB F49 NextLite and by NAWI Graz. I am most grateful to Claudia Draxl for her hospitality during my visit at the Humboldt University of Berlin, where part of this work has been performed. Javier Aizpurua is acknowledged for helpful discussions. Appendix A Here we show how to implement the non-local quantum tunneling of Eq. (3) in the BEM approach of García de Abajo and Howie Garcia de Abajo and Howie (2002) (in the following we refer to the equations of this work with a preceding G). Importantly, we can carry over most results, with the only exception of Eqs. (G17,G18), which become modified through the nonlocal boundary condition. The continuity of the scalar and vector potentials $\phi$ and $\bm{A}$ reads [Eqs. (G10,G11)] $$\displaystyle G_{1}\sigma_{1}-G_{2}\sigma_{2}$$ $$\displaystyle=$$ $$\displaystyle\phi_{2}^{e}-\phi_{1}^{e}=\varphi$$ $$\displaystyle G_{1}\bm{h}_{1}-G_{2}\bm{h}_{2}$$ $$\displaystyle=$$ $$\displaystyle\bm{A}_{2}^{e}-\bm{A}_{1}^{e}=\bm{a}\,,$$ where $G_{1}$ and $G_{2}$ denote the Green functions inside and outside the nanoparticle, and $\sigma$ and $\bm{h}$ are artificial surface charge and current distributions at the particle boundary, which are chosen such that the boundary conditions of Maxwell’s equations are fulfilled. $\phi^{e}$ and $\bm{A}^{e}$ are the scalar and vector potentials of an external excitation, such as a plane wave. For further details see Refs. Garcia de Abajo and Howie, 2002; Hohenester and Trügler, 2012.
The continuity of the magnetic field becomes [see also Eq. (G14)] $$H_{1}\bm{h}_{1}-H_{2}\bm{h}_{2}-ik\,\hat{\bm{n}}\left(\varepsilon_{1}G_{1}\sigma_{1}-\varepsilon_{2}G_{2}\sigma_{2}\right)=\bm{\alpha}^{\prime}\,$$ with $H_{1,2}$ being the surface derivative of $G_{1,2}$ taken at the particle in- or outside, and $\bm{\alpha}^{\prime}$ is defined through Eq. (G15). For the continuity of the normal dielectric displacement we get $$\varepsilon_{1}H_{1}\sigma_{1}-\varepsilon_{2t}H_{2}\sigma_{2}-ik\left(\varepsilon_{1}\hat{\bm{n}}\cdot G_{1}\bm{h}_{1}-\varepsilon_{2t}\hat{\bm{n}}\cdot G_{2}\bm{h}_{2}\right)={D^{e}}^{\prime}\,,$$ with $${D^{e}}^{\prime}=\varepsilon_{1}\left(ik\,\hat{\bm{n}}\cdot\bm{A}_{1}^{e}-{\phi_{1}^{e}}^{\prime}\right)-\varepsilon_{2t}\left(ik\,\hat{\bm{n}}\cdot\bm{A}_{2}^{e}-{\phi_{2}^{e}}^{\prime}\right)\,.$$ Here ${\phi_{1,2}^{e}}^{\prime}$ denote the surface derivatives of the external scalar potentials, and $\varepsilon_{2t}=\varepsilon_{2}+(4\pi i\sigma_{t}/\omega)$ is a non-local dielectric function accounting for quantum tunneling, see Eq. (3). Because $\varepsilon_{2t}$ is nonlocal and connects points $\bm{s}_{a}$ and $\bm{s}_{b}$ through tunneling, it cannot be commuted with the Green functions as in the original BEM approach Garcia de Abajo and Howie (2002). Yet, the derivation of the BEM equations is not too different.
First, we use $$\displaystyle G_{1}\sigma_{1}$$ $$\displaystyle=$$ $$\displaystyle G_{2}\sigma_{2}+\varphi$$ $$\displaystyle G_{1}\bm{h}_{1}$$ $$\displaystyle=$$ $$\displaystyle G_{2}\bm{h}_{2}+\bm{a}$$ to replace in the continuity equation (G14) of the magnetic field $\sigma_{1}$, $\bm{h}_{1}$ by $\sigma_{2}$, $\bm{h}_{2}$, $$\left(\Sigma_{1}-\Sigma_{2}\right)G_{2}\bm{h}_{2}-ik\,\hat{\bm{n}}\left(\varepsilon_{1}-\varepsilon_{2}\right)G_{2}\sigma_{2}=\bm{\alpha}\,,$$ with $\Sigma_{1}=H_{1}G_{1}^{-1}$, $\Sigma_{2}=H_{2}G_{2}^{-1}$ and $\bm{\alpha}=\bm{\alpha}^{\prime}-\Sigma_{1}\bm{a}+ik\,\hat{\bm{n}}\varepsilon_{1}\varphi$. The continuity of the normal dielectric displacement becomes $$\left(\varepsilon_{1}\Sigma_{1}-\varepsilon_{2t}\Sigma_{2}\right)G_{2}\sigma_{2}-ik\left(\varepsilon_{1}-\varepsilon_{2t}\right)\hat{\bm{n}}\cdot G_{2}\bm{h}_{2}=D^{e}\,,$$ with $D^{e}={D^{e}}^{\prime}-\varepsilon_{1}\Sigma_{1}\varphi+ik\varepsilon_{1}\hat{\bm{n}}\cdot\bm{a}$. We can use the continuity equation for the magnetic field to express the surface current $\bm{h}_{2}$ in terms of $\sigma_{2}$, $$G_{2}\bm{h}_{2}=\Delta^{-1}\left[ik\,\hat{\bm{n}}(\varepsilon_{1}-\varepsilon_{2})G_{2}\sigma_{2}+\bm{\alpha}\right]\,,$$ (5) with $\Delta=\Sigma_{1}-\Sigma_{2}$. Inserting this expression into the continuity equation for the normal dielectric displacement we finally obtain $$\displaystyle\Bigl{[}\varepsilon_{1}\Sigma_{1}-\varepsilon_{2t}\Sigma_{2}+k^{2}(\varepsilon_{1}-\varepsilon_{2t})\hat{\bm{n}}\cdot\Delta^{-1}\hat{\bm{n}}(\varepsilon_{1}-\varepsilon_{2})\Bigr{]}G_{2}\sigma_{2}$$ $$\displaystyle\qquad=D^{e}+ik(\varepsilon_{1}-\varepsilon_{2t})\hat{\bm{n}}\cdot\Delta^{-1}\bm{\alpha}\,.$$ (6) Equations (5) and (6) are the two working equations of our BEM approach, which can be solved through matrix inversion.
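In a discretized implementation, Eqs. (5) and (6) are ordinary linear systems. The toy sketch below (our own illustration; random placeholder matrices stand in for the discretized operators $\Sigma_{1,2}$, the retardation terms of Eq. (6) are dropped, i.e. the quasistatic limit $k\to 0$, and the dielectric values are illustrative) shows the final matrix-inversion step:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 12                                    # toy number of boundary elements
eps1, eps2t = -20.0 + 1.0j, 1.0 + 0.5j    # illustrative dielectric values

# Placeholder operators Sigma_{1,2} = H_{1,2} G_{1,2}^{-1}; in a real BEM these
# follow from the discretized Green functions and their surface derivatives.
Sigma1 = rng.standard_normal((n, n))
Sigma2 = rng.standard_normal((n, n))
De = rng.standard_normal(n) + 0j          # external excitation term D^e

# Quasistatic limit of Eq. (6): [eps1*Sigma1 - eps2t*Sigma2] (G2 sigma2) = D^e.
A = eps1 * Sigma1 - eps2t * Sigma2
g2_sigma2 = np.linalg.solve(A, De)
```

In the boundary QCM, the only change relative to a conventional BEM solver is that $\varepsilon_{2t}$ acts as a non-local matrix connecting opposite boundary elements rather than as a scalar prefactor.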
Once the surface charges and currents $\sigma_{2}$ and $\bm{h}_{2}$ are known for a given external excitation, one can compute the electrodynamic potentials and fields everywhere else. References Maier (2007) S. A. Maier, Plasmonics: Fundamentals and Applications (Springer, Berlin, 2007). Atwater (2007) H. Atwater, Scientific American 296(4), 56 (2007). Schuller et al. (2010) J. A. Schuller, E. S. Barnard, W. Cai, Y. C. Jun, J. S. White, and M. L. Brongersma, Nature Mat. 9, 193 (2010). Halas (2010) N. Halas, Nano Lett. 10, 3816 (2010). Stockman (2011) M. I. Stockman, Optics Express 19, 22029 (2011). David and Garcia de Abajo (2011) C. David and F. J. Garcia de Abajo, J. Phys. Chem. C 115, 19470 (2011). Luo et al. (2013) Y. Luo, A. I. Fernandez-Dominguez, A. Wiener, S. A. Maier, and J. B. Pendry, Phys. Rev. Lett. 111, 093901 (2013). Mortensen et al. (2014) N. A. Mortensen, S. Raza, M. Wubs, T. Sondergaard, and S. I. Bozhevolnyi, Nature Commun. 5, 3809 (2014). (9) G. Toscano, C. Rockstuhl, F. Evers, H. Xu, N. A. Mortensen, and M. Wubs, arXiv:1408.5862. Ciraci et al. (2012) C. Ciraci, R. T. Hill, Y. Urzhumov, A. I. Fernandez-Dominguez, S. A. Maier, J. B. Pendry, A. Chilkoti, and D. R. Smith, Science 337, 1072 (2012). Esteban et al. (2012) R. Esteban, A. G. Borisov, P. Nordlander, and J. Aizpurua, Nature Commun. 3, 825 (2012). David and García de Abajo (2014) C. David and J. García de Abajo, ACS Nano 8, 9558 (2014). Esteban et al. (2015) R. Esteban, A. Zugarramurdi, P. Zhang, P. Nordlander, F. J. Garcia-Vidal, A. G. Borisov, and J. Aizpurua, Faraday Discussions (2015). Savage et al. (2012) K. J. Savage, M. M. Hawkeye, R. Esteban, A. G. Borisov, J. Aizpurua, and J. J. Baumberg, Nature 491, 574 (2012). Tan et al. (2014) S. F. Tan, L. Wu, J. K. W. Yang, P. Bai, M. Bosman, and C. A. Nijhuis, Science 343, 1496 (2014). Pitarke et al. (1990) J. M. Pitarke, F. Flores, and P. M. Echenique, Surf. Sci. 234, 1 (1990). Yee (1966) K. S. Yee, IEEE Trans.
on Antennas and Propagation 14, 302 (1966). Taflove and Hagness (2005) A. Taflove and S. C. Hagness, Computational Electrodynamics (Artech House, Boston, 2005). Garcia de Abajo and Howie (2002) F. J. Garcia de Abajo and A. Howie, Phys. Rev. B 65, 115418 (2002). Hohenester and Trügler (2012) U. Hohenester and A. Trügler, Comp. Phys. Commun. 183, 370 (2012). Hohenester (2014) U. Hohenester, Comp. Phys. Commun. 185, 1177 (2014). Datta (1997) S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge, Cambridge, 1997). Haus et al. (2014) J. W. Haus, D. de Ceglia, M. A. Vincenti, and M. Scalora, J. Opt. Soc. Am. B 31, A13 (2014). Kaasbjerg and Nitzan (2015) K. Kaasbjerg and A. Nitzan, Phys. Rev. Lett. 114, 126803 (2015). Johnson and Christy (1972) P. B. Johnson and R. W. Christy, Phys. Rev. B 6, 4370 (1972). (26) For the sphere discretization we typically use grid sizes with 20 azimuthal angles, 5–10 polar angles for each layer of the tunneling material, and 20 polar angles for the remaining sphere. The ribbons of the onion-like tunnel materials have about 10 discretization points along the nanoparticle connection. We perform refined boundary element integrations using the MNPBEM toolbox Hohenester and Trügler (2012), and checked the convergence of our simulations by systematically increasing the number of discretization points.
A Zero-One Law for Markov Chains Michael Grabchak111Email address: [email protected]  and Isaac M. Sonin222Email address: [email protected]    University of North Carolina Charlotte (December 8, 2020) Abstract We prove an analog of the classical Zero-One Law for both homogeneous and nonhomogeneous Markov chains (MC). Its almost precise formulation is simple: given any event $A$ from the tail $\sigma$-algebra of MC $(Z_{n})$, for large $n$, with probability near one, the trajectories of the MC are in states $i$, where $P(A|Z_{n}=i)$ is either near $0$ or near $1$. A similar statement holds for the entrance $\sigma$-algebra, when $n$ tends to $-\infty$. To formulate this second result, we give detailed results on the existence of nonhomogeneous Markov chains indexed by $\mathbb{Z}_{-}$ or $\mathbb{Z}$ in both the finite and countable cases. This extends a well-known result due to Kolmogorov. Further, in our discussion, we note an interesting dichotomy between two commonly used definitions of MCs. 1 Introduction This paper addresses two problems in the study of nonhomogeneous Markov Chains (MC), where we understand homogeneous MCs to be an important special case. The first is a zero-one law for MCs and the second is an extension of a result, due to Kolmogorov, on the existence of MCs indexed by the integers and thus not having an initial starting point. The zero-one law for MCs was first formulated in Sonin (1991) [22], but without detailed proofs and with some gaps. We remedy this by giving two detailed proofs. The first is simple, but does not illustrate the underlying mechanics. The second is constructive and helps to illustrate how past (observed) events can inform future (tail) events. In the case where the MC is indexed by the integers, we also formulate a zero-one law for the entrance $\sigma$-algebra, when $n$ tends to $-\infty$. The second topic is related to a problem first posed by Kolmogorov in a short paper from 1936 [18]. 
While that paper is usually remembered for introducing the concept of a reversible MC, most of it deals with the following question: Given a sequence of stochastic matrices $(P_{n})_{n\in\mathbb{N}_{-}}$, does there exist a Markov chain indexed by $\mathbb{Z}_{-}$ with this as its sequence of transition matrices? When the number of states is finite and fixed over time, [18] answers in the affirmative. In this paper, we extend that result to the case where the cardinalities are finite, but may be changing and may approach infinity. We further show that, in the case where the cardinalities are countably infinite, the MC may not exist and we give a sufficient condition for when it does. These results help to explain when a MC indexed by the integers exists and thus when we can talk about the entrance $\sigma$-algebra. In our discussion, we note an interesting dichotomy between two commonly used definitions of MCs. The first is in terms of a sequence of random variables that satisfies the Markov property and the second is in terms of a sequence of transition matrices (or, in the homogeneous case, just one transition matrix). The rest of this paper is organized as follows. In Section 2 we formulate the zero-one law for the tail $\sigma$-algebra. In Section 3 we extend this to the entrance $\sigma$-algebra and discuss the above mentioned dichotomy. In Section 4 we discuss the existence of MCs on $\mathbb{Z}_{-}$ and $\mathbb{Z}$ and extend the results of [18]. Proofs are postponed to Section 5. We conclude in Section 6 by discussing some directions for future work. A short historical note is given in Section 7. Before proceeding, we introduce some notation. Let $\mathbb{Z}_{+}=\{0,1,2,\dots\}$, $\mathbb{N}=\{1,2,\dots\}$, $\mathbb{Z}_{-}=\{\dots,-2,-1,0\}$, and $\mathbb{N}_{-}=\{\dots,-2,-1\}$. Let $\mathbb{R}^{d}$ be the space of $d$-dimensional row vectors. For $m\in\mathbb{R}^{d}$, we write $m(i)$ to denote the $i$th coordinate of $m$. 
Let $e_{i}^{(d)}\in\mathbb{R}^{d}$ be the $i$th row of the $d\times d$-dimensional identity matrix. When we talk about convergence of a sequence of finite or infinite matrices, we mean pointwise convergence of their coordinates. For a set $A$ we write $|A|$ to denote its cardinality and $I_{A}$ to denote the indicator function on $A$. We write $\bigtriangleup$ to denote the symmetric difference operator. If $E,E_{1},E_{2},\dots$ are events in the same probability space, we write $E_{n}\to E$ a.s. to mean that $P(E_{n}\bigtriangleup E)\to 0$ as $n\to\infty$. 2 Zero-One Law for Markov Chains The classical Zero-One Law plays an important role in probability theory. This may be illustrated by the fact that it is the very first theorem presented in the well-known advanced textbook on probability theory by D. Stroock [26]. Before formulating this result, we introduce some notation. Let $$X_{0},X_{1},X_{2},\dots$$ be a sequence of random variables defined on some probability space $(\Omega,\mathcal{F},P)$. Let $\mathcal{F}_{nm}=\sigma(X_{n},\dots,X_{m})$ for $0\leq n\leq m<\infty$, let $\mathcal{F}_{n\infty}=\sigma(X_{n},\dots)$, and let $\mathcal{T}=\bigcap_{n\geq 0}\mathcal{F}_{n\infty}$ be the tail $\sigma$-algebra. Fact 1. (Kolmogorov’s Zero-One Law) Assume that $(X_{n})$ is a sequence of independent random variables. If $A\in\mathcal{T}$, then $P(A)=0$ or $P(A)=1$. While the proof of this law is quite simple, it is important to note that the tail $\sigma$-algebra is a very rich object and may have many complicated events. This is true even for fairly simple situations such as repeatedly tossing a coin. Perhaps, the most natural relaxation of the assumption of independence is to assume that the sequence of random variables forms a Markov chain. In this case, the tail $\sigma$-algebra does not, in general, satisfy a zero-one law and may contain a number of masses, see [7], [15], and the references therein. 
In Blackwell and Freedman (1964) [4] the following result for when a MC satisfies the zero-one law is given. Fact 2. (Blackwell and Freedman’s Zero-One Law) Assume that $(X_{n})$ is a homogeneous and recurrent MC on a finite or countably infinite state space such that $P(X_{0}=i)=1$ for some $i$. If $A\in\mathcal{T}$, then $P(A)=0$ or $P(A)=1$. If we remove any of the assumptions on the MC, then there will be examples where an event $A\in\mathcal{T}$ with $0<P(A)<1$ exists. On the other hand, Sonin (1991) [22] showed that something akin to the zero-one law nevertheless holds. This result holds even for general nonhomogeneous MCs, with no assumptions made on the initial distribution or on the sequence of transition matrices. We begin by describing the general setup. Let $(S_{n})_{n\in\mathbb{Z}_{+}}$ be a sequence of finite or countably infinite state spaces and let $(Z_{n})_{n\in\mathbb{Z}_{+}}$ be a MC with $Z_{n}$ taking values in $S_{n}$. Here by MC we mean that the random sequence $(Z_{n})_{n\in\mathbb{Z}_{+}}$ satisfies the Markov property. Fix $A\in\mathcal{T}$ and for $0\leq a\leq b\leq 1$ let $$S_{n}(a,b)=\{i\in S_{n}:a\leq P(A|Z_{n}=i)\leq b\}.$$ Theorem 1. If $0<p<q<1$, then the following hold: a) $\lim_{n\to\infty}P(Z_{n}\in S_{n}(q,1))=P(A)$, b) $\lim_{n\to\infty}P(Z_{n}\in S_{n}(p,q))=0$, and c) $\lim_{n\to\infty}P(Z_{n}\in S_{n}(0,p))=1-P(A)$. Remark 1. From the proof we will see that the convergence is not just of probabilities, but of events. Specifically, for any $0<p<q<1$ we have the stronger result that $a^{\prime})$ $(Z_{n}\in S_{n}(q,1))\to A$ a.s., $b^{\prime})$ $(Z_{n}\in S_{n}(p,q))\to\emptyset$ a.s., and $c^{\prime})$ $(Z_{n}\in S_{n}(0,p))\to A^{c}$ a.s. Theorem 1 means that, for large $n$, with probability near one, the trajectories of $(Z_{n})$ are in states $i$ where $P(A|Z_{n}=i)$ is either near $0$ or near $1$.
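Theorem 1 can be observed numerically in a simple example: the fair gambler's-ruin walk on $\{0,\dots,N\}$ with absorbing endpoints, for which the tail event $A=\{$absorption at $N\}$ satisfies $P(A|Z_{n}=i)=i/N$. The following sketch (our illustration, in Python) checks that, for large $n$, almost all trajectories are in states where this conditional probability is near $0$ or near $1$:

```python
import random

random.seed(1)
N, start, steps, trials = 10, 3, 400, 20000
h = [i / N for i in range(N + 1)]        # h[i] = P(A | Z_n = i) for the fair walk

def endpoint():
    """Position of the walk after `steps` steps, started from `start`."""
    z = start
    for _ in range(steps):
        if z in (0, N):                  # absorbed: h(z) is exactly 0 or 1
            break
        z += random.choice((-1, 1))
    return z

final = [endpoint() for _ in range(trials)]
p, q = 0.1, 0.9
frac_high = sum(h[z] >= q for z in final) / trials    # should approach P(A)
frac_mid = sum(p < h[z] < q for z in final) / trials  # should approach 0
```

With these parameters, frac_high comes out close to $P(A)=\texttt{start}/N=0.3$ and frac_mid is essentially zero, in line with parts a) and b) of Theorem 1.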
In [22] it is stated that this result “may be known but we know of no reference.” At this point we still have not seen a result of this type formulated elsewhere. However, we note that related ideas appear in [8] and [15]. The proof of Theorem 1, as given in [22], is incomplete and has gaps. In Sections 5.1 and 5.2 we give two detailed proofs. The first is simpler, but uses heavy machinery that obscures the underlying mechanics. The second is longer but constructive. It helps to illuminate how the $\sigma$-algebras $\mathcal{F}_{kn}$ and $\mathcal{F}_{k\infty}$ converge to the tail $\sigma$-algebra $\mathcal{T}$. 3 Zero-One Law for the Entrance $\sigma$-Algebra and a Dichotomy in the Definition of a MC In this section we extend the zero-one law for MCs to the entrance $\sigma$-algebra. We begin the discussion in a more general context. Let $$\dots,X_{-2},X_{-1},X_{0},X_{1},X_{2},\dots$$ be a sequence of random variables indexed by $\mathbb{Z}$. The so-called entrance $\sigma$-algebra is defined by $\mathcal{H}=\bigcap_{n\leq 0}\mathcal{F}_{-\infty n}$, where $\mathcal{F}_{-\infty n}=\sigma(\dots,X_{n})$. For $n\in\mathbb{Z}_{+}$, define $Y_{n}=X_{-n}$ and note that $\mathcal{H}$ is the tail $\sigma$-algebra for $(Y_{n})$. Thus the entrance $\sigma$-algebra is really just a tail $\sigma$-algebra, but when we change the arrow of time and run the sequence backwards. It follows that the entrance $\sigma$-algebra of $(X_{n})$ satisfies a zero-one law if and only if the tail $\sigma$-algebra of $(Y_{n})$ satisfies it. In the simplest case, when $(X_{n})$ is a sequence of independent random variables, so is $(Y_{n})$, and it follows that Kolmogorov’s zero-one law (Fact 1) holds for the entrance $\sigma$-algebra. We now turn to the case of interest. Let $(Z_{n})_{n\in\mathbb{Z}}$ be a MC and, as before, let $Y_{n}=Z_{-n}$ for $n\in\mathbb{Z}_{+}$. We refer to $(Z_{n})$ as the MC in forward time and to $(Y_{n})$ as the MC in reverse time.
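By Bayes’ rule, the one-step transitions of the reverse-time chain are $P(Z_{n}=i|Z_{n+1}=j)=P(Z_{n+1}=j|Z_{n}=i)\,P(Z_{n}=i)/P(Z_{n+1}=j)$ (made precise in Proposition 1 below). The sketch below (our illustration; a hypothetical two-state homogeneous chain in Python) shows that this reverse kernel is a stochastic matrix that depends on the distribution of $Z_{n}$, and hence on the initial distribution:

```python
# Forward transition matrix of a hypothetical homogeneous two-state MC.
P = [[0.9, 0.1],
     [0.4, 0.6]]

def step(m):
    """Propagate a marginal distribution one step forward: m -> m P."""
    return [sum(m[i] * P[i][j] for i in range(2)) for j in range(2)]

def reverse_kernel(m):
    """Q[j][i] = P(Z_n = i | Z_{n+1} = j), via Bayes' rule with marginal m at time n."""
    m_next = step(m)
    return [[P[i][j] * m[i] / m_next[j] for i in range(2)] for j in range(2)]

Q_a = reverse_kernel([1.0, 0.0])   # chain started in state 1
Q_b = reverse_kernel([0.5, 0.5])   # chain started uniformly
```

Both Q_a and Q_b have rows summing to one, but they are different matrices, which is why no single reverse-time transition-matrix model can govern all initial distributions.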
It is well-known that the sequence $(Y_{n})$ is also a MC. In fact, a simple application of Bayes’ rule gives the following result. Proposition 1. If $(Z_{n})_{n_{0}<n<n_{1}}$, where $-\infty\leq n_{0}<n_{1}\leq\infty$, is a MC, then for any integers $n_{0}<n\leq s<n_{1}$ $$\displaystyle P(Z_{n}=i_{n}|Z_{n+1}=i_{n+1},\dots,Z_{s}=i_{s})$$ $$\displaystyle=$$ $$\displaystyle P(Z_{n}=i_{n}|Z_{n+1}=i_{n+1})$$ (1) $$\displaystyle=$$ $$\displaystyle P(Z_{n+1}=i_{n+1}|Z_{n}=i_{n})\frac{P(Z_{n}=i_{n})}{P(Z_{n+1}=i_{n+1})}.$$ Since $(Y_{n})_{n\in\mathbb{Z}_{+}}$ is a MC and $\mathcal{H}$ is its tail $\sigma$-algebra, the zero-one law for MCs, i.e. Theorem 1 and Remark 1, remains true for $A\in\mathcal{H}$ so long as we take $n\to-\infty$. In a similar way, we can define a MC indexed by $\mathbb{Z}_{-}$ and extend the zero-one law for MCs to that case. It may be interesting to note that, even when the assumptions of Blackwell and Freedman’s zero-one law (Fact 2) hold for $(Z_{n})$, this does not guarantee that they will hold for $(Y_{n})$. This is because one of the assumptions is that the MC is homogeneous, but, as is clear from (1), when $(Z_{n})$ is homogeneous, $(Y_{n})$, in general, is not. While this lack of homogeneity, or equivalently of stationary transitions, in reverse time is well-known, it may nevertheless appear to be surprising. As Hunt (1960) [14] points out, “In view of the symmetry of past and future in the notion of Markoff chain, the lack of such symmetry in defining Markoff chains with stationary transitions must puzzle many a probabilist.” In fact, this asymmetry is even stronger and gets to the very heart of how MCs are defined. There are two standard definitions of a MC. The first is the one that is used in this paper and in many other places, including the classic textbook [17]. This definition assumes that a MC is a sequence of random variables that satisfies the Markov property. The other definition, which is given in many if not most textbooks, see e.g.
[16], is to start with a sequence of transition matrices $(P_{n})$, where $P_{n}$ governs the transitions at time $n$. In the homogeneous case all of these matrices are equal to one transition matrix $P$. We refer to the collection of matrices $(P_{n})$, or to $P$ in the homogeneous case, as a Markov Chain model (MCM). This model does not define one MC, but a family of MCs, each determined by an initial distribution. When we fix an initial distribution, we fix the specific Markov chain. In forward time these two definitions are almost equivalent and, for this reason, not much attention is generally paid to the difference. However, their equivalence breaks down for MCs in reverse time. To see this, consider a MC in forward time that is governed by some MCM. From (1) it is clear that the transition matrices of the MC in reverse time depend on the initial distribution and so, in general, no MCM can exist in reverse time. There is an important exception, which is often used to circumvent this issue, see e.g. [11]. If a MC is both homogeneous and stationary, then a MCM will exist in both forward and backward time, although the transition matrices may be different. They are the same only under additional assumptions, which lead to the so-called reversible MCs. 4 Existence of Markov Chains on $\mathbb{Z}_{-}$ and $\mathbb{Z}$ In the previous section we discussed MCs indexed by $\mathbb{Z}_{-}$ and $\mathbb{Z}$. However, we did not consider the question of whether such MCs exist. This is not a trivial question because such a MC does not have a starting point in time; it starts at “minus infinity.” Thus there is no initial distribution. Note that we are not talking about the MC going in a backwards direction, the arrow of time goes, as usual, from left to right. 
The question of when such a MC exists was first posed by Kolmogorov in [18] and was formulated as follows: Given a sequence of stochastic matrices $(P_{n})_{n\in\mathbb{N}_{-}}$, does there exist a MC $(Z_{n})_{n\in\mathbb{Z}_{-}}$ with these as its transition matrices? We distinguish three cases: • Finite constant: the number of states in each state space is finite and equal to some integer $N$. • Finite: the number of states in each state space is finite, but may approach infinity as $n\to-\infty$. • Countable: all of the state spaces have a countably infinite number of states. To the best of our knowledge only the finite constant case has been considered in the literature, see [18] and [3]. In this case, Kolmogorov [18] showed that a MC always exists and gave a necessary and sufficient condition for uniqueness. However, the proofs in [18] are not very detailed. We give detailed proofs, which hold not only in the finite constant case but in the more general finite case. Our proofs do not seem to be exactly what Kolmogorov had in mind, but they are along similar lines. We also consider the countable case, where we show that there are situations when a MC does not exist and give a general sufficient condition for when it does. Throughout, our focus is on the case of MCs indexed by $\mathbb{Z}_{-}$. However, all of our results immediately extend to MCs indexed by $\mathbb{Z}$ since MCs indexed by $\mathbb{Z}_{+}$ always exist. 4.1 Finite Case Let $(S_{n})_{n\in\mathbb{Z}_{-}}$ be a sequence of finite state spaces with $|S_{n}|=N_{n}<\infty$, where the sequence $(N_{n})$ need not be bounded as $n\to-\infty$. For simplicity of notation and without loss of generality, we identify $S_{n}$ with the set $\{1,2,\dots,N_{n}\}$. For $n\leq 0$ let $$D(n)=\left\{x\in\mathbb{R}^{N_{n}}:\sum_{i=1}^{N_{n}}x(i)=1\mbox{ and }x(1),\dots,x(N_{n})\geq 0\right\}$$ be the probability simplex, i.e. the collection of all probability measures on $S_{n}$.
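Before turning to the general theory, here is a numerical sketch of the finite constant case with $N=2$. The swap matrices below are an illustrative choice (the same example reappears in the discussion of Theorem 4): a MC on $\mathbb{Z}_{-}$ exists, and in fact one exists for every choice of $m_{0}$, so uniqueness can fail.

```python
import numpy as np

# Made-up finite constant example with N = 2: for the swap matrices
# P_n = [[0, 1], [1, 0]], a chain indexed by Z_- exists. Starting from any
# m_0, the alternating family m_n = m_0 (even n) and m_0 swapped (odd n)
# satisfies the consistency requirement m_{n+1} = m_n P_n for all n < 0.

swap = np.array([[0., 1.],
                 [1., 0.]])
m0 = np.array([0.3, 0.7])            # an arbitrary distribution at time 0

def m(n):
    """Candidate distribution at time n <= 0."""
    return m0 if n % 2 == 0 else m0 @ swap

for n in range(-6, 0):
    assert np.allclose(m(n) @ swap, m(n + 1))
print("a backward chain exists for every choice of m_0")
```

Since every $m_{0}$ gives a valid chain, this example also shows that existence alone says nothing about uniqueness of the distributions.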
Let $(P_{n})_{n\in\mathbb{N}_{-}}$ be a sequence of stochastic matrices, with $P_{n}$ being an $N_{n}\times N_{n+1}$ matrix representing the transition from time $n$ to time $n+1$. For $s<t$, define multistep transition matrices by $P_{st}=\prod_{n=s}^{t-1}P_{n}$. Note that $P_{n}=P_{n,n+1}$ and that $$\displaystyle P_{st}=P_{su}P_{ut},\ \ s\leq u\leq t.$$ (2) We identify these matrices with the linear transformations $P_{st}:\mathbb{R}^{N_{s}}\mapsto\mathbb{R}^{N_{t}}$ given by $P_{st}(m)=mP_{st}$ for $m\in\mathbb{R}^{N_{s}}$. Note that we use $P_{st}$ to represent both the matrix and the corresponding linear transformation, but this should not cause any confusion. The problem of interest is to determine whether there exists a MC $(Z_{n})_{n\in\mathbb{Z}_{-}}$ with $Z_{n}$ taking values in $S_{n}$, which is governed by the sequence of transition matrices $(P_{n})_{n\in\mathbb{N}_{-}}$. Since we are starting at “minus infinity,” there is no initial distribution in this case. By Kolmogorov’s extension theorem, the problem is equivalent to asking if there exists a sequence of vectors $(m_{n})_{n\in\mathbb{Z}_{-}}$ such that $$\displaystyle m_{n}\in D(n)\mbox{ and }m_{n+1}=P_{n}(m_{n})=m_{n}P_{n},\ \ n\in\mathbb{N}_{-}.$$ (3) In this case, $P(Z_{n}=i)=m_{n}(i)$ and for $k<n$ we have $P(Z_{k}=i_{k},Z_{k+1}=i_{k+1},\dots,Z_{n}=i_{n})=m_{k}(i_{k})p_{k}(i_{k},i_{k+1})\cdots p_{n-1}(i_{n-1},i_{n})$, where for $\ell\in\mathbb{N}_{-}$, $p_{\ell}(i,j)$ denotes the element of matrix $P_{\ell}$ that is located in the $i$th row and $j$th column. Theorem 2. At least one sequence of vectors $(m_{n})_{n\in\mathbb{Z}_{-}}$ satisfying (3) exists. With essentially the same proof, we get the following more general result, which will be useful for proving an analogous theorem in the countable case. Theorem 3.
Let $(V_{n})_{n\in\mathbb{Z}_{-}}$ be a sequence of metric spaces and let $(P_{n})_{n\in\mathbb{N}_{-}}$ be a sequence of continuous transformations with $P_{n}:V_{n}\mapsto V_{n+1}$. Assume that there is a sequence of nonempty compact sets $(D(n))_{n\in\mathbb{Z}_{-}}$ with $D(n)\subset V_{n}$ such that the image of $D(n)$ under $P_{n}$ is contained in $D(n+1)$, i.e. $P_{n}(D(n))\subset D(n+1)$. In this case, there exists a sequence of points $(m_{n})_{n\in\mathbb{Z}_{-}}$ with $m_{n}\in D(n)$ such that $$m_{n+1}=P_{n}(m_{n}),\ \ n\in\mathbb{N}_{-}.$$ Before giving our next results, we set up some notation. For $s<t\leq 0$, define the set $\Delta(s,t)\subset D(t)$ to be the image of $D(s)$ under the linear transformation $P_{st}$. Note that, by (2), for any $s<t\leq 0$ $$\displaystyle\ldots\subset\Delta(s,t)\subset\dots\subset\Delta(t-1,t)\subset D(t).$$ (4) The set $D(s)$ is a simplex with $N_{s}$ vertices. More specifically, it is the convex hull of $e_{1}^{(N_{s})},e_{2}^{(N_{s})},\dots,e_{N_{s}}^{(N_{s})}$, the rows of the $N_{s}\times N_{s}$-dimensional identity matrix. Similarly, $\Delta(s,t)$ is the convex hull of $a_{1},a_{2},\dots,a_{N_{s}}$, where $a_{k}=P_{st}(e_{k}^{(N_{s})})$, $k=1,2,\dots,N_{s}$. The meanings of $\Delta(s,t)$ and $a_{k}$ are as follows. If we start running MC $(Z_{n})_{s\leq n\leq t}$ at time $s$ with transition matrices $(P_{n})_{s\leq n\leq t-1}$, then any distribution in $D(s)$ can serve as the distribution of $Z_{s}$, i.e. as the initial distribution. However, at time $t$, the only possible distributions for $Z_{t}$ are those in $\Delta(s,t)$. In particular $a_{k}\in\Delta(s,t)$ corresponds to the distribution that we get at time $t$ if the initial distribution at time $s$ was $e_{k}^{(N_{s})}$. If we let the time $s$ at which we start the MC approach $-\infty$, then the possible distributions of $Z_{t}$ will be contained in $\Delta(t):=\bigcap_{s<t}\Delta(s,t)$.
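The following sketch (a made-up homogeneous two-state example, not from the paper) illustrates these objects numerically: it verifies the identity (2), exhibits a constant solution of (3) built from a stationary distribution, and computes the $\ell^{1}$-distance between the vertices $a_{1}$ and $a_{2}$ of $\Delta(s,t)$, which shrinks geometrically as $s\to-\infty$, so the nested simplices in (4) collapse to a point.

```python
import numpy as np

# Made-up homogeneous example: P_n = P for all n, with N_n = 2 states.
P = np.array([[0.7, 0.3],
              [0.1, 0.9]])

def P_st(s, t):
    """Multistep matrix P_{st} = P_s P_{s+1} ... P_{t-1}; all factors are P here."""
    return np.linalg.matrix_power(P, t - s)

# the semigroup identity (2): P_{st} = P_{su} P_{ut}
assert np.allclose(P_st(-7, 0), P_st(-7, -3) @ P_st(-3, 0))

# a constant solution of (3): m_n = pi for all n, where pi = pi P
# (pi is the left eigenvector of P for eigenvalue 1, normalized to sum 1)
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
assert np.allclose(pi @ P, pi)
print("stationary pi =", pi)                 # (0.25, 0.75) for this P

# vertices a_k = e_k P_{st} of Delta(s, t): the rows of P_{st}; their
# l^1-distance contracts by the second eigenvalue 0.6 at every step
diams = []
for steps in (1, 5, 20):
    A = P_st(-steps, 0)                      # row k is the vertex a_k
    diams.append(np.abs(A[0] - A[1]).sum())
print("diameters of Delta(-steps, 0):", diams)
```

For this $P$ the intersection $\Delta(0)$ is the single point $\pi$, which is the uniqueness situation described in Theorem 4 below.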
In fact, in the proof of Theorem 2 we show that $\Delta(t)$ is exactly the set of all possible distributions of $Z_{t}$. It follows that such a MC on $\mathbb{Z}_{-}$ exists if and only if $\Delta(0)\neq\emptyset$. We now characterize when the distribution at time $t$ is unique under an additional assumption. This extends the second theorem in [18] and Corollary 2 in [3], which were both formulated for the finite constant case. Theorem 4. Let $N=\liminf_{t\to-\infty}N_{t}$ and assume that $N<\infty$. a) For any $t\leq 0$, $\Delta(t)$ is a simplex with at most $N$ vertices. More specifically, there exists an $N\times N_{t}$-dimensional stochastic matrix $P_{t}^{*}$ and a sequence $(s_{n})\subset\mathbb{Z}_{-}$ with $s_{n}\to-\infty$ and $P_{s_{n}t}\to P^{*}_{t}$ such that $\Delta(t)$ is the convex hull of $\{a_{1},a_{2},\dots,a_{N}\}$, where $a_{i}=P_{t}^{*}(e_{i}^{(N)})$. b) There is a unique $m_{t}\in\Delta(t)$ at time $t$ if and only if for any matrix $P_{t}^{*}$, which satisfies $\lim_{n\to\infty}P_{s_{n}t}=P_{t}^{*}$ for some subsequence $(s_{n})$, all rows of $P_{t}^{*}$ are equal to $m_{t}$. c) Assume, in addition, that $\lim_{t\to-\infty}N_{t}=N<\infty$ exists. In this case, there is a unique distribution at time $t$ if and only if there exists a matrix $P_{t}^{*}$ with $\lim_{s\to-\infty}P_{st}=P_{t}^{*}$ and all rows of $P_{t}^{*}$ are identical and equal to this unique distribution. Note that, in a), if we consider different subsequences, we may get different matrices $P_{t}^{*}$. However, their ranges will, necessarily, be the same. This is true even if all of the state spaces have the same number of elements. A simple example is when $P_{n}=\begin{pmatrix}0&&1\\ 1&&0\end{pmatrix}$ for each $n<0$. In c), if $\lim_{t\to-\infty}N_{t}$ does not exist, then $\lim_{s\to-\infty}P_{st}=P_{t}^{*}$ will not exist. An example is when $P_{2n}=\begin{pmatrix}.5&&.5\end{pmatrix}$ and $P_{2n+1}=\begin{pmatrix}1\\ 1\end{pmatrix}$ for $n<0$.
In this case if $t$ is even, then the possible limits are $P_{t}^{*}=\begin{pmatrix}.5&&.5\\ .5&&.5\end{pmatrix}$ and $P_{t}^{*}=\begin{pmatrix}.5&&.5\end{pmatrix}$, and if $t$ is odd then the possible limits are $P_{t}^{*}=\begin{pmatrix}1\\ 1\end{pmatrix}$ and $P_{t}^{*}=\begin{pmatrix}1\end{pmatrix}$. 4.2 Countable Case We now turn to the countable case, where $|S_{n}|=\infty$ for each $n\leq 0$. For simplicity of notation and without loss of generality, we assume that each $S_{n}=\mathbb{N}=\{1,2,\dots\}$. As usual, let $\ell^{1}$ be the space of absolutely summable sequences $m=(m(1),m(2),\dots)$ of real numbers equipped with the norm $$\|m\|=\sum_{i=1}^{\infty}|m(i)|.$$ Let $$D=\left\{m\in\ell^{1}:\|m\|=1\mbox{ and }m(i)\geq 0\mbox{ for each }i\geq 1\right\}$$ be the set of probability measures in $\ell^{1}$. Let $P$ be an infinite stochastic matrix, see e.g. [17] for details about such matrices. For $1\leq i,j<\infty$, let $p(i,j)$ be the element of $P$ in the $i$th row and $j$th column, let $p(i,\star)$ be the $i$th row of $P$, and let $p(\star,j)$ be the $j$th column of $P$. The fact that $P$ is stochastic means that all of its rows belong to $D$. $P$ corresponds to the linear transformation $P:\ell^{1}\mapsto\ell^{1}$ such that for any $m\in\ell^{1}$, $m^{\prime}=P(m)$ is the vector with $$m^{\prime}(j)=\sum_{i=1}^{\infty}m(i)p(i,j).$$ In this case $$\displaystyle\|m^{\prime}\|=\sum_{j=1}^{\infty}|m^{\prime}(j)|\leq\sum_{i=1}^{\infty}|m(i)|\sum_{j=1}^{\infty}p(i,j)=\sum_{i=1}^{\infty}|m(i)|=\|m\|,$$ (5) which implies that $P$ is bounded, and thus that it is continuous, see e.g. Proposition 5.2 in [13]. Note further that, if $m(i)\geq 0$ for each $i$, then by arguments similar to those in (5), we have $\|m^{\prime}\|=\|m\|$. Let $(P_{n})_{n\in\mathbb{N}_{-}}$ be a sequence of infinite stochastic matrices. For $s<t\leq 0$ let $P_{st}=P_{t-1}\circ P_{t-2}\circ\cdots\circ P_{s}$, where $\circ$ denotes the composition operator.
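The norm facts around (5) can be checked numerically on a finite truncation of an infinite stochastic matrix. In the sketch below, the geometric rows and the truncation size $K$ are made-up choices for illustration: the (truncated) matrix is an $\ell^{1}$-contraction on signed vectors and preserves the norm of nonnegative ones.

```python
import numpy as np

# Truncation of an infinite stochastic matrix whose rows are all the
# geometric distribution p(i, k) = 2^{-k} (a made-up example).
K = 200
i = np.arange(1, K + 1)
P = np.array([[2.0 ** -k for k in i] for _ in i])
P = P / P.sum(axis=1, keepdims=True)        # renormalize the truncated rows

rng = np.random.default_rng(0)
m_signed = rng.standard_normal(K)           # an arbitrary signed vector
m_pos = np.abs(m_signed)                    # a nonnegative vector

# the inequality in (5): ||P(m)|| <= ||m|| for any m in l^1
assert np.abs(m_signed @ P).sum() <= np.abs(m_signed).sum() + 1e-12
# equality for nonnegative m, as noted just after (5)
assert np.isclose(np.abs(m_pos @ P).sum(), m_pos.sum())
print("P is an l1-contraction and preserves the norm of nonnegative vectors")
```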
Each $P_{st}$ corresponds to an infinite stochastic matrix and it is easily checked that $$\displaystyle p_{st}(i,k)=\sum_{j=1}^{\infty}p_{su}(i,j)p_{ut}(j,k),\ \ \ s\leq u\leq t.$$ (6) For $s<t\leq 0$, let $\Delta(s,t)\subset D$ be the image of $D$ under $P_{st}$. As in the finite case, to show that a MC $(Z_{n})_{n\in\mathbb{Z}_{-}}$ with transition matrices $(P_{n})_{n\in\mathbb{N}_{-}}$ exists, it suffices to show that there exists a sequence $(m_{n})_{n\in\mathbb{Z}_{-}}\subset D$ with $$\displaystyle m_{n+1}=P_{n}(m_{n}),\ \ \ n\in\mathbb{N}_{-}.$$ (7) We can use Theorem 3 to find when such a sequence exists. However, we must be careful. Unlike in the finite case, here the set $D$ is not compact. This follows from Prohorov’s Theorem (see e.g. Theorem 25.10 in [2]), which says that subsets of $D$ are compact if and only if they are closed and tight. A statement of this result in the context of the larger space $\ell^{1}$ can be found in Theorem 44.2 of [27]. For this reason, in order to define the appropriate compact sets, we require an additional assumption. First, we give a definition. Definition 1. Fix $H\subset D$. If for any $\varepsilon>0$ there exists an $N_{\varepsilon}>0$ such that for any $m\in H$ $$\sum_{k=1}^{N_{\varepsilon}}m(k)\geq 1-\varepsilon,$$ we say that $H$ is tight. If $H$ corresponds to the rows of an infinite stochastic matrix $P$, then we say that $P$ is tight. With this we can state our assumption. We call this Condition P because it is a version of the condition in Prohorov’s Theorem. Condition P. There exists an infinite set $V\subset\mathbb{N}_{-}$ such that for each $n\in V$, the infinite stochastic matrix $P_{n-1}$ is tight, i.e. for any $n\in V$ and $\varepsilon>0$ there exists an $N_{\varepsilon}(n)>0$ such that for any $i\in\mathbb{N}$ $$\sum_{k=1}^{N_{\varepsilon}(n)}p_{n-1}(i,k)\geq 1-\varepsilon.$$ Condition P means that infinitely many of the $P_{n}$s are tight.
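The following sketch checks Definition 1 on two explicit infinite stochastic matrices (both are illustrative choices): a matrix whose rows are all the same geometric distribution, which is tight because one cutoff $N_{\varepsilon}$ works for every row, and the deterministic shift, which is not tight because the cutoff must grow with the row index.

```python
# Checking tightness (Definition 1) row by row, for two made-up matrices
# given by explicit formulas for their entries p(i, k), i, k = 1, 2, ...

geom = lambda i, k: 2.0 ** (-k)                   # every row is geometric
shift = lambda i, k: 1.0 if k == i + 1 else 0.0   # p(i, k) = 1{k = i + 1}

def cutoff(p, i, eps):
    """Smallest N with sum_{k=1}^{N} p(i, k) >= 1 - eps, for row i."""
    total, k = 0.0, 0
    while total < 1.0 - eps:
        k += 1
        total += p(i, k)
    return k

eps = 1e-3
geom_cut = [cutoff(geom, i, eps) for i in (1, 10, 100)]
shift_cut = [cutoff(shift, i, eps) for i in (1, 10, 100)]
print(geom_cut)    # [10, 10, 10]  -- one N_eps works for every row: tight
print(shift_cut)   # [2, 11, 101]  -- cutoff grows with i: not tight
```

The shift matrix is essentially the walk of Example 2 in Section 5.3, which is exactly a case where Condition P fails.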
Note that, in the condition, the cutoffs $N_{\varepsilon}(n)$ need not be bounded as $n\to-\infty$ along $V$. While the condition is stated in terms of $P_{n-1}$, it implies that, for any $n\in V$ and any $k<n$, $\Delta(k,n)$ is a compact set, see Lemma 5 in Section 5.3 below. We now give our main result for the countable case. Theorem 5. If Condition P holds, then at least one sequence $(m_{n})_{n\in\mathbb{Z}_{-}}\subset D$ satisfying (7) exists. Note that this theorem shows only that Condition P is sufficient. In fact, it is not necessary. In Section 5.3 we give two examples where Condition P does not hold. The first is the example of a symmetric random walk on $\mathbb{Z}$, for which we show that a solution to (7) does not exist. The second is the situation where each $P_{n}$ is onto, in which case we show that a solution to (7) always exists. Next, we turn to the question of when the solution to (7) is unique. To state this result, we need a stronger condition, which is a uniform version of Condition P. Condition U. There exists an infinite set $V\subset\mathbb{N}_{-}$ such that for any $\varepsilon>0$ there exists an $N_{\varepsilon}$ such that for any $n\in V$ and $i\in\mathbb{N}$ $$\sum_{k=1}^{N_{\varepsilon}}p_{n-1}(i,k)\geq 1-\varepsilon.$$ For $t\leq 0$, let $\Delta(t)=\bigcap_{s<t}\Delta(s,t)$. By arguments similar to those in the proof of Theorem 2, $\Delta(t)$ is exactly the set of all vectors that can serve as solutions at time $t$, i.e. it is the set of all $m\in D$ with the property that there exists a sequence $(m_{n})_{n\in\mathbb{Z}_{-}}\subset D$ such that $m_{t}=m$ and (7) holds. We now give our uniqueness results. Theorem 6. Assume that Condition U holds. a) The set $\Delta(t)$ is the convex hull of an at most countable collection of vectors in $D$.
b) There is a unique distribution at time $t$ if and only if there exists an infinite stochastic matrix $P_{t}^{*}$ with $\lim_{s\to-\infty}P_{st}=P_{t}^{*}$ such that all rows of $P_{t}^{*}$ are identical and equal to this unique distribution. This result extends Theorem 4 and the second theorem in [18] to the countable case. 5 Proofs 5.1 Proof of Theorem 1 using Lévy’s Theorem In this section we give a proof of Theorem 1 using Lévy’s ‘Upward’ Theorem. For a proof of Kolmogorov’s zero-one law using a similar approach see, e.g., Section 14.3 in [28]. Lévy’s ‘Upward’ Theorem (see Section 14.2 in [28]) states that for any random variable $X$ with a finite expectation $$\displaystyle\lim_{n\to\infty}\mathrm{E}[X|\mathcal{F}_{0n}]=\mathrm{E}[X|\mathcal{F}_{\infty}]\ \mbox{a.s.}$$ (8) where $\mathcal{F}_{\infty}=\sigma\left(\bigcup_{n\geq 0}\mathcal{F}_{0n}\right)$. Lemma 1. We have $\mathcal{T}\subset\mathcal{F}_{\infty}$. Proof. Since $Z_{k}$ is measurable with respect to $\mathcal{F}_{\infty}$ for each $k$, it follows that $\mathcal{F}_{n\infty}\subset\mathcal{F}_{\infty}$ for each $n$. Thus, $\mathcal{T}=\bigcap_{n\geq 0}\mathcal{F}_{n\infty}\subset\mathcal{F}_{\infty}$. ∎ For any $A\in\mathcal{T}$ we have $$\lim_{n\to\infty}P(A|Z_{n})=\lim_{n\to\infty}P(A|\mathcal{F}_{0n})=P(A|\mathcal{F}_{\infty})=I_{A}\ \mbox{a.s.}$$ where the first equality follows by the Markov property, which is applicable since $A\in\mathcal{T}\subset\mathcal{F}_{(n+1)\infty}$, the second by (8) applied to the random variable $X=I_{A}$, and the third by the fact that $A\in\mathcal{T}\subset\mathcal{F}_{\infty}$. From here, part $a$) of Theorem 1 and part $a^{\prime}$) of Remark 1 follow immediately from the following lemma. After that, the remaining parts easily follow. Lemma 2. If $0<q<1$, then $$\{Z_{n}\in S_{n}(q,1)\}\to A\ \mbox{a.s.}$$ Proof. For simplicity of notation, we suppress the dependence on $q$ and $A$ and denote $B_{n}=\{Z_{n}\in S_{n}(q,1)\}$.
Let $Y_{n}$ be a version of $P(A|Z_{n})$ such that $$\lim_{n\to\infty}Y_{n}(\omega)=I_{A}(\omega)\mbox{ for each }\omega\in\Omega.$$ Note that $A=\{\omega:\lim_{n\to\infty}Y_{n}(\omega)=1\}$. Thus, for any $\omega\in A$ there exists an $N(\omega)$ such that, if $n\geq N(\omega)$, then $$\left|Y_{n}(\omega)-1\right|<1-q.$$ It follows that $\omega\in B_{n}$ for all $n\geq N(\omega)$ and so $A\subset\liminf_{n\to\infty}B_{n}$. Next, note that $A^{c}=\{\omega:\lim_{n\to\infty}Y_{n}(\omega)=0\}$. We can similarly show that $A^{c}\subset\liminf_{n\to\infty}\left(B_{n}\right)^{c}$. It follows that $$A\subset\liminf_{n\to\infty}B_{n}\subset\limsup_{n\to\infty}B_{n}\subset A,$$ which guarantees that the limit of $B_{n}$ exists and equals $A$. Since we chose $Y_{n}$ to be a particular version of $P(A|Z_{n})$, in general, the result only holds almost surely. ∎ 5.2 Constructive Proof of Theorem 1 In this section we give a proof based on approximating events in the tail by events in $\mathcal{F}_{kn}$. A similar approach can be used to prove Kolmogorov’s zero-one law and is standard in proofs of the Hewitt-Savage zero-one law, see e.g. the proof of Theorem 36.5 in [2]. Our proof uses the following well-known facts. The first is a version of Corollary 1 on page 169 in [2] and the second is easy to show. Proposition 2. For any $k$, any $\varepsilon>0$, and large enough $n$, there exists a set $A_{kn}\in\mathcal{F}_{kn}$, such that $P(A\bigtriangleup A_{kn})<\varepsilon$. Proposition 3. Fix $\alpha,\varepsilon>0$. If $P(A\bigtriangleup B)<\alpha$ and $P(B\bigtriangleup C)<\varepsilon$, then $P(A\bigtriangleup C)<\alpha+\varepsilon$. By Proposition 2, there are $A_{kn}\in\mathcal{F}_{kn}$ with $\lim_{k}\lim_{n}A_{kn}=A$ a.s. For any $0\leq a\leq b\leq 1$, let $S_{kn}(a,b)=\{i\in S_{n}:a\leq P(A_{kn}|Z_{n}=i)\leq b\}$.
For simplicity of notation, we suppress the dependence on $q$, $A$, and $A_{kn}$ and define $$\displaystyle S_{n}=S_{n}(q,1),\ S_{kn}=S_{kn}(q,1),\ B_{kn}=(Z_{n}\in S_{kn}),\mbox{ and }B_{n}=(Z_{n}\in S_{n}).$$ We now give an approximation result, which is fundamental to the proof and may be of independent interest. Lemma 3. Fix $A\in\mathcal{T}$ and $A_{kn}\in\mathcal{F}_{kn}$ with $\lim_{k}\lim_{n}A_{kn}=A$ a.s. If $0<p<q<1$, then: $a)$ $\lim_{k}\lim_{n}P(Z_{n}\in S_{kn}(q,1))=P(A)$, $b)$ $\lim_{k}\lim_{n}P(Z_{n}\in S_{kn}(p,q))=0$, $c)$ $\lim_{k}\lim_{n}P(Z_{n}\in S_{kn}(0,p))=1-P(A)$. Remark 2. As with Theorem 1, from the proof we will see that the following stronger result holds. For any $0<p<q<1$ we have $a^{\prime})$ $\lim_{k}\lim_{n}(Z_{n}\in S_{kn}(q,1))=A$ a.s., $b^{\prime})$ $\lim_{k}\lim_{n}(Z_{n}\in S_{kn}(p,q))=\emptyset$ a.s., and $c^{\prime})$ $\lim_{k}\lim_{n}(Z_{n}\in S_{kn}(0,p))=A^{c}$ a.s. Remark 3. Note that, at time $n$, event $A_{kn}$ is from the past, while event $A\in\mathcal{T}$ is from the future. Thus, Lemma 3 gives a backwards formulation of the problem, while Theorem 1 gives a forwards formulation. Proof of Lemma 3. First, note that, since a) is for any $q\in(0,1)$, it immediately gives b), and then, a) and b) together give c). Thus, it suffices to prove a). In fact, as we will show, it suffices to prove $$\displaystyle\lim_{k}\lim_{n}P(AB_{kn})=P(A).$$ (9) We begin by showing that (9) implies a). For the moment, assume that (9) holds for every $A\in\mathcal{T}$ and every $q\in(0,1)$. This immediately gives $\lim_{k}\lim_{n}P(AB^{c}_{kn})=0$. Further, noting that $P(A|Z_{n}=i)<q$ is equivalent to $P(A^{c}|Z_{n}=i)\geq 1-q$ and applying (9) with $A^{c}$ in place of $A$ and $1-q$ in place of $q$, we get $\lim_{k}\lim_{n}P(A^{c}B^{c}_{kn})=P(A^{c})$, and hence $\lim_{k}\lim_{n}P(A^{c}B_{kn})=0$. It follows that $$\displaystyle P(A\bigtriangleup B_{kn})=P(A^{c}B_{kn})+P(AB^{c}_{kn})\to 0,$$ (10) which easily gives a).
It remains to verify (9). Fix $0\leq k<n<s<\infty$ and for simplicity set $D_{i}:=(Z_{n}=i)$. We have $$\displaystyle P(A_{kn}B^{c}_{kn}A_{ns})$$ $$\displaystyle=$$ $$\displaystyle\sum_{i\in S^{c}_{kn}}P(A_{kn}D_{i}A_{ns})=\sum_{i\in S^{c}_{kn}}P(A_{kn}|D_{i}A_{ns})P(A_{ns}D_{i})$$ $$\displaystyle=$$ $$\displaystyle\sum_{i\in S^{c}_{kn}}P(A_{kn}|D_{i})P(A_{ns}D_{i})$$ $$\displaystyle\leq$$ $$\displaystyle q\sum_{i\in S^{c}_{kn}}P(A_{ns}D_{i})=qP(B^{c}_{kn}A_{ns}),$$ where the second line follows by Proposition 1 and the third line by the definition of $S^{c}_{kn}$. Letting $s\to\infty$, we obtain $P(A_{kn}B^{c}_{kn}A_{n\infty})\leq qP(B^{c}_{kn}A_{n\infty})$. Now, noting that $\lim_{k}\lim_{n}A_{kn}=A$ a.s. and $\lim_{n}A_{n\infty}=A$ a.s. gives $\lim_{k}\lim_{n}P(B^{c}_{kn}A)\leq q\lim_{k}\lim_{n}P(B^{c}_{kn}A)$. Since $q<1$, we have $\lim_{k}\lim_{n}P(B^{c}_{kn}A)=0$. Finally, since $P(A)=\lim_{k}\lim_{n}[P(AB_{kn})+P(AB^{c}_{kn})]$, we get $\lim_{k}\lim_{n}P(AB_{kn})=P(A)$, which is (9). ∎ We will use Lemma 3 to prove Theorem 1 by approximating $A$ by $A_{kn}$. Most of the heavy lifting is done by the following lemma. Lemma 4. If $A\in\mathcal{T}$, $q\in(0,1)$, and $\varepsilon>0$, then for large enough $k$ and $n$ $$P(B_{kn}\setminus B_{n})<\varepsilon\mbox{ and }P(A\setminus B_{n})<\varepsilon.$$ Proof. We begin by writing $$\displaystyle P(B_{kn})$$ $$\displaystyle=$$ $$\displaystyle\sum_{i\in S_{kn}S_{n}}P(Z_{n}=i)+\sum_{i\in(S_{kn}\setminus S_{n})}P(Z_{n}=i)=:a+b,$$ $$\displaystyle P(AB_{kn})$$ $$\displaystyle=$$ $$\displaystyle\sum_{i\in S_{kn}S_{n}}P(Z_{n}=i)P(A|Z_{n}=i)+\sum_{i\in(S_{kn}\setminus S_{n})}P(Z_{n}=i)P(A|Z_{n}=i)=:c+d.$$ Clearly, $c\leq a$ and, by the definition of $S_{n}$, $d\leq q\sum_{i\in(S_{kn}\setminus S_{n})}P(Z_{n}=i)=qb$. Therefore $P(AB_{kn})\leq a+qb$. For any $\alpha>0$, (10) implies that for large $k$ and $n$ we have $P(AB_{kn})>P(A)-\alpha$ and $P(B_{kn})<P(A)+\alpha$.
It follows that $a+b<P(A)+\alpha$ and $P(A)-\alpha<a+qb$, and thus $P(B_{kn}\setminus B_{n})=b<2\alpha/(1-q)$. Since $q\in(0,1)$, this is less than $\varepsilon$ for an appropriate choice of $\alpha$, which gives the first part. Now, combining this with the fact that, by (10), for large enough $k$ and $n$ we have $P(A\setminus B_{kn})<\varepsilon/2$ and the fact that $A\setminus B_{n}\subset(A\setminus B_{kn})\cup(B_{kn}\setminus B_{n})$ completes the proof. ∎ Proof of Theorem 1. As in the proof of Lemma 3, b) and c) follow immediately from a), so we just need to prove a). In fact we will prove that for any $\varepsilon>0$ and large enough $n$ $$P(A\bigtriangleup B_{n})=P(A\setminus B_{n})+P(B_{n}\setminus A)<\varepsilon.$$ This result is stronger than what is needed for Theorem 1 and will give us the stronger results formulated in Remark 1. By Lemma 4, for large enough $n$ we have $P(A\setminus B_{n})<\varepsilon/2$. Next, note that $P(B_{n}\setminus A)=P(A^{c}\setminus B^{c}_{n})$ and that $B_{n}^{c}=(Z_{n}\in S^{*}_{n}(1-q,1))$, where $S^{*}_{n}(1-q,1)=\{i\in S_{n}:1-q\leq P(A^{c}|Z_{n}=i)\}$. Thus, we can apply Lemma 4 with $A^{c}$ and $1-q$ to get that for large enough $n$, $P(B_{n}\setminus A)<\varepsilon/2$. ∎ 5.3 Proofs for Theorems in Section 4 Proof of Theorem 2. For each $s$, $D(s)$ is a compact set and, since $P_{st}:\mathbb{R}^{N_{s}}\mapsto\mathbb{R}^{N_{t}}$ is a continuous transformation and images of compact sets under continuous transformations are compact (see e.g. Theorem 26.5 in [19]), $\Delta(s,t)$ is compact for all $s<t\leq 0$. Since $\Delta(s,t)\neq\emptyset$ for each $s<t$ and (4) holds, Cantor’s Intersection Theorem (see Theorem 26.9 in [19]) implies that $\Delta(t)\neq\emptyset$ for each $t\leq 0$. We next apply a diagonal process to show that, if $m_{t}\in\Delta(t)$, then there exists an $m_{t-1}\in\Delta(t-1)$ with $m_{t}=P_{t-1}(m_{t-1})$.
To see this, first note that by the definition of $\Delta(t)$, $m_{t}\in\Delta(s,t)$ for all $s<t$. Thus, for any $s<t-1$, there exists an $m_{t-1}(s)\in\Delta(s,t-1)$ such that $P_{t-1}(m_{t-1}(s))=m_{t}$. Since $\Delta(s,t-1)$ is compact for each $s<t-1$, and (4) holds, there exists a sequence $(s_{n})\subset\mathbb{Z}_{-}$ with $s_{n}\to-\infty$ such that $m_{t-1}(s_{n})\to m_{t-1}$ for some $m_{t-1}\in D(t-1)$ and $m_{t-1}\in\Delta(s,t-1)$ for each $s$. Thus $m_{t-1}\in\Delta(t-1)$. From here, the continuity of $P_{t-1}$ implies that $m_{t}=\lim_{n\to\infty}P_{t-1}(m_{t-1}(s_{n}))=P_{t-1}(m_{t-1})$, as required. From here, a simple inductive argument completes the proof. ∎ The proof of Theorem 3 is similar to that of Theorem 2 and is thus omitted. Proof of Theorem 4. We begin by proving a). Let $M$ be any subsequential limit of $(N_{t})_{t\in\mathbb{N}_{-}}$ and note that $M\geq N$. Since the $N_{t}$’s are integers, there is a subsequence $(P_{u_{n}t})$ such that each $P_{u_{n}t}$ has $M$ rows. Within this subsequence, the first rows of the matrices form a tight sequence of probability measures (as any sequence of probability measures on a finite set is tight). It follows that there is a subsequence that converges to a probability measure. Applying this idea to the other rows of the matrices and potentially taking further subsequences, shows that there exists a subsequence $P_{s_{n}t}$ and a stochastic matrix $P_{t}^{*}$ with $M$ rows such that $P_{s_{n}t}\to P_{t}^{*}$ as $n\to\infty$. Now, fix $m_{t}\in\Delta(t)$ and note that, from the proof of Theorem 2, there exists a sequence $(m_{s})_{s<t}$ with $m_{s}\in\Delta(s)$ and $m_{t}=P_{st}(m_{s})$. Since $\{m_{s_{n}}\}$ is a tight sequence, there exists a subsequence $m_{s_{n_{k}}}$ and a probability measure $m\in\mathbb{R}^{M}$ with $m_{s_{n_{k}}}\to m$ as $k\to\infty$. 
It follows that $$m_{t}=\lim_{k\to\infty}m_{s_{n_{k}}}P_{s_{n_{k}}t}=mP_{t}^{*}=\sum_{i=1}^{M}m(i)e_{i}^{(M)}P_{t}^{*}=\sum_{i=1}^{M}m(i)a_{i},$$ which is a convex combination of the $a_{i}$s and thus belongs to their convex hull. Conversely, fix a weight vector $p\in\mathbb{R}^{M}$ with $p(i)\geq 0$ and $\sum_{i=1}^{M}p(i)=1$, and consider the convex combination $m=\sum_{i=1}^{M}p(i)a_{i}$. We have $$m=\sum_{i=1}^{M}p(i)a_{i}=\sum_{i=1}^{M}p(i)e_{i}^{(M)}P_{t}^{*}=pP_{t}^{*}=\lim_{n\to\infty}pP_{s_{n}t}.$$ Since, by definition, $pP_{s_{n}t}\in\Delta(s,t)$ for every $s\in[s_{n},t-1]$ and $\Delta(s,t)$ is compact for each $s<t$, it follows that the limit $m$ must be in each $\Delta(s,t)$. Thus $m\in\Delta(t)$. The definition of $N$ implies that the above holds with $N$ in place of $M$. We now turn to b). Arguments similar to those in a) imply that any $P_{t}^{*}$ of the required form is an $M\times N_{t}$-dimensional stochastic matrix for some $M\geq N$. Clearly, the distribution at time $t$ is unique if and only if $|\Delta(t)|=1$. In light of a), this holds if and only if $P_{t}^{*}$ maps each $e_{i}^{(M)}$ to the same vector, which is equivalent to all of the rows of $P_{t}^{*}$ being the same. It follows that $\Delta(t)$ contains exactly one element, which is this row. To prove c), it suffices to show that, under the assumption $|\Delta(t)|=1$, the limit exists. Assume that there are two subsequences with $P_{s^{(1)}_{n}t}\to P_{t}^{*1}$ and $P_{s^{(2)}_{n}t}\to P_{t}^{*2}$. Tightness arguments similar to those in the proof of a) imply that both $P_{t}^{*1}$ and $P_{t}^{*2}$ are stochastic matrices. From here, b) implies that all rows of both $P_{t}^{*1}$ and $P_{t}^{*2}$ are equal to the unique vector in $\Delta(t)$. The fact that, in this case, the two matrices have the same dimensions completes the proof. ∎ Lemma 5. Assume that Condition P holds and let $V$ be the set in that condition.
For all $t\in V$ and all $s<t\leq 0$, the following hold: a) both $P_{st}$ and $\Delta(s,t)$ are tight; b) $\Delta(s,t)$ is compact. Proof. We begin with a). Fix $s<t\leq 0$. From (6) it follows that $$\displaystyle\sum_{k=1}^{N_{\varepsilon}(t)}p_{st}(i,k)=\sum_{j=1}^{\infty}p_{s,t-1}({i,j})\sum_{k=1}^{N_{\varepsilon}(t)}p_{t-1}({j,k})\geq\sum_{j=1}^{\infty}p_{s,t-1}({i,j})(1-\varepsilon)=1-\varepsilon,$$ (11) which gives the first part. Next, fix $m\in\Delta(s,t)$, let $m^{\prime}=P_{st}(m)$, and note that $$\sum_{k=1}^{N_{\varepsilon}(t)}m^{\prime}(k)=\sum_{i=1}^{\infty}m(i)\sum_{k=1}^{N_{\varepsilon}(t)}p_{st}({i,k})\geq\sum_{i=1}^{\infty}m(i)(1-\varepsilon)=1-\varepsilon,$$ which gives the second part. We now turn to b). We will show that $\Delta(s,t)$ is sequentially compact, which is equivalent to compactness since we are in a metric space. Let $\{m^{(k)}\}$ be a sequence in $\Delta(s,t)$. Since $\Delta(s,t)$ is tight, Prohorov’s Theorem implies that there exists a subsequence $\{m^{(k_{\ell})}\}$ that converges to some $m\in D$. We must show that $m\in\Delta(s,t)$. Since $m^{(k_{\ell})}\in\Delta(s,t)$, there exists a $q^{(k_{\ell})}\in D$ with $P_{st}(q^{(k_{\ell})})=m^{(k_{\ell})}$. Note that $\{m^{(k_{\ell})}\}$ is a Cauchy sequence. From (5) it follows that $\{q^{(k_{\ell})}\}$ is also a Cauchy sequence and since $\ell^{1}$ is a Banach space, there exists a $q\in\ell^{1}$ with $q^{(k_{\ell})}\to q$. By continuity of $P_{st}$ it follows that $P_{st}(q)=m$. In light of the discussion just below (5), $q\in D$ and hence $m\in\Delta(s,t)$. ∎ Proof of Theorem 5. Let $V$ be as in Lemma 5. The proof is based on Theorem 3. Let $V_{n}=\ell^{1}$ for $n\in\mathbb{Z}_{-}$ be the metric spaces. Let $t_{1}>t_{2}>\dots$ be the elements of $V$ in decreasing order and let $D(n)=\Delta(t_{n+1},t_{n})$ for $n\in\mathbb{Z}_{-}$. Note that each $D(n)\subset V_{n}$ is a compact set by Lemma 5.
The continuous transformation associated with $V_{n}$ will be $P_{t_{n}t_{n-1}}$, $n<0$. All of these objects satisfy the required properties and we can use Theorem 3 to show that there exists a sequence of vectors $(m_{t_{n}})_{n\in\mathbb{Z}_{-}}$ such that for each $n$, $m_{t_{n}}\in D$ and $m_{t_{n}}=P_{t_{n+1}t_{n}}(m_{t_{n+1}})$. For any $t$ with $t_{n+1}<t<t_{n}$, we can take $m_{t}=P_{t_{n+1}t}(m_{t_{n+1}})$. This gives the result. ∎ We now give two examples where Condition P does not hold. Example 1. This is an example where Condition P does not hold and a solution to (7) does not exist. Consider a symmetric random walk on the countable state space $\mathbb{Z}$, where at each time, with probability $.5$ we take one step in the positive direction and with probability $.5$ we take a step in the negative direction. In this case, letting $E=\{2n:n=0,1,\dots\}$ be the even numbers, for $n>0$ we have $$\displaystyle p_{-n,0}({i,k})$$ $$\displaystyle=$$ $$\displaystyle(.5)^{n}{n\choose.5(n+|i-k|)}I_{[|i-k|\leq n]}I_{[n-|i-k|\in E]}.$$ It is easy to see that this does not satisfy Condition P. For the sake of contradiction, assume that the required probability measures $m_{n}$ exist. Now applying the monotonicity of binomial coefficients (see e.g. [1]) and the well-known bound $\sqrt{2\pi}n^{n+.5}e^{-n}\leq n!\leq\sqrt{2e\pi}n^{n+.5}e^{-n}$ (see [20]) gives $$\displaystyle p_{-2n,0}({i,k})\leq(.5)^{2n}{2n\choose n}\leq\sqrt{\frac{e}{2\pi}}n^{-1/2}.$$ Hence $$\displaystyle m_{0}(k)$$ $$\displaystyle=$$ $$\displaystyle\sum_{i\in\mathbb{Z}}m_{-2n}(i)p_{-2n,0}(i,k)\leq\sum_{i\in\mathbb{Z}}m_{-2n}(i)\sqrt{\frac{e}{2\pi}}n^{-1/2}=\sqrt{\frac{e}{2\pi}}n^{-1/2}\to 0.$$ Thus, $m_{0}(k)=0$ for each $k$, which contradicts the assumption that $m_{0}$ is a probability measure. Example 2. This is an example where Condition P does not hold, but a solution to (7) nevertheless does exist. A simple but general situation where this always holds is when each $P_{n}$ is onto.
In this case, the image of $D$ under $P_{n}$ is $D$. However, in light of Lemma 5 and the fact that $D$ is not compact, Condition P does not hold in this case. A simple concrete example is a walk on $\mathbb{Z}$, where, for some fixed integer $\ell$, $$p_{n}(i,k)=I_{[k=i+\ell]}.$$ If $\ell=0$, then every state is absorbing. If $\ell=1$, then we have a walk similar to the one in Example $1$, but with a different probability of moving in the positive direction. Lemma 6. a) Let $\{m_{t}\}_{t\in\mathbb{Z}_{-}}$ be a sequence in $D$. If there exists an $m\in D$, an infinite stochastic matrix $P_{t}^{*}$, and a sequence $(s_{n})\subset\mathbb{Z}$ with $m_{s_{n}}\to m$ and $P_{s_{n}t}\to P^{*}_{t}$, then $$\lim_{n\to\infty}P_{s_{n}t}(m_{s_{n}})=P_{t}^{*}(m).$$ b) If Condition U holds, then for each $t\leq 0$ and any sequence $(s_{n})\subset\mathbb{Z}$, there exists a further subsequence $s_{n_{k}}$ and an infinite stochastic matrix $P_{t}^{*}$ such that $P_{s_{n_{k}}t}\to P^{*}_{t}$. Proof. We begin with a). By Skorohod’s Representation Theorem, there is a probability space $(\Omega,\mathcal{F},P)$ and $\mathbb{N}$-valued random variables $X,X_{1},X_{2},\dots$ on this space such that $X$ has distribution $m$, $X_{n}$ has distribution $m_{s_{n}}$, and $X_{n}(\omega)\to X(\omega)$ for each $\omega\in\Omega$. Let $\mathrm{E}$ be the expectation operator on this probability space. Since the random variables are $\mathbb{N}$-valued, there is a function $M(\omega)$ such that if $n\geq M(\omega)$, then $X_{n}(\omega)=X(\omega)$. It follows that $p_{s_{n}t}({X_{n}(\omega),k})\to p_{t}^{*}(X(\omega),k)$ for each $\omega\in\Omega$.
Let $m^{\prime}_{s_{n}}=P_{s_{n}t}(m_{s_{n}})$, $m^{\prime}=P_{t}^{*}(m)$, and note that $$\lim_{n\to\infty}m^{\prime}_{s_{n}}(k)=\lim_{n\to\infty}\sum_{i=1}^{\infty}m_{s_{n}}(i)p_{s_{n}t}(i,k)=\lim_{n\to\infty}\mathrm{E}\,p_{s_{n}t}(X_{n},k)=\mathrm{E}\,p_{t}^{*}(X,k)=\sum_{i=1}^{\infty}m(i)p^{*}_{t}(i,k)=m^{\prime}(k),$$ where we interchange limit and expectation using dominated convergence and the fact that $p_{s_{n}t}(X_{n},k)\leq 1$. We now turn to b). Let $V$ be as in Lemma 5. First, fix $t\in V$ and consider the sequence of matrices $\{P_{s_{n}t}:s_{n}<t\}$. Lemma 5 and (11) imply that the first rows of these matrices form a tight sequence of probability measures. Thus, there is an $m_{1}\in D$ and a sequence $(s_{n}^{(1)})\subset(s_{n})$ such that $p_{s^{(1)}_{n}t}(1,\star)\to m_{1}$. Similarly, there is an $m_{2}\in D$ and a further subsequence $(s_{n}^{(2)})\subset(s_{n}^{(1)})$ such that $p_{s^{(2)}_{n}t}(\ell,\star)\to m_{\ell}$ for $\ell=1,2$. Continuing in this manner, we can find a sequence $m_{1},m_{2},m_{3},\dots\in D$ and a collection of nested sequences $(s_{n}^{(1)})\supset(s_{n}^{(2)})\supset(s_{n}^{(3)})\supset\cdots$ with $\lim_{n\to\infty}p_{s^{(k)}_{n}t}(\ell,\star)=m_{\ell}$ for $k=1,2,\dots$ and $\ell=1,2,\dots,k$. Now, set $s^{*}_{n}=s_{n}^{(n)}$ and let $P_{t}^{*}$ be the infinite stochastic matrix such that $p_{t}^{*}(\ell,\star)=m_{\ell}$, $\ell=1,2,\dots$. It follows that $P_{s^{*}_{n}t}\to P_{t}^{*}$. Now, assume that $t\in\mathbb{Z}_{-}$ is not an element of $V$. By Condition U, there exists a $t_{0}<t$ with $t_{0}\in V$. Thus there is a sequence $(s^{*}_{n})\subset(s_{n})$ and a matrix $P_{t_{0}}^{*}$ such that $P_{s^{*}_{n}t_{0}}\to P^{*}_{t_{0}}$.
Noting that $P_{s^{*}_{n}t}=P_{t-1}\circ\cdots\circ P_{t_{0}+1}\circ P_{t_{0}}\circ P_{s^{*}_{n}t_{0}}$, taking $P_{t}^{*}=P_{t-1}\circ\cdots\circ P_{t_{0}+1}\circ P_{t_{0}}\circ P_{t_{0}}^{*}$, and applying a) gives the result. ∎ Proof of Theorem 6. Let $V$ be as in Lemma 5. The proof is similar to that of Theorem 4, with several changes. First, we now use Lemma 6 to guarantee the existence of a sequence $(s_{n})\subset V$ and an infinite stochastic matrix $P_{t}^{*}$ with $P_{s_{n}t}\to P^{*}_{t}$. Second, now $\Delta(t)$ is the convex hull of the vectors $a_{1},a_{2},\dots$, where $a_{i}=P_{t}^{*}(e_{i})$ and $e_{i}\in D$ is the vector with $e_{i}(i)=1$ and $e_{i}(j)=0$ for $j\neq i$. Third, we now use Lemma 5 to guarantee that $(m_{s_{n}})$ is tight and Lemma 6 to show that $$m_{t}=\lim_{k\to\infty}P_{s_{n_{k}}t}(m_{s_{n_{k}}})=P_{t}^{*}(m)=\sum_{i=1}^{\infty}m(i)P_{t}^{*}(e_{i})=\sum_{i=1}^{\infty}m(i)a_{i}.$$ Finally, we use Lemma 5 to guarantee that $\Delta(s,t)$ is compact for each $s<t$. ∎ 6 Conclusions In this paper we reviewed the zero-one law for MCs and gave two rigorous and detailed proofs, which had been missing from the literature. Further, in the case where the MC is indexed by $\mathbb{Z}$ or $\mathbb{Z}_{-}$, we gave a version of this law for the entrance $\sigma$-algebra. In the corresponding discussion, we noted an interesting dichotomy in two commonly used definitions of a MC. Further, to better understand when MCs on $\mathbb{Z}$ and $\mathbb{Z}_{-}$ exist, we extended a classical result due to Kolmogorov (1936) [18]. We conclude this paper by discussing several open problems. First, it seems that one should be able to extend the zero-one law for MCs to countable Markov Decision Processes with “tail” functionals, i.e. those that can be represented as indicators of some events in the tail $\sigma$-algebra; see [22] for a discussion of such processes.
Second, it would be interesting to obtain necessary and sufficient conditions for the existence of countable MCs indexed by $\mathbb{Z}$; we only obtained a sufficient condition. Finally, as far as we know, the problem of characterizing the events in the tail and entrance $\sigma$-algebras for countable MCs has not been solved. Results are only available in the finite case, see e.g. [7], [8], and [15]. 7 Historical Notes Homogeneous MCs are among the most fundamental concepts in probability theory and one of the most widely used probabilistic tools. However, in applications with a changing environment, these models are inadequate and one must study nonhomogeneous MCs instead. Whether due to their importance in applications or because of intrinsic interest, many prominent mathematicians and probabilists have been attracted to the study of nonhomogeneous MCs. Even a short list is impressive, starting with A. Markov himself as early as 1910. Other early pioneers include S. Bernstein, R. Dobrushin, W. Doeblin, E.B. Dynkin, and of course A. Kolmogorov. This early work was continued and extended in papers by authors such as O.O. Aalen, D. Blackwell, X.R. Cao, H. Cohn, J.L. Doob, D. Griffeath, J. Hajnal, D. Hartfiel, W.J. Hopp, G.A. Hunt, M. Iosifescu, J.F. Kingman, S.E. Kuznetsov, V. Maksimov, S. Molchanov, L. Saloff-Coste, E. Seneta, S.R. Varadhan, D. Williams, and many others. While at first glance nonhomogeneous MCs may appear to have very little structure, much structure has nevertheless been uncovered. In particular, there are a number of results that aim to understand the events in the tail $\sigma$-algebra and how a MC approaches these events, while making essentially no assumptions on the underlying sequence of transition matrices. The zero-one law presented in this paper, and especially the backwards formulation given in Lemma 3, fits into the lineage of such results.
In the remainder of this section, we give a short overview of the history of several results of this type, which, over time, have evolved into the so-called Decomposition-Separation (DS) Theorem. This theorem generalizes to the nonhomogeneous case the well-known decomposition of the state space of a homogeneous Markov chain into transient and recurrent classes and further into cyclical subclasses. In the nonhomogeneous case, the decomposition is not just of the state space, but of the space-time representation. The only assumption is that the number of states is bounded. The decomposition part of the theorem is primarily due to the work of Blackwell and Cohn. Motivated by Kolmogorov (1936) [18], Blackwell studied the properties of MCs in reverse time. In the seminal paper Blackwell (1945) [3], he gave a partition of the space-time representation of the state space. To describe this, it helps to introduce several definitions, although the terminology was developed later. If $(S_{n})_{n\in\mathbb{Z}_{-}}$ is the sequence of state spaces, then a sequence $J=(J_{n})_{n\in\mathbb{Z}_{-}}$, with $J_{n}\subset S_{n}$, is called a jet. A tuple of jets $(J^{1},...,J^{c})$ is called a partition of $(S_{n})_{n\in\mathbb{Z}_{-}}$ if $(J_{n}^{1},...,J_{n}^{c})$ is a partition of $S_{n}$ for every $n$. Blackwell proved that there exists a partition $(T^{1},...,T^{c})$ of $(S_{n})_{n\in\mathbb{Z}_{-}}$ such that the trajectories of the MC will, with probability one, reach and eventually stay in one of the jets $T^{i}$, $i=1,...,c$. This result was extended in the works of Cohn; see [7], [8]. Cohn reformulated Blackwell’s results in the context of MCs in forward time and proved that the tail $\sigma$-algebra of any nonhomogeneous MC consists of a finite number of atomic (indecomposable) sets, each of them related to a jet $T^{k}$ of Blackwell’s partition. He also simplified Blackwell’s proofs.
The separation part of the DS theorem was proved by Sonin, one of the authors of this paper, in a series of papers [21], [22], [23]. Here it was shown that there exist partitions into jets having the additional property that the expected number of transitions of trajectories of any MC $(Z_{n})$ between jets is finite on the infinite time interval. This separation property was not obvious and its existence had not been noted previously. Surveys about the DS theorem and related results can be found in [24] and [25]. The DS theorem has found applications in several areas, including simulated annealing, consensus algorithms, and probabilistic automata, see e.g. [9], [6], [5], [12], and the references therein. References [1] D. Andrica and T. Andreescu (2009). Number Theory: Structures, Examples, and Problems. Birkhäuser, Boston. [2] P. Billingsley (1995). Probability and Measure, 3rd ed. Wiley, New York. [3] D. Blackwell (1945). Finite non-homogeneous chains. Annals of Mathematics, 46(4):594–599. [4] D. Blackwell and D. Freedman (1964). The tail $\sigma$-field of a Markov chain and a theorem of Orey. The Annals of Mathematical Statistics, 35(3):1291–1295. [5] S. Bolouki and R.P. Malhamé (2016). Consensus algorithms and the decomposition-separation theorem. IEEE Transactions on Automatic Control, 61(9):2357–2369. [6] K. Chatterjee and M. Tracol (2012). Decidable Problems for Probabilistic Automata on Infinite Words. 27th Annual IEEE Symposium on Logic in Computer Science, Dubrovnik, pp. 185-194. [7] H. Cohn (1970). On the tail $\sigma$-algebra of the finite inhomogeneous Markov chains. Annals of Mathematical Statistics, 41(6):2175–2176. [8] H. Cohn (1974). A ratio limit theorem for the finite nonhomogeneous Markov chains. Israel Journal of Mathematics, 19(4):329–334. [9] H. Cohn and M. Fielding (1999). Simulated annealing: Searching for an optimal temperature schedule. SIAM Journal on Optimization, 9(3):779–802. [10] R.L. Dobrushin (1956). 
Central Limit Theorem for Nonstationary Markov Chains I. Theory of Probability and its Applications, 1(1), 65–80. [11] E.B. Dynkin (1969). Boundary theory of Markov processes (the discrete case). Russian Mathematical Surveys, 24(2):1–42. [12] S.R. Etesami (2019). A Simple Framework for Stability Analysis of State-Dependent Networks of Heterogeneous Agents. SIAM Journal on Control and Optimization, 57(3):1757–1782. [13] G.B. Folland (1999). Real Analysis, 2nd Ed. John Wiley & Sons, Hoboken, NJ. [14] G.A. Hunt (1960). Markoff chains and Martin boundaries. Illinois Journal of Mathematics, 4(3):313–340. [15] M. Iosifescu (1979). The tail structure of nonhomogeneous finite state Markov chains: survey. Banach Center Publications, 5, 125-132. [16] M. Iosifescu (1980). Finite Markov Processes and Their Applications. Dover Publications, Inc. Mineola, NY. [17] J.G. Kemeny, J.L. Snell, A.W. Knapp (1976). Denumerable Markov Chains, 2nd ed. Springer, New York. [18] A. Kolmogoroff (1936). Zur theorie der Markoffschen ketten. Mathematische Annalen, 112(1):155–160. (English translation in Selected Works of A. N. Kolmogorov Vol. 2, 1992, pp. 182–187). [19] J.R. Munkres (2000). Topology, 2nd ed. Prentice Hall, Upper Saddle River, NJ. [20] H. Robbins (1955). A remark on Stirling’s formula. American Mathematical Monthly, 62(1):26–29. [21] I.M. Sonin (1987). Theorem on separation of jets and some properties of random sequences. Stochastics, 21(3):231–249. [22] I.M. Sonin (1991a). On an extremal property of Markov chains and sufficiency of Markov strategies in Markov Decision Processes with the Dubins-Savage criterion. Annals of Operations Research, 29(1):417–426. [23] I.M. Sonin (1991b). An arbitrary nonhomogeneous Markov chain with bounded number of states may be decomposed into asymptotically noncommunicating components having the mixing property. Theory of Probability and Its Applications, 36(1):74–85. [24] I.M. Sonin (1996). 
The Asymptotic Behaviour of a General Finite Nonhomogeneous Markov Chain (The Decomposition-Separation Theorem). In T.S. Ferguson, L.S. Shapley, and J.B. MacQueen (eds), Statistics, Probability and Game Theory: Papers in Honor of David Blackwell. Institute of Mathematical Statistics, pp. 337–346. [25] I.M. Sonin (2008). The decomposition-separation theorem for finite nonhomogeneous Markov chains and related problems. In S. Ethier, J. Feng and R.H. Stockbridge (eds), Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz. Institute of Mathematical Statistics, pp. 1–15. [26] D.W. Stroock (2011). Probability Theory: An Analytic View, 2nd Ed. Cambridge University Press, Cambridge. [27] F. Treves (1967). Topological Vector Spaces, Distributions and Kernels. Academic Press, New York. [28] D. Williams (1991). Probability With Martingales. Cambridge University Press, Cambridge.
The Lorentzian distance formula in noncommutative geometry Nicolas Franco (Namur Center for Complex Systems (naXys) & Department of Mathematics, University of Namur, rue de Bruxelles 61, 5000 Namur, Belgium [email protected]) Abstract For almost twenty years, a search for a Lorentzian version of the well-known Connes’ distance formula has been undertaken. Several authors have contributed to this search, providing important milestones, and the time has now come to put those elements together in order to get a valid and functional formula. This paper presents a historical review of the construction and the proof of a Lorentzian distance formula suitable for noncommutative geometry. 1 Introduction and formulation of the Lorentzian distance Connes’ noncommutative geometry [Connes:1994aa, MC08] provides at the same time a beautiful mathematical theory and new tools for physical models of unification theory. At a mathematical level, the topological correspondence between locally compact Hausdorff spaces and commutative C${}^{*}$-algebras given by Gel’fand’s theory is brought up to the level of Riemannian manifolds. The key elements are spectral triples $(\mathcal{A},\mathcal{H},D)$ from which, among others, information concerning the metric aspect can be recovered using the Riemannian distance formula: $$d_{R}(p,q)\ =\ \sup_{f\in\mathcal{A}}\left\{\ \left|f(q)-f(p)\right|\ :\ \left\|[D,f]\right\|\leq 1\ \right\}.$$ (1) Applications of noncommutative geometry in mathematical physics mainly take place in particle physics and quantum field theory. However, the physical Lorentzian signature of spacetimes makes the use of the initial mathematical theory more problematic, especially concerning the formula (1).
Two paths have been followed to solve this problem: the Wick rotation process, which allows the use of all the well-defined tools of Riemannian noncommutative geometry, but at the price of a loss of causal information, or the adaptation of the theory to a Lorentzian signature, which is less straightforward and still under development, notably with the use of Krein spaces and Lorentzian spectral triples [Stro, F5, Rennie12, BESNARD2017]. In this last context, several authors have tried to generalize the formula (1) to a Lorentzian distance formula [ParfZap, Moretti, F2, F3, CQG2013, RENNIE2016108, Ming]. Each of those authors has significantly contributed to a specific step of the construction of a final formula. In this paper, we summarize those different steps and present two formulations of a now completely proved Lorentzian distance formula. The first formulation is at the level of traditional Lorentzian geometry, where the usual Lorentzian distance $d(p,q)$ between two points, representing the maximal length of the piecewise $C^{1}$ future-directed causal curves from $p$ to $q$ [beem], is rewritten in a completely path-independent way, using the information coming from a specific set of test functions. Key elements of the proof of the following formula will be presented in Section 3. Theorem (Path-independent formulation) If $(M,g)$ is a time-oriented Lorentzian manifold (spacetime) which is either: • globally hyperbolic, • stably causal such that the usual Lorentzian distance $d$ is continuous and finite, then for all $p,q\in M$: $$d(p,q)\ =\ \inf\left\{\ [f(q)-f(p)]^{+}\ :\ f\in\mathscr{S}\ \right\},$$ (2) where $[\alpha]^{+}=\max\left\{0,\alpha\right\}$ and $\mathscr{S}$ is the set of smooth real-valued “steep” functions, i.e. the set of $f\in C^{1}(M,{\mathbb{R}})$ such that $g(\nabla f,\nabla f)=g^{-1}(df,df)\leq-1$ and $\nabla f$ is past-directed ($f$ is a future-directed temporal function).
${}_{\blacksquare}$ Stable causality is the weakest assumption under which the RHS of (2) makes sense; otherwise the set of steep functions $\mathscr{S}$ is empty [MS08]. The condition of continuity of the Lorentzian distance $d$ is necessary since the RHS is upper semi-continuous while the LHS is lower semi-continuous [Ming]. Under such an assumption, $(M,g)$ is in fact a causally continuous spacetime in the sense of [hawking1975large]. The condition of global hyperbolicity is a particular case where the Lorentzian distance $d$ is automatically continuous and finite [beem]. The second formulation is an algebraic formulation, where every element from traditional Lorentzian geometry has been replaced by a corresponding element coming from the theory of spectral triples. This formulation opens the possibility of a generalization to noncommutative spacetimes. The proof will be presented in Section 4, while the possible technical difficulties for an application on noncommutative spacetimes will be presented in Section 5. Theorem (Spectral triple formulation) If $(M,g)$ is an $n$-dimensional spin Lorentzian manifold which is either globally hyperbolic or stably causal such that the Lorentzian distance $d$ is continuous and finite, and if we define: • The algebra $\mathcal{A}=C^{1}(M,{\mathbb{R}})$ with pointwise multiplication, • The Hilbert space $\mathcal{H}=L^{2}(M,S)$ of square-integrable sections of the spinor bundle over $M$ (using a positive definite inner product on the spinor bundle), • The Dirac operator $D=-i(\hat{c}\circ\nabla^{S})=-ie^{\mu}_{a}\gamma^{a}\nabla^{S}_{\mu}$ associated with the spin connection $\nabla^{S}$, • The fundamental symmetry $\mathcal{J}=i\gamma^{0}$, where $\gamma^{0}$ is the first flat gamma matrix (conventions used in the paper are $(-,+,+,+,\cdots)$ for the signature of the metric and $\{\gamma^{a},\gamma^{b}\}=2\eta^{ab}$ for the flat gamma matrices, with $\gamma^{0}$ anti-Hermitian and $\gamma^{a}$ Hermitian for $a>0$)
• If $n$ is even, the chirality operator $\chi=\pm i^{\frac{n}{2}+1}\gamma^{0}\cdots\gamma^{n-1}$, then for all $p,q\in M$, if $n$ is even: $$d(p,q)\ =\ \inf_{f\in\mathcal{A}}\left\{\ [f(q)-f(p)]^{+}\ :\ \forall\phi\in\mathcal{H},\left<\phi,\mathcal{J}([D,f]+i\chi)\phi\right>\leq 0\ \right\},$$ (3) and if $n$ is odd: $$d(p,q)\ =\ \inf_{f\in\mathcal{A}}\left\{\ [f(q)-f(p)]^{+}\ :\ \forall\phi\in\mathcal{H},\left<\phi,\mathcal{J}([D,f]\pm 1)\phi\right>\leq 0\ \right\},$$ (4) where $[\alpha]^{+}=\max\left\{0,\alpha\right\}$ and $\left<\cdot,\cdot\right>$ is the positive definite inner product on $\mathcal{H}$. ${}_{\blacksquare}$ 2 Historical construction of the Riemannian and Lorentzian distance formulas Connes’ distance formula (1) first appeared in 1989 in [connes_1989]. Common presentations and proofs of the formula are given by [Connes1992], [Connes:1994aa, Chapter 6] and [Elements, Chapter 9.3]. The formula has been studied and applied by many authors on several kinds of spaces; see among others [BIMONTE1994139, RieffelStates, IochKM, MoyalDist1, MoyalDist2, MoyalDist3, FrancoMoyal]. The way to prove this formula is quite direct. If we consider a connected compact Riemannian manifold $(M,g)$ and two points $p$ and $q$ on it, we can choose an arbitrary piecewise $C^{1}$ curve $\gamma:[0,1]\rightarrow M$ with $\gamma(0)=p$ and $\gamma(1)=q$.
Then, for each function $f\in C^{\infty}(M)$, we have by using the second fundamental theorem of calculus: $$f(q)-f(p)=f(\gamma(1))-f(\gamma(0))=\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{d}t}f(\gamma(t))\,dt=\int_{0}^{1}df(\dot{\gamma}(t))\,dt=\int_{0}^{1}g(\nabla f,\dot{\gamma}(t))\,dt.$$ Using the Cauchy–Schwarz inequality, we get: $$\left|f(q)-f(p)\right|\leq\int_{0}^{1}\left|g(\nabla f,\dot{\gamma}(t))\right|\,dt\leq\int_{0}^{1}\left|\nabla f\right|\left|\dot{\gamma}(t)\right|\,dt\leq\left\|\nabla f\right\|_{\infty}\int_{0}^{1}\left|\dot{\gamma}(t)\right|\,dt=\left\|\nabla f\right\|_{\infty}\,l(\gamma),$$ (5) where $l(\gamma)$ denotes the length of the curve. So we obtain the following inequality: $$d_{R}(p,q)\geq\sup\left\{\left|f(q)-f(p)\right|\ :\ f\in\mathcal{A},\ \left\|\nabla f\right\|_{\infty}\leq 1\right\}.$$ (6) The condition $\left\|\nabla f\right\|_{\infty}\leq 1$ can be replaced by the weaker condition $\text{ess}\sup\left\|\nabla f\right\|\leq 1$, which allows us to work with the set $\mathcal{A}\subset C(M)$ of Lipschitz continuous functions on $M$ [WEAVER1996261]. Within this larger set, the equality is easily given by the usual distance as a function of its second argument, $f(\cdot)=d_{R}(p,\cdot)$. Indeed, $d_{R}$ is Lipschitz continuous with $\left\|\nabla d_{R}\right\|=1$ except on a set of measure zero (the cut locus), and we get the path-independent formula: $$d_{R}(p,q)=\sup\left\{\left|f(q)-f(p)\right|\ :\ f\in\mathcal{A},\ \text{ess}\sup\left\|\nabla f\right\|\leq 1\right\}.$$ (7) The last step is the translation of this formula into an algebraic formalism.
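Before that last step, the interplay between the norm constraint and the supremum in (1) can be made concrete on the standard two-point space of noncommutative geometry, a textbook illustration rather than anything specific to the papers reviewed here. With $\mathcal{A}=\mathbb{C}^{2}$ acting diagonally on $\mathcal{H}=\mathbb{C}^{2}$ and an off-diagonal Dirac coupling $m$ (the value $m=2$ below is arbitrary), one has $\left\|[D,f]\right\|=|m|\,|f_{1}-f_{2}|$, so the distance comes out as $1/|m|$:

```python
import numpy as np

# Two-point space: A = C^2 acting diagonally on H = C^2, Dirac operator
# with a single off-diagonal coupling m (hypothetical value).
m = 2.0
D = np.array([[0.0, m], [m, 0.0]])

def commutator_norm(f1, f2):
    # Operator norm of [D, f] for f = diag(f1, f2).
    F = np.diag([f1, f2])
    return np.linalg.norm(D @ F - F @ D, ord=2)

# [D, f] has entries +/- m (f2 - f1) off the diagonal, so ||[D,f]|| = |m| |f1 - f2|.
assert abs(commutator_norm(3.0, 5.0) - m * 2.0) < 1e-12

# Brute-force the sup in (1): subject to ||[D,f]|| <= 1, the best value of
# |f(q) - f(p)| = |f1 - f2| is 1/|m|, the Connes distance between the two points.
grid = np.linspace(-1.0, 1.0, 101)
best = max(abs(f1 - f2) for f1 in grid for f2 in grid
           if commutator_norm(f1, f2) <= 1.0 + 1e-9)
assert abs(best - 1.0 / m) < 1e-9
```

The grid search merely confirms the closed-form bound; any positive coupling $m$ gives the same pattern, with the distance shrinking as the Dirac coupling grows.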
If $M$ is a spin manifold and $D$ the Dirac operator, we have [Elements, Chapter 9.3]: $$\text{ess}\sup\left\|\nabla f\right\|\leq 1\ \Longleftrightarrow\ \left\|[D,f]\right\|\leq 1,$$ (8) which gives the formula (1). The construction of the Riemannian distance formula can be clearly divided into three important steps: the setting of a path-independent inequality (6), the construction of the equality case (7) and the operatorial (spectral triple) formulation (8). The search for a Lorentzian equivalent formula went through the same three steps, and we summarize here its historical evolution: • 1998-2000, G. N. Parfionov and R. R. Zapatrin [ParfZap]: First mention of the duality (inversion of the supremum and infimum and of the inequality signs) in the formula (6) in a Lorentzian context. • 2002-2003, V. Moretti [Moretti]: Generalization of the formula (7) for globally hyperbolic spacetimes using a local condition on the gradient $\nabla f$ (in a more recent terminology: using functions that are “steep” almost-everywhere but only inside some specific compact sets) and an attempt at algebraization using the Laplace-Beltrami-d’Alembert operator and a net of Hilbert spaces. • 2010, N. Franco [F3]: Generalization of the formula (7) for globally hyperbolic spacetimes using a global condition on the gradient $\nabla f$ (using functions that are steep almost-everywhere on the whole spacetime). The global behavior of the test functions is chosen in order to facilitate a future algebraization. The proof of the equality case is done using non-Lipschitz continuous causal functions. • 2012-2013, N. Franco and M. Eckstein [CQG2013]: Algebraic formulation of the global condition on the gradient (steep) for $C^{1}$ functions, so a Lorentzian generalization of (8).
However, this algebraic formulation is not valid for the non-Lipschitz continuous functions needed for the general proof in [F3], so the proof of the distance formula is limited to spacetimes where the usual distance function can be suitably approximated by $C^{1}$ steep functions. A particular proof for the Minkowskian case is given. • 2014-2016, A. Rennie and B. E. Whale [RENNIE2016108]: Extension of the formula obtained in [F3] to non-globally hyperbolic spacetimes. The correspondence is extended to spacetimes where the usual Lorentzian distance is finite, while conjecturing that the condition of stable causality should be necessary if the distance is also continuous. The steep condition is also proved to be necessary for a formulation in terms of test functions. • 2017, E. Minguzzi [Ming]: As a consequence of the study of causality under lower regularity, together with the smoothing of non-Lipschitz continuous steep functions, a $C^{1}$ proof of the formula given in [F3] is obtained, with the necessary and sufficient condition that the spacetime is stably causal and the usual Lorentzian distance function finite and continuous. This gives a complete smooth validation of the results presented in [F3, CQG2013, RENNIE2016108]. 3 The path-independent formulation of the Lorentzian distance In this Section, we summarize the main arguments of the proof of Theorem 1. The key elements of the Lorentzian distance formula are: • The real-valued (continuous or not) causal functions, which are the functions which do not decrease along every future-directed causal curve. • The steep functions, which are $C^{1}$ causal functions which increase sufficiently rapidly along every future-directed causal curve, i.e. with $g(\nabla f,\nabla f)\leq-1$ and past-directed gradient. We have to consider real-valued functions instead of complex ones in order to reach a non-symmetric formula. As a first step, we need an inequality which is a Lorentzian generalization of (6).
The same two theorems are used in their existing Lorentzian versions: • The second fundamental theorem of calculus, valid for absolutely continuous functions, and hence for $C^{1}$ functions. • The reverse Cauchy-Schwarz inequality [beem]: $$\text{If $v$ and $w$ are timelike vectors, then }\left|g(v,w)\right|\geq\sqrt{-g(v,v)}\sqrt{-g(w,w)}.$$ Now we consider a time-oriented Lorentzian manifold $(M,g)$ and two points $p$ and $q$ on it such that $p\prec\!\!\prec q$. We can choose a piecewise $C^{1}$ future-directed timelike curve $\gamma:[0,1]\rightarrow M$ with $\gamma(0)=p$ and $\gamma(1)=q$. Then, for each function $f\in C^{1}(M,{\mathbb{R}})$, $$f(q)-f(p)=\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{d}t}f(\gamma(t))\,dt=\int_{0}^{1}g(\nabla f,\dot{\gamma}(t))\,dt.$$ (9) Since $\dot{\gamma}(t)$ is everywhere a future-directed timelike vector, if we assume that $\nabla f$ is everywhere timelike with constant past-directed orientation, then the sign of $g(\nabla f,\dot{\gamma}(t))$ is constant and positive, so we get: $$f(q)-f(p)=\int_{0}^{1}g(\nabla f,\dot{\gamma}(t))\,dt=\int_{0}^{1}\left|g(\nabla f,\dot{\gamma}(t))\right|\,dt\geq\int_{0}^{1}\sqrt{-g(\nabla f,\nabla f)}\sqrt{-g(\dot{\gamma}(t),\dot{\gamma}(t))}\,dt\geq\inf\left\{\sqrt{-g(\nabla f,\nabla f)}\right\}\,l(\gamma),$$ (10) which is the Lorentzian counterpart of (5). This result can be extended by continuity to future-directed causal curves and $p\preceq q$.
Then taking the supremum over all future-directed causal curves from $p$ to $q$ we get the following path-independent inequality: $$d(p,q)\ \leq\ \inf_{f\in C^{1}(M,{\mathbb{R}})}\left\{\ [f(q)-f(p)]^{+}\ :\ g(\nabla f,\nabla f)\leq-1,\ \nabla f\text{ past-directed}\right\}.$$ (11) We can notice that this result can be extended to non-absolutely continuous functions (hence non-Lipschitz continuous) as long as we impose that the functions $f$ remain causal. Indeed, from their monotonicity, those functions are a.e. differentiable on any future-directed causal curve $\gamma:[0,1]\rightarrow M$, and instead of (9) we have the inequality $f(\gamma(1))-f(\gamma(0))\geq\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{d}t}f(\gamma(t))\,dt$, which leads to the same formulation (10) and to the following formula: $$d(p,q)\ \leq\ \inf_{\begin{subarray}{c}\text{causal}\\ \text{functions }f\end{subarray}}\left\{\ [f(q)-f(p)]^{+}\ :\ \text{ess}\sup g(\nabla f,\nabla f)\leq-1,\ \nabla f\text{ past-directed}\right\}.$$ (12) Three proofs have been given concerning the equality between the usual Lorentzian distance and the formulas (11) or (12). Due to the length of those technical proofs, we only present the main ideas here. Under the condition of global hyperbolicity, a proof of the equality for the non-smooth formulation (12) is presented in [F3], based on a sequence of a.e. steep continuous functions converging to the equality, constructed as locally finite sums of distance functions computed from points located near a suitable Cauchy surface. This particular proof can now be transformed into a smooth version using the new results from [Ming], but it is still limited to globally hyperbolic spacetimes due to the use of a Cauchy surface as its main element.
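As a concrete sanity check of the equality case, in 1+1-dimensional Minkowski space the time coordinate of the boosted frame in which $p$ and $q$ lie on a common time axis is exactly steep and attains the infimum in (2). This flat-space computation is a standard illustration, not material from the cited proofs; the specific points chosen below are arbitrary:

```python
import numpy as np

# 1+1 Minkowski space, signature (-, +): inverse metric g^{mu nu}.
g_inv = np.diag([-1.0, 1.0])

p = np.array([0.0, 0.0])
q = np.array([3.0, 1.0])        # timelike separated: |Delta t| > |Delta x|
T, X = q - p
tau = np.sqrt(T ** 2 - X ** 2)  # usual Lorentzian distance d(p, q)

# Time coordinate of the boosted frame: f(t, x) = (T*t - X*x) / tau.
def f(t, x):
    return (T * t - X * x) / tau

df = np.array([T / tau, -X / tau])  # components (f_{,t}, f_{,x})

# f is exactly steep: g(grad f, grad f) = g^{mu nu} f_{,mu} f_{,nu} = -1 ...
assert abs(df @ g_inv @ df + 1.0) < 1e-12

# ... with past-directed gradient: time component (grad f)^t = -f_{,t} < 0.
grad = g_inv @ df
assert grad[0] < 0

# ... and it attains the infimum in (2): f(q) - f(p) = d(p, q).
assert abs((f(*q) - f(*p)) - tau) < 1e-12
```

Any other steep function can only give a larger increment $[f(q)-f(p)]^{+}$, which is exactly the inequality (11) specialized to flat space.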
The second proof, presented in [RENNIE2016108], gets rid of the Cauchy surface and instead uses the construction of a specific achronal surface $S$ such that $M=I^{+}(S)\cup S\cup I^{-}(S)$ and considers the distance $f(\cdot)=d(S,\cdot)$ to this surface as the equality function. This extends the proof of the non-smooth formulation (12) to spacetimes with finite Lorentzian distance (where the distance is also allowed to be non-continuous). The most recent proof can be found in [Ming] and concerns the smooth formulation (11). It subsumes the preceding proofs (at least when the distance is continuous). The necessary and sufficient conditions are the stable causality of the spacetime and the finiteness and continuity of the Lorentzian distance (which is automatic if the spacetime is globally hyperbolic). The approach here is completely different and uses the idea that metric properties of spacetimes can be computed using a causal theory on a space with one extra dimension, $\tilde{M}=M\times{\mathbb{R}}$. The usual Lorentzian distance function can then be traded for a Lorentz-Finsler function defined on causal tangent vectors of the product space. The final proof of Theorem 1 is then given by [Ming, Theorem 4.11] in the globally hyperbolic case and by [Ming, Theorem 4.15] in the more general case of a stably causal spacetime, requiring the additional condition of finiteness and continuity of the Lorentzian distance. 4 The algebraic formulation of the Lorentzian distance In this Section, we present the proof of the spectral triple formulation and especially the origin of the algebraic constraint inside (3) and (4). Once more, we will see that the metric information emerges naturally from the causal information on a space with one extra dimension. As a first step, we need to review the causal theory for spectral triples.
We consider a stably causal Lorentzian $n$-dimensional manifold $(M,g)$ with a spin structure $S$ associated with its space $L^{2}(M,S)$ of square-integrable sections of the spinor bundle over $M$. This space is naturally endowed with an indefinite inner product $\left({\cdot,\cdot}\right)$ coming from the spin structure and possesses all the properties of a Krein space [Bog], but can be turned into a Hilbert space $\mathcal{H}$ using an alternative positive-definite inner product $\left<{\cdot,\cdot}\right>=\left({\cdot,\mathcal{J}\cdot}\right)$ and $\left({\cdot,\cdot}\right)=\left<{\cdot,\mathcal{J}\cdot}\right>$. The operator $\mathcal{J}$ is called a fundamental symmetry and can be constructed from the Clifford action of a timelike vector field [Muller2014]. Since stable causality implies the existence of a smooth temporal function $\mathcal{T}$ [MS08], a (Hermitian) fundamental symmetry is easily given by $\mathcal{J}=ic(d\mathcal{T})=i\gamma^{0}$, where $\gamma^{0}$ is the first flat gamma matrix respecting $(\gamma^{0})^{2}=-1$ (up to a smooth conformal transformation which leaves the causality invariant). If we consider the Lorentzian Dirac operator $D=-ie^{\mu}_{a}\gamma^{a}\nabla^{S}_{\mu}$, where $e^{\mu}_{a}$ stand for the vierbeins (we consider here a “pseudo-orthonormal” frame coming from the timelike vector field $\partial_{0}=\partial_{\mathcal{T}}$, with $e^{0}_{0}=1$ and $e^{i}_{0}=e^{0}_{i}=0$ for $i=1,\dots,n-1$), then this operator is anti-symmetric for the Krein product, which is equivalent to saying that $\mathcal{J}D$ is an anti-symmetric operator for the Hilbert space $\mathcal{H}$. Under the additional assumption of completeness of the manifold $M$ under spacelike reflection, $\mathcal{J}D$ is a skew-Hermitian operator on $\mathcal{H}$ [Stro].
Then we have the following theorem coming from [CQG2013]: Theorem A function $f\in C^{1}(M,{\mathbb{R}})$ is causal if and only if $$\forall\phi\in\mathcal{H},\left<{\phi,\mathcal{J}[D,f]\phi}\right>\leq 0$$ where $\left<{\cdot,\cdot}\right>$ is the positive definite inner product on $\mathcal{H}$. ${}_{\blacksquare}$ Two elements are important for the proof of this theorem. The first is the absolute continuity of the function $f$. If the function is absolutely continuous, then the causal property can be fully characterized by the following conditions on its gradient: $$g(\nabla f,\nabla f)=g^{\mu\nu}f_{,\mu}f_{,\nu}\leq 0,\qquad g(\nabla f,\nabla\mathcal{T})=g^{\mu 0}f_{,\mu}=-f_{,0}\leq 0,$$ (13) where $df=f_{,\mu}dx^{\mu}$ and $x^{0}=\mathcal{T}$ is orthogonal to the other chosen local coordinates. The second element is the $C^{1}$ behavior. Indeed, by continuity of the derivative, if (13) fails at some point of $M$, then it must fail on some neighborhood, and this information can be captured by a suitably chosen spinor. From this observation, we can see that the smooth formulation (2) of the Lorentzian distance is the important one for the algebraic generalization. Using the well-known property $[D,f]=-i\,c(df)$ [Elements], the proof of Theorem 4 relies on the fact that the matrix $$\mathcal{J}[D,f]=i\gamma^{0}\,(-i)\,(\gamma^{a}e^{\mu}_{a}f_{,\mu})=\gamma^{0}\gamma^{a}e^{\mu}_{a}f_{,\mu}$$ is pointwise negative semi-definite if and only if (13) is respected. A first, technical proof of this equivalence is given in [CQG2013] using the technique of the characteristic polynomial (the initial proof was done under the hypothesis of global hyperbolicity, but it can be extended to stably causal spacetimes by considering the pseudo-orthonormal frame). We present here a shorter argument suggested in [2roads]. 
We have $$\mathcal{J}[D,f]=-f_{,0}+\gamma^{0}\gamma^{i}e^{\mu}_{i}f_{,\mu}=-f_{,0}+b$$ for $i=1,\dots,n-1$, where the second term $b$ is Hermitian and satisfies $b^{2}=(g^{ij}f_{,i}f_{,j})\,1$, so the spectrum of $b$ must be $\left\{{\pm\sqrt{g^{ij}f_{,i}f_{,j}}}\right\}$. Since the reduced metric $g^{ij}$ is positive definite, we get that $\mathcal{J}[D,f]=-f_{,0}+b$ is negative semi-definite if and only if $-f_{,0}\leq 0$ and $g^{ij}f_{,i}f_{,j}\leq f^{2}_{,0}$, hence (13). Theorem 4 is very important in noncommutative geometry: it provides a way to characterize causality completely at an algebraic level, since the set of causal functions is sufficient to characterize the causal relations of stably causal spacetimes [Ming, Bes09]. Consequences of Theorem 4 on noncommutative spacetimes (almost-commutative spacetimes, the Moyal spacetime) have already been worked out successfully [FrancoMoyal, SIGMA2014, CC2014, JGP2015, PROC2015, Zitter]. From Theorem 4, we can now easily obtain the characterization of the steep functions used in (2). Once more, we consider a product space $\tilde{M}=M\times{\mathbb{R}}$ on which we extend the Lorentzian metric $g$ to $\tilde{g}$ by adding $\tilde{g}^{nn}=1$ and $\tilde{g}^{\mu n}=\tilde{g}^{n\mu}=0$ for $\mu=0,\dots,n-1$. We use the extended indices $\tilde{a},\tilde{\mu}=0,\dots,n$. We also have to extend the spin structure to the new dimension, giving an extended Dirac operator $\tilde{D}$. When $n$ is even, this can be done very easily by taking the chirality operator as an additional gamma matrix $\gamma^{n}=\chi=\pm i^{\frac{n}{2}+1}\gamma^{0}\cdots\gamma^{n-1}$ (with $e^{\mu}_{n}=e^{n}_{a}=0$ and $e^{n}_{n}=1$). Now we can consider all functions of the form $\tilde{f}=f-x_{n}\in C^{1}(\tilde{M},{\mathbb{R}})$ where $f\in C^{1}(M,{\mathbb{R}})$, which trivially gives $\tilde{f}_{,\mu}=f_{,\mu}$ and $\tilde{f}_{,n}=-1$. 
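Before carrying out this extension, note that the pointwise negativity criterion above is easy to verify numerically in the simplest case. The following sketch (ours, not from any reference) takes two-dimensional flat Minkowski space with an explicit real representation of the gamma matrices and checks that $\mathcal{J}[D,f]$ is negative semi-definite exactly when the gradient conditions (13) hold:

```python
import numpy as np

# Toy check in 2D Minkowski space (signature (-,+)), with a representation
# satisfying (gamma0)^2 = -1, (gamma1)^2 = +1 and gamma0 gamma1 = -gamma1 gamma0.
g0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
g1 = np.array([[0.0, 1.0], [1.0, 0.0]])
J = 1j * g0  # fundamental symmetry J = i gamma^0

def JDf(f0, f1):
    """J[D,f] for df = f0 dx^0 + f1 dx^1, using [D,f] = -i c(df)."""
    return J @ (-1j * (g0 * f0 + g1 * f1))

def neg_semidefinite(m, tol=1e-12):
    h = (m + m.conj().T) / 2  # the operator is Hermitian; symmetrize for safety
    return np.max(np.linalg.eigvalsh(h)) <= tol

# Gradient satisfying (13): f_{,0} = 1 >= |f_{,1}| = 0.5
assert neg_semidefinite(JDf(1.0, 0.5))
# Gradient violating (13): |f_{,1}| > f_{,0}
assert not neg_semidefinite(JDf(0.5, 1.0))
```

In this representation $\mathcal{J}[D,f]$ is diagonal with eigenvalues $-f_{,0}\pm f_{,1}$, which makes the equivalence with (13) transparent.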
Then applying Theorem 4 to $\tilde{f}$ gives, for all $\phi\in\mathcal{H}$: $$\left<{\phi,\mathcal{J}[\tilde{D},\tilde{f}]\phi}\right>=\left<{\phi,\mathcal{J}\left({-i\gamma^{\tilde{a}}e^{\tilde{\mu}}_{\tilde{a}}\tilde{f}_{,\tilde{\mu}}}\right)\phi}\right>=\left<{\phi,\mathcal{J}\left({-i\gamma^{a}e^{\mu}_{a}f_{,\mu}-i\gamma^{n}\tilde{f}_{,n}}\right)\phi}\right>=\left<{\phi,\mathcal{J}\left({[D,f]\pm i\chi}\right)\phi}\right>\ \leq\ 0,$$ which is equivalent to the fact that $\tilde{f}$ is causal on $(\tilde{M},\tilde{g})$, i.e. $-\tilde{f}_{,0}=-f_{,0}\leq 0$ and: $$\tilde{g}(\tilde{\nabla}\tilde{f},\tilde{\nabla}\tilde{f})=\tilde{g}^{\tilde{\mu}\tilde{\nu}}\tilde{f}_{,\tilde{\mu}}\tilde{f}_{,\tilde{\nu}}=g^{\mu\nu}f_{,\mu}f_{,\nu}+\tilde{g}^{nn}\tilde{f}_{,n}\tilde{f}_{,n}=g(\nabla f,\nabla f)+1\leq 0,$$ which is exactly the characterization of a steep function. The choice of $+i\chi$ or $-i\chi$ is completely arbitrary and has no influence on the formula, so from (2) we can write: $$d(p,q)\ =\ \inf_{f\in C^{1}(M,{\mathbb{R}})}\left\{{\ [f(q)-f(p)]^{+}\ :\ \forall\phi\in\mathcal{H},\left<{\phi,\mathcal{J}([D,f]+i\chi)\phi}\right>\leq 0\ }\right\},$$ which is valid for manifolds of even dimension $n$ respecting the conditions of Theorem 1. This completes the proof of formula (3). The same process can be applied in order to get a valid formula for odd-dimensional manifolds, but this requires a doubling of the Hilbert space $\tilde{\mathcal{H}}=\mathcal{H}\otimes{\mathbb{C}}^{2}$ and new gamma matrices $\tilde{\gamma}^{\mu}=\gamma^{\mu}\otimes\sigma^{1}$ for $\mu=0,\dots,n-1$ and $\tilde{\gamma}^{n}=1\otimes\sigma^{2}$, where the $\sigma^{i}$ are the Pauli matrices. The fundamental symmetry becomes $\tilde{\mathcal{J}}=\mathcal{J}\otimes\sigma^{1}$. 
The negative semi-definite operator becomes: $$\tilde{\mathcal{J}}[\tilde{D},\tilde{f}]=\left({\mathcal{J}\otimes\sigma^{1}}\right)\left({-ie^{\mu}_{a}f_{,\mu}(\gamma^{a}\otimes\sigma^{1})+i(1\otimes\sigma^{2})}\right)=\mathcal{J}[D,f]\otimes 1+\mathcal{J}\otimes\sigma^{3}.$$ However, $\sigma^{3}=\left(\begin{smallmatrix}1&0\\ 0&-1\end{smallmatrix}\right)$ is diagonal, which means that the constraint $$\forall\tilde{\phi}=(\phi_{+},\phi_{-})\in\tilde{\mathcal{H}},\ \left<{\tilde{\phi},\tilde{\mathcal{J}}[\tilde{D},\tilde{f}]\tilde{\phi}}\right>\leq 0$$ splits into two inequalities: $$\forall\phi_{\pm}\in\mathcal{H},\ \left<{\phi_{\pm},\mathcal{J}([D,f]\pm 1)\phi_{\pm}}\right>\leq 0.$$ This gives rise to formula (4) and completes the proof of Theorem 1. 5 Application to noncommutative spacetimes Theorem 1 opens the possibility of computing metric information on noncommutative spacetimes. The formalism is that of Lorentzian spectral triples $(\mathcal{H},\mathcal{A},D)$ with a given fundamental symmetry $\mathcal{J}$ such that $\mathcal{J}^{2}=1$, $\mathcal{J}^{*}=\mathcal{J}$ and $[\mathcal{J},a]=0$ for all $a\in{\mathcal{A}}$. The conditions on the operator $D$ are that, for all $a\in{\mathcal{A}}$, $[D,a]$ is bounded and $a(1+\frac{1}{2}(DD^{*}+D^{*}D))^{-\frac{1}{2}}$ is compact, and that $D^{*}=-\mathcal{J}D\mathcal{J}$ (Krein skew-selfadjointness). For an even Lorentzian spectral triple, the $\mathbb{Z}_{2}$-grading $\chi$ must respect $\chi^{*}=\chi$, $\chi^{2}=1$, $[\chi,a]=0$, $\chi\mathcal{J}=-\mathcal{J}\chi$ and $\chi D=-D\chi$. 
Then a Lorentzian distance, respecting the usual properties of non-negativity, antisymmetry and the inverse triangle inequality, can be defined between two states $\varphi,\psi$ on $\mathcal{A}$ by: $$d(\varphi,\psi)\ =\ \inf_{a\in\mathcal{A}}\left\{{\ [\psi(a)-\varphi(a)]^{+}\ :\ \forall\phi\in\mathcal{H},\left<{\phi,\mathcal{J}([D,a]+i\chi)\phi}\right>\leq 0\ }\right\},$$ (14) where $i\chi$ should be replaced by $\pm 1$ (both signs) if the Lorentzian spectral triple is odd. There are two technical difficulties in applying formula (14), which we now discuss. We will show, however, that in all currently existing examples these difficulties can be bypassed. The first difficulty concerns the fundamental symmetry $\mathcal{J}$. With the minimal set of axioms on $\mathcal{J}$ presented here, there is no guarantee that the signature is exactly Lorentzian (in the commutative case it can correspond to a general pseudo-Riemannian manifold), which means that the Lorentzian distance formula could give no result. The exact set of axioms guaranteeing a Lorentzian signature is still an active subject of research, with several existing but non-equivalent working proposals [F5, Rennie12, BESNARD2017, CC2014]. Since we want to keep this paper as general as possible, we will not focus on one particular condition. The typical existing examples of noncommutative spacetimes, all of which fit into the Lorentzian spectral triple formalism, are almost-commutative manifolds (Kaluza-Klein products of a usual Lorentzian manifold and a discrete noncommutative internal space) and deformations of flat spacetimes (the Moyal spacetime, $\kappa$-Minkowski, the Lorentzian cylinder, …). For almost-commutative manifolds, the suitable fundamental symmetry is $\mathcal{J}=\mathcal{J}_{M}\otimes 1$, where $\mathcal{J}_{M}$ is a fundamental symmetry for the base spacetime. For deformations of flat spacetimes, the canonical choice is $\mathcal{J}=i\gamma^{0}$. 
To our knowledge, there is thus no currently existing toy model (noncommutative, noncompact, complete Lorentzian spacetime) for which this problem must be taken into consideration, and the question remains a matter of abstract considerations. The second difficulty concerns the algebra $\mathcal{A}$ and the space of states on it. In traditional noncommutative geometry, $\mathcal{A}$ is a C${}^{*}$-algebra, since in the commutative case it corresponds to continuous functions vanishing at infinity. For causal considerations (causal functions), this algebra is too small and one must consider a specific unitization of $\mathcal{A}$ corresponding to bounded functions [CQG2013]. However, the steep functions appearing in the Lorentzian distance formula are clearly unbounded and cannot fit into the usual C${}^{*}$-algebra formalism. One must find a way to extend the initial C${}^{*}$-algebra corresponding to the states to unbounded elements and make sure that (some of) the states are still well defined and uniquely extended. One particular way to realize such an extension is presented in [F5]. Again, for all currently existing examples, the problem can be at least partially bypassed. For almost-commutative manifolds, all pure states are product states between well-defined states on the base spacetime (evaluation maps) and vector states on the discrete algebra. For deformation spaces, the elements of the initial C${}^{*}$-algebra of bounded continuous functions are compact operators, so all states correspond to vector states, which can be easily and uniquely extended to unbounded functions as long as they remain finite, as used in [FrancoMoyal]. Once more, this problem is an abstract one and does not prevent the application of the Lorentzian distance formula to particular models of noncommutative spacetimes. 
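As a concrete sanity check of the steep-function formula in the commutative setting, one can take two-dimensional Minkowski space and restrict the infimum in (2) to the boundary family of linear steep functions $f(t,x)=t\cosh u+x\sinh u$, for which $g(\nabla f,\nabla f)=-1$. The following numerical sketch (the function name is ours, not from any library) recovers the Lorentzian distance $\sqrt{\Delta t^{2}-\Delta x^{2}}$ for timelike separations:

```python
import numpy as np

def steep_linear_infimum(dt, dx, us=np.linspace(-10.0, 10.0, 200001)):
    """Infimum of [f(q)-f(p)]^+ over the boundary family of steep linear
    functions f(t,x) = t*cosh(u) + x*sinh(u) on 2D Minkowski space;
    (dt, dx) are the coordinate differences between q and p."""
    vals = dt * np.cosh(us) + dx * np.sinh(us)
    return max(vals.min(), 0.0)

# Timelike separation: the infimum reproduces sqrt(dt^2 - dx^2).
assert abs(steep_linear_infimum(2.0, 1.0) - np.sqrt(3.0)) < 1e-4
# Spacelike separation: the infimum (hence the distance) is zero.
assert steep_linear_infimum(1.0, 2.0) == 0.0
```

The minimum is attained at $\tanh u=-\Delta x/\Delta t$, while for spacelike pairs the expression is unbounded below and the positive part vanishes, as it should.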
Acknowledgments The author would like to thank Ettore Minguzzi for organizing the meeting "Non-Regular Spacetime Geometry" between the two communities of Lorentzian geometry and noncommutative geometry, which made the evolution of the proof of the Lorentzian distance formula possible. \printbibliography
Impact of new ICRU90 key data on stopping-power ratios and beam quality factors for carbon ion beams Lucas Burigo${}^{1,2}$ ${}^{1}$German Cancer Research Center (DKFZ), Heidelberg, Germany ${}^{2}$National Center for Radiation Research in Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany [email protected] Abstract The recent update of dosimetric key data by the ‘International Commission on Radiation Units and Measurements’ impacts the computation of beam quality correction factors $k_{Q}$ via changes of several key data: the mean excitation energies, $I$, which enter the stopping power computation for water and air; the computation procedure itself; the average energy expended in the production of an ion pair in air, $W/e$; and the chamber perturbation factors. An accurate assessment of the water-to-air stopping-power ratio, $s_{\rm w,air}$, in reference conditions under the new recommendations is necessary to update the dosimetry protocols for carbon ion beams. The new ICRU90 key data were used to compute $s_{\rm w,air}$ for carbon ion beams in different reference conditions with Monte Carlo transport simulations, namely, for monoenergetic carbon ion beams with ranges in water from 3 to 30 cm and for spread-out Bragg peaks (SOBPs) of different widths and depths in water. New recommendations for $s_{\rm w,air}$ are presented, namely 1.1247 for the reference condition of 1 g cm${}^{-2}$ depth for monoenergetic carbon ion beams and 1.1274 at the center of physically-optimized SOBPs. A constant value of 1.126 represents the stopping-power ratio for the different reference conditions within a variation of 0.3 %. 
The impact of these new $s_{\rm w,air}$ values and of the updated key data on $k_{Q}$ for carbon ion beams was evaluated in a second step. The changes were found to agree very well with experimental data for cylindrical chambers, but large discrepancies are observed for plane-parallel chambers. Steffen Greilich${}^{1,2}$ Keywords: Ionization chamber dosimetry, reference dosimetry, beam calibration 1 Introduction Reference dosimetry for carbon ion beams relies on calibrated ionization chamber measurements and dose-to-water-based protocols. The most prominent code of practice, IAEA's TRS-398 [14], defines the relation between the dose-to-water $D_{w,Q}$ for a beam quality $Q$ and the charge measured by the chamber as: $$D_{w,Q}=M_{Q}\cdot N_{w,Q_{0}}\cdot k_{Q,Q_{0}}$$ (1) where $M_{Q}$ is the corrected chamber reading and $N_{w,Q_{0}}$ the calibration factor for the chamber in a calibration beam quality $Q_{0}$. Since no primary standard for carbon beams exists, $Q$ commonly differs from $Q_{0}$, which then is ${}^{60}$Co. This fact is taken into account by the ‘beam quality correction factor’ $k_{Q,Q_{0}}$. Experimental data on $k_{Q,Q_{0}}$ are still scarce, and no values from Monte Carlo transport simulations with detailed chamber geometries, as done for protons [8], exist yet for carbon ions. Therefore, values of $k_{Q,Q_{0}}$ are mainly based on computation via $$k_{Q,Q_{0}}=\frac{\left(s_{\rm{w,air}}\right)_{Q}\cdot p_{Q}}{\left(s_{\rm{w,air}}\cdot p\right)_{Q_{0}}}\cdot\frac{\left(W/e\right)_{Q}}{\left(W/e\right)_{Q_{0}}}$$ (2) Here, $s_{\rm{w,air}}$ is the water-to-air stopping-power ratio (SPR), $W/e$ the energy required to produce an ion pair in air, and $p$ the chamber-specific overall perturbation factor. For $Q_{0}=\ ^{60}\rm{Co}$, combined data of $p$ and $s_{\rm{w,air}}$ exist and should be preferred. The recent update of dosimetric key data by the ‘International Commission on Radiation Units and Measurements’ [15] impacts Eq. 
2 via changes of the mean excitation energies $I$, which enter the stopping power computation for water and air, the computational procedure itself, and $W/e$ (Tab. 1). Available studies on the carbon ion SPR [6, 12, 17, 7, 27] were published before ICRU90. Only Andreo et al. [3] estimated the effect, to be -0.5 %, mainly due to the changes in $I$-values. We therefore used Monte Carlo radiation transport simulation to evaluate the impact of the updated key quantities on the stopping-power ratio $s_{\rm{w,air}}$ (SPR) and the $k_{Q}$ factors for carbon ion beams. We investigated both pristine and spread-out Bragg peaks (SOBPs) to cover a wide range of reference conditions. Eventually, we parametrized the SPR as a function of residual range as a beam quality specifier. 2 Materials and Methods 2.1 Stopping power data The ICRU90 report contains updated stopping power data for water, graphite and air for electrons and positrons, as well as for protons, alpha particles and carbon ions (Tabs. A.1 to A.15 therein). For these three ion types, the electronic, nuclear and total stopping powers are given. The electronic water-to-air stopping-power ratio from the ICRU90 data is shown in Fig. 1 in comparison to the data from the NIST database for electrons, protons and alpha particles (https://physics.nist.gov/PhysRefData/Star/Text/intro.html) and the data for carbon ions from ICRU Report 73 with its Errata. ICRU90, however, does not provide tables for any other light ($3\leq Z\leq 5$) or heavier ($Z>6$) ions as created by inelastic nuclear scattering of a carbon ion beam in an absorber. Also, the kinetic energy of the ICRU90 tables for alpha particles is limited to 1000 MeV, which does not cover the range of energies for $Z=2$ fragments found in clinical carbon beams with large penetration depths (up to approx. 30 cm, corresponding to an initial kinetic energy of 430 MeV/u). 
To thus complement the tables and generate the full set of electronic stopping power data necessary for this study, we followed closely the approach specified in ICRU90 (Sec. A.3 therein): • The MSTAR code (v3.12, [23, 24]) was used to compute data for the low kinetic energy regime below a threshold $T\leq T_{1}$. • In the high energy regime, above a threshold $T\geq T_{2}$, the BEST code was employed (in the same version as used in ICRU90, i.e. including the update of constants from CODATA 2010; the original code was developed by M.J. Berger and H. Bichsel). • To connect the output from both codes in the range $T_{1}<T<T_{2}$, $\beta\cdot S_{\rm{el}}(T)/\rho$ was interpolated using a cubic spline. To be consistent with ICRU90, the choice of the values of $T_{1}$ and $T_{2}$ for the ions not covered in that report was based on the values corresponding to carbon ions, as follows: • $T_{2}$ was set to yield the same ratio $\left<q_{1}\right>/Z_{1}=0.9522$ of equilibrium charge to nuclear charge as obtained for ${}^{12}$C at 60 MeV. • $T_{1}$ was set to $T_{1}=0.5\cdot T_{2}$. The values of $T_{1}$ and $T_{2}$ for lithium to argon ions are provided in Table 2. The full set of stopping power tables used in this study, including the additional data, is available in the supplement. 2.2 Radiation transport code The Geant4 toolkit, version 10.3 with patch 1 [1, 2], was used for radiation transport simulation. It allowed for a full implementation of the revised stopping power tables for water and air as given in the ICRU90 report, together with the complementary data generated within this study (see 2.1). 
To this end, the Geant4 classes G4BraggModel, G4BraggIonModel, G4BetheBlochModel and G4IonParametrisedLossModel, which model the energy loss of protons and ions, were modified to make explicit use of the new tabulated data for water and air (the ICRU90 data for protons and alpha particles were made available in Geant4 version 10.5, while the data for heavier ions will be included in a future release). The modular physics list approach of Geant4 was used to account for electromagnetic interactions (physics list G4EmStandardPhysics_option3), hadronic interactions (physics lists G4IonQMDPhysics, G4HadronPhysicsQGSP_BIC_HP, G4HadronElasticPhysicsHP and G4StoppingPhysics) as well as decay physics (physics lists G4DecayPhysics and G4RadioactiveDecayPhysics). 2.3 Geometry and beam 2.3.1 Target The target was modeled as a rectangular water volume with a lateral extension of $50\times 50$ cm${}^{2}$, placed in vacuum. In beam direction, the total thickness of the target, i.e. $40$ cm, was divided into 1600 small slabs of 0.25 mm thickness each. 2.3.2 Primary beam Two cases were investigated, namely, i) ideally monoenergetic carbon ion beams (no energy spread), and ii) SOBP fields composed of 3 mm-spaced Bragg curves from quasi-monoenergetic carbon ion beams (cf. Secs. 2.3.3 and 2.3.5). In either case, the primary beam was modeled as an ideal needle beam centered on the $z$-axis and traveling in the $+z$ direction, with its initial position located 150 cm upstream of the center of the target. The scoring slabs are considered laterally large enough to fully contain the primary beam and the secondary charged particles. 2.3.3 Depth-dose base data A base data set of integrated depth-dose curves representing pristine Bragg peaks with ranges of 3–30 cm was generated, with steps in range of 3 mm. In addition, the initial energy spread was modulated by a 3 mm ripple filter emulating a clinical carbon-ion beam. 
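The table-joining rule of Sec. 2.1 (cubic-spline interpolation of $\beta\cdot S_{\rm el}/\rho$ between the low-energy and high-energy regimes) can be sketched as follows. This is an illustration only: the table values are invented, and `connect_tables` is a hypothetical helper, not part of MSTAR, BEST, or Geant4:

```python
import numpy as np
from scipy.interpolate import CubicSpline

AMU_MEV = 931.494  # atomic mass unit in MeV; T is kinetic energy in MeV/u

def beta(T):
    """Relativistic speed (v/c) for kinetic energy T in MeV/u."""
    gamma = 1.0 + np.asarray(T, dtype=float) / AMU_MEV
    return np.sqrt(1.0 - 1.0 / gamma**2)

def connect_tables(T_low, S_low, T_high, S_high):
    """Join a low-energy and a high-energy mass stopping power table by
    cubic-spline interpolation of beta * S/rho across the gap T1 < T < T2."""
    T = np.concatenate([T_low, T_high])
    y = beta(T) * np.concatenate([S_low, S_high])
    spline = CubicSpline(T, y)
    return lambda Tq: spline(np.asarray(Tq, dtype=float)) / beta(Tq)

# Illustrative (non-physical) table values around a gap T1 = 1, T2 = 2 MeV/u:
T_low, S_low = np.array([0.1, 0.5, 1.0]), np.array([400.0, 300.0, 200.0])
T_high, S_high = np.array([2.0, 5.0, 10.0]), np.array([120.0, 60.0, 35.0])
S = connect_tables(T_low, S_low, T_high, S_high)
# The joined curve reproduces the tabulated endpoints of the gap exactly
assert abs(S(1.0) - 200.0) < 1e-9 and abs(S(2.0) - 120.0) < 1e-9
# ... and gives a smooth, finite value inside the gap.
assert 120.0 < S(1.5) < 200.0
```

Interpolating $\beta\cdot S$ rather than $S$ itself, as in ICRU90, flattens the curve across the gap and so makes the spline better behaved.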
2.3.4 Computation of biological dose The depth-dose base data set was complemented with depth curves of $\alpha$ and $\beta$ values for the cell response to the quasi-monoenergetic carbon ion beams following the linear-quadratic model. In order to account for the depth-dependent fluence and energy spectra of carbon ions and secondary charged fragments, the depth curves of $\alpha$ and $\beta$ values were computed in a multi-step process: • First, we computed the cell response to ion irradiation for a series of monoenergetic heavy charged particles (${}^{1}$H, ${}^{4}$He, ${}^{6}$Li, ${}^{8}$Be, ${}^{10}$B, ${}^{12}$C, ${}^{14}$N, ${}^{16}$O) in the energy interval from 0.001 to 1000 MeV/nucleon using the ‘Compound Poisson Process with Successive Convolution’ (CPPSC) model implemented in the libamtrack library [9]. In particular, we assumed as reference condition a photon cell response of $\alpha_{\rm X}=0.1$ Gy${}^{-1}$ and $\alpha_{\rm X}/\beta_{\rm X}=2$ Gy. • Second, $\alpha^{i}_{\rm HCP}(T)$ and $\beta^{i}_{\rm HCP}(T)$ for a heavy charged particle of type $i$ at energy $T$ were estimated by a linear regression fit of the ion cell response in the dose interval $[0.5,5]$ Gy using the linear-quadratic model. 
• Third, depth-dependent $\alpha(z)$ and $\beta(z)$ values for each quasi-monoenergetic carbon ion beam in the base data set were generated by the additivity rules of Zaider and Rossi [28]: $$\alpha(z)=\frac{\sum_{i}\int_{0}^{\infty}\Phi_{T,i}\cdot\left(S_{i}(T)/\rho\right)_{\rm{w}}\cdot\alpha^{i}_{\rm HCP}(T)\cdot dT}{\sum_{i}\int_{0}^{\infty}\Phi_{T,i}\cdot\left(S_{i}(T)/\rho\right)_{\rm{w}}\cdot dT}$$ (3) and $$\sqrt{\beta(z)}=\frac{\sum_{i}\int_{0}^{\infty}\Phi_{T,i}\cdot\left(S_{i}(T)/\rho\right)_{\rm{w}}\cdot\sqrt{\beta^{i}_{\rm HCP}(T)}\cdot dT}{\sum_{i}\int_{0}^{\infty}\Phi_{T,i}\cdot\left(S_{i}(T)/\rho\right)_{\rm{w}}\cdot dT}$$ (4) 2.3.5 Spread-out Bragg peak optimization Spread-out Bragg peaks were composed by weighted superposition of integrated depth-dose curves from the base data set (Sec. 2.3.3). The weights were determined by minimizing the squared residuals to a prescribed constant physical (2 Gy) or biological (3 Gy(RBE)) dose across the SOBP region using the extension package HITXML, version 0.9.12 (https://r-forge.r-project.org/projects/hitxml/) for the programming language R [25]. For biological dose optimization, the RBE was derived using the $\alpha$ and $\beta$ data for the carbon beam tabulated with depth (Sec. 2.3.4). The resulting $\alpha$ and $\beta$ were obtained by applying the same additivity rules as in Eqs. 3 and 4: $$\alpha(z)=\sum_{k}\frac{d_{k}(z)}{D(z)}\alpha_{k}(z)$$ (5) and $$\sqrt{\beta(z)}=\sum_{k}\frac{d_{k}(z)}{D(z)}\sqrt{\beta_{k}(z)}$$ (6) where $d_{k}(z)$ is the dose contribution of the $k$-th depth-dose curve to the total dose $D(z)$ at a specific depth $z$. 
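The weight determination of Sec. 2.3.5 is, in essence, a constrained least-squares problem. A minimal sketch with invented Gaussian "Bragg curves" in place of the simulated base data (here solved by non-negative least squares; HITXML may implement the minimization differently):

```python
import numpy as np
from scipy.optimize import nnls

def optimize_sobp_weights(dd_matrix, target):
    """Non-negative least-squares weights for superposing pristine Bragg
    curves into a flat SOBP. dd_matrix has shape (n_depths_in_sobp, n_beams);
    column k is the depth-dose profile d_k(z) restricted to the SOBP region."""
    w, _ = nnls(dd_matrix, target)
    return w

# Invented base data: Gaussian 'Bragg peaks' at 3 mm-spaced depths (in cm).
z = np.linspace(0.0, 10.0, 101)
peaks = np.array([8.0, 8.3, 8.6, 8.9])
A = np.exp(-(((z[:, None] - peaks[None, :]) / 0.25) ** 2))
in_sobp = (z >= 7.95) & (z <= 8.95)
w = optimize_sobp_weights(A[in_sobp], np.full(in_sobp.sum(), 2.0))
sobp = A @ w
# The optimized field approximates the 2 Gy plateau across the SOBP region.
assert np.max(np.abs(sobp[in_sobp] - 2.0)) < 0.3
```

With realistic base data the residual ripple depends on the peak spacing and widths; the non-negativity constraint simply reflects that beam weights cannot be negative.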
2.4 Computational procedure for stopping-power ratios 2.4.1 General Following Bragg-Gray cavity theory, appendix B.6.1 of the TRS398 code of practice [14] defines the fluence-weighted stopping-power ratio as $$s_{\rm{w,air}}^{\rm{TRS}}=\frac{\sum_{i}\int_{0}^{\infty}\Phi_{T,i}\cdot\left(S_{i}(T)/\rho\right)_{\rm{w}}\cdot dT}{\sum_{i}\int_{0}^{\infty}\Phi_{T,i}\cdot\left(S_{i}(T)/\rho\right)_{\rm{air}}\cdot dT}$$ (7) where $\Phi_{T,i}$ is the fluence differential in kinetic energy $T$ in water and $S_{i}(T)/\rho$ are the unrestricted mass stopping powers at energy $T$ in water and air, respectively. The index $i$ includes both primary ions and fragment nuclei, but no secondary electrons. In Monte Carlo radiation transport, an upper limit of kinetic energy $T_{\rm{max},i}$, which is not exceeded by any particle of type $i$, is set. More importantly, however, a lower limit $T_{\rm{min},i}$ has to be defined below which particle transport is terminated and the remaining kinetic energy of the track ends is deposited locally. This leads to a modification of Eq. 7: $$s_{\rm{w,air}}^{\rm{MC}}=\frac{\sum_{i}\int_{T_{\rm{min},i}}^{T_{\rm{max},i}}\Phi_{T,i}\cdot\left(S_{i}(T)/\rho\right)_{\rm{w}}\cdot dT+D^{\rm{TE}}_{i,\rm{w}}}{\sum_{i}\int_{T_{\rm{min},i}}^{T_{\rm{max},i}}\Phi_{T,i}\cdot\left(S_{i}(T)/\rho\right)_{\rm{air}}\cdot dT+D^{\rm{TE}}_{i,\rm{air}}}$$ (8) with $D^{\rm{TE}}_{i}$ being the contributions to dose of the ‘track ends’ in water and air, respectively. The impact of this lower integration limit and of omitting the $D^{\rm{TE}}_{i}$ terms was discussed in previous studies [6, 12], where transport threshold values $T_{\rm{min}}$ for ions of the order of tens of keV/u were used and assumed to have negligible impact on the resulting $s_{\rm{w,air}}$. While Eq. 
7 considers only ion transport and local deposition of all energy, the inclusion of secondary electron transport yields $$s_{\rm{w,air}}^{\rm{MC,e}}=\frac{\sum_{i}\int_{T_{\rm{min},i}}^{T_{\rm{max},i}}\Phi_{T,i}\cdot\left(S_{i}(T,\Delta)/\rho\right)_{\rm{w}}\cdot dT+D^{\rm{TE}}_{i,\rm{w}}}{\sum_{i}\int_{T_{\rm{min},i}}^{T_{\rm{max},i}}\Phi_{T,i}\cdot\left(S_{i}(T,\Delta)/\rho\right)_{\rm{air}}\cdot dT+D^{\rm{TE}}_{i,\rm{air}}}$$ (9) where $S_{i}(T,\Delta)/\rho$ now refers to the restricted total mass stopping power (in most Monte Carlo codes, and especially in the Geant4 system used in this study, the restricted stopping power is the sum of the restricted electronic and the unrestricted radiative and nuclear stopping powers) and the threshold $\Delta$ equals the production threshold set for secondary electrons. $\Delta$ does not have to correspond to the transport threshold $T_{\rm{min,e}}$ for electrons, below which energy is not transported further but deposited locally. The definition in Eq. 9 closely resembles the one used in Spencer-Attix cavity theory: $$s_{\rm{w,air}}^{\rm{SA}}=\frac{\sum_{i}\int_{T_{\rm{min},i}}^{T_{\rm{max},i}}\Phi_{T,i}\cdot\left(S_{{\rm el},i}(T,\Delta)/\rho\right)_{\rm{w}}\cdot dT+\Phi_{\Delta,i}\cdot\left(S_{{\rm el},i}(\Delta)/\rho\right)_{\rm{w}}\cdot\Delta}{\sum_{i}\int_{T_{\rm{min},i}}^{T_{\rm{max},i}}\Phi_{T,i}\cdot\left(S_{{\rm el},i}(T,\Delta)/\rho\right)_{\rm{air}}\cdot dT+\Phi_{\Delta,i}\cdot\left(S_{{\rm el},i}(\Delta)/\rho\right)_{\rm{air}}\cdot\Delta}$$ (10) with the restricted electronic mass stopping power $S_{{\rm el},i}(T,\Delta)/\rho$ and the approximation $\Phi_{\Delta,i}\cdot\left(S_{{\rm el},i}(\Delta)/\rho\right)\cdot\Delta$ (using the unrestricted electronic stopping power) for the contribution of the track-end terms. Here, $T_{{\rm min},i}$ corresponds to the kinetic energy of electrons that cannot escape a cavity of finite size, and $\Delta$ is equal to $T_{\rm{min,e}}$. Gomà et al. [7] evaluated Eq. 
10 for $\Delta=10$ keV and $T_{\rm{min,ions}}=100$ keV for proton beams and concluded that the track-end terms for ions have a negligible contribution and that those for electrons can be omitted due to the minor role of electrons for $s_{\rm{w,air}}$, although they can deposit a major part of the total proton energy. The usage of electronic stopping powers in Eq. 10 is motivated by its origin in photon and electron dosimetry: the radiative component is omitted as it carries energy away from the volume of interest, i.e. the cavity. For ions, however, the radiative energy loss is negligible and the average energy of the secondary electrons is too low for a significant bremsstrahlung component. In contrast, the nuclear stopping power contributes at low velocities to the energy deposition at the point of interest, and the total stopping power is therefore used in Eqs. 7-9. 2.4.2 Ion transport In the first part of this study, stopping-power ratios were computed using Eq. 8 with $T_{\rm{min}}=1$ keV for all ion types, corresponding to the lowest energy in the tabulations available in the ICRU90 report [15]. To avoid binning artifacts with respect to $T$, the numerator and denominator in Eq. 8 were computed ‘in flight’, i.e. during the simulation. In Monte Carlo transport, particles are followed in finite steps between ‘catastrophic’ events, e.g. the explicit production of a delta electron. Along these steps, the particle loses energy continuously in processes below the production threshold. For each step, therefore, its contribution to the integrands in Eq. 8, $$\int^{T_{j}}_{T_{j}-\Delta T_{j}}\left(\Phi_{T^{\prime},i}\right)_{j}\cdot\left(S_{i}(T^{\prime})/\rho\right)\cdot dT^{\prime},$$ (11) was evaluated as $$\frac{1}{V}\int_{r_{i}(T_{j}-\Delta T_{j})}^{r_{i}(T_{j})}S_{i}(T_{i}(r^{\prime}))/\rho\cdot dr^{\prime}$$ (12) where $r_{i}(T)$ corresponds to the residual range at kinetic energy $T$ for a particle of type $i$ in the continuous slowing down approximation (CSDA). 
To implement the computation of Eq. 12, relations between particle energy $T$ and residual range $r$ were first obtained for all particle types $i$ by tabulating the integral of the reciprocal of the stopping power data generated according to Sec. 2.1: $$r_{i}(T)=\int_{0}^{T}\frac{1}{S_{i}(T^{\prime})}\cdot dT^{\prime}$$ (13) Second, an inverse lookup table was obtained providing $T_{i}(r)$. These tables were then applied to tabulate $$\frac{1}{V}\int_{0}^{r_{i}(T)}S_{i}(T_{i}(r^{\prime}))/\rho\cdot dr^{\prime}$$ (14) which was eventually used for fast computation of the integrands in Eq. 8 at each step. This numerical procedure corresponds to an extension of Method 3 for the calculation of the fluence differential in energy presented in [11] (cf. Eq. 20 therein). It makes it possible to take the variation of the stopping power during the step into account without binning artifacts. This is an advantage over a simple multiplication of the step length $l_{j}$ by the stopping power (corresponding to Method 1 in [11]), in which case Eq. 11 would be approximated by $l_{j}/V\cdot S_{i}(T_{j})/\rho$, where $T_{j}$ is the energy before, during, or after the step, depending on the implementation. That approximation can be improved by shorter step lengths, but only at steeply increasing computational cost, which eventually renders any refinement infeasible. It should be noted that the integration $$\frac{1}{V}\int_{r_{i}(T_{j}-\Delta T_{j})}^{r_{i}(T_{j})}dr^{\prime}\approx\int^{T_{j}}_{T_{j}-\Delta T_{j}}\left(\Phi_{T^{\prime},i}\right)_{j}\cdot dT^{\prime}=\left(\Phi_{i}\right)_{j}$$ (15) yields an approximation of the contribution of step $j$ to the fluence of a particle of type $i$, in which the difference between the actual geometrical step length $l_{j}$ and the CSDA step length $(\Delta r)_{j}$ due to multiple Coulomb scattering is neglected. 
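The lookup tables behind Eqs. 13 and 14 can be sketched as follows; the cumulative integral of Eq. 14 can be tabulated over the same range grid in exactly the same way. The helper is illustrative only (trapezoidal integration on an energy grid, assuming $S(T)>0$ down to the lowest tabulated energy):

```python
import numpy as np

def build_range_energy_tables(T_grid, S_of_T):
    """Residual range r(T) = integral_0^T dT'/S(T') (Eq. 13) by trapezoidal
    integration on a tabulated stopping power, plus the inverse lookup T(r)
    obtained by interpolating the monotonic (T, r) table the other way."""
    invS = 1.0 / S_of_T(T_grid)
    dr = 0.5 * (invS[1:] + invS[:-1]) * np.diff(T_grid)
    r_grid = np.concatenate([[0.0], np.cumsum(dr)])
    r_of_T = lambda T: np.interp(T, T_grid, r_grid)
    T_of_r = lambda r: np.interp(r, r_grid, T_grid)
    return r_of_T, T_of_r

# Sanity check with a constant stopping power S0: then r(T) = T/S0 exactly.
S0 = 5.0
T_grid = np.linspace(0.0, 100.0, 1001)
r_of_T, T_of_r = build_range_energy_tables(T_grid, lambda T: np.full_like(T, S0))
assert abs(r_of_T(50.0) - 50.0 / S0) < 1e-9
assert abs(T_of_r(10.0) - 10.0 * S0) < 1e-9
```

Because $r(T)$ is strictly increasing, the same tabulated pairs serve both lookup directions, which is what makes the per-step evaluation of Eq. 12 cheap.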
When a particle reached the lower limit for transport, $T_{{\rm min},i}$, its energy divided by the mass of the current volume was added as track-end contribution, i.e. $$D^{\rm{TE}}_{i,\rm{w}}=\frac{T_{{\rm min},i}}{\rho\cdot V}$$ (16) to the numerator of Eq. 8 and $$D^{\rm{TE}}_{i,\rm{air}}=\frac{T_{{\rm min},i}}{\rho\cdot V}\cdot\frac{\left(S_{i}(T_{{\rm min},i})\right)_{\rm{air}}}{\left(S_{i}(T_{{\rm min},i})\right)_{\rm{w}}}$$ (17) to the denominator. To study the impact of using the electronic instead of the total stopping power, a subset of the simulations was repeated using $S_{{\rm el},i}(T)$ instead of $S_{i}(T)$ in Eqs. 8, 16, and 17. 2.4.3 Including transport of secondary electrons Second, explicit transport of secondary electrons following Eq. 9 was performed with the same computational procedure as described above for Eq. 8. $T_{{\rm min},i}$ was set to 1 keV for ions and electrons alike, while the electron production threshold $\Delta$ was varied between 1 keV and 500 keV to study its impact on the $s_{\rm{w,air}}$ resulting from the simulation. Again, the impact of replacing the total by the electronic stopping power was investigated. 2.4.4 Spencer-Attix stopping-power ratio Lastly, Eq. 10 was evaluated with the ion thresholds $T_{\rm{min},i}$ set to values corresponding to a range defined by $T_{\rm{min,e}}=\Delta=10$ keV. These were obtained by using the relations $T_{i}(r)$ and $r_{i}(T)$ between kinetic energy and residual range derived from Eq. 13: $$T_{{\rm min},i}=T_{i}(r_{\rm{e}}(10\,\rm{keV}))$$ (18) 2.5 Beam quality specifier Instead of the initial kinetic energy, range or SOBP width, we use as beam quality specifier the residual range $R_{\rm{res}}$ at a depth $z$: $$R_{\rm{res}}=R_{\rm{p}}-z$$ (19) where $R_{\rm{p}}$ is the practical range of the beam, i.e. the depth at which the absorbed dose beyond the Bragg peak or spread-out Bragg peak decreases to 10 % of its maximum value. 
Due to the fragmentation tail, the Bragg peak may not drop directly to the 10 % level. In this case, a tangent at the steepest point of the distal fall-off is used to construct the virtual position of $R_{\rm{p}}$. For SOBPs, the Bragg peak of the highest energy determines the practical range. 2.6 Beam quality correction factors $k_{Q}$ factors were computed according to Eq. 2 using the stopping-power ratio values for carbon beams obtained with ion-transport-only settings as described in Sec. 3.2, the updated $W/e$ value from ICRU90 (34.71 eV) and – in the absence of available data – a total, chamber-independent perturbation factor $p_{Q}=1$. $\left(s_{\rm{w,air}}\cdot p\right)_{{}^{60}\rm{Co}}$ was taken from [8], and a $W/e$ value for ${}^{60}$Co of 33.97 eV was used. Where available, experimental $k_{Q}$ values [21, 22] were averaged with the calculated values. 3 Results and Discussion Tab. 3 gives an overview of the beam configurations and simulation parameters used in this study. After confirming the soundness of the approach used for the computation of SPR (Sec. 3.1), Sec. 3.2 focuses on a wide variety of irradiation situations using the simulation of ion transport only. The results are then compared to simulations taking into account the transport of secondary electrons (Sec. 3.3) and to the usage of the Spencer-Attix computational rule for $s_{\rm{w,air}}$ (Sec. 3.4). Results from Sec. 3.2 are eventually used to obtain updated $k_{Q}$ factors for a number of ionization chambers (Sec. 3.5.3). 3.1 General 3.1.1 Comparison to previous results Andreo et al. [3] estimated the impact of the new key data recommendation, especially the change in $I$ for water, for a pristine carbon beam of initial kinetic energy of 250 MeV/u using the Shield-HIT transport simulation code v10. The SPR values are 0.5 % lower for $I_{\rm{water}}=78$ eV in comparison to $I_{\rm{water}}=75$ eV as used in the TRS398 code of practice (Fig. 2).
These results were compared to the outcome from the modified Geant4 code used in this study (cf. Sec. 2.2), which agrees within 0.1 % with the data from [3] for $I_{\rm{water}}=78$ eV, except in the immediate vicinity of and beyond the Bragg peak. 3.1.2 Impact of lower integration threshold Fig. 3 shows that the impact of a variation of the lower integration threshold, $T_{\rm{min}}$, between 1 and 100 keV on the calculation of the stopping-power ratio for the carbon ion beam shown in Fig. 2 is negligible. Similar results were observed for pristine Bragg peaks with residual ranges between 3 and 30 cm and lower integration thresholds between 1 and 500 keV (data not shown). 3.1.3 Total vs. electronic stopping power The impact of using the electronic versus the total stopping power to calculate the stopping-power ratio is also shown in Fig. 3. A systematically smaller stopping-power ratio was observed when using the electronic stopping power, but the effect is on the order of $0.5$–$2\cdot 10^{-4}$ and thus does not play a role in this study. Similar results were obtained for residual ranges in water between 3 and 30 cm (again, data not shown). 3.2 Ion transport 3.2.1 Pristine Bragg peaks Depth-dose profiles and stopping-power ratios obtained for monoenergetic carbon ion beams with residual ranges in water from 3 to 30 cm are shown in Fig. 4. High values of $s_{\rm{w,air}}$ are observed around the Bragg peak for beams with small residual ranges. However, as the effect of energy straggling increases for beams with larger ranges – as seen in the increasing width of the Bragg peak – the spatial concentration of stopping carbon ions for which $s_{\rm{w,air}}$ is large (see Fig. 1) decreases and the high stopping-power ratio values fade away. In Fig.
5, the $s_{\rm{w,air}}$ values needed for reference dosimetry at a depth of 1 g cm${}^{-2}$ for the beam qualities investigated are parametrized as a function of residual range $R_{\rm{res}}$: $$s_{\rm{w,air}}(R_{\rm{res}})=a+b\cdot R_{\rm{res}}+\frac{c}{R_{\rm{res}}}\quad.$$ (20) Tab. 4 gives the resulting parameter values. To simplify interpretation and usage, the curve was fitted to the data points with the constraint $$c=-b\cdot\left(R_{\rm{res}}^{\rm{rep}}\right)^{2}$$ (21) with $R_{\rm{res}}^{\rm{rep}}=10$ cm. At this representative residual range, $a$ equals $1.1247$ and represents $s_{\rm{w,air}}$ within an interval of (-0.07 %, +0.12 %) for beams with residual ranges in water between 3 and 30 cm. 3.2.2 Spread-out Bragg peaks The depth-dose profiles and stopping-power ratios obtained from both physically and biologically optimized SOBPs with varying width and range are shown in Fig. 6. A pattern can be observed in the stopping-power ratio characterized by an abrupt increase of $s_{\rm{w,air}}$ of about 2 ‰ at the proximal edge of the SOBP, due to the higher stopping-power ratio of the stopping carbon ions present in this part. The $s_{\rm{w,air}}$ values within the high-dose region for different SOBP widths lie well on top of one another (middle panels). This is also observed for SOBPs at different depths when analyzed as a function of residual range (lower panels). This supports the suitability of $R_{\rm{res}}$ as a simplified beam quality specifier. In the same way as for pristine Bragg curves, $s_{\rm{w,air}}(R_{\rm{res}})$ was parameterized by Eq. 20 for the reference condition, which is in this case the depth at the center of the SOBP. The shape of the function is different from that for pristine Bragg peaks, as reflected in the values of $b$ and $c$. The data points at the mid-SOBP for the physical SOBPs were fitted using the constraint in Eq. 21 with $R_{\rm{res}}^{\rm{rep}}=3.5$ cm.
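The constrained fit of Eqs. 20 and 21 reduces to ordinary linear least squares, since the constraint eliminates $c$. A sketch on synthetic data (the noise level and sample values are illustrative; only the functional form follows the text):

```python
import numpy as np

# Synthetic s_w,air data following the constrained model (values roughly
# match the magnitudes quoted for pristine beams, but are illustrative).
R_rep = 10.0
a_true, b_true = 1.1247, 1.0e-4
R = np.linspace(3.0, 30.0, 12)
rng = np.random.default_rng(0)
s = a_true + b_true * (R - R_rep**2 / R) + rng.normal(0.0, 1e-5, R.size)

# The constraint c = -b * R_rep**2 collapses Eq. 20 to
#   s = a + b * (R - R_rep**2 / R),
# which is linear in (a, b): ordinary least squares suffices.
A = np.column_stack([np.ones_like(R), R - R_rep**2 / R])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, s, rcond=None)
c_fit = -b_fit * R_rep**2

# At R = R_rep the b and c terms cancel, so s(R_rep) = a by construction.
assert abs(b_fit * R_rep + c_fit / R_rep) < 1e-15
```

This cancellation at $R_{\rm{res}}^{\rm{rep}}$ is exactly why $a$ alone can serve as the representative $s_{\rm{w,air}}$ value quoted in the text.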
Here, $a=1.1274$ represents $s_{\rm{w,air}}$ within (-0.09 %, +0.18 %) for the different widths and depths of the physical SOBPs investigated in this study. For the mid-SOBP positions of the biological SOBPs, the same parameters $b$ and $c$ were used and only the parameter $a$ was fitted, accounting for a systematic shift of the $s_{\rm{w,air}}$ values. Since the weights of the most distal quasi-monoenergetic beams contributing to the SOBP are reduced when a higher RBE is taken into account in biological optimization, the $s_{\rm{w,air}}$ values in the SOBP region systematically increase. Although reference dosimetry should focus on the determination of physical dose and the corresponding beam configurations, even for a strong deviation of the dose from a physically optimized SOBP by an approximate factor of 2 at the distal end, as in the case shown, $s_{\rm{w,air}}$ increases only by about 0.8 ‰. 3.2.3 Choice of SPR values The values of $s_{\rm{w,air}}$ suggested by these studies for use in reference dosimetry can be summarized as follows: • If a single constant $s_{\rm{w,air}}$ is used, as recommended in TRS398, an average between the values representative of pristine and (physically optimized) SOBP configurations should be used, which is 1.126. All values obtained in this study fall within a ($-0.2\,\%$, $+0.3\,\%$) interval around this value. The change of $-0.4\,\%$ with respect to the value of 1.130 recommended in TRS398 is consistent with the statements given in ICRU90. • If the SPR should be representative of either a pristine or an SOBP situation, the corresponding value of $a$ from Tab. 4 can be used. • Finally, the full parameterization given in Tab. 4 can be employed to study the dependence of $s_{\rm{w,air}}$ on the specific beam situation. The $s_{\rm{w,air}}$ data given in the German DIN6801-1 are considerably lower (around 1.120), mostly due to the lower value of $I_{\rm{air}}$. 3.3 Electron transport As seen in Fig.
8, the stopping-power ratio increases with decreasing production threshold for the transport of secondary electrons, reaching considerably higher values than when considering ion transport only. This is consistent with previous observations [18, 27]. In our study, the change of the stopping-power ratio $s_{\rm{w,air}}$ due to the transport of secondary electrons can be explained by the different ratio of monoenergetic electronic stopping powers shown in Fig. 1. In particular, the lower the production threshold for the transport of secondary electrons, the larger the number of electron tracks contributing to the track-end term in the denominator (Eq. 17). The usage of such track-end terms corresponds to the assumption that the stopping-power ratio at $T_{\rm{min}}$ is representative of the entire slowing-down process of electrons below this energy. If this were correct, the results should be closer to the true value than in the ‘ions only’ case. On the one hand, however, it is not yet clear how the stopping power behaves at energies below 1 keV, and it is therefore not safe to assume constancy. More severe is the fact that secondary electrons in clinical photon beams have initial energies in the MeV range and that the fluence spectrum from the slowing-down process is relatively constant; there, the assumption that track ends below 1 keV are of minor importance is valid. In ion beams, however, the vast majority of electrons have very low kinetic energies, i.e., with a lower production threshold not only does their number increase rapidly (seen in the apparent divergence of $s_{\rm{w,air}}$ with $\Delta$), but their contribution to the energy deposition below 1 keV is major and not negligible. In carbon beams this effect is even more pronounced than in proton beams due to the large discrepancy between electron and carbon stopping-power ratios at low energies, especially for ICRU90 (see Fig. 1).
Therefore, we suggest that the benefit of using pure, unrestricted ion stopping powers in the ‘ions only’ case – which include the entire slowing-down process of electrons – outweighs the inaccuracy of neglecting the energy transport by secondary electrons and assuming local deposition. 3.4 Spencer-Attix Indeed, when using the Spencer-Attix formulation of the stopping-power ratio (Eq. 10), $s_{\rm{w,air}}$ is closer to the results from the ‘ions only’ case (Eq. 8) than when including the electron contribution via Eq. 9. This, however, is not based on a more realistic description of the situation and presents the same limitations described above regarding the contribution of delta electrons to $s_{\rm{w,air}}$. 3.5 Beam quality correction factors 3.5.1 Perturbation factors To derive updated $k_{Q}$ values for ionization chambers in carbon beams, the ICRU90 key quantities listed in Tab. 1 were used together with the constant value of $s_{\rm{w,air}}$ suggested in Sec. 3.2.3. Chamber-specific perturbation factors were obtained from three sources: 1. Perturbation factors were directly extracted from TRS398 and the $k_{Q}$ values recalculated using the updated key quantities (Tab. 1) and Eq. 2. 2. The draft of the German Code of Practice (DIN6801-1) [5] lists updated perturbation factors extracted from Muir et al. [19] for some chambers. For others, the source is unclear: they are reported to be from Muir et al., but the respective chamber models are not studied in the original paper. These two types of perturbation factors were used directly for the computation of the updated beam quality correction factors together with the new key quantities. Perturbation factors reported in DIN6801-1 to be identical to TRS398 were multiplied by 1.0012 to follow the ICRU90 recommendation. 3. Gomà et al. [8] provide combined data for $\left(s_{\rm{w,air}}\cdot p\right)_{{}^{60}{\rm{Co}}}$, which were directly inserted into Eq. 2 for the $k_{Q}$ calculation for the chamber types studied therein. 3.5.2 Original values Tab.
5 lists these values together with the $k_{Q}$ data from the TRS398 report [14], the German Code of Practice [5], and experimental data [21, 22]. Fig. 10 shows a subset excluding chambers for which perturbation factors are available only from TRS398 and for which no experimental data were available. The original $k_{Q}$ data from DIN6801-1 (orange squares) are lower for most chambers than the values from TRS398. A baseline difference of $-0.9\,\%$ arises from the considerably lower SPR for ions (1.121 for a residual range of 15 cm) used in the German Code of Practice. Experimental values for cylindrical chambers are closer to the TRS398 data ($-0.5\,\%$) than to those from DIN6801-1 ($+0.9\,\%$). For plate-parallel chambers, there is only one value for TRS398 ($-0.4\,\%$) but three for DIN6801-1 ($-0.1\,\%$). 3.5.3 Updated values For updated $k_{Q}$ data with perturbation factors taken directly from TRS398 (case i), a constant change of $-0.5\,\%$ is seen for all chambers between TRS398 and the recalculations in this study, due to the following (cf. Table 7.2 in [15]): • The impact of the new key data on the stopping-power ratio water to air for ${}^{60}$Co is $-0.5\,\%$. • The general chamber perturbation factor sees a large increase from recent transport calculations ($+1.2\,\%$), which yields an overall change of $\left(s_{\rm{w,air}}\cdot p\right)_{{{}^{60}\rm{Co}}}$ of $+0.7\,\%$ in the denominator of Eq. 2. • While the recommended value for $s_{\rm{w,air}}$ for carbon changes from 1.130 to 1.126 ($-0.4\,\%$), $W/e$ for air changes by $+0.6\,\%$ – yielding together $+0.2\,\%$ for the numerator. Together with the changes in the ${}^{60}$Co-related quantities, this totals $-0.5\,\%$ and is reflected by the nearly constant distance between the corresponding symbols (blue squares, blue circles) in Fig. 10. Interestingly, this compensates for the difference to the experimental data for cylindrical chambers. For case (ii), i.e.
the usage of perturbation factors from DIN6801-1, the updated $k_{Q}$ values for which the perturbation factors were reported to be the same as in TRS398 are very close to ($+0.2\,\%$), but not identical with, the updated $k_{Q}$ derived directly from TRS398. This could be due to round-off errors. Most perturbation factors from Muir and Rogers as used by DIN6801-1, however, are considerably larger than those originally used in TRS398. While this is in line with ICRU90, the actual difference is for most chambers less than the recommended $+1.2\,\%$, which causes the updated $k_{Q}$ values from this study that are based on perturbation factors from Muir and Rogers (and those with unclear source) to be considerably larger than the ones derived directly from TRS398. Consequently, they show a deviation from the experimental data ($+0.5\,\%$ for cylindrical, $+1.7\,\%$ for plane-parallel chambers). The relatively few updated $k_{Q}$ values based on the combined $\left(s_{\rm{w,air}}\cdot p\right)_{{{}^{60}\rm{Co}}}$ data from Gomà et al. were very close to the data updated directly from TRS398 for cylindrical chambers. For plate-parallel chambers, discrepancies on the order of 1 % arise. With the low number of data points currently available, however, no clear conclusion can be drawn. It is conspicuous, however, that the experimental data for two chambers (IBA PPC-05 and -40) agree with the original $k_{Q}$ values from DIN6801-1 only within the lower bound of their uncertainty ($1.1\,\%$), while the other data were generally considerably underestimated by the DIN CoP. 4 Conclusion The new key data on stopping powers from the ICRU90 report were implemented in the Geant4 toolkit for water-to-air stopping-power ratio calculations for monoenergetic and SOBP carbon ion beams. The results of $s_{\rm{w,air}}$ for monoenergetic carbon ions were shown to agree within 0.1 % with previously published calculations, providing confidence for the evaluation of SPR under different reference conditions.
The impact of the integration limits as well as of the choice of electronic or total stopping power on the stopping-power ratio computation was shown to be negligible when accounting for ion transport only. When transport of secondary electrons is included, the specific choice of the electron production threshold is shown to have a large impact on $s_{\rm{w,air}}$. New recommendations for the water-to-air stopping-power ratio are presented, namely, $s_{\rm{w,air}}=1.1247$ for the reference condition of 1 g cm${}^{-2}$ depth for monoenergetic carbon ions, and $s_{\rm{w,air}}=1.1274$ at the center of physically optimized SOBPs. Parametrizations of $s_{\rm{w,air}}$ with respect to residual range in water were obtained for the reference conditions of monoenergetic carbon ion beams and SOBPs. These can be applied to precisely estimate $s_{\rm{w,air}}$ at the different reference conditions investigated in this study. Finally, it was shown that the new recommendation of a constant stopping-power ratio $s_{\rm{w,air}}=1.126$ represents the variation of $s_{\rm{w,air}}$ for the different reference conditions within 0.3 %, which is considerably smaller than the uncertainty currently connected with SPR data ($1.5\,\%$). The impact of the resulting $s_{\rm{w,air}}$ for ions together with the updated key data on the beam quality correction factors for carbon ion beams was evaluated. The changes due to the updated key quantities were found to agree very well with experimental data for cylindrical chambers. For plate-parallel chambers, however, larger discrepancies are seen, and more experimental and computational data are needed. Acknowledgements The authors are deeply grateful to Prof. Pedro Andreo and Prof. Steve Seltzer for fruitful discussions and for making the BEST software available. References [1] Agostinelli S et al 2003 Geant4 - A simulation toolkit Nucl. Instrum. Methods A 506 250–303 [2] Allison J et al 2006 Geant4 developments and applications IEEE Trans. Nucl.
Sci. 53 270–8 [3] Andreo P, Wulff J, Burns D T and Palmans H 2013 Consistency in reference radiotherapy dosimetry: resolution of an apparent conundrum when ${}^{60}$Co is the reference quality for charged-particle and photon beams Phys. Med. Biol. 58(19) 6593–6621 [4] Bouchard H 2012 A theoretical re-examination of Spencer–Attix cavity theory Phys. Med. Biol. 57(11) 3333–3358 [5] DIN 6801-1:2016-06 - Entwurf Jun 2016 Dosismessverfahren nach der Sondenmethode für Protonen- und Ionenstrahlung - Teil 1: Ionisationskammern (Procedures of dosimetry with probe-type detectors for proton and ion radiation - Part 1: Ionization chambers) 68 p., Beuth [6] Geithner O, Andreo P, Sobolevsky N, Hartmann G and Jäkel O 2006 Calculation of stopping power ratios for carbon ion dosimetry Phys. Med. Biol. 51(9) 2279–2292 [7] Gomà C, Andreo P and Sempau J 2013 Spencer–Attix water/medium stopping-power ratios for the dosimetry of proton pencil beams Phys. Med. Biol. 58(8) 2509–2522 [8] Gomà C, Andreo P and Sempau J 2016 Monte Carlo calculation of beam quality correction factors in proton beams using detailed simulation of ionization chambers Phys. Med. Biol. 61(6) 2389–2406 [9] Greilich S, Hahn U, Kiderlen M, Andersen C E and Bassler N 2014 Efficient calculation of local dose distributions for response modeling in proton and heavier ion beams The European Physical Journal D 68(10) 327 [10] Hartmann G H, Jäkel O, Heeg P, Karger C P and Krießbach A 1999 Determination of water absorbed dose in a carbon ion beam using thimble ionization chambers Phys. Med. Biol. 44(5) 1193–1206 [11] Hartmann G H and Andreo P 2017 Fluence calculation methods in Monte Carlo dosimetry simulations Z. Med. Phys. in press, doi: 10.1016/j.zemedi.2018.08.003 [12] Henkner K, Bassler N, Sobolevsky N and Jäkel O 2009 Monte Carlo simulations on the water-to-air stopping power ratio for carbon ion dosimetry Med. Phys. 36(4) 1230–1235 [13] Hiraoka T and Bichsel H 1995 Stopping powers and ranges for heavy ions Jpn. J. Med. Phys.
15 91–100 [14] International Atomic Energy Agency 2000 Absorbed dose determination in external beam radiotherapy – An international code of practice for dosimetry based on standards of absorbed dose to water Technical Report Series 398 [15] International Commission on Radiation Units and Measurements 2014 Key data for ionizing-radiation dosimetry: measurement standards and applications J. ICRU 14(1) [16] Laitano R F and Rosetti M 2000 Proton stopping powers averaged over beam energy spectra Phys. Med. Biol. 45(10) 3025–3043 [17] Lühr A, Hansen D C, Jäkel O, Sobolevsky N and Bassler N 2011 Analytical expressions for water-to-air stopping-power ratios relevant for accurate dosimetry in particle therapy Phys. Med. Biol. 56(8) 2515–2533 [18] Medin J and Andreo P 1997 Monte Carlo calculated stopping-power ratios, water/air, for clinical proton dosimetry (50-250 MeV) Phys. Med. Biol. 42(1) 89–105 [19] Muir B R and Rogers D W O 2010 Monte Carlo calculations of $k_{Q}$, the beam quality conversion factor Med. Phys. 37(11) 5939–5950 [20] Nahum A E 1978 Water/air mass stopping power ratios for megavoltage photon and electron beams Phys. Med. Biol. 23(1) 24–38 [21] Osinga-Blättermann J-M, Brons S, Greilich S, Jäkel O and Krauss A 2017 Direct determination of $k_{Q}$ for Farmer-type ionization chambers in a clinical scanned carbon ion beam using water calorimetry Phys. Med. Biol. 62(6) 2033–2054 [22] Osinga-Blättermann J-M and Krauss A 2018 Determination of $k_{Q}$ factors for cylindrical and plane-parallel ionization chambers in a scanned ion beam by means of cross calibration Phys. Med. Biol. accepted manuscript, doi: 10.1088/1361-6560/aaf5ac [23] Paul H and Schinner A 2001 An empirical approach to the stopping power of solids and gases for ions from ${}^{3}$Li to ${}^{18}$Ar Nucl. Instrum. Meth. B 179 299–315 [24] Paul H and Schinner A 2002 An empirical approach to the stopping power of solids and gases for ions from ${}^{3}$Li to ${}^{18}$Ar — Part II Nucl. Instrum. Meth.
B 195 166–174 [25] R Core Team 2018 R: A Language and Environment for Statistical Computing R Foundation for Statistical Computing, Vienna, Austria, www.R-project.org [26] Salamon M H 1980 A range-energy program for relativistic heavy ions in the region $1<E<3000$ MeV/amu LBL Report 10446, LBL, Berkeley [27] Sánchez-Parcerisa D, Gemmel A, Jäkel O, Rietzel E and Parodi K 2013 Influence of the delta ray production threshold on water-to-air stopping power ratio calculations for carbon ion beam radiotherapy Phys. Med. Biol. 58(1) 145–158 [28] Zaider M and Rossi H H 1980 The Synergistic Effects of Different Radiations Radiation Research 83(3) 732–739
FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows Jiawei Yu1\equalcontrib, Ye Zheng2,3\equalcontrib, Xiang Wang1, Wei Li1, Yushuang Wu4, Rui Zhao1, Liwei Wu1 Abstract Unsupervised anomaly detection and localization is crucial for practical applications in which collecting and labeling sufficient anomaly data is infeasible. Most existing representation-based approaches extract normal image features with a deep convolutional neural network and characterize the corresponding distribution through non-parametric distribution estimation methods. The anomaly score is calculated by measuring the distance between the feature of the test image and the estimated distribution. However, current methods cannot effectively map image features to a tractable base distribution, and they ignore the relationship between local and global features, which is important for identifying anomalies. To this end, we propose FastFlow, implemented with 2D normalizing flows, and use it as the probability distribution estimator. Our FastFlow can be used as a plug-in module with arbitrary deep feature extractors such as ResNet and vision transformers for unsupervised anomaly detection and localization. In the training phase, FastFlow learns to transform the input visual features into a tractable distribution; in the inference phase, it uses the resulting likelihood to recognize anomalies. Extensive experimental results on the MVTec AD dataset show that FastFlow surpasses previous state-of-the-art methods in terms of accuracy and inference efficiency with various backbone networks. Our approach achieves 99.4% AUC in anomaly detection with high inference efficiency. 1 Introduction The purpose of anomaly detection and localization in the computer vision field is to identify abnormal images and locate abnormal areas, which is widely used in industrial defect detection (Bergmann et al. 2019, 2020), medical image inspection (Seeböck et al.
2017), security check (Akcay, Atapour-Abarghouei, and Breckon 2018), and other fields. However, due to the low probability density of anomalies, normal and abnormal data usually show a severe long-tailed distribution, and in some cases no abnormal samples are available at all. This reality makes it difficult to collect and annotate a large amount of abnormal data for supervised learning in practice. Unsupervised anomaly detection has been proposed to address this problem; it is also denoted one-class classification or out-of-distribution detection. That is, we can use only normal samples during the training process but need to identify and locate anomalies at test time. One promising direction in unsupervised anomaly detection is to use deep neural networks to obtain the features of normal images, model their distribution with statistical methods, and then detect samples whose features deviate from that distribution (Bergman and Hoshen 2020; Rippel, Mertens, and Merhof 2021; Yi and Yoon 2020; Cohen and Hoshen 2020; Defard et al. 2020). Following this methodology, there are two main components: the feature extraction module and the distribution estimation module. For the distribution estimation module, previous approaches used non-parametric methods to model the distribution of features of normal images. For example, they estimated a multidimensional Gaussian distribution (Li et al. 2021; Defard et al. 2020) by calculating the mean and variance of the features, or used a clustering algorithm to characterize the normal features by normal clustering (Reiss et al. 2021; Roth et al. 2021). Recently, some works (Rudolph, Wandt, and Rosenhahn 2021; Gudovskiy, Ishizaka, and Kozuka 2021) began to use normalizing flows (Kingma and Dhariwal 2018) to estimate the distribution.
Through a trainable process that maximizes the log-likelihood of normal image features, they embed normal image features into a standard normal distribution and use the probability to identify and locate anomalies. However, the original one-dimensional normalizing flow model needs to flatten the two-dimensional input feature into a one-dimensional vector to estimate the distribution, which destroys the inherent spatial positional relationships of the two-dimensional image and limits the capacity of the flow model. In addition, these methods need to extract features for a large number of image patches through a sliding-window method and detect anomalies for each patch to obtain anomaly localization results, which leads to high inference complexity and limits their practical value. To address the above problems, we propose FastFlow, which extends the original normalizing flow to two-dimensional space. We use fully convolutional networks as the subnets in our flow model; this maintains the relative positions in space and improves anomaly detection performance. At the same time, FastFlow supports end-to-end inference on the whole image and directly outputs the anomaly detection and localization results at once, improving inference efficiency. As for the feature extraction module, besides using a CNN backbone such as ResNet (He et al. 2016) to obtain discriminative features, most existing work (Defard et al. 2020; Reiss et al. 2021; Rudolph, Wandt, and Rosenhahn 2021; Gudovskiy, Ishizaka, and Kozuka 2021) focuses on how to reasonably use multi-scale features to identify anomalies at different scales and semantic levels, achieving pixel-level anomaly localization through a sliding-window method. The importance of the correlation between global information and local anomalies (Yan et al. 2021; Wang et al.
2021) can thus not be fully utilized, and the sliding-window method needs to test a large number of image patches, incurring high computational complexity. To address these problems, we use FastFlow to obtain a learnable model of the global and local feature distributions with an end-to-end testing phase, instead of designing a complicated multi-scale strategy and using a sliding-window method. We conducted experiments on two types of backbone networks: vision transformers and CNNs. Compared with CNNs, vision transformers provide a global receptive field and make better use of global and local information while maintaining semantic information at different depths. Therefore, we use the features of only one certain layer of the vision transformer. Replacing the CNN with a vision transformer may seem trivial, but we found that performing this simple replacement in other methods actually degrades their performance, whereas our 2D flow achieves competitive results even when using a CNN. Our FastFlow has stronger global and local modeling capabilities, so it can better exploit the effectiveness of the transformer. As shown in Figure 1, in our approach we first extract visual features with the feature extractor and then input them into FastFlow to estimate the probability density. In the training stage, FastFlow is trained with normal images to transform the original distribution into a standard normal distribution in a 2D manner. In inference, we use the probability value at each location of the two-dimensional feature map as the anomaly score. To summarize, the main contributions of this paper are: • We propose a 2D normalizing flow, denoted FastFlow, for anomaly detection and localization, with fully convolutional networks and a two-dimensional loss function to effectively model the global and local distributions. • We design a lightweight network structure for FastFlow with alternate stacking of large and small convolution kernels for all steps. It adopts an end-to-end inference phase and has high efficiency.
• The proposed FastFlow model can be used as a plug-in module with various different feature extractors. The experimental results on the MVTec anomaly detection dataset (Bergmann et al. 2019) show that our method outperforms previous state-of-the-art anomaly detection methods in both accuracy and inference efficiency. 2 Related Work 2.1 Anomaly Detection Methods Existing anomaly detection methods can be summarized as reconstruction-based and representation-based methods. Reconstruction-based methods (Bergmann et al. 2019; Gong et al. 2019; Perera, Nallapati, and Xiang 2019) typically utilize generative models like auto-encoders or generative adversarial networks to encode and reconstruct the normal data. These methods rest on the insight that anomalies cannot be reconstructed, since they do not exist in the training samples. Representation-based methods extract discriminative features for normal images (Ruff et al. 2018; Bergman and Hoshen 2020; Rippel, Mertens, and Merhof 2021; Rudolph, Wandt, and Rosenhahn 2021) or normal image patches (Yi and Yoon 2020; Cohen and Hoshen 2020; Reiss et al. 2021; Gudovskiy, Ishizaka, and Kozuka 2021) with a deep convolutional neural network and establish the distribution of these normal features. These methods then obtain the anomaly score by calculating the distance between the feature of a test image and the distribution of normal features. The distribution is typically established by modeling a Gaussian distribution with the mean and variance of the normal features (Defard et al. 2020; Li et al. 2021), or by kNN over the entire set of normal image embeddings (Reiss et al. 2021; Roth et al. 2021). We follow the representation-based methodology: we extract visual features from a vision transformer or a ResNet and establish their distribution through the FastFlow model.
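For contrast with the flow-based estimator proposed here, the Gaussian variant of the representation-based recipe above can be written in a few lines. This is a generic numpy sketch on synthetic features, not the implementation of any cited method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Features of normal training images (N vectors of dimension d, synthetic).
N, d = 500, 16
train_feats = rng.normal(size=(N, d))

# Fit a multivariate Gaussian: mean and (slightly regularized) covariance.
mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(d)
cov_inv = np.linalg.inv(cov)

def anomaly_score(feat):
    """Mahalanobis distance of a test feature to the normal distribution."""
    diff = feat - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

normal_score = anomaly_score(rng.normal(size=d))      # in-distribution
abnormal_score = anomaly_score(mu + 25.0 * np.ones(d))  # far-off feature
assert abnormal_score > normal_score
```

A flow-based estimator replaces the fixed Gaussian assumption with a learned bijection to the base distribution, which is precisely the gap FastFlow targets.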
2.2 Feature extractors for Anomaly Detection With the development of deep learning, recent unsupervised anomaly detection methods use deep neural networks as feature extractors and produce more promising anomaly detection results. Most of them (Cohen and Hoshen 2020; Defard et al. 2020; Roth et al. 2021) use ResNet (He et al. 2016) to extract discriminative visual features. Some works have also begun to introduce ViT (Dosovitskiy et al. 2020) into the unsupervised anomaly detection field; for example, VT-ADL (Mishra et al. 2021) uses a vision transformer as the backbone in a generation-based manner. ViT has a global receptive field and can better learn the relationship between global and local features. DeiT (Touvron et al. 2021a) and CaiT (Touvron et al. 2021b) are two typical ViT models. DeiT introduces a teacher-student strategy specific to transformers, which makes image transformers learn more efficiently and achieved new state-of-the-art performance. CaiT proposes a simple yet effective architecture designed in the spirit of an encoder/decoder architecture and demonstrates that transformer models offer a competitive alternative to the best convolutional neural networks. In this paper, we use various networks from both the CNN and ViT families to demonstrate the universality of our method. 2.3 Normalizing Flow Normalizing flows (NF) (Rezende and Mohamed 2015) learn transformations between data distributions with the special property that the transform is bijective, so the flow model can be used in both directions. Real-NVP (Dinh, Sohl-Dickstein, and Bengio 2016) and Glow (Kingma and Dhariwal 2018) are two typical NF methods in which both the forward and reverse processes can be computed quickly. NF is generally used to generate data, such as images or audio, from variables sampled from a specific probability distribution. Recently, some works (Rudolph, Wandt, and Rosenhahn 2021; Gudovskiy, Ishizaka, and Kozuka 2021) began to use it for unsupervised anomaly detection and localization.
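The property that makes normalizing flows usable as density estimators is the standard change-of-variables formula: for a bijection $f$ with $z=f(x)$ and base density $p_{Z}$, $$\log p_{X}(x)=\log p_{Z}\left(f(x)\right)+\log\left|\det\frac{\partial f(x)}{\partial x}\right|$$ Maximizing this log-likelihood over normal samples is the trainable process referred to above; for coupling-based flows such as Real-NVP and Glow, the Jacobian is triangular, so the determinant is cheap to evaluate.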
DifferNet (Rudolph, Wandt, and Rosenhahn 2021) achieved good image-level anomaly detection performance by using NF to estimate the precise likelihood of test images. Unfortunately, it fails to produce exact anomaly localization results because it flattens the outputs of the feature extractor. CFLOW-AD (Gudovskiy, Ishizaka, and Kozuka 2021) proposes hard-coded position embeddings to condition the distribution learned by NF, which may underperform on more complicated datasets. 3 Methodology In this section, we introduce the pipeline of our method and the architecture of FastFlow, as shown in Figure 2. We first set up the problem definition of unsupervised anomaly detection and introduce the basic methodology, which uses a learnable probability density estimation model within the representation-based framework. We then describe the details of the feature extractor and the FastFlow model, respectively. 3.1 Problem Definition and Basic Methodology Unsupervised anomaly detection, also known as one-class classification or out-of-distribution detection, requires the model to determine whether a test image is normal or abnormal. Anomaly localization requires a more fine-grained result that assigns an anomaly label to each pixel. During training, only normal images are observed, whereas both normal and abnormal images appear at inference. One of the mainstream approaches is the representation-based method, which extracts discriminative feature vectors from normal images or normal image patches to construct a distribution and calculates the anomaly score as the distance between the embedding of a test image and that distribution. The distribution is typically characterized by the center of an n-sphere of the normal features, a Gaussian distribution over normal images, or a cluster of normal embeddings stored in a memory bank and queried via kNN. 
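As a concrete illustration of the distance-based scoring described above, here is a minimal sketch of the Gaussian variant (in the spirit of PaDiM-style baselines, not the FastFlow model itself); the feature dimensions and data are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for features of normal training images (N x d).
normal_feats = rng.normal(size=(500, 8))

# Fit a Gaussian to the normal features (mean and regularized covariance).
mu = normal_feats.mean(axis=0)
cov = np.cov(normal_feats, rowvar=False) + 1e-6 * np.eye(8)
cov_inv = np.linalg.inv(cov)

def anomaly_score(feat):
    """Mahalanobis distance of a test feature to the normal distribution."""
    d = feat - mu
    return float(np.sqrt(d @ cov_inv @ d))

score_normal = anomaly_score(rng.normal(size=8))          # in-distribution
score_abnormal = anomaly_score(rng.normal(size=8) + 5.0)  # shifted feature
```

An out-of-distribution feature receives a larger score, which is then thresholded or ranked for detection.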
After extracting the features of the training dataset $D=\{x_{1},x_{2},\cdots,x_{N}\}$, where $x_{i},i=1,2,\cdots,N$ are samples from the distribution $p_{X}(x)$, a representation-based anomaly detection model $\mathcal{P}=\{P_{\theta}:\theta\in\Theta\}$ aims to learn the parameter $\theta$ in the parameter space $\Theta$ that maps all $x_{i}$ from the raw distribution $p_{X}(x)$ into the same distribution $p_{Z}(z)$, with anomalous pixels or instances mapped outside of that distribution. In our method, we follow this methodology and propose FastFlow as $P_{\theta}$ to project the high-dimensional visual features of normal images, extracted by typical backbone networks, onto the standard normal distribution. 3.2 Feature Extractor In the whole pipeline of our method, we first extract representative features from the input image with a ResNet or a vision transformer. As mentioned in Sec. 1, one of the significant challenges in the anomaly detection task is grasping global relations in order to distinguish abnormal regions from the other local parts. Therefore, when using a vision transformer (ViT) (Dosovitskiy et al. 2020) as the feature extractor, we use only the features of one specific layer, because ViT has a stronger ability to capture the relationship between local patches and the global context. For ResNet, we directly use the features of the last layer in each of the first three blocks and feed these features into three corresponding FastFlow models. 3.3 2D Flow Model As shown in Figure 2, our 2D flow $f:X\rightarrow Z$ projects the image features $x\sim p_{X}(x)$ into the hidden variable $z\sim p_{Z}(z)$ through a bijective invertible mapping. 
For this bijective function, the change-of-variables formula defines the model distribution on $X$ by: $$p_{X}(x)={p}_{Z}(z)\left|\mathbf{det}\left(\frac{\partial z}{\partial x}\right)\right|$$ (1) We can estimate the log-likelihood of image features under $p_{Z}(z)$ by: $$\log{p}_{X}(x)=\log p_{Z}(z)+\log\left|\mathbf{det}\left(\frac{\partial z}{\partial x}\right)\right|=\log p_{Z}(f_{\theta}(x))+\log\left|\mathbf{det}\left(\frac{\partial f_{\theta}(x)}{\partial x}\right)\right|,$$ (2) where $z\sim\mathcal{N}(0,I)$, $\frac{\partial f_{\theta}(x)}{\partial x}$ is the Jacobian of the bijective invertible flow model with $z=f_{\theta}(x)$ and $x=f^{-1}_{\theta}(z)$, and $\theta$ denotes the parameters of the 2D flow model. At inference, the features of anomalous images should be out of distribution and hence have lower likelihoods than those of normal images, so the likelihood can be used as the anomaly score. Specifically, we sum the two-dimensional log-probabilities over the channels to obtain the final probability map and upsample it to the input image resolution using bilinear interpolation. In the actual implementation, our flow model $f_{2d}$ is constructed by stacking multiple invertible transformation blocks $f_{i}$ in a sequence: $$X\xrightarrow{f_{1}}H_{1}\xrightarrow{f_{2}}H_{2}\xrightarrow{f_{3}}\cdots\xrightarrow{f_{K}}Z,$$ and $$X\xleftarrow{f^{-1}_{1}}H_{1}\xleftarrow{f^{-1}_{2}}H_{2}\xleftarrow{f^{-1}_{3}}\cdots\xleftarrow{f^{-1}_{K}}Z,$$ where the 2D flow model is $f_{2d}=f_{K}\circ\cdots\circ f_{2}\circ f_{1}$ with $K$ transformation blocks. Each transformation block $f_{i}$ consists of multiple steps. 
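The two ingredients just described, an invertible transformation of a 2D feature map and the per-location likelihood score of Eq. (2), can be sketched together in NumPy. This is an illustrative toy, not the paper's implementation: the subnet is a 1x1-convolution stand-in for the convolutional subnet, the log-det term is omitted from the score, and all shapes and weights are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 4                                    # channels of the toy feature map
W1 = 0.1 * rng.normal(size=(8, C // 2))  # toy subnet weights (illustrative)
W2 = 0.1 * rng.normal(size=(C, 8))

def subnet(ya):
    """Toy channel-mixing stand-in (1x1 convolutions with tanh) for the
    paper's convolutional subnet, producing scale s(ya) and shift b(ya)."""
    h = np.tanh(np.einsum('oc,chw->ohw', W1, ya))
    s, b = np.split(np.einsum('oc,chw->ohw', W2, h), 2, axis=0)
    return np.exp(np.tanh(s)), b         # keep the scale strictly positive

def coupling_forward(y):
    """One affine coupling step: y'_b = s(y_a) * y_b + b(y_a)."""
    ya, yb = np.split(y, 2, axis=0)      # split along the channel dimension
    s, b = subnet(ya)
    return np.concatenate([ya, s * yb + b], axis=0)

def coupling_inverse(yp):
    ya, ypb = np.split(yp, 2, axis=0)
    s, b = subnet(ya)
    return np.concatenate([ya, (ypb - b) / s], axis=0)

def anomaly_map(z):
    """Negative log-likelihood of z (C, H, W) under N(0, I), summed over
    channels; the log|det| term of Eq. (2) is omitted in this sketch."""
    return (0.5 * (z ** 2 + np.log(2.0 * np.pi))).sum(axis=0)

x = rng.normal(size=(C, 6, 6))           # toy backbone feature map
x[:, 2, 3] += 6.0                        # inject an out-of-distribution spot
z = coupling_forward(x)
roundtrip = coupling_inverse(z)          # invertibility: recovers x
peak = np.unravel_index(anomaly_map(z).argmax(), (6, 6))
```

The round trip recovers the input exactly because the second half of the channels enters only through an affine map with a positive scale, and the anomaly map peaks at the injected out-of-distribution location.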
Following (Dinh, Krueger, and Bengio 2014), we employ affine coupling layers in each block; each step is formulated as follows: $$y_{a},y_{b}=\text{split}(y)$$ $$y^{\prime}_{a}=y_{a}$$ $$y^{\prime}_{b}=s(y_{a})\odot y_{b}+b(y_{a})$$ $$y^{\prime}=\text{concat}(y^{\prime}_{a},y^{\prime}_{b}),$$ (3) where $s(y_{a})$ and $b(y_{a})$ are the outputs of two neural networks, and the split($\cdot$) and concat($\cdot$) functions perform splitting and concatenation along the channel dimension. The two subnets $s(\cdot)$ and $b(\cdot)$ are usually implemented as fully connected networks in the original normalizing flow model, which requires flattening and squeezing the input visual features from 2D to 1D and destroys the spatial position relationships in the feature map. To convert the original normalizing flow to a 2D form, we adopt two-dimensional convolution layers in the subnet to preserve spatial information in the flow model and adjust the loss function accordingly. In particular, we adopt a fully convolutional network in which 3$\times$3 and 1$\times$1 convolutions alternate. 4 Experiments 4.1 Datasets and Metrics We evaluate the proposed method on three datasets: MVTec AD (Bergmann et al. 2019), BTAD (Mishra et al. 2021) and CIFAR-10 (Krizhevsky, Hinton et al. 2009). MVTec AD and BTAD are both industrial anomaly detection datasets with pixel-level annotations, used for anomaly detection and localization. CIFAR-10 is built for image classification, and we use it for anomaly detection. Following previous works, we choose one of the categories as normal and the rest as abnormal. The anomalies in the industrial datasets are finer-grained than those in CIFAR-10, whose anomalies relate to high-level semantic information. 
For example, the anomalies in MVTec AD are small defective areas, while the anomalies in CIFAR-10 are entire object categories. Under the unsupervised setting, we train our model for each category on its respective normal images and evaluate it on test images containing both normal and abnormal samples. The performance of the proposed method and all compared methods is measured by the area under the receiver operating characteristic curve (AUROC) at the image or pixel level. For the detection task, models are required to output a single anomaly score for each input test image; for the localization task, they must output an anomaly score for every pixel. 4.2 Complexity Analysis We analyze the complexity of FastFlow and other methods in terms of inference speed, additional inference time and additional model parameters, where “additional” excludes the backbone itself. The hardware configuration of the test machine is an Intel(R) Xeon(R) CPU E5-2680 [email protected] and an NVIDIA GeForce GTX 1080Ti. SPADE and Patch Core perform kNN search between each test image-patch feature and the gallery of normal patch features; they do not introduce parameters beyond the backbone. CFlow avoids the time-consuming k-nearest-neighbor search, but it still performs the testing phase with a sliding window. Our FastFlow adopts an end-to-end inference phase with high efficiency. The analysis results are shown in Table 1: our method is up to $10\times$ faster than other methods. Compared with CFlow, which also uses a flow model, our method achieves a $1.5\times$ speedup and a $2\times$ parameter reduction. When using vision transformers (DeiT and CaiT) as the feature extractor, our FastFlow achieves 99.4 image-level AUC for anomaly detection, which is superior to CFlow and Patch Core. 
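The AUROC metric used throughout can be computed directly from labels and anomaly scores; here is a self-contained sketch using the rank (Mann-Whitney) formulation, with made-up scores for illustration:

```python
import numpy as np

def auroc(labels, scores):
    """AUROC as the probability that a random anomalous sample scores
    higher than a random normal one (ties count as 0.5)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]   # anomalous samples
    neg = scores[labels == 0]   # normal samples
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

labels = [0, 0, 0, 1, 1]             # 1 marks an anomalous image
scores = [0.1, 0.4, 0.35, 0.8, 0.9]  # higher score = more anomalous
value = auroc(labels, scores)        # perfect ranking -> 1.0
```

The same function applies at the pixel level by flattening the ground-truth masks and anomaly maps.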
From the perspective of additional inference time, our method achieves up to a $4\times$ reduction compared to CFlow and a $10\times$ reduction compared to Patch Core. Our FastFlow remains competitive when using a ResNet feature extractor. 4.3 Quantitative Results MVTec AD There are 15 industrial products in the MVTec AD dataset (Bergmann et al. 2019), with a total of 5,354 images, among which 10 are objects and the remaining 5 are textures. The training set is composed only of normal images, while the test set is a mixture of normal and abnormal images. We compare our proposed method with state-of-the-art anomaly detection works, including SPADE* (Reiss et al. 2021), PatchSVDD (Yi and Yoon 2020), DifferNet (Rudolph, Wandt, and Rosenhahn 2021), Mah.AD (Rippel, Mertens, and Merhof 2021), PaDiM (Defard et al. 2020), CutPaste (Li et al. 2021), Patch Core (Roth et al. 2021) and CFlow (Gudovskiy, Ishizaka, and Kozuka 2021), under the metrics of image-level and pixel-level AUC. Detailed comparison results for all categories are shown in Table 2. FastFlow achieves 99.4 AUC at the image level and 98.5 AUC at the pixel level, surpassing all other methods on the anomaly detection task. BTAD The BeanTech Anomaly Detection dataset (Mishra et al. 2021) contains 3 categories of industrial products with 2,540 images. The training set consists only of normal images, while the test set is a mixture of normal and abnormal images. Under the pixel-level AUC metric, we compare our FastFlow with the three methods reported in VT-ADL (Mishra et al. 2021): an autoencoder with mean-squared-error loss, an autoencoder with SSIM loss, and VT-ADL. The comparison results are shown in Table 3. Our FastFlow achieves 97.0 pixel-wise AUC, surpassing the other methods by as much as 7% AUC. CIFAR-10 CIFAR-10 has 10 categories with 60,000 natural images. 
Under the anomaly detection setting, one category is regarded as normal and the remaining categories as anomalous, and a separate model is trained for each class. The AUC scores of our method and other methods are reported in Table 4. The methods compared include OC-SVM (Schölkopf et al. 1999), KDE (Bishop 2006), $\textit{l}_{2}$-AE (Hadsell, Chopra, and LeCun 2006), VAE (An and Cho 2015), PixelCNN (Oord et al. 2016), LSA (Abati et al. 2019), AnoGAN (Schlegl et al. 2017), DSVDD (Ruff et al. 2018) and OCGAN (Perera, Nallapati, and Xiang 2019). Our method outperforms all of them. The results on three different datasets show that our method adapts to different anomaly detection settings. 4.4 Ablation Study To investigate the effectiveness of the proposed FastFlow structure, we conduct ablation experiments on the convolution kernel selection in the subnet. We compare alternating $3\times 3$ and $1\times 1$ convolution kernels against using only $3\times 3$ kernels, in terms of AUC and inference speed, for various backbone networks. The results are shown in Table 5. For backbone networks with large model capacity, such as CaiT and Wide-ResNet50-2, alternating $3\times 3$ and $1\times 1$ convolution layers obtains higher performance while reducing the number of parameters. For backbone networks with small model capacity, such as DeiT and ResNet18, using only $3\times 3$ convolution layers performs better. To balance accuracy and inference speed, we use alternating $3\times 3$ and $1\times 1$ convolution kernels with DeiT, CaiT and Wide-ResNet50-2, and only $3\times 3$ convolution layers with ResNet18. 4.5 Feature Visualization and Generation Our FastFlow model is a bidirectional invertible probability distribution transformer. 
In the forward process, it takes the feature map from the backbone network as input and transforms its original distribution into a standard normal distribution in two-dimensional space. In the reverse process, the inverse of FastFlow can generate visual features from a specific probability sampling variable. To better understand this ability, we visualize the forward (from visual features to probability map) and reverse (from probability map to visual features) processes. As shown in Figure 4, we extract the features of an input image belonging to the leather class, whose abnormal area is indicated by the red arrow. We pass it forward through the FastFlow model to obtain the probability map; FastFlow successfully transforms the original distribution into the standard normal distribution. Then, we add noise to a spatial area of this probability map, indicated by the yellow arrow, and generate a leather feature tensor from the perturbed probability map using the inverse FastFlow model. We visualize the feature map of one channel of this tensor and observe that a new anomaly appears at the corresponding perturbed position. 4.6 Qualitative Results We visualize results of anomaly detection and localization on the MVTec AD dataset in Figure 3. The top row shows test images, with and without anomalies, overlaid with ground-truth masks, and the bottom row shows the anomaly localization score heatmaps. For both normal and abnormal images, FastFlow gives accurate anomaly localization results. 4.7 Implementation Details We provide the details of the feature extractor structure, the selection of the feature layer and the input image size in Table 6. For vision transformers, our method uses the feature maps of only one specific layer and does not need manually designed multi-scale features. 
For ResNet18 and Wide-ResNet50-2, we directly use the features of the last layer in each of the first three blocks, feed them into the 2D flow model to obtain their respective anomaly detection and localization results, and finally take the average as the final result. All backbones are initialized with ImageNet pre-trained weights, and their parameters are frozen during training. For FastFlow, we use 20-step flows for CaiT and DeiT and 8-step flows for ResNet18 and Wide-ResNet50-2. We train our model using the Adam optimizer with a learning rate of 1e-3 and a weight decay of 1e-5, a 500-epoch training schedule and a batch size of 32. 5 Conclusion In this paper, we propose a novel approach named FastFlow for unsupervised anomaly detection and localization. Our key observation is that anomaly detection and localization require comprehensive consideration of global and local information, a learnable distribution modeling method and an efficient inference process, requirements that existing approaches neglect. To this end, we present a 2D flow model, FastFlow, with a lightweight structure that projects the feature distribution of normal images onto the standard normal distribution during training and uses the resulting probabilities as anomaly scores during testing. FastFlow can be used with typical feature extraction networks such as ResNet and ViT as a plug-in module. Extensive experimental results on the MVTec AD dataset show the superiority of FastFlow over state-of-the-art methods in terms of accuracy and inference efficiency. 
Supplementary Material for FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows 6 More Ablation Studies 6.1 Channels of Hidden Layers in Flow Model In the original flow model used in DifferNet (Rudolph, Wandt, and Rosenhahn 2021) and CFLOW (Gudovskiy, Ishizaka, and Kozuka 2021), the number of channels of the hidden layers in every subnet is set to twice that of the input and output layers. This design improves the results by increasing model capacity, but it reduces inference efficiency. In our FastFlow, we found that using $0.16\times$ the number of channels in CaiT and $1\times$ in Wide-ResNet50-2 achieves a balance between performance and model parameters. In addition, with $0.25\times$ the number of channels in Wide-ResNet50-2, we can further reduce the model parameters while still maintaining high accuracy. The results are shown in Table 7. 6.2 Training Data Augmentation To learn a more robust FastFlow model, we apply various data augmentations to the MVTec AD dataset during the training phase: random horizontal flip, vertical flip and rotation, with probabilities of 0.5, 0.3 and 0.7, respectively. Note that some categories are not suitable for aggressive data augmentation; for example, the transistor cannot be flipped upside down or rotated. The results are shown in Table 8. 7 Bad Cases and Ambiguous Labels We visualize bad cases of our method on the MVTec AD dataset in Figures 5 to 7, summarized into three categories: missed detections (Figure 5), false detections (Figure 6) and label ambiguity (Figure 7). In Figure 5, our method misses a few small and unobvious anomalies. In Figure 6, it produces false detections in some background areas, such as areas with hair and dirt in the background. 
In Figure 7, our method finds some areas that are abnormal but not labeled as such, e.g., the “scratch neck” for screw and the “fabric interior” for zipper. 8 Non-aligned Disturbed MVTec AD Dataset The samples in the MVTec AD dataset are spatially aligned, which is uncommon in practical applications, so we apply a series of spatial perturbations to the test data to obtain a non-aligned MVTec AD dataset. In detail, we apply random zoom in/out with a ratio of 0.85, random rotation within $\pm 15$ degrees and random translation with a ratio of 0.15, expanding the original test set by $4\times$. We evaluate our FastFlow (with CaiT) on this new test set and obtain 99.2 image-level AUC and 98.1 pixel-level AUC. There is almost no performance loss compared with the results on the original aligned MVTec AD test set, which demonstrates the robustness of our method. We also give some visualization results in Figure 8, showing that FastFlow still achieves high anomaly detection and localization performance on this non-aligned disturbed MVTec AD dataset. References Abati et al. (2019) Abati, D.; Porrello, A.; Calderara, S.; and Cucchiara, R. 2019. Latent space autoregression for novelty detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 481–490. An and Cho (2015) An, J.; and Cho, S. 2015. Variational autoencoder based anomaly detection using reconstruction probability. Special Lecture on IE, 2(1): 1–18. Bergman and Hoshen (2020) Bergman, L.; and Hoshen, Y. 2020. Classification-based anomaly detection for general data. International Conference on Learning Representations (ICLR). Bergmann et al. (2019) Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2019. MVTec AD – A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9592–9600. Bishop (2006) Bishop, C. M. 2006. 
Pattern recognition. Machine learning, 128(9). Cohen and Hoshen (2020) Cohen, N.; and Hoshen, Y. 2020. Sub-image anomaly detection with deep pyramid correspondences. arXiv preprint arXiv:2005.02357. Defard et al. (2020) Defard, T.; Setkov, A.; Loesch, A.; and Audigier, R. 2020. PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization. arXiv preprint arXiv:2011.08785. Dinh, Krueger, and Bengio (2014) Dinh, L.; Krueger, D.; and Bengio, Y. 2014. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516. Dinh, Sohl-Dickstein, and Bengio (2016) Dinh, L.; Sohl-Dickstein, J.; and Bengio, S. 2016. Density estimation using real nvp. arXiv preprint arXiv:1605.08803. Dosovitskiy et al. (2020) Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Gong et al. (2019) Gong, D.; Liu, L.; Le, V.; Saha, B.; Mansour, M. R.; Venkatesh, S.; and Hengel, A. v. d. 2019. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1705–1714. Gudovskiy, Ishizaka, and Kozuka (2021) Gudovskiy, D.; Ishizaka, S.; and Kozuka, K. 2021. CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows. arXiv preprint arXiv:2107.12571. Hadsell, Chopra, and LeCun (2006) Hadsell, R.; Chopra, S.; and LeCun, Y. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), volume 2, 1735–1742. IEEE. He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778. Kingma and Dhariwal (2018) Kingma, D. P.; and Dhariwal, P. 2018. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039. Krizhevsky, Hinton et al. (2009) Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images. Li et al. (2021) Li, C.-L.; Sohn, K.; Yoon, J.; and Pfister, T. 2021. CutPaste: Self-Supervised Learning for Anomaly Detection and Localization. arXiv preprint arXiv:2104.04015. Mishra et al. (2021) Mishra, P.; Verk, R.; Fornasier, D.; Piciarelli, C.; and Foresti, G. L. 2021. VT-ADL: A Vision Transformer Network for Image Anomaly Detection and Localization. arXiv preprint arXiv:2104.10036. Oord et al. (2016) Oord, A. v. d.; Kalchbrenner, N.; Vinyals, O.; Espeholt, L.; Graves, A.; and Kavukcuoglu, K. 2016. Conditional image generation with pixelcnn decoders. arXiv preprint arXiv:1606.05328. Perera, Nallapati, and Xiang (2019) Perera, P.; Nallapati, R.; and Xiang, B. 2019. Ocgan: One-class novelty detection using gans with constrained latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2898–2906. Reiss et al. (2021) Reiss, T.; Cohen, N.; Bergman, L.; and Hoshen, Y. 2021. PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2806–2814. Rezende and Mohamed (2015) Rezende, D.; and Mohamed, S. 2015. Variational inference with normalizing flows. In International conference on machine learning, 1530–1538. PMLR. Rippel, Mertens, and Merhof (2021) Rippel, O.; Mertens, P.; and Merhof, D. 2021. Modeling the distribution of normal data in pre-trained deep features for anomaly detection. In 2020 25th International Conference on Pattern Recognition (ICPR), 6726–6733. IEEE. Roth et al. 
(2021) Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; and Gehler, P. 2021. Towards Total Recall in Industrial Anomaly Detection. arXiv preprint arXiv:2106.08265. Rudolph, Wandt, and Rosenhahn (2021) Rudolph, M.; Wandt, B.; and Rosenhahn, B. 2021. Same same but differnet: Semi-supervised defect detection with normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1907–1916. Ruff et al. (2018) Ruff, L.; Vandermeulen, R.; Goernitz, N.; Deecke, L.; Siddiqui, S. A.; Binder, A.; Müller, E.; and Kloft, M. 2018. Deep one-class classification. In International conference on machine learning, 4393–4402. PMLR. Schlegl et al. (2017) Schlegl, T.; Seeböck, P.; Waldstein, S. M.; Schmidt-Erfurth, U.; and Langs, G. 2017. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, 146–157. Springer. Schölkopf et al. (1999) Schölkopf, B.; Williamson, R. C.; Smola, A. J.; Shawe-Taylor, J.; Platt, J. C.; et al. 1999. Support vector method for novelty detection. In NIPS, volume 12, 582–588. Citeseer. Touvron et al. (2021a) Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and Jégou, H. 2021a. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, 10347–10357. PMLR. Touvron et al. (2021b) Touvron, H.; Cord, M.; Sablayrolles, A.; Synnaeve, G.; and Jégou, H. 2021b. Going deeper with image transformers. arXiv preprint arXiv:2103.17239. Wang et al. (2021) Wang, S.; Wu, L.; Cui, L.; and Shen, Y. 2021. Glancing at the Patch: Anomaly Localization With Global and Local Feature Comparison. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 254–263. Yan et al. (2021) Yan, X.; Zhang, H.; Xu, X.; Hu, X.; and Heng, P.-A. 2021. Learning Semantic Context from Normal Samples for Unsupervised Anomaly Detection. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 3110–3118. Yi and Yoon (2020) Yi, J.; and Yoon, S. 2020. Patch SVDD: Patch-level SVDD for Anomaly Detection and Segmentation. In Proceedings of the Asian Conference on Computer Vision.
Nonequilibrium Equalities Derived from Lebesgue’s Decomposition Yûto Murashita    Ken Funo    Masahito Ueda Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan (November 20, 2020) Abstract Most of the integral fluctuation theorems cannot be applied to situations in which the forward-path probability vanishes in a certain region or error-free measurements are performed under feedback control. We identify the mathematical origins of these problems based on Lebesgue’s decomposition theorem and derive new nonequilibrium equalities applicable to the above two situations. Inequalities derived from the equalities impose a stronger restriction on the averaged entropy production than the conventional second law in certain systems. pacs: 05.70.Ln, 05.20.-y Introduction. The last two decades have witnessed remarkable progress in nonequilibrium statistical mechanics Evans et al. (1993); Evans and Searles (1994); Gallavotti and Cohen (1995); Kurchan (1998); Lebowitz and Spohn (1999); Maes (1999); Lepri et al. (2000); Jarzynski (1997a, b, 2000); Crooks (1998, 1999, 2000); Hummer and Szabo (2001); Jarzynski (2004); Kawai et al. (2007); Parrondo et al. (2009); Esposito and Van den Broeck (2011); Sagawa and Ueda (2010); Sagawa (2011); Sagawa and Ueda (2012); Horowitz and Vaikuntanathan (2010); Morikuni and Tasaki (2011); Seifert (2012); Funo et al. (2013); Liphardt et al. (2002, 2002); Wang et al. (2002); Douarche et al. (2005); Collin et al. (2005, 2005); Gupta et al. (2011); Toyabe et al. (2010). In particular, Jarzynski established an integral nonequilibrium equality based on the Hamiltonian dynamics Jarzynski (1997a) and subsequently generalized it to a broader class of situations in nonequilibrium statistical mechanics Jarzynski (1997b, 2000). 
Then, Crooks offered a general proof of the Jarzynski equality and fluctuation theorems in stochastic systems, based on a technique of comparing a thermodynamic process with its time-reversed counterpart Crooks (1998, 1999, 2000). This technique is fundamentally important in evaluating thermodynamic irreversibility, i.e., entropy production. Later, a generalized Jarzynski equality under feedback control, which involves the mutual information obtained by measurement, was derived in stochastic systems Sagawa and Ueda (2010) and under the Hamiltonian dynamics Sagawa (2011). Various types of the Jarzynski equality can be derived in stochastic systems by an appropriate choice of the so-called reference probability Seifert (2012). Possible applications of nonequilibrium fluctuation equalities range from physics to biology Liphardt et al. (2002); Wang et al. (2002); Douarche et al. (2005); Collin et al. (2005); Gupta et al. (2011); Toyabe et al. (2010). For example, the free-energy landscape of a DNA molecule can be surveyed through nonequilibrium experiments Gupta et al. (2011) using the Hummer-Szabo equality Hummer and Szabo (2001). Moreover, feedback control based on information processing, as in Ref. Toyabe et al. (2010), can be utilized to manipulate nanomachines subject to large thermal fluctuations, such as those in vivo. Although the nonequilibrium equalities have wide applications, important situations remain uncovered. The Jarzynski equality is inapplicable to free expansion of an ideal gas, as discussed in Refs. Sung (2005); Lua and Grosberg (2005); Gross (2005a); Jarzynski (2005); Gross (2005b). Horowitz et al. pointed out that this is because the forward-path probability vanishes while the backward-path probability is nonzero Horowitz and Vaikuntanathan (2010). 
Such a situation arises when the initial probability distribution is confined to a restricted region of phase space or when error-free measurements are performed under feedback control Horowitz and Vaikuntanathan (2010); Morikuni and Tasaki (2011). These situations have been explicitly excluded from previous considerations, for example, in the proofs of the generalized Jarzynski equalities in Refs. Sagawa and Ueda (2012); Funo et al. (2013). Exceptions are the Kawai-Parrondo-Van den Broeck-type equalities, which circumvent the problem arising from the vanishing forward-path probability by a formulation based on the relative entropy Kawai et al. (2007); Parrondo et al. (2009); Esposito and Van den Broeck (2011). It is natural to ask whether Jarzynski-type equalities can be generalized so as to apply to such frequently encountered situations. In this Letter, we prove new nonequilibrium equalities that overcome the above difficulties. The previously excluded situations can be understood mathematically in terms of Lebesgue’s decomposition theorem in measure theory. Moreover, through the decomposition, we find that the conventional integral equality is also inapplicable in a new situation, where a delta-function-like confinement exists in the reference probability. This corresponds, for example, to a situation in which a particle of interest is trapped in a narrow region such as a trapping center. It is noteworthy that the proof of the new equalities is valid regardless of the dynamics of the system and of whether feedback control is performed. Main equality. First, let us consider an arbitrary nonequilibrium process without feedback control. Let $\Gamma(t)$ denote the trajectory of the system in phase space during a time interval $0\leq t\leq\tau$, and let $\mathcal{M}[\mathcal{D}\Gamma(t)]$ denote the probability measure in phase space. Let $\mathcal{M}^{r}$ be an arbitrary reference probability measure. 
According to Lebesgue’s decomposition theorem Halmos (1974); Bartle (1995), $\mathcal{M}^{r}$ can be uniquely decomposed into two parts: $$\mathcal{M}^{r}=\mathcal{M}^{r}_{\rm AC}+\mathcal{M}^{r}_{\rm S},$$ (1) where $\mathcal{M}^{r}_{\rm AC}$ is absolutely continuous with respect to $\mathcal{M}$, and this part can be expressed in terms of the ratio of the forward and backward probabilities as in the previous literature Crooks (1999, 2000); $\mathcal{M}^{r}_{\rm S}$ is the singular part, corresponding to the region of phase space in which the probability defined by $\mathcal{M}$ is zero while the probability defined by $\mathcal{M}^{r}$ remains nonvanishing. If $\mathcal{M}^{r}_{\rm S}$ exists, the conventional Jarzynski-type equalities break down Sung (2005); Lua and Grosberg (2005); Gross (2005a); Jarzynski (2005); Gross (2005b); Sagawa and Ueda (2012); Funo et al. (2013). Remarkably, this purely mathematical theorem points to the distinct physical problems in the conventional fluctuation theorem and leads us to new nonequilibrium equalities, as we show below. Because $\mathcal{M}^{r}_{\rm AC}$ is absolutely continuous with respect to $\mathcal{M}$, we may apply the Radon-Nikodym theorem to obtain $$\mathcal{M}^{r}_{\rm AC}[\mathcal{D}\Gamma(t)]=\left.\frac{\mathcal{D}\mathcal{M}^{r}_{\rm AC}}{\mathcal{D}\mathcal{M}}\right|_{\Gamma(t)}\mathcal{M}[\mathcal{D}\Gamma(t)],$$ (2) where $\left.{\mathcal{D}\mathcal{M}^{r}_{\rm AC}}/{\mathcal{D}\mathcal{M}}\right|_{\Gamma(t)}$ is the Radon-Nikodym derivative, an integrable function with respect to $\mathcal{M}$. 
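As a toy illustration of the decomposition in Eq. (1) (a hypothetical one-dimensional example, not taken from the Letter), consider a measure with a density plus a point mass:

```latex
\mathcal{M}(dx)=\rho(x)\,dx,\qquad
\mathcal{M}^{r}(dx)=g(x)\,dx+c\,\delta(x-x_{0})\,dx,
\qquad \rho(x)>0 .
```

Then the absolutely continuous part is $\mathcal{M}^{r}_{\rm AC}(dx)=g(x)\,dx$ with Radon-Nikodym derivative $g(x)/\rho(x)$, while the singular part is $\mathcal{M}^{r}_{\rm S}(dx)=c\,\delta(x-x_{0})\,dx$ with total weight $c$: a point mass is singular with respect to any measure with a density, which is the delta-function-like confinement scenario mentioned above.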
Let us formally define the entropy production as $$\displaystyle\sigma=-\ln\left.\frac{\mathcal{D}\mathcal{M}^{r}_{\rm AC}}{\mathcal{D}\mathcal{M}}\right|_{\Gamma(t)}.$$ (3) If $\mathcal{M}$ and $\mathcal{M}^{r}_{\rm AC}$ can be written in terms of probability densities, $\mathcal{M}[\mathcal{D}\Gamma(t)]=\mathcal{P}[\Gamma(t)]\mathcal{D}\Gamma(t)$ and $\mathcal{M}^{r}_{\rm AC}[\mathcal{D}\Gamma(t)]=\mathcal{P}^{r}[\Gamma(t)]\mathcal{D}\Gamma(t)$, Eq. (3) can be rewritten as $\sigma=-\ln{\mathcal{P}^{r}[\Gamma(t)]}/{\mathcal{P}[\Gamma(t)]},$ which is the standard definition of the formal entropy production. A physical interpretation of $\sigma$ will be discussed later. Let $\mathcal{F}$ denote an arbitrary functional of a path, and let $\langle\cdots\rangle$, $\langle\cdots\rangle^{r}$ and $\langle\cdots\rangle^{r}_{I}\ (I={\rm AC},{\rm S})$ denote the averages over $\mathcal{M}$, $\mathcal{M}^{r}$ and $\mathcal{M}^{r}_{I}$, respectively. Then we have $$\displaystyle\langle\mathcal{F}\rangle^{r}_{\rm AC}=\int\mathcal{F}[\Gamma(t)]\mathcal{M}^{r}_{\rm AC}[\mathcal{D}\Gamma(t)]=\int\mathcal{F}[\Gamma(t)]\left.\frac{\mathcal{D}\mathcal{M}^{r}_{\rm AC}}{\mathcal{D}\mathcal{M}}\right|_{\Gamma(t)}\mathcal{M}[\mathcal{D}\Gamma(t)]=\langle\mathcal{F}e^{-\sigma}\rangle.$$ (4) In accordance with the decomposition in Eq. (1), $\langle\mathcal{F}\rangle_{\rm AC}^{r}=\langle\mathcal{F}\rangle^{r}-\langle\mathcal{F}\rangle_{\rm S}^{r}$ holds. Thus we obtain an integral fluctuation theorem: $$\displaystyle\langle\mathcal{F}e^{-\sigma}\rangle=\langle\mathcal{F}\rangle^{r}-\langle\mathcal{F}\rangle_{\rm S}^{r}.$$ (5) This equality can be seen as a generalization of the master integral fluctuation theorem in Refs. Crooks (2000); Seifert (2012). If we set $\mathcal{F}=1$, Eq.
(5) reduces to $$\displaystyle\langle e^{-\sigma}\rangle=1-\lambda_{\rm S},$$ (6) where $\lambda_{\rm S}=\int\mathcal{M}^{r}_{\rm S}[\mathcal{D}\Gamma(t)]$ is the probability of the singular part. The singular part has divergent entropy production with probability $\lambda_{\rm S}$. The right-hand side of Eq. (6) can therefore be interpreted as the probability for the entropy production to be finite and mathematically well defined. When the reference probability measure has only the absolutely continuous part, i.e., $\lambda_{\rm S}=0$, the Jarzynski equality $\langle e^{-\sigma}\rangle=1$ is reproduced. Using Jensen’s inequality, $\langle e^{-\sigma}\rangle\geq e^{-\langle\sigma\rangle}$, we have $$\displaystyle\langle\sigma\rangle\geq-\ln(1-\lambda_{\rm S}).$$ (7) This inequality indicates that the second law of thermodynamics holds even when a singularity exists in the reference probability measure, because the right-hand side is nonnegative. Moreover, when $\lambda_{\rm S}>0$, as is realized, for example, when the nonequilibrium process involves free expansion, the inequality imposes a stronger restriction on the average entropy production than the second law; that is, the average entropy production must be strictly positive in this case. Main equality under feedback control. Let us now consider a nonequilibrium process with feedback control. Let $\Lambda(t)$ denote a path in the phase space of outcomes. The control protocol can depend on $\Lambda(t)$ at earlier times. This protocol includes the repeated discrete feedback discussed in Ref. Horowitz and Vaikuntanathan (2010).
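Before developing the general feedback formula, the information term that will appear below can be previewed with a toy example of ours (not from the Letter): for a memoryless measurement with no driving ($\sigma=0$) and no singular part, the equality (15) derived below reduces to $\langle e^{-I}\rangle=1$, which holds for any joint distribution of state $x$ and outcome $y$:

```python
import numpy as np

# Hypothetical binary measurement: state x in {0, 1} with p(x) = (0.7, 0.3);
# the outcome y flips relative to x with error probability eps = 0.2.
p_x = np.array([0.7, 0.3])
eps = 0.2
p_y_given_x = np.array([[1 - eps, eps],
                        [eps, 1 - eps]])   # rows: x, columns: y

p_xy = p_x[:, None] * p_y_given_x          # joint distribution p(x, y)
p_y = p_xy.sum(axis=0)                     # marginal of the outcome

# Pointwise mutual information I(x, y) = ln p(x, y) / (p(x) p(y))
I = np.log(p_xy / (p_x[:, None] * p_y[None, :]))

avg = (p_xy * np.exp(-I)).sum()            # <e^{-I}> over the joint distribution
print(avg)                                 # = 1 for any p(x) and p(y|x)
```

The identity holds because $e^{-I}$ reweights the joint distribution to the product of its marginals, which sums to one; the average $\langle I\rangle$ itself is the (nonnegative) mutual information.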
Then, for a given $\Lambda(t)$, we can choose an arbitrary reference probability measure $\mathcal{M}^{r}_{|\Lambda(t)}$, which can also be decomposed into two parts: $$\displaystyle\mathcal{M}^{r}_{|\Lambda(t)}=\mathcal{M}^{r}_{{\rm AC}|\Lambda(t)}+\mathcal{M}^{r}_{{\rm S}|\Lambda(t)}.$$ (8) On the other hand, let $\mathcal{M}_{|\Lambda(t)}$ denote the conditional probability measure for paths whose measurement outcomes are $\Lambda(t)$. When $\Lambda(t)$ is given, the control protocol is also fixed, so the argument leading to Eq. (5) applies to the conditional measures: $$\displaystyle\langle\mathcal{F}e^{-R_{|\Lambda(t)}}\rangle_{|\Lambda(t)}=\langle\mathcal{F}\rangle^{r}_{|\Lambda(t)}-\langle\mathcal{F}\rangle^{r}_{{\rm S}|\Lambda(t)},$$ (9) where $$\displaystyle R_{|\Lambda(t)}=-\ln\left.\frac{\mathcal{D}\mathcal{M}^{r}_{{\rm AC}|\Lambda(t)}}{\mathcal{D}\mathcal{M}_{|\Lambda(t)}}\right|_{\Gamma(t)}.$$ (10) Averaging this over the outcome $\Lambda(t)$, we obtain $$\displaystyle\langle\mathcal{F}e^{-R_{|\Lambda(t)}}\rangle=\langle\mathcal{F}\rangle^{r}-\langle\mathcal{F}\rangle^{r}_{\rm S}.$$ (11) The entropy production of the system should be defined by the original path probability measure and the conditional reference probability measure, which reflects the knowledge acquired by measurement; the entropy production is then given by $$\displaystyle\sigma=-\ln\left.\frac{\mathcal{D}\mathcal{M}^{r}_{{\rm AC}|\Lambda(t)}}{\mathcal{D}\mathcal{M}}\right|_{\Gamma(t)}.$$ (12) Then $R_{|\Lambda(t)}$ can be separated into two parts: $$\displaystyle R_{|\Lambda(t)}=-\ln\left.\frac{\mathcal{D}\mathcal{M}^{r}_{{\rm AC}|\Lambda(t)}}{\mathcal{D}\mathcal{M}}\right|_{\Gamma(t)}-\ln\left.\frac{\mathcal{D}\mathcal{M}}{\mathcal{D}\mathcal{M}_{|\Lambda(t)}}\right|_{\Gamma(t)}=\sigma+I,$$ (13) where $I=-\ln\left.{\mathcal{D}\mathcal{M}}/{\mathcal{D}\mathcal{M}_{|\Lambda(t)}}\right|_{\Gamma(t)}$ is the gain of information for a given outcome
$\Lambda(t)$ and can be identified with the mutual information between the trajectory of the system and that of the outcomes. If $\mathcal{M}$ and $\mathcal{M}_{|\Lambda(t)}$ can be written in terms of probability densities, it reduces to the standard definition $I=-\ln{\mathcal{P}[\Gamma(t)]}/{\mathcal{P}[\Gamma(t)|\Lambda(t)]}.$ Equation (11) can be rewritten as $$\displaystyle\langle\mathcal{F}e^{-\sigma-I}\rangle=\langle\mathcal{F}\rangle^{r}-\langle\mathcal{F}\rangle^{r}_{\rm S}.$$ (14) If we set $\mathcal{F}=1$, Eq. (14) reduces to $$\displaystyle\langle e^{-\sigma-I}\rangle=1-\lambda_{\rm S}.$$ (15) In particular, if the measurement is performed only once and there is no singular part in the reference probability, this equality reproduces the original equality obtained in Refs. Sagawa and Ueda (2010); Sagawa (2011). The corresponding inequality is $$\displaystyle\langle\sigma\rangle\geq-\langle I\rangle-\ln(1-\lambda_{\rm S}).$$ (16) Thus, what determines the lower bound of the entropy production is not the mutual information alone but the balance between the mutual information and the term arising from expansion and trapping. In particular, if $\langle I\rangle>-\ln(1-\lambda_{\rm S})$, i.e., if the right-hand side of the inequality is negative, the averaged entropy production may be negative. When $\mathcal{M}$ can be written in terms of a probability density, a stronger version of Lebesgue’s decomposition holds. Now $\mathcal{M}^{r}_{|\Lambda(t)}$ can be uniquely decomposed into three parts: $$\displaystyle\mathcal{M}^{r}_{|\Lambda(t)}=\mathcal{M}^{r}_{{\rm ac}|\Lambda(t)}+\mathcal{M}^{r}_{{\rm sc}|\Lambda(t)}+\mathcal{M}^{r}_{{\rm d}|\Lambda(t)},$$ (17) where $\mathcal{M}^{r}_{{\rm ac}|\Lambda(t)}$ is absolutely continuous with respect to $\mathcal{M}$, and $\mathcal{M}^{r}_{{\rm sc}|\Lambda(t)}$ is the singular continuous part, corresponding to vanishing forward-path probability.
What is new here is the discrete part $\mathcal{M}^{r}_{{\rm d}|\Lambda(t)}$, which represents a delta-function-like component of the reference probability distribution. To the best of our knowledge, no research has been conducted on this part. However, the existence of this part makes the conventional fluctuation theorem inapplicable. In accordance with Eq. (17), we obtain $$\displaystyle\langle\mathcal{F}e^{-\sigma-I}\rangle=\langle\mathcal{F}\rangle^{r}-\langle\mathcal{F}\rangle^{r}_{\rm sc}-\langle\mathcal{F}\rangle^{r}_{\rm d},$$ (18) $$\displaystyle\langle e^{-\sigma-I}\rangle=1-\lambda_{\rm sc}-\lambda_{\rm d}.$$ (19) The proofs of the above nonequilibrium equalities (5), (6), (14), (15), (18) and (19) hold on very general grounds, regardless of the dynamics of the system. In particular, they apply to both Langevin systems and Hamiltonian systems. Physics enters the problem in the choice of the reference probability measure. Once the reference probability measure is chosen, the formal entropy production becomes the corresponding physical entropy production. To elucidate the physical meaning of the derived formulae, let us first consider Langevin systems. By comparing the original dynamics with the time-reversed one, we can quantify the asymmetry of a physical process under time reversal. When we set the reference probability to the probability of the time-reversed trajectory under the time-reversed protocol, the formal entropy production reduces to $$\displaystyle\sigma=-\ln\frac{p^{\dagger}_{0}(\Gamma(\tau)|\Lambda(t))}{p_{0}(\Gamma(0))}+\Delta s^{B},$$ (20) where $p_{0}$ and $p_{0}^{\dagger}$ are the initial probability distributions of the original path and of the time-reversed one, respectively, and $\Delta s^{B}$ is the entropy production of the surrounding medium Seifert (2012).
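For the canonical choices described next, $\sigma$ reduces to $\beta(W-\Delta F)$, and with no feedback and no singular part Eq. (6) becomes the ordinary Jarzynski equality. The following overdamped-Langevin check is a sketch of ours (illustrative parameters, Euler-Maruyama discretization, unit mobility), not the authors' simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, tau, dt, n_traj = 1.0, 2.0, 0.005, 20_000
steps = int(tau / dt)

def k(t):
    # stiffness ramped linearly from 1 to 2 over the interval [0, tau]
    return 1.0 + t / tau

# canonical initial condition: x ~ N(0, 1/(beta*k(0)))
x = rng.normal(0.0, np.sqrt(1.0 / (beta * k(0.0))), n_traj)
W = np.zeros(n_traj)
for i in range(steps):
    t = i * dt
    W += 0.5 * x**2 * (k(t + dt) - k(t))   # work done by changing the stiffness
    # overdamped Langevin step: dx = -k x dt + sqrt(2/beta) dB
    x += -k(t + dt) * x * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=n_traj)

dF = np.log(k(tau) / k(0.0)) / (2.0 * beta)  # Delta F for a harmonic trap
jarzynski = np.mean(np.exp(-beta * (W - dF)))
print(jarzynski)                             # close to 1, since lambda_S = 0 here
```

Here the forward process explores all of phase space, so no singular part arises and the average of $e^{-\beta(W-\Delta F)}$ stays near unity, while $\langle W\rangle$ exceeds $\Delta F$ because the ramp is performed at finite speed.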
If we assume that the initial probability of the original path is the canonical distribution with inverse temperature $\beta$ and choose the (otherwise arbitrary) initial probability of the time-reversed path to be the canonical distribution with the same inverse temperature $\beta$, the entropy production becomes $$\displaystyle\sigma=\beta(W-\Delta F),$$ (21) where $W$ is the work performed on the system and $\Delta F$ is the free-energy difference of the system Seifert (2012). Thus, the integral fluctuation theorem (15) reduces to $$\displaystyle\langle e^{-\beta(W-\Delta F)-I}\rangle=1-\lambda_{\rm S},$$ (22) which is the stochastic Jarzynski equality with feedback derived in Ref. Sagawa and Ueda (2010) when $\lambda_{\rm S}=0$. It is noteworthy that $\lambda_{\rm S}$ can be determined experimentally through measurement of the time-reversed process; $\lambda_{\rm S}$ is the total probability of backward paths whose corresponding forward paths have vanishing probability. When we set the initial probability distribution of the time-reversed path to the final distribution of the original path, the first term in Eq. (20) is the Shannon entropy production of the system $\Delta s$; then $$\displaystyle\sigma=\Delta s+\Delta s^{B}=:\Delta s^{\rm tot}$$ (23) is the total entropy production of both the system and the bath. In this case, the integral fluctuation theorem is $$\displaystyle\langle e^{-\Delta s^{\rm tot}-I}\rangle=1-\lambda_{\rm S}.$$ (24) On the other hand, in deterministic systems such as Hamiltonian systems, if we fix the measurement outcome $\Lambda(t)$, and hence the protocol, the probability of a certain path is the same as the probability of being on that path at a certain time.
In other words, the probability measures can be replaced as follows: $$\displaystyle\mathcal{M}(\mathcal{D}\Gamma(t))\to\mu_{t_{1}}(d\Gamma_{t_{1}}),$$ (25) $$\displaystyle\mathcal{M}^{r}_{(|\Lambda(t))}(\mathcal{D}\Gamma(t))\to\mu^{r}_{t_{2}(|\Lambda(t))}(d\Gamma_{t_{2}}),$$ (26) $$\displaystyle\sigma\to-\ln[{d\mu^{r}_{t_{2}(|\Lambda(t))}}/{d\mu_{t_{1}}}],$$ (27) where $t_{1}$ and $t_{2}$ are arbitrary times, and $\mu_{t_{1}}$ and $\mu^{r}_{t_{2}(|\Lambda(t))}$ are, respectively, the probability measure at time $t_{1}$ and the (conditional) reference probability measure at time $t_{2}$. Ordinarily, $t_{1}$ and $t_{2}$ are set to the initial time $0$ and the final time $\tau$ of a nonequilibrium process. We can obtain equalities in exactly the same forms for deterministic processes with the same assumptions and choices of the reference probability as in Eqs. (22) and (24). Numerical Simulations. We demonstrate Eq. (19) by numerical simulations when $I=0$. First, let us consider the case of $\lambda_{\rm d}=0$ and $\lambda_{\rm sc}\neq 0$, i.e., the case in which a singular continuous part exists. We perform numerical simulations for an overdamped Langevin system confined on a one-dimensional ring, where the potential consists of $n$ identical harmonic potential wells with stiffness $k(t)$ (see Fig. 1(a)). The initial distribution is set to be the local equilibrium distribution in a given well and vanishes in all the other wells. We set the canonical distribution corresponding to the final potential as the reference probability. We study a nonequilibrium process during a time interval $\tau$, in which the stiffness of the potentials is decreased from $k=K$ to $0$ at a constant rate between $t=0$ and $\tau/2$, and then increased from $k=0$ to $n^{2}K$ at a constant rate between $t=\tau/2$ and $\tau$. If $K$ is sufficiently large, the free-energy difference vanishes in this process.
Because a backward path terminates in each well with equal probability, the probability that a backward path has no corresponding forward path is $\lambda_{\rm sc}=(n-1)/n$. Then the nonequilibrium integral equality reduces to $\langle e^{-\beta W}\rangle={1}/{n},$ and the corresponding second law is $\langle W\rangle\geq k_{B}T\ln n.$ Figure 1(b) shows the distribution of work for different $n$ obtained by the numerical simulations. The value of $\langle e^{-\beta W}\rangle$ is confirmed to be $1/n$ (Fig. 1(c)). It is also verified that the averaged dissipation values are larger than the minima predicted by the inequality (Fig. 1(d)). When the initial probability is confined, diffusion into the entire phase space makes the entropy production tend to be positive. Note that this process can be regarded as the information-erasure process of a symmetric $n$-state memory. Next, let us consider the case of $\lambda_{\rm d}\neq 0$ and $\lambda_{\rm sc}=0$, i.e., the case in which a discrete part exists. We perform numerical simulations for a one-dimensional system in which a single particle moves in a single harmonic potential with stiffness $k(t)$. There is a trapping point in the system, and the distance between the point and the center of the harmonic potential is $x_{c}$. Upon reaching the trapping point, the particle is trapped with unit probability. The initial distribution is the equilibrium distribution of the harmonic potential. The stiffness of the potential is decreased from $k=K$ to $0$ at a constant rate between $t=0$ and $\tau/2$, and then increased from $0$ to $K$ at a constant rate between $t=\tau/2$ and $\tau$. Let us denote by $p_{\rm trap}$ the trapping probability in the final state and set the reference probability to the final distribution of the process.
Then the fluctuation equality (19) reduces to $\langle e^{-\Delta s^{\rm tot}}\rangle=1-p_{\rm trap},$ and the corresponding inequality is $\langle\Delta s^{\rm tot}\rangle\geq-\ln(1-p_{\rm trap}).$ Both formulae are consistent with the numerical simulations shown in Fig. 2. Conclusion. By Lebesgue’s decomposition, nonequilibrium fluctuation equalities are derived under more general conditions than the conventional ones. This generalization extends the physical applicability of the fluctuation theorems. In some cases, the inequalities derived from our equalities are stronger than the conventional second law of thermodynamics. Our equalities are verified by numerical simulations of Langevin systems. Acknowledgement. This work was supported by KAKENHI Grant No. 22340114 from the Japan Society for the Promotion of Science, a Grant-in-Aid for Scientific Research on Innovative Areas “Topological Quantum Phenomena” (KAKENHI Grant No. 22103005), and the Photon Frontier Network Program from MEXT of Japan. Y. M. thanks Yui Kuramochi and Tomohiro Shitara for fruitful discussions. K. F. acknowledges support from JSPS (Grant No. 254105). References Evans et al. (1993) D. J. Evans, E. G. D. Cohen, and G. P. Morriss, Phys. Rev. Lett. 71, 2401 (1993). Evans and Searles (1994) D. J. Evans and D. J. Searles, Phys. Rev. E 50, 1645 (1994). Gallavotti and Cohen (1995) G. Gallavotti and E. G. D. Cohen, Phys. Rev. Lett. 74, 2694 (1995). Kurchan (1998) J. Kurchan, J. Phys. A: Math. Gen. 31, 3719 (1998). Lebowitz and Spohn (1999) J. L. Lebowitz and H. Spohn, J. Stat. Phys. 95, 333 (1999). Maes (1999) C. Maes, J. Stat. Phys. 95, 367 (1999). Lepri et al. (2000) S. Lepri, L. Rondoni, and G. Benettin, J. Stat. Phys. 99, 857 (2000). Jarzynski (1997a) C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997a). Jarzynski (1997b) C. Jarzynski, Phys. Rev. E 56, 5018 (1997b). Jarzynski (2000) C. Jarzynski, J. Stat. Phys. 98, 77 (2000). Crooks (1998) G. E. Crooks, J. Stat. Phys.
90, 1481 (1998). Crooks (1999) G. E. Crooks, Phys. Rev. E 60, 2721 (1999). Crooks (2000) G. E. Crooks, Phys. Rev. E 61, 2361 (2000). Hummer and Szabo (2001) G. Hummer and A. Szabo, Proc. Natl. Acad. Sci. USA 98, 3658 (2001). Jarzynski (2004) C. Jarzynski, J. Stat. Mech.: Theor. Exp. , P09005 (2004). Kawai et al. (2007) R. Kawai, J. M. R. Parrondo,  and C. Van den Broeck, Phys. Rev. Lett. 98, 080602 (2007). Parrondo et al. (2009) J. M. R. Parrondo, C. Van den Broeck,  and R. Kawai, New. J. Phys. 11, 073008 (2009). Esposito and Van den Broeck (2011) M. Esposito and C. Van den Broeck, Europhys. Lett. 95, 40004 (2011). Sagawa and Ueda (2010) T. Sagawa and M. Ueda, Phys. Rev. Lett. 104, 090602 (2010). Sagawa (2011) T. Sagawa, J. Phys.: Conf. Ser. 297, 012015 (2011). Sagawa and Ueda (2012) T. Sagawa and M. Ueda, Phys. Rev. Lett. 109, 180602 (2012). Horowitz and Vaikuntanathan (2010) J. M. Horowitz and S. Vaikuntanathan, Phys. Rev. E 82, 061120 (2010). Morikuni and Tasaki (2011) Y. Morikuni and H. Tasaki, J. Stat. Phys. 143, 1 (2011). Seifert (2012) U. Seifert, Rep. Prog. Phys. 75, 126001 (2012). Funo et al. (2013) K. Funo, Y. Watanabe,  and M. Ueda, Phys. Rev. E 88, 052121 (2013). Liphardt et al. (2002) J. Liphardt, S. Dumont, S. B. Smith, J. Ignacio Tinoco,  and C. Bustamante, Science 296, 1832 (2002). Wang et al. (2002) G. M. Wang, E. M. Sevick, E. Mittag, D. J. Searles,  and D. J. Evans, Phys. Rev. Lett. 89, 050601 (2002). Douarche et al. (2005) F. Douarche, S. Ciliberto, A. Petrosyan,  and I. Rabbiosi, Europhys. Lett. 70, 593 (2005). Collin et al. (2005) D. Collin, F. Ritort, C. Jarzynski, S. B. Smith, J. I. Tinoco,  and C. Bustamante, Nature 437, 231 (2005). Gupta et al. (2011) A. N. Gupta, A. Vincent, K. Neupane, H. Yu, F. Wang,  and M. T. Woodside, Nature Phys. 7, 631 (2011). Toyabe et al. (2010) S. Toyabe, T. Sagawa, M. Ueda, E. Muneyuki,  and M. Sano, Nature Phys. 6, 988 (2010). Sung (2005) J. Sung, arXiv cond-mat, 0506214 (2005). Lua and Grosberg (2005) R. C. 
Lua and A. Y. Grosberg, J. Phys. Chem. B 109, 6805 (2005). Gross (2005a) D. H. E. Gross, arXiv cond-mat, 0508721 (2005a). Jarzynski (2005) C. Jarzynski, arXiv cond-mat, 0509344 (2005). Gross (2005b) D. H. E. Gross, arXiv cond-mat, 0509648 (2005b). Halmos (1974) P. R. Halmos, Measure Theory (Springer, 1974) pp. 134, 182. Bartle (1995) R. G. Bartle, The Elements of Integration and Lebesgue Measure (John Wiley & Sons Ltd., 1995) p. 88.
The fields of definition of branched Galois covers of the projective line Hilaf Hasson (Date: January 19, 2013) Abstract. In this paper I explore the structure of the fields of definition of Galois branched covers of the projective line over $\bar{\mathbb{Q}}$. The first main result states that every mere cover model has a unique minimal field of definition over which its automorphisms are defined, and goes on to describe special properties of this field. One corollary of this result is that for every $G$-Galois branched cover there is a field of definition which is Galois over its field of moduli, with Galois group a subgroup of $\operatorname{Aut}(G)$. The second main theorem states that the field obtained by adjoining to the field of moduli all of the roots of unity whose order divides some power of $|Z(G)|$ is a field of definition. By combining this result with results from an earlier paper, I prove corollaries related to the Inverse Galois Problem. For example, it allows me to prove that for every finite group $G$, there is an extension of number fields $\mathbb{Q}\subset E\subset F$ such that $F/E$ is $G$-Galois, and $E/\mathbb{Q}$ ramifies only over those primes that divide $|G|$. That is, $G$ is realizable over a field that is “close” to $\mathbb{Q}$. 1. Overview The Inverse Galois Problem asks whether every finite group $G$ is realizable as a Galois group over $\mathbb{Q}$ (or, more generally, over every number field $K$). Most attempts to solve the Inverse Galois Problem over a number field $K$ have focused on trying to solve its geometric analogue, the Regular Inverse Galois Problem. The Regular Inverse Galois Problem asks whether for every finite group $G$ there is a $G$-Galois branched cover of the projective line over $\bar{\mathbb{Q}}$ that is defined (together with its automorphisms) by polynomials with coefficients in $K$. It is well known that for every finite group $G$ there is a $G$-Galois branched cover of the projective line over $\bar{\mathbb{Q}}$.
(This is proven via transcendental methods; see Remark 2.4 for more details.) While most previous work has focused on the field of moduli (see Definition 2.5) of such covers, the focus of this paper is on the structure of their fields of definition. In Section 2 we provide an introduction to the definitions and concepts used in this paper. In Section 3 we give a bijection between mere cover models and a group-theoretic object. (See Lemma 3.1.) This allows us to prove the first main theorem of this paper in Section 4, namely Theorem 4.3. This theorem states that every mere cover model of a $G$-Galois branched cover has a unique minimal field over which its automorphisms are defined, and that this field of definition has special properties. This theorem has several noteworthy corollaries. Among them, it follows that for every $G$-Galois branched cover of $\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ there is a field of definition that is Galois over the field of moduli, with Galois group a subgroup of $\operatorname{Aut}(G)$. (See Corollary 4.5.) In particular, there is always a “small” field of definition over the field of moduli. Finally, in Section 5 we construct a special field of definition (infinite over the field of moduli) for every $G$-Galois branched cover, resulting from adjoining certain elements to the field of moduli. (See Theorem 5.1.) This, together with results from a previous paper ([10]), allows us to prove several corollaries (gathered in Corollary 5.2). For example, it allows us to prove that for every finite group $G$, there is an extension of number fields $\mathbb{Q}\subset E\subset F$ such that $F/E$ is $G$-Galois, and $E/\mathbb{Q}$ ramifies only over those primes that divide $|G|$. That is, $G$ is realizable over a field that is “close” to $\mathbb{Q}$. This paper is based in large part on portions of the author’s doctoral thesis, written at the University of Pennsylvania under the supervision of David Harbater. 2. Introduction and Definitions Notation 2.1.
Given an integral scheme $S$, we write $\kappa(S)$ for its function field. Definition 2.2. Let $K$ be a field, and let $X_{K}$ and $Y_{K}$ be connected, normal, complete curves over $K$. We say that a map $X_{K}\rightarrow Y_{K}$ of $K$-curves is a branched cover (or simply a cover) if the map is finite and generically étale. We say that a branched cover is Galois if the induced extension of function fields $\kappa(X_{K})/\kappa(Y_{K})$ is a Galois extension of fields. We sometimes refer to branched covers as mere covers. Let $G$ be a finite group. A $G$-Galois branched cover is a branched cover $X_{K}\rightarrow Y_{K}$ which is Galois, together with an isomorphism of $\operatorname{Gal}(\kappa(X_{K})/\kappa(Y_{K}))$ with $G$. Definition 2.3. Let $G$ be a finite group, and let $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ be a $G$-Galois branched cover of curves over $\bar{\mathbb{Q}}$. We say that $K\subset\bar{\mathbb{Q}}$ is a field of definition of $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ as a mere cover if it descends to a map of $K$-curves $X_{K}\rightarrow Y_{K}$. (Any such $X_{K}\rightarrow Y_{K}$ is called a $K$-model of $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$.) We say that $K$ is a field of definition as a $G$-Galois branched cover if $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ has a $K$-model that is Galois. Let $G$ be a finite group. In this paper we will be interested in $G$-Galois branched covers $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ of the projective line. Such covers have a special importance in Galois Theory. Namely, if a number field $K$ is a field of definition of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover then Hilbert’s Irreducibility Theorem ([8], Chapter 11) implies that $G$ is the Galois group of a Galois field extension of $K$. 
In particular, if for every finite group $G$ there is a $G$-Galois branched cover $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ that descends to $\mathbb{Q}$ (as a $G$-Galois branched cover), then the answer to the Inverse Galois Problem is affirmative. Remark 2.4. Let $a_{1},...,a_{r}$ be closed points of $\mathbb{P}^{1}_{\mathbb{C}}$. Riemann’s Existence Theorem (see [9], exposé XII) states that every finite topological covering space of $\mathbb{P}^{1}_{\mathbb{C}}\smallsetminus\{a_{1},...,a_{r}\}$ is defined by polynomials. It follows that there is an equivalence of categories between $G$-Galois branched covers of $\mathbb{P}^{1}_{\mathbb{C}}$ that are étale outside $\{a_{1},...,a_{r}\}$ and principal $G$-bundles over the underlying topological space. Since the (topological) fundamental group of the Riemann sphere punctured at $r$ points is free on $r-1$ generators, the punctured sphere admits a principal $G$-bundle for every finite group $G$ that is generated by $r-1$ elements. In particular, for every finite group $G$ there exists a $G$-Galois branched cover of $\mathbb{P}^{1}_{\mathbb{C}}$. In fact, if we choose $a_{1},...,a_{r}$ so that they come from closed points of $\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$, it follows from an argument of Grothendieck that the cover descends to $\bar{\mathbb{Q}}$. Therefore, for every finite group $G$ there exists a $G$-Galois branched cover of $\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$. However, since the proof of Riemann’s Existence Theorem is not constructive, very little is known about the fields of definition of these covers. Previous work on the structure of fields of definition of $G$-Galois branched covers (resp. mere covers) has concentrated on the “field of moduli”. The field of moduli is a field naturally associated to a $G$-Galois branched cover (resp. mere cover), and is the best candidate for the smallest field of definition (if one exists). Definition 2.5.
Let $G$ be a finite group, and let $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ and $X^{\prime}_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ be $G$-Galois branched covers of $Y_{\bar{\mathbb{Q}}}$. We say that they are isomorphic as mere covers if there exists an isomorphism $\eta:X_{\bar{\mathbb{Q}}}\rightarrow X^{\prime}_{\bar{\mathbb{Q}}}$ that commutes with the two covering maps to $Y_{\bar{\mathbb{Q}}}$. If $\eta$ moreover commutes with the given isomorphisms of $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{\bar{\mathbb{Q}}}))$ and $\operatorname{Gal}(\kappa(X^{\prime}_{\bar{\mathbb{Q}}})/\kappa(Y_{\bar{\mathbb{Q}}}))$ with $G$, we say that $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ and $X^{\prime}_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ are isomorphic as $G$-Galois branched covers. Let $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ be a $G$-Galois branched cover of curves over $\bar{\mathbb{Q}}$. Let $K$ be a subfield of $\bar{\mathbb{Q}}$. The field of moduli of $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover (resp. mere cover) relative to $K$ is the subfield of $\bar{\mathbb{Q}}$ fixed by those automorphisms in $\operatorname{Gal}(\bar{\mathbb{Q}}/K)$ that take the $G$-Galois branched cover (resp. mere cover) to an isomorphic copy of itself. We will use the convention that the field of moduli is always taken relative to $\mathbb{Q}$, unless otherwise stated. Let $G$ be a finite group, and let $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ be a $G$-Galois branched cover. It is clear that the field of moduli of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover (resp. mere cover) is contained in all of its fields of definition as a $G$-Galois branched cover (resp. mere cover).
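One way to see this containment (a standard descent argument, sketched here by us; it is not spelled out in the text): if $K$ is a field of definition, a $K$-model realizes the cover as a base change from $K$, so every element of $\operatorname{Gal}(\bar{\mathbb{Q}}/K)$ carries the cover to an isomorphic copy of itself.

```latex
% Sketch (our summary): the field of moduli is contained in any field of
% definition K.  Suppose X_K -> P^1_K is a K-model, so that
%   X_Qbar = X_K x_K Spec(Qbar).
% For sigma in Gal(Qbar/K), applying sigma through the second factor gives
\[
{}^{\sigma}\!\bigl(X_{K}\times_{K}\bar{\mathbb{Q}}\bigr)
  \;=\; X_{K}\times_{K,\,\sigma}\bar{\mathbb{Q}}
  \;\cong\; X_{K}\times_{K}\bar{\mathbb{Q}}
  \quad\text{over }\mathbb{P}^{1}_{\bar{\mathbb{Q}}},
\]
% so sigma fixes the isomorphism class of the cover.  The field of moduli is
% the fixed field of the group of all such sigma, hence it is contained in K.
```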
David Harbater and Kevin Coombes have proven in [3] that the field of moduli of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$, considered as a $G$-Galois branched cover (resp. mere cover), is in fact equal to the intersection of all of its fields of definition as a $G$-Galois branched cover (resp. mere cover). Furthermore, the field of moduli of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a mere cover is a field of definition as a mere cover, and therefore the unique minimal field of definition as a mere cover. It is important to note that the field of moduli of a $G$-Galois branched cover of the projective line is not necessarily a field of definition as a $G$-Galois branched cover. In other words, a $G$-Galois branched cover may not have a unique minimal field of definition. The obstruction to the field of moduli $M$ of a $G$-Galois branched cover being a field of definition (as a $G$-Galois branched cover) lies in $H^{2}(M,Z(G))$. (See [2], [7] and [5]. The reader may also wish to consult [6].) In particular, if $G$ is centerless or if $M$ has cohomological dimension $1$, it follows that the field of moduli is a field of definition. In [13], Stefan Wewers has explored this obstruction in detail. 3. Mere Cover Models and Sections Let $G$ be a finite group, and let $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ be a $G$-Galois branched cover of normal complete curves over $\bar{\mathbb{Q}}$. Let $L$ be a field of definition of $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ as a mere cover, and let $Y_{L}$ be an $L$-model of $Y_{\bar{\mathbb{Q}}}$. Let $\Omega$ be the set of mere cover models $X_{L}\rightarrow Y_{L}$ of $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ over $L$ that lie above $Y_{L}$. The goal of this section is to give a bijection between $\Omega$ and the set of sections of some epimorphism of pro-finite groups. In order to do that, we require some notation.
We have the following lattice of fields: $\kappa(X_{\bar{\mathbb{Q}}})\supset\kappa(Y_{\bar{\mathbb{Q}}})\supset\kappa(Y_{L})\supset L$ and $\kappa(Y_{\bar{\mathbb{Q}}})\supset\bar{\mathbb{Q}}\supset L$, where the extension $\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{\bar{\mathbb{Q}}})$ is Galois with group $G$. Since we assumed that $L$ is a field of definition as a mere cover, Lemma 2.4 in [1] (see also [11]) implies that $\kappa(X_{\bar{\mathbb{Q}}})$ is Galois over $\kappa(Y_{L})$. We have a short exact sequence: $$1\rightarrow G\rightarrow\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))\rightarrow\operatorname{Gal}(\kappa(Y_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))\rightarrow 1$$ Let $\operatorname{Gal}(L)$ denote the absolute Galois group $\operatorname{Gal}(\bar{\mathbb{Q}}/L)$. Let $f:\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))\twoheadrightarrow\operatorname{Gal}(L)$ be the composition of the quotient map $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))\twoheadrightarrow\operatorname{Gal}(\kappa(Y_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))$ with the isomorphism $\operatorname{Gal}(\kappa(Y_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))\xrightarrow{\,\sim\,}\operatorname{Gal}(L)$. In other words, the map $f$ takes an automorphism $\sigma$ in $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))$ to the restriction $\sigma|_{\bar{\mathbb{Q}}}$ of $\sigma$ to $\bar{\mathbb{Q}}$. We get the following short exact sequence. $$1\rightarrow G\rightarrow\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))\xrightarrow[]{f}\operatorname{Gal}(L)\rightarrow 1$$ Let $\operatorname{Sec}(f)$ denote the set of sections of $f$ in the category of pro-finite groups. Let $X_{L}\rightarrow Y_{L}$ in $\Omega$ be a mere cover model of $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$.
Note that $\kappa(X_{\bar{\mathbb{Q}}})$ is naturally isomorphic to the tensor product $\bar{\mathbb{Q}}\otimes_{L}\kappa(X_{L})$. We denote by $\omega_{X_{L}/Y_{L}}:\operatorname{Gal}(L)\rightarrow\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))$ the map taking $\sigma$ to $\sigma\otimes\operatorname{id}_{\kappa(X_{L})}$. Lemma 3.1. In the above situation, the following hold: (1) Let $\alpha:\Omega\rightarrow\operatorname{Sec}(f)$ be the map taking a mere cover model $X_{L}\rightarrow Y_{L}$ to $\omega_{X_{L}/Y_{L}}$. Then $\alpha$ is a bijection. (2) Let $X_{L}\rightarrow Y_{L}$ be a mere cover model of $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$. Then $X_{L}\rightarrow Y_{L}$ is Galois if and only if the image of $\omega_{X_{L}/Y_{L}}$ commutes with $G$. Proof. In order to prove that $\alpha$ is onto, we first prove that for every section $s\in\operatorname{Sec}(f)$, the field $L$ is algebraically closed in $\kappa(X_{\bar{\mathbb{Q}}})^{s(\operatorname{Gal}(L))}$. It is straightforward to see that the natural map $\operatorname{Ker}(f)\rightarrow\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))/s(\operatorname{Gal}(L))$ is a bijection of sets. Therefore the field $\kappa(X_{\bar{\mathbb{Q}}})^{s(\operatorname{Gal}(L))}$ has degree $|\operatorname{Ker}(f)|=|G|$ over $\kappa(Y_{L})$. This implies that $\kappa(Y_{\bar{\mathbb{Q}}})$ is linearly disjoint from $\kappa(X_{\bar{\mathbb{Q}}})^{s(\operatorname{Gal}(L))}$ over $\kappa(Y_{L})$, and therefore $L$ is algebraically closed in $\kappa(X_{\bar{\mathbb{Q}}})^{s(\operatorname{Gal}(L))}$. It follows from the above that there is a mere cover model $X_{L,s}\rightarrow Y_{L}$ that induces the field extension $\kappa(X_{\bar{\mathbb{Q}}})^{s(\operatorname{Gal}(L))}/\kappa(Y_{L})$, and that the field $\kappa(X_{\bar{\mathbb{Q}}})$ is equal to the compositum $\bar{\mathbb{Q}}\cdot\kappa(X_{L,s})$. Let $\sigma$ be an element of $\operatorname{Gal}(L)$.
Since both $s(\sigma)$ and $\omega_{X_{L,s}/Y_{L}}(\sigma)$ restrict to $\sigma$ on $\bar{\mathbb{Q}}$, and restrict to the trivial automorphism on $\kappa(X_{L,s})$, it follows that $s(\sigma)$ is equal to $\omega_{X_{L,s}/Y_{L}}(\sigma)$. In other words, $\alpha$ is onto. In order to finish the proof of Claim (1) of this lemma, it remains to prove that $\alpha$ is injective. Let $X_{L}\rightarrow Y_{L}$ be an element of $\Omega$. As we have seen above, the field extension $\kappa(X_{\bar{\mathbb{Q}}})^{\omega_{X_{L}/Y_{L}}(\operatorname{Gal}(L))}/\kappa(Y_{L})$ has degree $|G|$. It is clear by the definition of $\omega_{X_{L}/Y_{L}}$ that $\kappa(X_{L})$ is contained in $\kappa(X_{\bar{\mathbb{Q}}})^{\omega_{X_{L}/Y_{L}}(\operatorname{Gal}(L))}$. Since $\kappa(X_{L})$ also has degree $|G|$ over $\kappa(Y_{L})$, it follows that $[\kappa(X_{\bar{\mathbb{Q}}})^{\omega_{X_{L}/Y_{L}}(\operatorname{Gal}(L))}:\kappa(X_{L})]=1$, and the two fields are equal. In other words, one can recover the mere cover model $X_{L}\rightarrow Y_{L}$ from its induced section. This concludes the proof of Claim (1) of the lemma. It remains to prove Claim (2) of this lemma, i.e. that given a mere cover model $X_{L}\rightarrow Y_{L}$, the group $\omega_{X_{L}/Y_{L}}(\operatorname{Gal}(L))$ commutes with $G$ if and only if $X_{L}\rightarrow Y_{L}$ is Galois. As we have seen above, $\kappa(X_{L})$ is equal to $\kappa(X_{\bar{\mathbb{Q}}})^{\omega_{X_{L}/Y_{L}}(\operatorname{Gal}(L))}$. Therefore, by Galois theory, the cover $X_{L}\rightarrow Y_{L}$ is Galois exactly when $\omega_{X_{L}/Y_{L}}(\operatorname{Gal}(L))$ is normal in $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))$. Since $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{L}))$ is the semi-direct product of $G$ and $\omega_{X_{L}/Y_{L}}(\operatorname{Gal}(L))$, this is equivalent to $\omega_{X_{L}/Y_{L}}(\operatorname{Gal}(L))$ commuting with $G$. ∎ Remark 3.2. The construction of $\alpha$ is functorial in the following sense.
Let $E$ be an overfield of $L$ that is contained in $\bar{\mathbb{Q}}$, and let $Y_{E}$ be $Y_{L}\times_{L}E$. Let $\Omega_{E}$ be the set of mere cover models $X_{E}\rightarrow Y_{E}$ of $X_{\bar{\mathbb{Q}}}\rightarrow Y_{\bar{\mathbb{Q}}}$ lying above $Y_{E}$. Let $\alpha^{\prime}$ be the bijection between $\Omega_{E}$ and $\operatorname{Sec}(g)$, where $g:\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{E}))\twoheadrightarrow\operatorname{Gal}(E)$ is the composition of the quotient map $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/\kappa(Y_{E}))\twoheadrightarrow\operatorname{Gal}(\kappa(Y_{\bar{\mathbb{Q}}})/\kappa(Y_{E}))$ with the isomorphism $\operatorname{Gal}(\kappa(Y_{\bar{\mathbb{Q}}})/\kappa(Y_{E}))\xrightarrow{\,\sim\,}\operatorname{Gal}(E)$. Then $\alpha^{\prime}(X_{L}\times_{L}E\rightarrow Y_{E})$ is the restriction of $\alpha(X_{L}\rightarrow Y_{L})$ to $\operatorname{Gal}(E)$. 4. Minimal Fields of Definition of a Given Model The main theorem (Theorem 4.3) of this section states that every mere cover model of a $G$-Galois branched cover has a unique minimal field of definition that makes it Galois, and explores the special properties of this field. This result is somewhat surprising, since it is well known that if one does not fix the mere cover model, there may not be a unique minimal field of definition for the automorphisms. (See Remark 4.4 for further discussion.) In order to prove Theorem 4.3 we require a group-theoretic lemma (Lemma 4.2). Notation 4.1. Let $g$ and $h$ be elements in a group $G$. We use the notation ${}^{h}g$ to mean the conjugation $hgh^{-1}$. Lemma 4.2. Let $J$ and $M$ be groups, and let $I$ be a semi-direct product $J\rtimes M$. Let $N$ be $M\cap C_{I}(J)$, where $C_{I}(J)$ is the centralizer of $J$ in $I$. Then the following hold: (1) $N$ is normal in $I$. (2) Let $\gamma:M/N\rightarrow\operatorname{Aut}(J)$ be defined by taking $mN$ to the automorphism $j\mapsto\,^{m}j$.
Then $\gamma$ is well defined and injective. (3) $I/N$ is isomorphic to the semi-direct product $J\rtimes_{\gamma}(M/N)$. Proof. Since $J$ is normal in $I$, it follows that so is $C_{I}(J)$. Therefore $N$ is normal in $M$. In order to show that $N$ is normal in $I$, it suffices to prove that for every $n$ in $N$, $j$ in $J$, and $m$ in $M$, the element ${}^{jm}n$ is in $N$. Since $N$ is normal in $M$, ${}^{m}n$ is an element of $N$. Since $J$ commutes with $N$, it follows that ${}^{jm}n=\,^{j}(^{m}n)=\,^{m}n$. It is now clear that ${}^{jm}n$ is in $N$, and therefore (1) is proven. The homomorphism $\gamma$ is well defined because $N$ commutes with $J$. It remains to show that $\gamma$ is injective. Indeed, if $\gamma(mN)=\operatorname{id}$ then for every $j\in J$, we have ${}^{m}j=j$. Therefore $m$ commutes with $J$. Since $m$ is also in $M$, we conclude that it is in $N$. Therefore $mN=N$. This proves (2). It is now an easy verification that the map $I=J\rtimes M\rightarrow J\rtimes_{\gamma}(M/N)$ taking $jm$, where $j\in J$ and $m\in M$, to $(j,mN)$ is a well-defined homomorphism with kernel $N$, proving (3). ∎ We are now ready for the main theorem of this section: Theorem 4.3. Let $G$ be a finite group, and let $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ be a $G$-Galois branched cover that descends as a mere cover to a number field $L$. Let $X_{L}\rightarrow\mathbb{P}^{1}_{L}$ be a model of it over $L$, and let $\mathcal{A}$ be the set of all overfields $E$ of $L$ such that $X_{L}\times_{L}E\rightarrow\mathbb{P}^{1}_{E}$ is Galois. Then there is a field $E$ in $\mathcal{A}$ that is contained in all of the other fields in $\mathcal{A}$, and it satisfies the following properties: (1) The field extension $E/L$ is Galois, with Galois group isomorphic to a subgroup $H$ of $\operatorname{Aut}(G)$.
(2) For every $G$-Galois field extension $F/E$ coming from specializing the $G$-Galois branched cover $X_{L}\times_{L}E\rightarrow\mathbb{P}^{1}_{E}$ at an $E$-rational point, the field extension $F/L$ is Galois with Galois group isomorphic to $G\rtimes H$ (where $\operatorname{Gal}(F/E)\cong G$ is the obvious subgroup of $G\rtimes H$, and where the action of $H$ on $G$ is given by the embedding of $H$ in $\operatorname{Aut}(G)$). Proof. Let $L(x)$ be the function field of $\mathbb{P}^{1}_{L}$, where $x$ is a transcendental element. By Lemma 2.4 in [1], $\kappa(X_{\bar{\mathbb{Q}}})$ is Galois over $L(x)$. Let $s:\operatorname{Gal}(L)\rightarrow\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/L(x))$ be the section corresponding to $X_{L}\rightarrow\mathbb{P}^{1}_{L}$ via the bijection $\alpha$ from Lemma 3.1. Let $V$ be the intersection of $s(\operatorname{Gal}(L))$ with the centralizer of $G$ in $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/L(x))$. Applying Lemma 4.2 with $G$ in the role of $J$, $s(\operatorname{Gal}(L))$ in the role of $M$, $V$ in the role of $N$, and $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/L(x))$ in the role of $I$, we see that $V$ is normal in $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/L(x))$, and that $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/L(x))/V$ is isomorphic to a semi-direct product of $G$ with a subgroup of $\operatorname{Aut}(G)$. In particular, the group $V$ has finite index in $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/L(x))$, and therefore the compositum $GV$ is an open subgroup of $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/L(x))$ containing $G$. Therefore there exists a finite field extension $E$ of $L$, contained in $\bar{\mathbb{Q}}$, such that the fixed subfield of $\kappa(X_{\bar{\mathbb{Q}}})$ by $GV$ is equal to $E(x)$. Note that $\kappa(X_{L}\times_{L}E)$ is the fixed subfield of $\kappa(X_{\bar{\mathbb{Q}}})$ by $V$.
We first show that $E$ is an element of $\mathcal{A}$, and in fact the least element (i.e. $\forall E^{\prime}\in\mathcal{A}\,\,\,E\subseteq E^{\prime}$). By Lemma 3.1 and Remark 3.2, the map $X_{L}\times_{L}E\rightarrow\mathbb{P}^{1}_{E}$ is Galois because the image of the restriction of $s$ to $\operatorname{Gal}(E)$ commutes with $G$. If $E^{\prime}$ is another element of $\mathcal{A}$, then again by Lemma 3.1 and Remark 3.2 the image of the restriction of $s$ to $\operatorname{Gal}(E^{\prime})$ commutes with $G$. But this implies that $\operatorname{Gal}(\kappa(X_{\bar{\mathbb{Q}}})/E^{\prime}(x))$ is contained in $GV$. This proves that $E$ is the least element in $\mathcal{A}$. The group $\operatorname{Gal}(E/L)\cong\operatorname{Gal}(E(x)/L(x))$ is isomorphic to $s(\operatorname{Gal}(L))/V$ by the second isomorphism theorem. It follows from the above that $\operatorname{Gal}(E/L)$ embeds into $\operatorname{Aut}(G)$. This proves Claim (1) of Theorem 4.3. Claim (3) of Lemma 4.2, applied to our situation as above, implies that the field extension $\kappa(X_{L}\times_{L}E)/L$ is Galois with Galois group isomorphic to $G\rtimes H$ (where the action of $H$ on $G$ is given by the embedding of $H$ in $\operatorname{Aut}(G)$); and that furthermore, we have $\kappa(X_{L}\times_{L}E)^{G}=E(x)$. Claim (2) in Theorem 4.3 is now proven by specializing. ∎ Remark 4.4. Let $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ be a $G$-Galois branched cover with field of moduli $M$. Recall that the field $M$ is the intersection of all of the fields of definition of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover, but is not necessarily one itself. However, since $M$ contains the field of moduli of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a mere cover, it is a field of definition as a mere cover. (See Section 2.)
In light of Theorem 4.3, one can explain the failure of $M$ to be a field of definition as a $G$-Galois branched cover as the combination of two factors: (1) Theorem 4.3 gives a unique minimal field of definition as a $G$-Galois branched cover for any particular mere cover model $X_{M}\rightarrow\mathbb{P}^{1}_{M}$. However each model might give a different minimal field of definition. Therefore the non-uniqueness of a model of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ over $M$ contributes to the plurality of the minimal fields of definition. (2) If $L$ is an overfield of $M$, then there may be a mere cover model of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ over $L$ that does not descend to a mere cover model over $M$. This theorem has a number of noteworthy corollaries. Corollary 4.5. Let $G$ be a finite group, and let $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ be a $G$-Galois branched cover. Then the following hold: (1) Assume $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ descends as a mere cover to a number field $L$ (i.e., $L$ contains the field of moduli as a mere cover). Then there exists a field of definition for $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover that is Galois over $L$ with Galois group a subgroup of $\operatorname{Aut}(G)$. In particular this holds when $L$ is the field of moduli of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover. (2) Assume $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ descends as a mere cover to a number field $L$. Then there exists a subgroup $H\leq\operatorname{Aut}(G)$ such that $G\rtimes H$ is realizable as a Galois group over $L$. 
(3) Let $F$ be the field of moduli of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a mere cover, and let $M$ be the field of moduli of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover. Then $M$ is Galois over $F$ with Galois group a subquotient of $\operatorname{Aut}(G)$. Proof. Claims (1) and (2) follow immediately from Theorem 4.3. In light of Theorem 4.3, in order to prove Claim (3) it suffices to show that $M$ is Galois over $F$. Recall that $M$ is the intersection of all of the fields of definition as a $G$-Galois branched cover. It therefore suffices to prove that for every field of definition $L$ of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover, and for every $\sigma$ in $\operatorname{Gal}(\bar{\mathbb{Q}}/F)$, the field $\sigma L$ is also a field of definition as a $G$-Galois branched cover. Let $X_{L}\rightarrow\mathbb{P}^{1}_{L}$ be an $L$-model as a $G$-Galois branched cover, and let $X_{\sigma L}\rightarrow\mathbb{P}^{1}_{\sigma L}$ be its twist by $\sigma$. This cover is clearly Galois. Furthermore, note that $X_{\sigma L}\rightarrow\mathbb{P}^{1}_{\sigma L}$ is a mere cover model over $\sigma L$ of the cover $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ after it has been twisted by $\sigma$. By the definition of $F$, the cover resulting from twisting $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ by $\sigma$ is isomorphic to $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a mere cover. Therefore $\sigma L$ is a field of definition of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a mere cover, and $X_{\sigma L}\rightarrow\mathbb{P}^{1}_{\sigma L}$ is a mere cover model of this cover that is Galois. 
In other words, the field $\sigma L$ is a field of definition of $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ as a $G$-Galois branched cover, which is what we wanted to prove. ∎ Remark 4.6. Note that Claim (3) in Corollary 4.5 implies that there exists a subgroup $H\leq\operatorname{Aut}(G)$ such that $G\rtimes H$ is a Galois group over $L$, without showing that it is realizable regularly (i.e. as the Galois group of a regular extension of $L(x)$). 5. Adjoining Roots of Unity to a Field of Moduli to get a Field of Definition While Theorem 4.3 describes a general relationship between the field of moduli and fields of definition, the main theorem of this section (Theorem 5.1) describes the existence of a particular field of definition (infinite over the field of moduli) with special properties. Let $G$ be a finite group, and let $X_{\bar{\mathbb{Q}}}\rightarrow\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ be a $G$-Galois branched cover. As noted in Section 2, its field of moduli $M$ as a $G$-Galois branched cover may not be a field of definition as a $G$-Galois branched cover. However, Coombes and Harbater ([3]) have proven that the field $\cup_{n}M(\zeta_{n})$ resulting from adjoining all of the roots of unity to $M$ is a field of definition. (Here $\zeta_{n}$ is defined to be $e^{\frac{2\pi i}{n}}$.) The following is a strengthening of this result. Theorem 5.1. In the situation above, the field $\cup_{\{n|\exists m:\,n|\,|Z(G)|^{m}\}}M(\zeta_{n})$ is a field of definition. In particular, there exists a field of definition (finite over $\mathbb{Q}$) that is ramified over the field of moduli $M$ only over the primes that divide $|Z(G)|$. Proof. If $G$ is centerless, then the cover is defined over its field of moduli ([3]) and therefore the theorem follows. Otherwise $\cup_{\{n|\exists m:\,n|\,|Z(G)|^{m}\}}M(\zeta_{n})$ satisfies the hypotheses of Proposition 9 in Chapter II of [12].
We conclude that $\operatorname{cd}_{p}(\cup_{\{n|\exists m:\,n|\,|Z(G)|^{m}\}}M(\zeta_{n}))\leq 1$ for every prime $p$ that divides $|Z(G)|$. This implies that $H^{2}(\cup_{\{n|\exists m:\,n|\,|Z(G)|^{m}\}}M(\zeta_{n}),Z(G))$ is trivial. As the obstruction for this field to be a field of definition lies in this group ([6]), we are done. ∎ Combining Theorem 5.1 with results that I have proven in [10], we get the following corollaries. Corollary 5.2. Let $G$ be a finite group. Then the following hold: (1) For every positive integer $r$ there is a set $T=\{a_{1},...,a_{r}\}$ of closed points of $\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$, such that every $G$-Galois branched cover of $\mathbb{P}^{1}_{\bar{\mathbb{Q}}}$ that is ramified only over $T$, has a field of definition that is unramified (over $\mathbb{Q}$) outside of the primes dividing $|G|$. (2) For every positive integer $r$, and for every finite set $S$ of rational primes that don’t divide $|G|$, there is a choice of $\mathbb{Q}$-rational points $T=\{a_{1},...,a_{r}\}$ such that every $G$-Galois étale cover of $\mathbb{P}^{1}_{\bar{\mathbb{Q}}}\smallsetminus T$ has a field of definition that is unramified (over $\mathbb{Q}$) over the primes of $S$. (3) There is an extension of number fields $\mathbb{Q}\subset E\subset F$ such that $F/E$ is $G$-Galois, and $E/\mathbb{Q}$ ramifies only over those primes that divide $|G|$. Proof. Claims (1) and (2) of the corollary are straightforward from Theorems 9.1 and 9.6 of [10] respectively, together with Theorem 5.1. Claim (3) follows from Claim (1) by specializing. ∎ References [1] Beckmann, Sybilla. “Galois groups of fields of definition of solvable branched coverings,” Compositio Math. 66 (1988), no. 2, 121-144. [2] Belyi, G.V. “On Galois extensions of a maximal cyclotomic field,” (Russian) Izv. Akad. Nauk SSSR Ser. Mat. 43 (1979), no. 2, 267-276, 479. [3] Coombes, Kevin; Harbater, David. “Hurwitz families and arithmetic Galois groups,” Duke Math J., 52 (1985), 821-839. 
[4] Dèbes, Pierre. “Algebraic covers: field of moduli versus field of definition,” (English, French summary) Ann. Sci. École Norm. Sup. (4) 30 (1997), no. 3, 303-338. [5] Dèbes, Pierre. “Covers of $\mathbb{P}^{1}$ over the $p$-adics.” Recent developments in the inverse Galois problem (Seattle, WA, 1993), 217-238, Contemp. Math., 186 (1995), Amer. Math. Soc., Providence, RI. [6] Dèbes, Pierre. “Descent theory for algebraic covers,” Arithmetic fundamental groups and noncommutative algebra (Berkeley, CA, 1999), 3-25, Proc. Sympos. Pure Math., 70 (2002), Amer. Math. Soc., Providence, RI. [7] Dèbes, Pierre. “Groupes de Galois sur K(T),” (French) Sém. Théor. Nombres Bordeaux (2) 2 (1990), no. 2, 229–243. [8] Fried, Michael; Jarden, Moshe. “Field Arithmetic,” third edition, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics], 11. Springer-Verlag, Berlin (2008). [9] Grothendieck, Alexander. “Séminaire de Géométrie Algébrique,” mimeographed notes (1960-61), I.H.E.S., Paris, no. 1. [10] Hasson, Hilaf. “The prime-to-$p$ part of étale fundamental groups of curves,” 2012 preprint, available at arXiv:1209.3693. [11] Matzat, B. Heinrich. “Konstruktion von Zahl- und Funktionenkörpern mit vorgegebener Galoisgruppe,” J. reine u. angew. Math. 349 (1984), 179-220. [12] Serre, Jean-Pierre. “Cohomologie Galoisienne” fifth edition, Lecture Notes in Mathematics (1994), no. 5, Springer-Verlag, Berlin. [13] Wewers, Stefan. “Field of moduli and field of definition of Galois covers,” Arithmetic Fundamental Groups and Noncommutative Algebra (1999), 221-245, edited by M. D. Fried and Y. Ihara, Proc. Sympos. Pure Math. 70, Amer. Math. Soc., Providence, RI, 2002. Current author information: Hilaf Hasson: Department of Mathematics, Pennsylvania State University, State College, PA 16802, USA email: [email protected]
Reliable validation of Reinforcement Learning Benchmarks Matthias Müller-Brockhausen, Aske Plaat, Mike Preuss Leiden Institute of Advanced Computer Science (LIACS) Leiden University The Netherlands {m.f.t.muller-brockhausen, a.plaat, m.preuss}@liacs.leidenuniv.nl Abstract Reinforcement Learning (RL) is one of the most dynamic research areas in Game AI and AI as a whole, and a wide variety of games are used as its prominent test problems. However, it is subject to the replicability crisis that currently affects most algorithmic AI research. Benchmarking in Reinforcement Learning could be improved through verifiable results. There are numerous benchmark environments whose scores are used to compare different algorithms, such as Atari. Nevertheless, reviewers must trust that figures represent truthful values, as it is difficult to reproduce an exact training curve. We propose improving this situation by providing access to the original experimental data to validate study results. To that end, we rely on the concept of minimal traces. These allow re-simulation of action sequences in deterministic RL environments and, in turn, enable reviewers to verify, re-use, and manually inspect experimental results without needing large compute clusters. It also permits validation of presented reward graphs, an inspection of individual episodes, and re-use of result data (baselines) for proper comparison in follow-up papers. We offer plug-and-play code that works with Gym so that our measures fit well in the existing RL and reproducibility eco-system. Our approach is freely available, easy to use, and adds minimal overhead, as minimal traces allow a data compression ratio of up to $\approx 10^{4}:1$ (94 GB to 8 MB for Atari Pong) compared to a regular MDP trace used in offline RL datasets. The paper presents proof-of-concept results for a variety of games. 
Index Terms: Verifiable, Benchmarks, Reinforcement Learning, Reproducibility I Introduction Reproducibility is a key component of peer-reviewed science. Reviewers are supposed to read, understand, and ideally be able to reproduce an experiment to ensure its factual correctness. This concerns not only computer science but any science, as without easy reproducibility, fraud is difficult to detect[1]. Especially for benchmarks and competitions, where fraudulent submissions can poison the rankings of a leaderboard, it is important to have tools for validation. Benchmarking AI algorithms has become increasingly important and is now a driving force behind algorithm development. In Game AI, competitions have long been an important part of scientific conferences, and game-based benchmarks are increasingly spreading to core AI conferences, e.g., with the MineRL competition at NeurIPS [2, 3]. Whereas the overall aims of algorithm development are often to improve generality and especially sample efficiency, the employed methods are still relatively slow and thus need very long runs, which makes reproducibility difficult. In theory, computers are excellent for reproducibility. One can run the same code, bit for bit, on many different machines. This may be simplified down to issuing a single command, based on technologies such as Docker[4]. However, non-deterministic methods (e.g., evolutionary algorithms[5], deep neural networks[6]) hamper the reproducibility of experiments. Another growing problem is the availability of the computing resources that would be needed to replicate results. The tremendous successes of AlphaStar [7] and Dota 2[8] are prominent examples. The large computing clusters they relied on are unavailable to most researchers for running any type of replication experiment.
Furthermore, it becomes increasingly important to also consider sustainability issues, as the big cluster experiments are energy inefficient. Such considerations have been voiced, e.g., for Natural Language Processing (NLP)[9] or complex games such as Go (AlphaZero)[10]. It would thus make more sense to avoid re-computing everything and instead improve the inspection of existing log data. Other issues that stand in the way of exact replication include insufficient reporting[11] or not open-sourcing code[12]. If replication itself is unavailable for some experiments, the next best thing could be verifiability, namely the ability to inspect, check, and replay parts of the experiment. However, even this is difficult in terms of handling the huge amounts of data that are produced during the big experiments. In order to achieve it, we would need some way of highly compressing this data, which points us directly to the concept of minimal traces. The research question we are going to tackle in this work is thus: How may a researcher verify the reinforcement learning experiment of other researchers, especially the display of results in figures and tables, based on minimal traces? We offer the following contributions: • We explain how minimal traces (Section II) allow reproducible verification of results such as benchmark leaderboards (Section III-A). Moreover, we empirically show that they enable a compression ratio of up to $10^{4}:1$ for offline RL datasets (Section IV-A). • We provide plug-and-play code[13] to collect minimal traces that integrates with the RL ecosystem (Gym[14]). • We provide an agenda for further research on how to obtain verifiable RL experiment results using minimal traces (Section V). Although there are many factors at play with reproducibility, our work focuses solely on methodological improvements for reinforcement learning research.
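The headline compression figure from the abstract (94 GB of regular traces down to 8 MB of minimal traces for Atari Pong) can be sanity-checked with one line of arithmetic. This back-of-the-envelope calculation is ours, assuming binary prefixes:

```python
# 94 GB of regular MDP trace data vs. 8 MB of minimal traces (Atari Pong).
full_trace_bytes = 94 * 1024**3
minimal_trace_bytes = 8 * 1024**2
ratio = full_trace_bytes / minimal_trace_bytes
print(ratio)  # 12032.0, i.e. on the order of 10^4 : 1
```

The result is consistent with the reported order of magnitude.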
After briefly introducing the concept of minimal traces (Section II), we first look at suggestions that have already been made for improving reproducibility or experimental methodology in Section III. Based on that state, we identify improvable points (Section III-A) and suggest concrete, actionable steps (Section V). We then outline how these steps can be applied in practice with the code that we provide (Section V-A). To enable compatibility, we ensured that it properly interfaces with the existing RL ecosystem. II Background: Minimal Traces For the following sections, the concept of minimal traces is important; we therefore review its origins and known uses here. Reinforcement learning optimizes sequential decision-making processes, which are modeled as so-called Markov decision processes (MDPs). An MDP consists of a tuple ($S$, $A$, $P_{a}(s,s^{\prime})$, $R_{a}(s,s^{\prime})$): the state space, the action space, the probability of going from state $s$ to $s^{\prime}$, and the reward for going from $s$ to $s^{\prime}$[15]. A trace, also commonly referred to as an episode within an RL environment[16], is a list of tuples that contain the start state $s_{t}$, the chosen action $a$, the received reward $r$, and the resulting state $s_{t+1}$. Traces are sufficient to train an RL algorithm offline / off-policy, and they are also shared by related work as the dataset basis for training[17]. Staying true to reinforcement learning's tradition of drawing terminology from psychology[18], we found a concept that fits our problem, namely minimal traces. Its goal seems related: “Predicting the Past from Minimal Traces”[19]. We want reviewers to reliably predict (verify) the past (experiment results) using minimal traces. To minimize the space a trace occupies, we assume that an MDP, given the same initial state $S_{0}$ and action sequence $\alpha$, will yield the exact same trace.
In consequence, the probabilities $P_{a}(s,s^{\prime})$ need to be fixed based on an initial configuration $s_{init}$. Fixing these probabilities yields what is commonly referred to as a deterministic MDP[20]. Deterministic MDPs reduce the data required for the re-simulation of minimal traces to ($s_{init}$, $s_{0}$, $\alpha_{t_{0}}...\alpha_{t_{n}}$). Minimal traces fit reinforcement learning problems well, as the action set $A$ is usually smaller than the state space $S$. Hence it makes more sense to save only the actions if the observations can be reconstructed afterward. While the added re-simulation cost might seem impractical for verification purposes, our experiments show that it can require less than 0.7% of the original RL training time (Section IV-A). III Related Work While the matters of reproducibility, replicability, and verifiability are relevant to all scientific fields[1], we will focus on reinforcement learning here. In reinforcement learning, previous works suggest guidelines on how to design and report a well-reproducible experiment[11]. Conferences such as NeurIPS are moving towards implementing these guidelines and ask reviewers to fill in a questionnaire about reproducibility. This has led to more and more sharing of code, and researchers are encouraged to do so [12]. Moreover, reviewers found it easier to judge submissions that included code. Most of the reviewer guidelines focus on the paper itself, which is the well-established scientific tradition that was practiced already before computers were invented. However, many researchers in computer science now believe that for experimental works, we should go one step further and exploit the computer's theoretically perfect and exact ability to verify the factual correctness of reported results and submitted code / data (Section III-A). This so-called “Verification of Artifacts”[5] is not a new concept.
For example, tools that make policy training as reproducible as possible are readily available (e.g., Garage[21]). For experiments that do not use tools such as Garage, there are also clear guidelines on how to properly compare to the baseline of an algorithm[22]. Moreover, researchers have suggested saving the final values used in graphs so that the figures can be verified[23]. This theoretically works for any figure. However, how can we be sure that the figure itself is correct[24]? Games are an interesting playground for RL AI. GVGAI is a prominent example[25], as it is used for competitions that benchmark individual algorithm submissions from both planning[26] and learning, such as RL[27]. For these competitions, the validity of results is guaranteed by having the event host execute the submitted code. This is made possible through a pre-defined agent interface that allows interaction with arbitrary games. In other reinforcement learning environments, we also have a pre-defined agent interface (see the center of Figure 1), but re-running policy training is not at all reproducible. Reproducibility is not guaranteed in GVGAI either, as the applied algorithms, such as MCTS[28], include randomness that is not fixed. While GVGAI remedies this by means of multiple runs, research has shown that averaging over runs does not prevent inconsistencies in reproduction attempts[11]. Whereas minimal traces do not alleviate this problem directly, they do lift the requirement of having to run the agent code oneself. III-A Why minimal traces? Reproducibility in the RL ecosystem is an evolving matter. Environments usually behave deterministically if seeded, as, e.g., CartPole or MountainCar in Gym[14]. Famous problems that did not yet satisfy this requirement have been converted (e.g., Robotics[29, 30]). Moreover, besides theory-focused guidelines[11], practical tools like Garage exist to enable reproducible policy training[21].
Nonetheless, RL lacks verifiability of experimental results, such as benchmark submissions. Reviewers should be able to verify reported results in figures or tables with minimal effort. MDPs already come with the concept of traces, which are basically a full log of all data (observation, action, reward)[16] and from which figure data could also be constructed (see Section II). These traces are used for offline reinforcement learning. However, offline RL datasets can become quite large (3TB for the experiments in one paper [17]) and, as a consequence, are hosted by a proprietary central authority. To remedy this problem, we collect as little data as possible and avoid requiring a central hosting authority, regardless of dataset size (Section IV-C). In Figure 1, we visualize the relationship between offline RL datasets, minimal traces, and the data that Garage[21] collects. Thus, we suggest collecting an initial environment configuration corresponding to $s_{init}$ from the minimal trace (Section II). In Gym, $s_{init}$ contains all values that influence an environment's initialization (reset) and transition (step) functions. Table II contains an example of $s_{init}$ for CartPole. Moreover, the actions that an agent takes are saved. A consequence is that our method is limited to fully deterministic environments. By properly seeding non-deterministic algorithms' random number generation, non-deterministic problems can also be used. We later show that minimal traces allow a compression ratio of up to 77 for regular and up to 12559 for image-based environments (Section IV-A). Our focus on the environment instead of the policy training is well motivated. The data collected by tools such as Garage[21] cannot overcome one specific problem: the training of neural networks includes certain operations that render it not reproducible if the host machine, host operating system, or software library version changes [31].
Unless researchers have the exact same machine and software configuration, reproducibility becomes difficult. Software configurations are easily reproducible via Docker[4]. However, if the same code produces different results on other machines, then reproducibility becomes practically impossible. We see minimal traces as an add-on rather than a replacement for current reproducibility approaches. They are a workaround because reproducible policy training, such as Garage[21] attempts to offer, does not yet work. Should training become fully deterministic, minimal traces might become obsolete. Nevertheless, they would still offer the advantage of being computationally cheaper than training, as environment execution requires fewer computational resources than updating the weights of a large neural network. Moreover, environment re-simulation is cheap enough to potentially run in the web browser, enabling interactive tools, which is difficult to achieve with the computing clusters needed for training. Whereas minimal traces are limited to reinforcement learning, their concept transfers naturally to video games. TrackMania is a great example because its physics is fully deterministic. For its leaderboard, replays are saved. A replay contains the name of a level and all actions taken by the player. Leaderboard submissions are verified for validity[32]. Moreover, replays allow detecting tool-assisted runs in pure human-play leaderboards[33]. More complex games, such as Counter-Strike: Global Offensive, Dota 2, and StarCraft, support replays as well. They are also minimal in this sense. However, their multitude of possible inputs is difficult to map directly to a reinforcement learning action space and hence to minimal traces. The same applies to the Unreal Engine, which has a general ReplaySystem that can replay any data [34] but does not automatically allow replaying arbitrary scenes deterministically based purely on agent actions.
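The leaderboard verification idea used by TrackMania can be sketched as follows. The deterministic score() function is a hypothetical stand-in for a game's physics; a real host would re-simulate the full game from the replayed inputs.

```python
# Sketch of leaderboard-style replay verification: the host re-simulates the
# submitted action sequence and checks the claimed score against the outcome.
# score() is a hypothetical, deterministic stand-in for a game engine.

def score(actions):
    # Deterministic game outcome: here, simply the count of "good" actions.
    return sum(1 for a in actions if a == 1)

def verify_submission(actions, claimed_score):
    """Accept a leaderboard entry only if re-simulation reproduces the claim."""
    return score(actions) == claimed_score

replay = [1, 0, 1, 1]
assert verify_submission(replay, 3)        # honest submission passes
assert not verify_submission(replay, 99)   # inflated score is rejected
```

Because the outcome is a deterministic function of the inputs, the replay itself is sufficient evidence; no trust in the submitter is required.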
Nevertheless, the larger trove of data in full game replays is still useful for detecting cheats such as aimbots [35]. Furthermore, it can be used to estimate player skill [36]. As games are used in competitions, it could also be applied for validation there. For example, two of the games mentioned above as supporting replays featured competitions at the IEEE Conference on Games 2021: Dota 2[37] and StarCraft[38]. Moreover, SpaceInvaders, which we test via Gym, is also the subject of a CoG competition[39]. Other games seem suitable as well, such as GVGAI[26], Snakes[40], or Bot Bowl[41]. These games might even manage to be directly compatible with minimal traces; GVGAI, for example, already provides an RL-Gym environment for its learning track[27]. Lastly, our data collection suggestion harmonizes with the concept of Procedural Content Generation (PCG), as the seed can be saved for deterministic reproduction. ProcGen has already shown that current RL algorithms struggle to generalize [42]. However, PCG applied correctly already enables better generalization [43] and more fine-grained training curricula [44], and is thus especially important in benchmarking transfer in reinforcement learning [45]. IV Method We describe our approach in detail along three different aspects. Minimal traces achieve a high compression ratio compared to regular traces; they can compress up to $10^{4}:1$ (Section IV-A). Next, we detail how minimal traces enable re-usable visualizations (Section IV-B). Moreover, we suggest the usage of a distributed file system, the InterPlanetary File System (IPFS)[46], for long-term storage (Section IV-C). Based on these insights, we propose a reproducibility agenda (Section V). IV-A Data compression Minimal traces are a way to store experiment signatures efficiently. We take a closer look at efficient storage for different games. Please refer to Table I for the size comparisons.
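The size comparisons in Table I follow from a simple per-step accounting, which can be sketched as follows. The byte counts and the fixed $s_{init}$ overhead are illustrative assumptions, not the measured values from Table I.

```python
# Back-of-the-envelope estimate of the compression ratio between a full trace
# (observation + action + reward per step) and a minimal trace (actions only,
# plus a fixed-size s_init record). All byte counts are illustrative
# assumptions, not the measured values reported in Table I.

def compression_ratio(obs_bytes, action_bytes, reward_bytes, steps, s_init_bytes=256):
    full = steps * (obs_bytes + action_bytes + reward_bytes)
    minimal = steps * action_bytes + s_init_bytes
    return full / minimal

STEPS = 10**6

# Image observation (210 * 160 * 3 = 100800 bytes) with a 1-byte discrete action:
image_ratio = compression_ratio(obs_bytes=100800, action_bytes=1, reward_bytes=8, steps=STEPS)

# Small observation (a few floats) with the same 1-byte action:
small_ratio = compression_ratio(obs_bytes=32, action_bytes=1, reward_bytes=8, steps=STEPS)

# The gap between observation size |S| and action size |A| drives the ratio:
assert image_ratio > small_ratio > 1
```

This accounting also explains why BipedalWalker compresses poorly: its action record is itself several floats per step, shrinking the gap between the two trace variants.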
Reinforcement learning experiment logs can take up a considerable amount of space. For example, a single work offering an offline RL trace provides 3TB of data [17]. In order to compare the amount of space needed for traces vs. minimal traces (Section II), we chose different environments with growing observation spaces: Taxi with one integer, CartPole with four floats, BipedalWalker with 24 floats, and Atari games using either the 128-byte RAM or the produced 210 x 160 pixel RGB image of 100800 bytes. More interestingly, whereas most environments have only a single-integer action space $A$, BipedalWalker has four continuous actions, increasing the space required for minimal traces. We train on each environment for 1 million steps using PPO[47] from Stable-Baselines3[48], using the default hyperparameters for both environment and agent. During training, we collect the full MDP-trace (observation, action, and reward) and the minimal MDP-trace (env-params and actions) for each environment. The results in Table I are striking: minimal traces enable a compression ratio of 12559.36 for image-based environments with a single action, such as Atari Pong. For the 128-byte RAM observation, the compression ratio falls to 99.58 for Pong, reducing 767.18 MB to 7.7 MB. Nevertheless, environments with small observation spaces such as CartPole still allow a compression ratio of 53, reducing a 452.25 MB trace down to 8.5 MB. However, BipedalWalker underlines that the potentially saved space depends solely on the size difference between the observation space $S$ and the action space $A$: for BipedalWalker, 24 numbers in the observation space vs. 4 in the action space. Hence a compression ratio of only 2.9 reduces the 530.72 MB trace down to 181.79 MB. An intriguing discovery we made is a varying size of the re-simulated trace of BipedalWalker, which should not occur. We therefore performed an experimental analysis and found that 5 in 100 re-simulations yielded a different trace due to rounding errors.
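The determinism check behind this "5 in 100" experiment can be sketched as follows. Here simulate() is a deterministic stand-in for re-simulating one episode, so the failure count is zero by construction; a flaky environment would yield a nonzero count.

```python
# Sketch of the determinism check: re-simulate the same minimal trace
# repeatedly and count how often the reconstructed trace differs from a
# reference run. simulate() is a hypothetical, fully deterministic stand-in
# for an environment's episode re-simulation.

def simulate(s_init, actions):
    state = s_init
    trace = []
    for a in actions:
        state = state * 0.99 + a   # deterministic floating-point update
        trace.append(state)
    return trace

def failed_resimulations(s_init, actions, runs=100):
    """Count re-simulations whose trace deviates from the reference trace."""
    reference = simulate(s_init, actions)
    return sum(simulate(s_init, actions) != reference for _ in range(runs))

failures = failed_resimulations(1.0, [0.5, -0.25, 0.125])
assert failures == 0  # the stand-in is fully deterministic
```

Applied to a real environment, a nonzero failure count flags exactly the kind of rounding-induced non-determinism observed for BipedalWalker.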
Consequently, BipedalWalker is not yet fully deterministic. We repeated this experiment for all other tested environments in Table I and found that they are fully deterministic, yielding 0 failed re-simulations. We also measured the time to re-simulate a full MDP-trace from a minimal MDP-trace vs. the training time; these measurements are also shown in Table I. Note that timing-related data varies between runs and across hardware. All experiments were run on a machine with an Intel Xeon Silver 4214, an Nvidia GeForce RTX 3090, and 256GB of RAM. The cost of re-simulation depends on the complexity and observation space of the environment. For Atari, re-simulation time varies per observation type and game. In the worst case, it takes 22.39% of the training time for Breakout-ram-v0, and in the best case 6.03% for Pong-v0. Computationally less intensive environments, such as BipedalWalker-v3 or CartPole, can lower this further to less than 1% (0.68%) of the training time. The main reason re-simulation outperforms training is that many episode traces can be re-simulated in parallel. IV-B Re-Usable Visualizations Khetarpal et al. [23] suggest saving the values that are used to plot figures and providing the plotting code. This improves reproducibility, provided that the numbers are the actual results of the experiments; minimal traces enable re-simulating the data to verify that this is the case. Moreover, minimal traces allow looking at different values than those presented in a paper. For example, if a paper reported only the average reward, one could extract the median reward from the re-simulated data instead. Alternatively, if reward per episode was reported, one could instead look at reward per step. To increase the re-usability and accessibility of figures, we suggest using Vega[49], a JSON-based graph description language. Its main advantage is that the plotted values are embedded inside the human-readable JSON data.
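As an illustration of such an embedded-data description, the following sketch builds a Vega-Lite-style specification in Python. The field names follow the Vega-Lite schema; the reward numbers are invented for illustration and are not results from the paper.

```python
import json

# Sketch of a Vega-Lite-style figure description with the plotted values
# embedded in the JSON itself. The reward numbers are made up for illustration.
episode_rewards = [10.0, 12.5, 30.0, 55.5, 80.0]

spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "description": "Reward per episode (illustrative data)",
    "data": {"values": [{"episode": i, "reward": r}
                        for i, r in enumerate(episode_rewards)]},
    "mark": "line",
    "encoding": {
        "x": {"field": "episode", "type": "quantitative"},
        "y": {"field": "reward", "type": "quantitative"},
    },
}

# Because the data travels with the figure, a later paper can recover the
# plotted values directly from the JSON instead of re-running the experiment:
recovered = [row["reward"] for row in json.loads(json.dumps(spec))["data"]["values"]]
assert recovered == episode_rewards
```

Any tool that parses JSON can extract the baseline curve from such a file, which is the basis for the cross-paper comparison discussed next.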
Hence, one could extract a baseline algorithm's reward line from the Vega description of an original paper and then compare it to a newly trained variation without re-simulation. Of course, Vega still allows exporting a scalable graphic for usage inside of the paper (e.g., Figure 2). In this case, guidelines on designing a proper baseline[22] are not that important anymore, because the actual data from other papers can be compared directly (assuming the comparison uses the same benchmark environment with the same configuration). Another advantage of Vega[49] is the ability to load graph definition files in a browser. This brings various advantages, such as inspecting the exact value of an individual coordinate, zooming, scrolling along the axes, and changing colors that are not color-blind friendly. IV-C Data availability The data we suggest collecting (Section III-A) can require substantial storage. Whereas some hosting authorities allow researchers to upload large datasets for long-term availability, the two main problems with a central authority are that the data could be tampered with or changed and that a single point of failure governs availability. The InterPlanetary File System[46] lets us address these issues. It behaves similarly to BitTorrent: users who have downloaded a file also share it, eliminating the need for a central server to store all files. Moreover, the data is hashed, so it cannot be altered afterward by the original author, the hosting authority, or a redistributing user. A hash can represent a full folder with many subfiles that all have individual hashes / IDs as well. So the log file itself and the results produced with it become verifiable. IPFS [46] also advertises its usefulness for scientific purposes, and we believe our minimal traces are well suited for it. Moreover, IPFS can co-exist with and even integrate into a hosting service (such as Zenodo[50]) that could mirror the data it provides via IPFS.
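The tamper-evidence property rests on content addressing: a file's identifier is a hash of its bytes, so any modification changes the identifier. IPFS uses its own multihash-based content IDs; the plain SHA-256 sketch below only illustrates the principle.

```python
import hashlib

# Sketch of content addressing: any edit to a stored trace changes its ID,
# so tampering is detectable by anyone who holds the original ID.
# (IPFS uses multihash-based CIDs; SHA-256 is used here for illustration.)

def content_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

trace = b'{"s_init": {"seed": 42}, "actions": [0, 1, 1, 0]}'
original_id = content_id(trace)
tampered_id = content_id(trace.replace(b"[0, 1, 1, 0]", b"[1, 1, 1, 1]"))

assert original_id != tampered_id   # the edit is detectable
assert content_id(trace) == original_id  # hashing is deterministic
```

A published content ID therefore pins the exact bytes of a minimal trace, independent of who happens to serve them.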
Such mirroring would allow Zenodo to use IPFS as a Content Delivery Network (CDN) for the actual files listed in its database. Finally, conferences or journals could maintain public lists of relevant dataset IDs for published papers that should be pinned. Pinning in IPFS can be seen as the equivalent of mirroring data: a pinned ID is held in local storage permanently (until unpinned) and is thus available to others requesting it. V Reproducibility Agenda Minimal traces are a useful step on the path to reproducibility, allowing efficient verification of experiments. To further improve the reproducibility of the field and to put our work in perspective, we suggest the following agenda. We propose that researchers experimenting on deterministic environments (ideally using PCG to improve generalizability) do the following: 1. Use the verifiability tool to collect minimal traces. 2. Provide source code, including a runnable container (e.g., Docker), to allow verification of results and figures. 3. Generate figures using a common visualization grammar such as Vega[49], facilitating re-use of figure data. 4. Utilize IPFS[46] to ensure the data's integrity and long-term availability. V-A Agenda applied to CartPole To illustrate our suggested agenda in practice, we show how following it looks for the proverbial CartPole environment. In Table II, we have prepared the environment hyperparameters ($s_{init}$) relevant for each CartPole episode. Garage[21] (Figure 1) also collects these parameters but assumes policy training to be fully reproducible, which it currently is not (Section III-A). In Listing 2, we show that recording minimal traces merely requires wrapping the Gym environment. The code in this listing is not pseudo-code but the actual main file of our runnable example. The last function generates a Vega[49] (Section IV-B) JSON description (Listing 1) that can be used to visualize a graph (Figure 2).
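In the spirit of Listing 2 (whose exact API we do not reproduce here), the recording idea can be sketched as follows. The wrapper class, the dummy environment, and the JSON-based serialization are assumptions of this sketch, not the package's actual implementation, which serializes to CBOR[51] and compresses with zlib[52].

```python
import json
import zlib

# Illustrative recording wrapper: only s_init and the actions are stored.
# The reset/step interface mimics Gym; names and serialization are assumed.

class RecordMinimalTrace:
    def __init__(self, env, s_init):
        self.env = env
        self.s_init = s_init
        self.episodes = []  # list of {"s_init": ..., "actions": [...]}

    def reset(self):
        self.episodes.append({"s_init": self.s_init, "actions": []})
        return self.env.reset()

    def step(self, action):
        self.episodes[-1]["actions"].append(action)  # store the action only
        return self.env.step(action)

    def save(self):
        # Stand-in serialization: JSON + zlib (the real package uses CBOR + zlib).
        return zlib.compress(json.dumps(self.episodes).encode())

class DummyEnv:
    """Trivial stand-in environment with a Gym-like interface."""
    def reset(self):
        return 0
    def step(self, action):
        return (action, 1.0, False, {})

env = RecordMinimalTrace(DummyEnv(), s_init={"seed": 42})
env.reset()
for a in [0, 1, 1]:
    env.step(a)
blob = env.save()
```

Loading is the inverse: decompress, parse, and feed the actions back through a freshly configured environment to rebuild the full trace.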
The generated Vega description can also be pasted into the Vega editor (https://vega.github.io/editor/). Table I is also based on re-simulated data from minimal traces. New projects wanting to record minimal traces need to wrap their Gym environment with the record function from the vgym folder[13] before training. Then, the minimal traces are saved into a sub-folder of the current working directory. A minimal trace file is serialized into the Concise Binary Object Representation (CBOR)[51] and compressed with zlib[52]. Note that the minimal trace sizes in Table I reflect the sizes before serialization and compression. The utility function load_replay in our package abstracts these serialization details away from the user. After loading a file, either all episodes can be re-simulated into regular traces in parallel using resimulate_parallel, or an individual episode can be re-simulated using episode_to_trace. Based on the re-simulated data, figures and tables should be generated, and the code as well as the recorded minimal traces published alongside the submission. Our code repository contains a Dockerfile, the recorded / used minimal traces, the example code behind the imported functions of Listing 2, and a readme detailing how to re-simulate the shown graph (Figure 2) as well as the values for the size comparison table (Table I). All data is hosted on IPFS[46] here[13]. VI Discussion If properly applied, the concept of minimal traces allows for reproducible and verifiable reinforcement learning experiments. Moreover, it enables re-usability of result data, more accessible ways to view the data interactively, inspection of individual episodes, and storage savings for offline RL datasets. A limitation of minimal traces is that the source of the action sequence cannot properly be verified. From the obtained data alone, one cannot exclude sophisticated ways of cheating that the authors may have applied.
Whereas a result figure or table may be fully reproduced with our data, one cannot know whether the agent created the supplied action sequences. The data could be handcrafted, or generated by a heuristic or any other method that is not listed in the reviewed paper. Although minimal traces do not yet achieve end-to-end verifiable research experiments, they are an important step towards verifiable experiment figures: they show that the values portrayed in figures or tables have truly been achieved within the tested environment and were not randomly generated. In combination with host-executed competition benchmarks such as GVGAI, they enable post hoc analysis of individual agent performances. VI-A Environmental Impact In times when big research experiments use large amounts of electricity[53], it is important to note the possible environmental impact of our suggestion. Hard-disk space seems cheaper than the high wattage of training an agent on a GPU. Hard-disk space instead of computation is also used to “greenwash” novel blockchains such as Chia[54]. Nevertheless, nothing electronic comes free: the hardware still needs to be built, and energy is needed to power the servers hosting the datasets we suggest collecting. In consequence, our suggestion will increase the global environmental impact of RL. However, we see no more minimal and less intrusive way to ensure full verifiability of RL research. Thanks to the suggestion to share the data via IPFS[46], the servers would not need to stay online 24/7. There could be specific times of data availability during which the server is switched on and the data accessible, while it is still ensured that the data has not been tampered with. Moreover, the distributed nature of IPFS might make a central storage server obsolete, given enough participants hosting parts of the datasets. VII Conclusion In reinforcement learning, reproducibility of experimental results and verification of research claims are important challenges.
Our work introduces a methodology for verifying experimental results, building on the concept of minimal traces. We provide a full implementation of this method and have tested it on small and larger reinforcement learning experiments. For typical experiments, minimal traces enable compression ratios of up to 12559, reducing an offline RL trace of Atari Pong from 94 GB down to 8 MB. Moreover, re-simulating this minimal trace back to its original size takes 6% of the original training time. As our example shows, the collection of minimal traces requires only a wrapper around a Gym environment. While minimal traces are limited to deterministic reinforcement learning problems, the idea transfers well to (video) games. TrackMania already applies leaderboard verification via replays, showing that benchmarks and competitions could adopt similar concepts. For future work, we envision a web-based tool that allows a reviewer to re-simulate and verify results without any software setup on their local machine. To that end, we provide a mock-up (Figure 3) with a functionality description. This could be implemented through either a Rust Gym port of the current approach or a Python interpreter that works properly in a WebAssembly environment together with Gym. References [1] National Academies of Sciences, Engineering, and Medicine and others, Reproducibility and replicability in science.   National Academies Press, 2019. [2] W. H. Guss, C. Codel, K. Hofmann, B. Houghton, N. S. Kuno, S. Milani, S. Mohanty, D. P. Liebana, R. Salakhutdinov, N. Topin, M. Veloso, and P. Wang, “The minerl competition on sample efficient reinforcement learning using human priors,” in Thirty-third Conference on Neural Information Processing Systems (NeurIPS) Competition track, December 2019. [Online]. Available: https://www.microsoft.com/en-us/research/publication/the-minerl-competition-on-sample-efficient-reinforcement-learning-using-human-priors/ [3] W. H. Guss, M. Y. Castro, S. Devlin, B.
Houghton, N. S. Kuno, C. Loomis, S. Milani, S. Mohanty, K. Nakata, R. Salakhutdinov, J. Schulman, S. Shiroshita, N. Topin, A. Ummadisingu, and O. Vinyals, “The minerl 2020 competition on sample efficient reinforcement learning using human priors,” January 2021. [Online]. Available: https://www.microsoft.com/en-us/research/publication/the-minerl-2020-competition-on-sample-efficient-reinforcement-learning-using-human-priors/ [4] C. Boettiger, “An introduction to docker for reproducible research,” ACM SIGOPS Operating Systems Review, vol. 49, no. 1, pp. 71–79, 2015. [5] M. López-Ibáñez, J. Branke, and L. Paquete, “Reproducibility in evolutionary computation,” ACM Transactions on Evolutionary Learning and Optimization, vol. 1, no. 4, pp. 1–21, 2021. [6] C. Liu, C. Gao, X. Xia, D. Lo, J. Grundy, and X. Yang, “On the replicability and reproducibility of deep learning in software engineering,” arXiv preprint arXiv:2006.14244, 2020. [7] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, J. Oh, D. Horgan, M. Kroiss, I. Danihelka, A. Huang, L. Sifre, T. Cai, J. P. Agapiou, M. Jaderberg, A. S. Vezhnevets, R. Leblond, T. Pohlen, V. Dalibard, D. Budden, Y. Sulsky, J. Molloy, T. L. Paine, Ç. Gülçehre, Z. Wang, T. Pfaff, Y. Wu, R. Ring, D. Yogatama, D. Wünsch, K. McKinney, O. Smith, T. Schaul, T. P. Lillicrap, K. Kavukcuoglu, D. Hassabis, C. Apps, and D. Silver, “Grandmaster level in starcraft II using multi-agent reinforcement learning,” Nat., vol. 575, no. 7782, pp. 350–354, 2019. [Online]. Available: https://doi.org/10.1038/s41586-019-1724-z [8] C. Berner, G. Brockman, B. Chan, V. Cheung, P. Debiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, R. Józefowicz, S. Gray, C. Olsson, J. Pachocki, M. Petrov, H. P. de Oliveira Pinto, J. Raiman, T. Salimans, J. Schlatter, J. Schneider, S. Sidor, I. Sutskever, J. Tang, F. Wolski, and S. 
Zhang, “Dota 2 with large scale deep reinforcement learning,” CoRR, vol. abs/1912.06680, 2019. [Online]. Available: http://arxiv.org/abs/1912.06680 [9] E. Strubell, A. Ganesh, and A. McCallum, “Energy and policy considerations for deep learning in nlp,” 2019. [10] R. Schwartz, J. Dodge, N. A. Smith, and O. Etzioni, “Green ai,” Communications of the ACM, vol. 63, no. 12, pp. 54–63, 2020. [11] P. Henderson, R. Islam, P. Bachman, J. Pineau, D. Precup, and D. Meger, “Deep reinforcement learning that matters,” in Proceedings of the AAAI conference on artificial intelligence, vol. 32, no. 1, 2018. [12] J. Pineau, P. Vincent-Lamarre, K. Sinha, V. Larivière, A. Beygelzimer, F. d’Alché Buc, E. Fox, and H. Larochelle, “Improving reproducibility in machine learning research (a report from the neurips 2019 reproducibility program),” Journal of Machine Learning Research, vol. 22, 2021. [13] A. paper authors. (2022-02-27) Code and trace repository. [Online]. Available: https://ipfs.io/ipfs/QmRDg98PaQEdvrj6tcDr5eQRLCPc2YovphMd7uHNJet1Ug [14] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “Openai gym,” 2016. [15] R. BELLMAN, “A markovian decision process,” Journal of Mathematics and Mechanics, vol. 6, no. 5, pp. 679–684, 1957. [Online]. Available: http://www.jstor.org/stable/24900506 [16] A. Plaat, Deep Reinforcement Learning.   Springer Nature, 2021. [17] R. Agarwal, D. Schuurmans, and M. Norouzi, “An optimistic perspective on offline reinforcement learning,” in International Conference on Machine Learning, 2020. [18] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. Second Edition.   MIT press, 2018. [19] M. Werning, “Predicting the past from minimal traces: Episodic memory and its distinction from imagination and preservation,” Review of philosophy and psychology, vol. 11, no. 2, pp. 301–333, 2020. [20] I. Post and Y. 
Ye, “The simplex method is strongly polynomial for deterministic markov decision processes,” Mathematics of Operations Research, vol. 40, no. 4, pp. 859–868, 2015. [21] T. garage contributors, “Garage: A toolkit for reproducible reinforcement learning research,” https://github.com/rlworkgroup/garage, 2019. [22] R. Islam, P. Henderson, M. Gomrokchi, and D. Precup, “Reproducibility of benchmarked deep reinforcement learning tasks for continuous control,” 2017. [23] K. Khetarpal, Z. Ahmed, A. Cianflone, R. Islam, and J. Pineau, “Re-evaluate: Reproducibility in evaluating reinforcement learning algorithms,” 2018. [24] D. Eisner, “Reproducibility of science: Fraud, impact factors and carelessness,” Journal of molecular and cellular cardiology, vol. 114, pp. 364–368, 2018. [25] D. Pérez-Liébana, S. Samothrakis, J. Togelius, T. Schaul, and S. M. Lucas, “Analyzing the robustness of general video game playing agents,” in 2016 IEEE Conference on Computational Intelligence and Games (CIG).   IEEE, 2016, pp. 1–8. [26] R. D. Gaina, A. Couëtoux, D. J. Soemers, M. H. Winands, T. Vodopivec, F. Kirchgeßner, J. Liu, S. M. Lucas, and D. Perez-Liebana, “The 2016 two-player gvgai competition,” IEEE Transactions on Games, vol. 10, no. 2, pp. 209–220, 2017. [27] R. R. Torrado, P. Bontrager, J. Togelius, J. Liu, and D. Perez-Liebana, “Deep reinforcement learning for general video game ai,” in 2018 IEEE Conference on Computational Intelligence and Games (CIG).   IEEE, 2018, pp. 1–8. [28] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton, “A survey of monte carlo tree search methods,” IEEE Transactions on Computational Intelligence and AI in games, vol. 4, no. 1, pp. 1–43, 2012. [29] D. Ferigo, S. Traversaro, G. Metta, and D. Pucci, “Gym-ignition: Reproducible robotic simulations for reinforcement learning,” in 2020 IEEE/SICE International Symposium on System Integration (SII).   IEEE, 2020, pp. 885–890. 
[30] P. Aumjaud, D. McAuliffe, F. J. R. Lera, and P. Cardiff, “rl_reach: Reproducible reinforcement learning experiments for robotic reaching tasks,” Software Impacts, vol. 8, p. 100061, 2021. [31] PyTorch. (2021-12-15) Reproducibility - pytorch documentation. [Online]. Available: https://pytorch.org/docs/stable/notes/randomness.html [32] A. Donadigo. (2021-12-15) Extracting inputs from replays. [Online]. Available: https://donadigo.com/tminterface/input-extraction [33] ——. (2021-12-15) Tmx replay investigation. [Online]. Available: https://donadigo.com/tmx1 [34] E. Games. (2021-12-15) Replay system - unreal engine documentation. [Online]. Available: https://docs.unrealengine.com/latest/INT/Engine/Replay/ [35] K. Maberry, S. Paustian, and S. Bakir, “Using an artificial neural network to detect aim assistance in counter-strike: Global offensive,” DOI, vol. 10, no. 1235, pp. 1–4. [36] P. Xenopoulos, H. Doraiswamy, and C. Silva, “Valuing player actions in counter-strike: Global offensive,” in 2020 IEEE International Conference on Big Data (Big Data).   IEEE, 2020, pp. 1283–1292. [37] J. M. Font and T. Mahlmann, “Dota 2 bot competition,” IEEE Transactions on Games, vol. 11, no. 3, pp. 285–289, 2018. [38] S. S. Farooq, I.-S. Oh, M.-J. Kim, and K. J. Kim, “Starcraft ai competition report,” AI Magazine, vol. 37, no. 2, pp. 102–107, 2016. [39] J. A. Brown, L. J. P. de Araujo, and A. Grichshenko, “Ai space invaders 2021 competition,” 2021. [40] ——, “Snakes ai competition 2020 and 2021 report,” 2021. [41] N. Justesen, P. D. Moore, L. M. Uth, J. Togelius, C. Jakobsen, and S. Risi, “Blood bowl: A new board game challenge and competition for ai,” in 2019 IEEE Conference on Games (COG).   IEEE, 2019. [42] K. Cobbe, C. Hesse, J. Hilton, and J. Schulman, “Leveraging procedural generation to benchmark reinforcement learning,” arXiv preprint arXiv:1912.01588, 2019. [43] S. Risi and J. 
Togelius, “Increasing generality in machine learning through procedural content generation,” Nature Machine Intelligence, vol. 2, no. 8, pp. 428–436, 2020. [44] M. C. Green, B. Sergent, P. Shandilya, and V. Kumar, “Evolutionarily-curated curriculum learning for deep reinforcement learning agents,” 2019. [45] M. Müller-Brockhausen, M. Preuss, and A. Plaat, “Procedural content generation: Better benchmarks for transfer reinforcement learning,” in 2021 IEEE Conference on Games (CoG), Copenhagen, Denmark, August 17-20, 2021.   IEEE, 2021, pp. 1–8. [Online]. Available: https://doi.org/10.1109/CoG52621.2021.9619000 [46] S. Muralidharan and H. Ko, “An interplanetary file system (ipfs) based iot framework,” in 2019 IEEE International Conference on Consumer Electronics (ICCE), 2019, pp. 1–2. [47] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017. [48] A. Raffin, A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann, “Stable-baselines3: Reliable reinforcement learning implementations,” Journal of Machine Learning Research, vol. 22, no. 268, pp. 1–8, 2021. [Online]. Available: http://jmlr.org/papers/v22/20-1364.html [49] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer, “Vega-lite: A grammar of interactive graphics,” IEEE transactions on visualization and computer graphics, vol. 23, no. 1, pp. 341–350, 2016. [50] European Organization For Nuclear Research and OpenAIRE, “Zenodo,” 2013. [Online]. Available: https://www.zenodo.org/ [51] C. Bormann and P. Hoffman, “Concise binary object representation (cbor),” RFC 7049, DOI 10.17487/RFC7049, Oct. 2013. [52] P. Deutsch and J.-L. Gailly, “Zlib compressed data format specification version 3.3,” RFC 1950, May 1996. [53] D. Patterson, J. Gonzalez, Q. Le, C. Liang, L.-M. Munguia, D. Rothchild, D. So, M. Texier, and J.
Dean, “Carbon emissions and large neural network training,” 2021. [54] B. Cohen and K. Pietrzak, “The chia network blockchain,” 2019.
Merger signatures in the dynamics of star-forming gas Chao-Ling Hung (洪肇伶)1,2,3, Christopher C. Hayward4,2, Howard A. Smith2, Matthew L. N. Ashby2, Lauranne Lanz5, Juan R. Martínez-Galarza2, D. B. Sanders1, Andreas Zezas6,2 1 Institute for Astronomy, University of Hawaii, 2680 Woodlawn Dr., Honolulu, HI 96822, USA; [email protected] 2 Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138, USA 3 Department of Astronomy, the University of Texas at Austin, 2515 Speedway Blvd., Austin, TX 78712, USA; Harlan J. Smith Fellow 4 TAPIR, Mailcode 350-17, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA 5 Infrared Processing and Analysis Center, Caltech 100-22, Pasadena, CA 91125, USA 6 University of Crete, Physics Department & Institute of Theoretical & Computational Physics, GR-710 03 Heraklion, Crete, Greece Abstract The recent advent of integral field spectrographs and millimeter interferometers has revealed the internal dynamics of many hundreds of star-forming galaxies. Spatially resolved kinematics have been used to determine the dynamical status of star-forming galaxies with ambiguous morphologies and to constrain the importance of galaxy interactions during the assembly of galaxies. However, measuring the importance of interactions or galaxy merger rates requires knowledge of the systematics in kinematic diagnostics and of the time during which merger indicators remain visible. We analyze the dynamics of star-forming gas in a set of binary merger hydrodynamic simulations with stellar mass ratios of 1:1 and 1:4.
We find that the evolution of kinematic asymmetries traced by star-forming gas mirrors that of morphological asymmetries derived from mock optical images, in that both merger indicators show the largest deviation from isolated disks during the strong interaction phases. Based on a series of simulations with various initial disk orientations, orbital parameters, gas fractions, and mass ratios, we find that the merger signatures are visible for $\sim 0.2-0.4$ Gyr with kinematic merger indicators, but can remain visible approximately twice as long for equal-mass mergers of massive gas-rich disk galaxies designed to be analogs of $z\sim 2-3$ submillimeter galaxies. Merger signatures are most apparent after the second passage and before the black holes coalesce, but in some cases they persist up to several hundred Myr after coalescence. About $20-60\%$ of the simulated galaxies are not identified as mergers during the strong interaction phase, implying that galaxies undergoing a violent merging process do not necessarily exhibit highly asymmetric kinematics in their star-forming gas. The lack of identifiable merger signatures in this population can lead to an underestimation of merger abundances in star-forming galaxies, and including such galaxies in samples of star-forming disks may bias measurements of disk properties such as the intrinsic velocity dispersion. Subject headings: galaxies: interactions $-$ galaxies: kinematics and dynamics $-$ galaxies: structure 1. Introduction The identification of galaxy mergers/interacting systems is critical to understanding the role of interactions in the growth and assembly of galaxies.
Specifically, what is the relative importance of smooth or continuous accretion versus discrete merger events in galaxy evolution (e.g., Hopkins et al., 2006; Genel et al., 2008; Dekel et al., 2009b), and what roles do mergers play in triggering star formation and nuclear activity across cosmic time (e.g., Engel et al., 2010; Hayward et al., 2013; Hung et al., 2013; Casey et al., 2014)? The vast majority of such constraints have been derived from large optical imaging surveys via measurements of galaxy pair fractions and identification of merger-induced disturbed structures (e.g., Lin et al., 2004; Conselice et al., 2008; Lotz et al., 2011; Man et al., 2012). The observed abundance of mergers can then be used to test the predictions of galaxy evolution models after proper conversions from merger fractions to galaxy merger rates (Kitzbichler & White, 2008; Lotz et al., 2008; Hopkins et al., 2010). Measurements of merger fractions or the merger/disk nature of individual galaxies based on optical morphologies can be ambiguous. Disturbed morphological structures like tidal tails and bridges are indisputable evidence of galaxy interactions (Toomre & Toomre, 1972; Barnes & Hernquist, 1992; Kim et al., 2002; Rothberg & Joseph, 2004), but these features often fade away at large distances due to surface brightness dimming (e.g., Hibbard & Vacca, 1997; Overzier et al., 2010; Hung et al., 2014). Some galaxy mergers exhibit highly clumpy, irregular star-forming regions that are visible at rest-frame UV and optical wavelengths (e.g., Miralles-Caballero et al., 2011; Petty et al., 2014).
However, these features are also commonly seen in clumpy star-forming galaxies at intermediate ($z\sim 0.1$) and high ($z\gtrsim 1$) redshifts (e.g., Elmegreen et al., 2004, 2007; Fisher et al., 2014; Guo et al., 2015), in which their star-forming clumps are formed through gravitational instabilities in highly unstable, turbulent disks (Bournaud et al., 2007; Dekel et al., 2009a; Ceverino et al., 2010). Spectral lines from stars, neutral gas, molecular gas, and ionized gas of nearby galaxies (e.g., de Zeeuw et al., 2002; Helfer et al., 2003; Dicaire et al., 2008; Walter et al., 2008) trace galaxy dynamics out to different radii (e.g., Yun et al., 1994; Aalto et al., 1999), and in some cases they may reveal the evolution and interaction history of galaxies (Davis et al., 2011). Emission lines from molecular gas and ionized gas are the most common tracers for a large sample of resolved galaxy kinematics out to $z\sim 3$ (e.g., Tacconi et al., 2006; Förster Schreiber et al., 2009; Daddi et al., 2010; Gnerucci et al., 2011), for which the gas traces the star-forming fuel and massive star forming regions. In fact, kinematic structures traced by molecular and ionized gas have been used to reveal the dynamical status of galaxies independent of their visible morphologies (e.g., Swinbank et al., 2006; Tacconi et al., 2006, and a review by Glazebrook, 2013); that is, whether galaxies display rotational patterns as expected for disks (e.g., Daigle et al., 2006; Dicaire et al., 2008) or complicated kinematics as expected for mergers (Mihos & Bothun, 1998; Colina et al., 2005). Recent large integral field spectrograph (IFS) surveys such as CALIFA (Husemann et al., 2013), SAMI (Cortese et al., 2014), MaNGA (Law et al., 2015), and KMOS${}^{\rm 3D}$ (Wisnioski et al., 2015) have significantly increased the sample of star-forming galaxies with resolved kinematics. 
These observations are able to constrain merger abundances across a wide range of galaxy luminosities, stellar masses ($M_{*}$), and star formation rates (SFR), and complement the studies based on optical imaging surveys. However, several complications attend kinematic diagnostics. It has been demonstrated in both simulations and observations that gaseous disks are able to survive the interaction between gas-rich systems or to reform through accreting gas after the two nuclei merge (e.g., Downes & Solomon, 1998; Barnes, 2002; Springel & Hernquist, 2005; Hopkins et al., 2009; Ueda et al., 2014). These reformed disks can have $M_{*}$, SFR, and gas mass comparable to some of the $z\sim 1-3$ star-forming disks (e.g., Robertson & Bullock, 2008). Therefore, disk-like kinematics do not guarantee that the evolution history was quiescent. Secondly, even during the earlier strong interaction stages, a small but significant fraction of mergers lack the complicated kinematics expected from their disturbed morphology (e.g., Mihos & Bothun, 1998; Bellocchi et al., 2013). The contamination rate of mis-identified mergers/disks can be up to 50% when classifying galaxies based solely on their resolved kinematics, and the results depend strongly on the interaction stage and the choice of kinematic classification scheme (Hung et al., 2015). Comparisons between simulated and observed interacting galaxies have been used as a powerful tool to constrain detailed properties of mergers such as the initial encounter conditions (Barnes & Hibbard, 2009; Privon et al., 2013). Although this detailed scrutiny for a large sample of galaxies is currently unattainable, mock observations based on hydrodynamic simulations can be used to study how merger indicators evolve along interaction sequences of different mass ratios, masses, and gas fractions (e.g., Lotz et al., 2008, 2010a, 2010b; Snyder et al., 2015).
These studies also enable empirical calibrations of galaxy merger rates based on various morphological merger indicators. Extensive work has been done exploring the kinematics of interacting galaxies and merger remnants using stellar populations as dynamical tracers (e.g., Bendo & Barnes, 2000; Jesseit et al., 2007; Naab et al., 2014; Stickley & Canalizo, 2014), and some studies focus on the dynamics probed by the star-forming gas (e.g., Robertson & Bullock, 2008; Narayanan et al., 2009; Ceverino et al., 2012; Kassin et al., 2014). However, to date, there is a paucity of studies that systematically constrain the time intervals during which kinematic merger indicators are visible. In this paper, we examine the evolution of kinematic merger indicators using a set of hydrodynamic simulations of binary mergers described in Section 2. Specifically, we include merger simulations based on progenitor disks that are representative of local SDSS galaxies and $z\sim 2-3$ submillimeter galaxies (Lanz et al., 2014; Hayward et al., 2013). These simulations use the widely employed SPH code gadget (Springel, 2005), and their implementation of star formation and feedback is similar to that of many previous works (e.g., Cox et al., 2006b; Robertson et al., 2006a). In Section 3, we detail the realization of mock kinematic maps and optical images. The merger indicators used in this paper are described in Section 4. We report our results in Section 5 and discuss their implications in Section 6. We list our conclusions in Section 7. 2. Simulated Galaxy Mergers We use a set of hydrodynamic simulations of galaxy mergers and isolated galaxies performed by Lanz et al. (2014, hereafter L14, also see , ).
These simulations are carried out using gadget-3 (Springel, 2005), which computes gravitational interactions via a hierarchical tree method (Barnes & Hut, 1986) and gas dynamics via smoothed-particle hydrodynamics111Although the traditional formulation of SPH can be inaccurate in some fluid mixing processes (e.g., Agertz et al., 2007), the type of idealized merger simulations performed here is insensitive to these limitations (Hayward et al., 2014a). (SPH; Gingold & Monaghan, 1977; Lucy, 1977). Each model galaxy contains a disk with stars and gas, a stellar bulge, a dark matter halo, and a supermassive black hole. The gravitational softening lengths of the baryonic and dark matter particles are 100 pc and 400 pc, respectively. Star formation and supernova feedback are implemented via the effective equation of state (EOS) method of the sub-resolution interstellar medium (ISM) model (Springel & Hernquist, 2003), and only gas particles with density higher than a threshold of $n\sim 0.1$ cm${}^{-3}$ are assumed to follow the effective EOS of this model. The instantaneous SFR of each gas particle is determined using a volumetric generalization of the Kennicutt-Schmidt relation, SFR $\propto\rho_{\rm gas}^{N}$ (Schmidt, 1959; Kennicutt, 1998), with $N=1.5$ (Springel & Hernquist, 2003). Stellar winds are not included in these simulations. The L14 simulations also include the black hole accretion and AGN feedback models from Springel et al. (2005). The simulations in L14 include a suite of galaxy mergers from four progenitor disks (named M0, M1, M2, and M3 in L14) that are representative of galaxies from the Sloan Digital Sky Survey (SDSS). These progenitor disks are similar to G0, G1, G2, G3 in Jonsson et al. (2006) and Cox et al. (2008) except that G0-G3 have slightly higher gas mass and $M_{*}$ than M0-M3, and no supermassive black hole is included in G0-G3.
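As an illustration of the star formation prescription described above, the following is a minimal sketch of a volumetric Kennicutt-Schmidt law with a density threshold. This is not the gadget-3 implementation; the normalization `norm` and the function name are illustrative assumptions, with only the threshold ($n\sim 0.1$ cm$^{-3}$) and exponent ($N=1.5$) taken from the text.

```python
import numpy as np

def instantaneous_sfr(rho_gas, rho_thresh=0.1, norm=1.0, n_exp=1.5):
    """Volumetric Kennicutt-Schmidt sketch: SFR ~ rho^1.5 above a
    density threshold; particles below the threshold form no stars.
    `norm` is an arbitrary illustrative efficiency prefactor."""
    rho = np.asarray(rho_gas, dtype=float)
    return np.where(rho > rho_thresh, norm * rho**n_exp, 0.0)
```

For example, a particle at four times unit density forms stars at $4^{1.5}=8$ times the unit rate, while one below the threshold contributes nothing.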
The disk components in M0, M1, M2, and M3 have central metallicities of 0.34, 0.5, 0.7, and 1.6 $Z_{\odot}$, respectively, and follow metallicity gradients between $-0.04$ and $-0.06$ dex/kpc. Each gas particle undergoes self-enrichment at a rate determined by its SFR. The new star particles formed during the simulations are characterized by a formation time and by the metallicity of their parent gas particles. In this paper, we focus our morphological and kinematic analyses on the two most massive mergers from L14 (M3M3e and M3M2e, where “e” refers to one of the non-special disk orientations defined in Cox et al., 2006a). The simulated mergers M3M2e and M3M3e have total $M_{*}$ of $5.4\times 10^{10}$ $M_{\odot}$ and $8.44\times 10^{10}$ $M_{\odot}$, respectively, which are typical for IFS surveys at $z\sim 1-3$ (e.g., Förster Schreiber et al., 2009; Wisnioski et al., 2015). Details of the initial masses, numbers of SPH particles, gas fractions, disk orientations, and orbital parameters of M3M2e and M3M3e are summarized in Table 1. In addition to the M3M3e and M3M2e simulations from L14, we perform variations on these two simulations to explore the possible impacts of numerical resolution, gas fraction, orbital parameters, and the choice of initial disk orientations. We perform two high-resolution runs with particle numbers 5 and 10 times higher than the runs in L14. The gas-rich versions of M3M3e and M3M2e are carried out by doubling the initial gas fraction of the progenitor disks. Motivated by the cosmological simulations of dark matter halos in Khochfar & Burkert (2006), we test three different sets of orbital parameters222Khochfar & Burkert (2006) show that almost half of major mergers with mass ratio $\leq 4$ have near-parabolic orbits ($e\sim 1$) and the rest are dominated by bound orbits ($e<1$). In these three additional runs, we choose two near-parabolic orbits ($e=0.95$) with different $r_{p}$ and a third orbit with smaller $e=0.8$.
However, we note that Khochfar & Burkert (2006) use a dark matter-only simulation, and the orbital parameters of the dark matter halos may not necessarily correspond to the orbital parameters of the galaxies in the halos. with various eccentricity ($e$) and pericentric distance ($r_{p}$). Finally, we carry out additional M3M3 and M3M2 simulations with four special initial disk orientations defined in Cox et al. (2006a). Detailed parameters used in these variations are summarized in Table 1. Finally, to address how well kinematic analyses based on binary merger simulations (L14) apply to $z\sim 1-3$ star-forming galaxies, we include two additional simulations from Hayward et al. (2013, hereafter H13) as a test case. The b6b6e and b6b5e simulations from H13 have stellar mass ratios of 1:1 and 1:4, and the progenitor disks are scaled to $z=3$ based on the method described in Robertson et al. (2006b). These two simulations are more gas rich than the M3M2e and M3M3e simulations in L14 (Table 1), but have physical properties ($M_{*}$, SFR, submillimeter flux densities, etc.) typical for $z\sim 2-3$ submillimeter galaxies (SMGs, Hayward et al., 2011, 2012; Michałowski et al., 2012). The gravitational softening length of dark matter is 200 pc in the H13 simulations. Otherwise, the b6b6e and b6b5e simulations were configured identically to those in L14. 3. Galaxy Morphology and Dynamics 3.1. Broadband Images We use the three-dimensional Monte Carlo radiative transfer code sunrise (Jonsson, 2006; Jonsson et al., 2010) to produce mock images of the simulated galaxies described in Section 2. sunrise determines the emission from stars and AGNs in the hydrodynamic simulations with SED templates (Leitherer et al., 1999; Hopkins et al., 2007) and then performs radiative transfer calculations to account for the absorption, scattering, and re-emission by dust. We adopt the same dust model as L14 (the Milky Way-type dust model of Draine & Li (2007)). 
L14 discuss two possible treatments of the sub-resolution ISM structure during radiative transfer (i.e., whether the dust mass is derived based on the diffuse gas content in the Springel & Hernquist, 2003 model or on the total gas content). Here we adopt the convention that the dust mass is based on the diffuse gas content, which can better reproduce the SEDs of the observed interacting galaxies (L14). We derive optical morphological properties using the mock SDSS $i^{\prime}$-band ($\lambda_{\rm eff}=7439$Å, $\Delta\lambda=1044$Å) images produced from sunrise. The rest-frame optical is an ideal window to trace the disturbed structures induced by galaxy mergers because the emission is dominated by old stellar populations instead of the clumpy star-forming regions (e.g., Abraham et al., 2003; Conselice, 2003; Lotz et al., 2004), and it is available for a large sample of star-forming galaxies from $z\sim 0$ out to $z\sim 1-3$ (e.g., van der Wel et al., 2012; Kartaltepe et al., 2014). The $\sim 7000-8000$Å regime is not severely affected by dust extinction except for extreme cases like ultraluminous and luminous infrared galaxies ((U)LIRGs; Haan et al., 2011; Hayward et al., 2012). No significant impacts from dust extinction are seen in our morphological analysis based on the $i^{\prime}$-band images throughout the M3M2e and M3M3e simulations. We generate mock $i^{\prime}$-band images at 100 Myr intervals throughout the interaction sequence, and decrease the sampling steps to 20 Myr intervals during the strong interaction phase. For each snapshot, we obtain mock images from seven viewing angles sampled in a regular grid in spherical coordinates. We then convert the mock images from sunrise to images comparable to real observations. First, we place our simulated galaxies at a distance of 100 Mpc, at which the plate scale of SDSS images ($0\farcs 396$) corresponds to a physical size of $\sim 200$ pc.
The observed number of counts is determined according to the surface brightness of galaxies at the assumed distance. We then convolve the sunrise images with the typical point spread function (PSF) of SDSS $i^{\prime}$ observations ($\sim 1\farcs 3$), and add a noise frame extracted from the blank region in real SDSS $i^{\prime}$ images. Examples of processed mock images from the M3M2e and M3M3e simulations are shown in the left panels of Figures 1 and 2, respectively. 3.2. Kinematic Maps As discussed in Section 1, emission lines from molecular gas and ionized gas are the most common tracers for a large sample of resolved galaxy kinematics at $z\sim 0-3$. Therefore, we focus our analysis on the kinematic properties derived from star-forming gas, and we discuss possible impacts using different dynamical tracers in Section 5.4. We construct the kinematic maps based on the dynamical information from the SPH particles. We select the subset of gas particles that have SFR $>0$ as a proxy of star-forming gas (where the gas density must be higher than a threshold of $n\sim 0.1$ cm${}^{-3}$) in the simulated galaxies. In this simple approximation, possible impacts from dust are not included. To convert particle-based information to kinematic maps, we make projected velocity and velocity dispersion maps from seven viewing angles that are consistent with sunrise images. In each viewing angle, we bin the gas particles into equally-spaced 500 pc $\times$ 500 pc bins (500 pc corresponds to $\sim$1″ at the distance of 100 Mpc). The velocity and velocity dispersion in each pixel are then derived from the median and standard deviation of the gas particles weighted according to their SFR. Finally, we adopt adaptive binning (Cappellari & Copin, 2003) for the kinematic maps to ensure that each region (combined from $\geq$ 1 pixel) contains at least 10 star-forming gas particles. 
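The particle-to-map procedure above (SFR-weighted median velocity and weighted dispersion on a regular grid) can be sketched as follows. This is an illustrative reimplementation, not the authors' pipeline: it omits the adaptive Voronoi binning step (Cappellari & Copin, 2003), and the function names and map extent are our own assumptions.

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values`, with each value weighted by `weights`."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def kinematic_maps(x, y, v_los, sfr, pix=0.5, half_size=10.0):
    """Project star-forming gas particles (positions in kpc) onto a
    regular grid of pixel size `pix` kpc, and compute the SFR-weighted
    line-of-sight velocity and dispersion per pixel. Empty pixels are
    NaN. Only particles with SFR > 0 are used, as in the text."""
    n = int(2 * half_size / pix)
    vmap = np.full((n, n), np.nan)
    smap = np.full((n, n), np.nan)
    ix = ((x + half_size) / pix).astype(int)
    iy = ((y + half_size) / pix).astype(int)
    keep = (sfr > 0) & (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    for i, j in {(a, b) for a, b in zip(ix[keep], iy[keep])}:
        sel = keep & (ix == i) & (iy == j)
        vmap[j, i] = weighted_median(v_los[sel], sfr[sel])
        mean = np.average(v_los[sel], weights=sfr[sel])
        smap[j, i] = np.sqrt(np.average((v_los[sel] - mean)**2,
                                        weights=sfr[sel]))
    return vmap, smap
```

The SFR weighting mimics the fact that observed line emission traces star-forming material, so bright star-forming clumps dominate the velocity measured in each pixel.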
Examples of velocity and velocity dispersion maps from the M3M2e and M3M3e simulations are shown in the middle and right panels of Figures 1 and 2, respectively. 4. Merger Indicators 4.1. Kinematic Properties A common kinematic diagnostic of disks and mergers is the complexity of the galaxies’ resolved kinematic properties, i.e., whether galaxies show ordered rotational patterns as expected for disk-like galaxies or chaotic patterns as expected for interacting systems. Such identifications have been done via kinematic asymmetries (Shapiro et al., 2008; Bellocchi et al., 2012), visual inspections (e.g., Flores et al., 2006; Epinat et al., 2012), and visual comparisons with galaxy merger simulations (e.g., Hammer et al., 2009). In this paper, we quantify the degree to which galaxies’ kinematic maps deviate from those of a rotating disk using the kinematic asymmetries defined by Shapiro et al. (2008), which are based on the higher-order harmonic coefficients of the velocity and velocity dispersion distributions derived from the kinemetry analysis (Krajnović et al., 2006). The line-of-sight velocity map or velocity dispersion map $K(a,\psi)$ can be divided into a series of elliptical rings (with semi-major axis $a$) as velocity or velocity dispersion profiles. These profiles can then be described as an expansion of $N+1$ harmonic terms: $$K(a,\psi)=A_{0}(a)+\sum\limits_{n=1}^{N}A_{n}(a)\sin{n\psi}+B_{n}(a)\cos{n\psi},$$ (1) where $\psi$ is the azimuthal angle. Shapiro et al.
(2008) quantify the level of deviation from an ideal disk by defining asymmetry measures of the velocity and velocity dispersion fields as: $$v_{asym}=\left<\frac{\sum\limits_{n=2}^{5}k_{n,v}/4}{B_{1,v}}\right>_{r},\quad\sigma_{asym}=\left<\frac{\sum\limits_{n=1}^{5}k_{n,\sigma}/5}{B_{1,v}}\right>_{r},$$ (2) where $k_{n}=(A_{n}^{2}+B_{n}^{2})^{1/2}$, the subscripts $v$ and $\sigma$ refer to the quantities corresponding to the velocity and velocity dispersion maps, and $r$ refers to the average over all radii. Finally, the kinematic asymmetry $K_{asym}$ is defined as $(v^{2}_{asym}+\sigma^{2}_{asym})^{1/2}$. We measure $K_{asym}$ of all simulations from the velocity and velocity dispersion maps described in Section 3.2 using the IDL routine Kinemetry555http://davor.krajnovic.org/idl/ (Krajnović et al., 2006). We adopt the gas density peak position as the center of the kinematic maps, and then use Kinemetry to find the best-fit ellipse with position angle (PA) and flattening factor ($Q=1-e$) at each radius step until more than 25% of the data points along an ellipse are not present (the COVER parameter=0.75). The choice of this COVER parameter typically leads to an outer radius of $\sim$10 kpc during early interaction stages and $\sim$5 kpc during strong interaction and post-coalescence phases. The evolution of $K_{asym}$ along the interaction sequence of the M3M2e and M3M3e simulations is shown in the bottom panels of Figures 3 and 4. In general, only one galaxy in the interacting system (the one with higher central density) is included in the calculation when the two galaxies are well-separated ($\gtrsim 10$ kpc), and the evolution of $K_{asym}$ does not necessarily follow the same galaxy during the early interaction phases. We also derive $K_{asym}$ in two additional cases following each galaxy in the interacting system, in which the centers of the kinematic maps are chosen at the positions of the supermassive black holes.
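Once the kinemetry fit has produced the harmonic coefficients of Equation (1), the asymmetry measures of Equation (2) reduce to simple arithmetic. A minimal sketch in Python (the text uses the IDL Kinemetry package; the array layout and function name here are our own assumptions, and the ellipse fitting itself is assumed done):

```python
import numpy as np

def kinematic_asymmetry(A_v, B_v, A_s, B_s):
    """Shapiro et al. (2008)-style asymmetry from kinemetry harmonic
    coefficients. A_v, B_v: shape (n_rings, 6) arrays of the sin/cos
    coefficients A_n, B_n (n = 0..5) of the velocity field; A_s, B_s:
    the same for the velocity dispersion field."""
    k_v = np.hypot(A_v, B_v)            # k_n = sqrt(A_n^2 + B_n^2)
    k_s = np.hypot(A_s, B_s)
    B1v = B_v[:, 1]                     # rotation amplitude B_{1,v}
    v_asym = np.mean(k_v[:, 2:6].sum(axis=1) / 4.0 / B1v)   # n = 2..5
    s_asym = np.mean(k_s[:, 1:6].sum(axis=1) / 5.0 / B1v)   # n = 1..5
    return np.hypot(v_asym, s_asym)     # K_asym
```

An ideal rotating disk has only the $B_{1}$ velocity term and a constant dispersion, so both asymmetry measures, and hence $K_{asym}$, vanish; power in the higher-order terms raises $K_{asym}$.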
We note that Kinemetry can fail to perform the elliptical fitting when the systems traced by the star-forming gas are too compact (e.g., $\lesssim$ 5 pixels across the galaxy), but typically less than 5% of the data do not have $K_{asym}$ measurements in a given interaction sequence for this reason. 4.2. Morphological Properties Various non-parametric statistics have been developed to quantify the irregularity of galaxy structure, and they can be used as indicators of possible disturbance due to galaxy mergers (Conselice et al., 2000; Bershady et al., 2000; Conselice, 2003; Abraham et al., 2003; Lotz et al., 2004; Freeman et al., 2013). Extensive work has also been done to quantify the evolution of these parameters along the interaction sequence (e.g., Conselice, 2006; Lotz et al., 2008, 2010a, 2010b) and their robustness for nearby and distant galaxies (e.g., Abraham et al., 1996; Overzier et al., 2010; Hung et al., 2014). In this paper, we quantify the morphological properties of the M3M2e and M3M3e simulations only to assist with the kinematic analysis, and refer the reader to the references listed above for detailed discussions of merger observability using morphological properties. We measure the asymmetry parameter ($A$; Conselice et al., 2000) of galaxies in the M3M2e and M3M3e simulations from the mock SDSS $i^{\prime}$ images. We follow the definition of $A$ in Conselice et al. (2000), in which it quantifies the deviation from $180^{\circ}$ rotational symmetry. $$A=\sum_{i,j}\frac{\left|I(i,j)-I_{180}(i,j)\right|}{\left|I(i,j)\right|}-\sum_{i,j}\frac{\left|B(i,j)-B_{180}(i,j)\right|}{\left|I(i,j)\right|},$$ (3) where $I$ and $I_{180}$ are the galaxy image and its $180^{\circ}$-rotated version, and $B$ and $B_{180}$ represent the background and its $180^{\circ}$ rotation. $A$ is often significantly enhanced relative to elliptical or spiral galaxies in the presence of multiple bright components and extremely irregular structure.
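Equation (3) amounts to comparing an image with its $180^{\circ}$ rotation and subtracting the same statistic measured on a blank background frame. A schematic version, assuming the common summed-flux normalization and omitting the center-minimization and Petrosian-radius aperture of the full measurement (function name is our own):

```python
import numpy as np

def asymmetry(image, background):
    """Rotational asymmetry in the spirit of Conselice et al. (2000):
    residual between an image and its 180-degree rotation, minus the
    same residual for a blank background frame, normalized by the
    total absolute flux of the image."""
    I = np.asarray(image, dtype=float)
    B = np.asarray(background, dtype=float)
    I180, B180 = np.rot90(I, 2), np.rot90(B, 2)
    norm = np.abs(I).sum()
    return (np.abs(I - I180).sum() - np.abs(B - B180).sum()) / norm
```

A perfectly point-symmetric image gives $A=0$; a single off-center clump, or two unequal nuclei, leaves large residuals under rotation and drives $A$ up, which is why the statistic responds to double nuclei and tidal features.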
The merger simulations in Lotz et al. (2008, 2010a, 2010b) have also demonstrated that $A$ is most sensitive to interacting galaxies during the strong interaction phases before the final coalescence and in some cases during the first passage as well. To derive $A$, we first use SExtractor (Bertin & Arnouts, 1996) to identify galaxies in each mock $i^{\prime}$-band image. The de-blending parameters have been chosen so that the interacting systems are identified as one galaxy when the projected distance between the two nuclei is smaller than $\sim 5-10$ kpc. When more than one object is detected in the images, we mask out the detections other than the brightest galaxy and refill the masked regions with nearby sky. In this case, most of the identified regions along the interaction sequence for deriving $A$ are consistent with the kinematic measurements. We apply a “quasi-Petrosian” method (Abraham et al., 2007) to define the Petrosian radius ($r_{p}$) as the effective radius at the isophotal threshold of 0.2, and we define the center of galaxies as where $A$ is minimized (Conselice et al., 2000). Finally, $A$ is derived by summing over all pixels within 1.5 $r_{p}$ (Equation 3). The evolution of $A$ along the interaction sequence of the M3M2e and M3M3e simulations is shown in the middle panels of Figures 3 and 4. 5. Results 5.1. Merger indicators along the interaction sequence Figures 3 and 4 show the evolution of SFR, $A$, and $K_{asym}$ along the interaction sequence of the M3M2e and M3M3e simulations. The distribution of $A$ and $K_{asym}$ from isolated M3 simulations with various viewing angles and times is indicated by the gray shaded area. In both the M3M2e and M3M3e simulations, $A$ is significantly enhanced only after the second passage of the galaxies and before coalescence. During this strong interaction phase, individual galaxies display large-scale tidal features, leading to high $A$ even when the two galaxies can still be resolved.
When the two nuclei are close enough ($\lesssim 5-10$ kpc) to be considered as one system, the multiple bright components can also result in higher values of $A$. The enhancement of $A$ during the strong interaction phases is consistent with the results of the G3G3P and G3G2P simulations in Lotz et al. (2008, 2010b), which use similar progenitor galaxies and orbital parameters but different initial disk orientations. Although a small fraction of the data in Lotz et al. (2010b) have elevated $A$ during the first passage, no significant enhancement is seen in our M3M2e and M3M3e simulations. The evolution of $K_{asym}$ approximately tracks $A$ before the coalescence phase in both the M3M2e and M3M3e simulations. The low $K_{asym}$ during the early interacting stages demonstrates that within individual galaxies, only minimal disturbance is seen in the kinematic structures traced by star-forming gas. Although galaxy interactions may begin to affect the SFR and metallicity of individual galaxies during the early phase of interaction (e.g. Scudder et al., 2012; Moreno et al., 2015), this impact is not necessarily reflected in the irregularity of the galaxy kinematics. This lack of detectable enhancement in $K_{asym}$ holds for each galaxy in the interacting systems. We derive $K_{asym}$ in two additional cases following the two individual galaxies, in which the centers of the kinematic maps coincide with the positions of the supermassive black holes (blue and green solid lines in the bottom panels of Figures 3 and 4). The resulting median $K_{asym}$ curves show trends similar to those of the kinematic maps centered at the gas density peak. From right after the second passage through the coalescence phases, $K_{asym}$ shows significant deviations from the isolated M3 simulations.
Most of the snapshots during this strong interaction phase display highly disturbed structure in both velocity and velocity dispersion maps, in which the kinematic structures are dominated by the bulk motion of the two nuclei and the merger-induced gas flows. The oscillations of $K_{asym}$ between second passage and coalescence reflect the projected distance between the two nuclei; stronger disturbances are measured when the two nuclei approach each other, whereas such disturbances decrease as the two nuclei recede from each other. After the two nuclei merge, a gaseous disk survives in the M3M2e simulations and its $K_{asym}$ decreases to the level of the isolated M3 simulations. However, no such structure is formed in the M3M3e simulations, and most of the gas has funneled to the galaxy center and been consumed by the starbursts within $\sim 100-200$ Myr. The $K_{asym}$ of the M3M3e simulations remains slightly enhanced after the coalescence phase for $\sim 100$ Myr until the star-forming gas is exhausted (SFR $\lesssim 0.5$ $M_{\odot}$ yr${}^{-1}$) and the kinematics can no longer be traced. In Figure 5, we show the evolution of SFR and $K_{asym}$ in the b6b5e and b6b6e simulations, which are binary mergers of SMG-type progenitors as described in Section 2. Prior to the coalescence phases, the b6b5e and b6b6e simulations have significantly higher SFR than the M3M2e and M3M3e simulations because the SMG-type progenitor disks are more gas rich and have higher gas densities. Despite these differences, the evolution of $K_{asym}$ in b6b5e and b6b6e shows a trend similar to that of M3M2e and M3M3e. For instance, $K_{asym}$ only becomes significantly elevated after the second passage. A key difference seen between the equal-mass mergers M3M3e and b6b6e is that $K_{asym}$ of b6b6e is elevated for $\sim 400$ Myr after black hole coalescence. This prolonged disturbance in the dynamics of star-forming gas is visible due to a more gradual decline in SFR after coalescence (i.e.
it only takes $\sim$ 0.25 Gyr for M3M3e to reach an SFR that is 0.01% of its peak SFR after black hole coalescence, whereas it takes $\sim$1 Gyr for b6b6e to reach 0.01% of its peak SFR). 5.2. Merger observable time and probability with kinematic indicators We derive the merger observable time (i.e. the duration during which merger signatures are detectable, hereafter MOT) using the median $K_{asym}$ curves (e.g., the bottom panels of Figures 3 and 4). We classify a galaxy as a merger when its $K_{asym}$ is significantly enhanced; here we use a threshold of $K_{asym}=0.15$ (a value higher than 95% of galaxies from the isolated M3 simulations). We note that this threshold is comparable to the one defined by Bellocchi et al. (2012) but considerably lower than the criteria used by Shapiro et al. (2008). Since our criteria are defined using simulations of the progenitor disk followed with the same kinematic mapping as the merger simulations, any enhancement in $K_{asym}$ can be attributed to interactions. The derived MOTs with $K_{asym}>0.15$ are 0.22 and 0.36 Gyr for the M3M2e and M3M3e simulations (Table 2). The uncertainties are derived based on the 1 $\sigma$ distributions of the median $K_{asym}$ curves (the blue shaded area in Figures 3 and 4). Results based on different numerical resolutions typically differ within $\pm$0.1 Gyr. Since we attribute the main source of uncertainty to the variation with viewing angle, it is important to examine whether our choice of seven viewing angles is truly representative of the typical variation in $K_{asym}$. We derive $K_{asym}$ for 70 viewing angles for two snapshots of M3M2e, one in the early interaction stage and the other close to coalescence. We find that in both snapshots, the 1 $\sigma$ distribution of the data points from 7 viewing angles spans a range similar to the distribution derived based on 70 viewing angles.
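The MOT defined above is simply the total time that the median $K_{asym}$ curve spends above the classification threshold. A simple illustrative estimator (a simplification of the procedure in the text; the interval-midpoint classification and function name are our own choices):

```python
import numpy as np

def merger_observable_time(times, k_asym, threshold=0.15):
    """Total time (same units as `times`) during which a K_asym curve
    exceeds the merger-classification threshold. Each interval between
    snapshots is counted if the midpoint value is above threshold;
    threshold=0.15 follows the 95th percentile of the isolated-disk
    distribution quoted in the text."""
    t = np.asarray(times, dtype=float)
    k = np.asarray(k_asym, dtype=float)
    dt = np.diff(t)
    above = 0.5 * (k[:-1] + k[1:]) > threshold  # midpoint classification
    return dt[above].sum()
```

For a curve sampled every 0.1 Gyr that rises above 0.15 for two consecutive intervals, this returns an MOT of 0.2 Gyr; finer time sampling around threshold crossings tightens the estimate, which is why the text refines the sampling to 20 Myr near the second passage.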
Another concern is whether a time step of 100 Myr is sufficient to trace the variation during the early interaction phases. We have increased the time sampling to 20 Myr intervals before the second passage, and the MOT increases by only 20 Myr for the M3M2e simulations and does not change for the M3M3e simulations. We explore the dependence of MOTs on the initial conditions of the galaxy merger simulations. Specifically, we focus on whether the choices of gas fractions, orbital parameters, and initial disk orientations may have significant impacts (Table 2). The gas-rich runs of M3M2e and M3M3e, with doubled initial gas fractions, have molecular gas fractions comparable to those of local LIRG- or ULIRG-type objects (e.g., Sanders et al., 1991), yet their MOTs remain similar to those of the original runs. The results from various orbital parameters span a wider range ($0.2-0.48$ Gyr), in which “orb2 ($e=0.95$, $r_{p}=27.2$)” has a larger MOT due to its $\sim$ twice longer duration between second passage and coalescence. We also carry out simulations with four special initial disk orientations, and these variations lead to observable times of $0.20-0.36$ Gyr. In all variations based on the L14 simulations, merger signatures in $K_{asym}$ are most visible during the strong interaction phase and only visible for $\lesssim$ 100 Myr after black hole coalescence, regardless of the availability of star-forming gas. The equal-mass merger simulation with SMG-type progenitors (b6b6e) has twice the MOT of M3M3e, with merger signatures visible for $\sim 0.4$ Gyr during the post-coalescence phase until its SFR decreases to $\sim 0.5$ $M_{\odot}$ yr${}^{-1}$. The merger/disk classification criteria and the time when the $K_{asym}$ curves end may introduce additional systematics to the MOTs.
For example, if we apply a lower classification threshold, e.g., $K_{asym}=0.11$ (a value higher than 68% of galaxies from the isolated M3 simulations), then the MOTs of the M3M2e and M3M3e simulations increase to 0.48 and 0.56 Gyr, respectively. On the other hand, our kinematic analysis stops when the SFRs of the merger remnants are $\lesssim 0.5$ $M_{\odot}$ yr${}^{-1}$, below which too few gas particles are available to construct kinematic maps. If the disk structure is completely destroyed during the interaction, $K_{asym}$ remains elevated after black hole coalescence and the MOTs may be sensitive to the choice of SFR limits used to derive $K_{asym}$. However, the MOTs of M3M3e and b6b6e do not change significantly when varying the SFR limits up to several $M_{\odot}$ yr${}^{-1}$. When treating the simulated galaxies at each snapshot and viewing angle as individual systems, we can quantify the observable merger fractions as a function of interaction stage. Figure 6 shows the fraction of simulated galaxies classified as mergers using the criterion $K_{asym}\geq 0.15$ for simulations with five different initial disk orientations. Before coalescence, the derived merger fractions of all the M3M2 simulations agree within $\sim 20-40\%$, and the M3M3e simulations are typically higher than their M3M2e counterparts in all interaction stages. As expected based on the results shown in Section 5.1, the derived merger fractions past the coalescence phases show larger scatter as a result of the different remnants in these simulations. These merger fractions are comparable to the results in Hung et al. (2015) when they use the classification scheme in Shapiro et al. (2008) and systematically lower by $\sim 50\%$ when they use the classification scheme in Bellocchi et al. (2012). 5.3. Dependence on spatial resolution Spatial resolution is critical for accurately deriving galaxy kinematic properties. For example, Gonçalves et al.
(2010) find that the merger fraction of Lyman Break Analogs at $z\sim 0.2$ decreases by a factor of two (from $\sim 70\%$ to $\sim 38\%$) when the sample is artificially redshifted to $z\sim 2.2$, where the spatial resolution of the redshifted datacubes is 10 times worse than that of the original ones. The kinematic measurements in this paper are derived using kinematic maps with a spatial resolution of 0.5 kpc, which can be achieved in seeing-limited observations of local galaxies (e.g., Husemann et al., 2013) and adaptive optics-assisted observations out to $z\sim 0.4$ (e.g., Gonçalves et al., 2010). However, typical IFS surveys of $z\sim 1-3$ galaxies often have spatial resolutions of $\gtrsim$ 1 kpc (e.g., Law et al., 2009), except for lensed galaxies (e.g., Yuan et al., 2011; Livermore et al., 2015). We examine how our kinematic measurements of the M3M2e simulations vary if the spatial resolution of the kinematic maps degrades from 0.5 kpc to 1 kpc. We create the kinematic maps following the description in Section 3.2 but replace the 500 pc $\times$ 500 pc grids with 1 kpc $\times$ 1 kpc grids. To ensure a consistent classification as discussed in Section 5.2, we also create low-resolution kinematic maps for the isolated M3 simulations and re-define the merger classification threshold for the low-resolution maps as $K_{asym}\geq 0.192$ (higher than 95% of the values derived from the isolated M3 simulations). The MOT derived from the median $K_{asym}$ curve decreases from 0.22$\pm$0.04 Gyr at 0.5 kpc resolution to only 0.14$\pm$0.04 Gyr at 1 kpc resolution. This result demonstrates that with worse spatial resolution, the contrast between disturbed kinematics and the comparison disks becomes smaller, and thus the merger observable time becomes shorter. 5.4. Gas kinematics versus stellar kinematics So far, our analyses have focused on galaxy kinematics traced by star-forming gas. 
However, the flows of stars and gas during galaxy interactions may diverge in the presence of large-scale shocks (e.g., Barnes & Hernquist, 1991; Barrera-Ballesteros et al., 2015). It is thus intriguing to quantify how the kinematic merger indicator, $K_{asym}$, depends on the observational tracer used along the interaction sequence. We create the stellar kinematic maps following the procedures described in Section 3.2, using all of the stellar particles in the simulations. The centers of the kinematic maps are chosen as the positions of the supermassive black holes. The velocity and velocity dispersion in each bin are determined as the median and standard deviation of all stellar particles, weighted according to their masses. Figure 7 shows the median $K_{asym}$ curves of the M3M2e and M3M3e simulations derived from all star particles until the end of our simulations ($\sim 1.5$ Gyr after coalescence). Although the star particles in general trace galaxy structure to larger radii than the star-forming gas throughout the interaction, the median $K_{asym}$ curve traced by stars evolves similarly to the curve traced by star-forming gas in both the M3M2e and M3M3e simulations. In both simulations, $K_{asym}$ does not increase significantly until second passage, but the enhancement of $K_{asym}$ lasts through the entire strong interaction phase. After coalescence, the remnant of the M3M2e simulations exhibits a rotational pattern, and its $K_{asym}$ settles at a lower, stable value than the $K_{asym}$ during the strong interaction phase. The remnant of the M3M3e simulations still shows highly disturbed kinematic structure, and its $K_{asym}$ remains highly elevated compared to the isolated disks and to the interval before second passage. 6. Discussion 6.1. 
Implications for the measurements of galaxy merger rates and merger fractions One important application of the large IFS surveys is to constrain the merger abundance of star-forming galaxies using kinematically identified close pairs (e.g., López-Sanjuan et al., 2013) or signatures of complex dynamics (e.g., Yang et al., 2008). Our work shows that when defining mergers as galaxies with significantly elevated $K_{asym}$, the MOTs are typically 0.2$-$0.4 Gyr, except for the equal-mass merger with SMG-type progenitors, whose MOT is approximately twice as long as those of the $z\sim 0$ mergers due to its more gradual decline in SFR after black hole coalescence. The MOTs can be shorter if the resolution of the kinematic maps is worse than $\sim 0.5$ kpc. Since no noise is added to the kinematic maps, the observable times derived here likely represent the best-case scenario, at least at currently achievable resolutions. Even during the strong interaction phase (i.e., after second passage and before coalescence), only $\sim 40-80\%$ of galaxy mergers show significant enhancement in $K_{asym}$ (Figure 6). The short merger observable times and the incompleteness of the merger fractions reinforce the need for careful corrections when deriving galaxy merger rates and merger fractions using kinematic diagnostics. The merger observable times based on $K_{asym}$ are comparable to the morphologically identified merger observable times based on the $Gini$ coefficient, $A$, and $M_{20}$ (Lotz et al., 2008, 2010b), in that both morphology- and kinematics-based identifications are most sensitive to galaxy mergers during the strong interaction phases. An advantage of kinematic diagnostics is that the complex kinematics remain visible for up to several hundred Myr after black hole coalescence (e.g., M3M3e, M3M2h, b6b6e). Combining morphological and kinematic information can thus provide a more accurate assessment of galaxies’ dynamical status. 
For instance, when defining galaxies as mergers with either elevated $A$ or elevated $K_{asym}$, the MOTs of the M3M2e and M3M3e simulations increase from $0.22$ and $0.36$ Gyr to $0.28$ and $0.38$ Gyr, respectively. 6.2. Measurements of disk properties A key result from recent studies of galaxy kinematics is that the velocity dispersions of disk galaxies are systematically higher at higher $z$ (e.g., Law et al., 2009; Epinat et al., 2012; Kassin et al., 2012; although local LIRG-type isolated disks typically have higher velocity dispersions as well; Bellocchi et al., 2013). The increased velocity dispersions are often attributed to the enhanced gas fractions of the high$-z$ disk galaxies, which can lead to highly unstable and turbulent dynamics (e.g., Genzel et al., 2011). However, given the short merger observable times and the $<100\%$ merger recovery rates based on the disturbance in kinematics (Figure 6), some of the disk galaxies identified by IFS surveys may in fact be misclassified mergers or merger remnants. It is therefore important to quantify the evolution of velocity dispersions during galaxy interactions. We define a sample of “disk galaxies” from the M3M2e simulations (original and doubled gas fractions) as those galaxies having $K_{asym}$ consistent with the $K_{asym}$ of 95% of the isolated M3 simulations. The M3M2e simulations are chosen because the disk structure survives after the coalescence. We measure the intrinsic velocity dispersion ($\sigma_{0}$) of this disk sample, where we define $\sigma_{0}$ as the velocity dispersion at the positions with the largest velocities along the axis of the steepest velocity gradient. (Here we compare $\sigma_{0}$ at different interaction stages derived with a consistent methodology. Note that these numbers are not necessarily comparable to those in the literature, as different groups use varying methods to calculate $\sigma_{0}$; e.g., see the discussion in Glazebrook, 2013; Wisnioski et al., 2015.) 
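A minimal sketch of this $\sigma_{0}$ measurement (our own illustration with a hypothetical function and toy values; the paper's extraction of the steepest-gradient axis from the 2-D maps is assumed to have been done beforehand) could look like:

```python
import numpy as np

def sigma_0(velocity, dispersion, n_bins=3):
    """Median velocity dispersion at the n_bins positions with the
    largest |v| along a 1-D cut through the kinematic major axis
    (the axis of the steepest velocity gradient, extracted beforehand)."""
    velocity = np.asarray(velocity, dtype=float)
    dispersion = np.asarray(dispersion, dtype=float)
    idx = np.argsort(np.abs(velocity))[::-1][:n_bins]   # largest |v| first
    return float(np.median(dispersion[idx]))

# Toy major-axis cut: rotation flattens at |v| ~ 120 km/s in the outskirts
v = np.array([-120.0, -115.0, -80.0, -20.0, 0.0, 20.0, 80.0, 115.0, 120.0])
s = np.array([60.0, 55.0, 45.0, 90.0, 100.0, 90.0, 45.0, 55.0, 60.0])
print(sigma_0(v, s))  # → 60.0
```

Sampling $\sigma_{0}$ at the largest-velocity positions (rather than, say, the map center) is what makes the measurement sensitive to bulk motions during the strong interaction phase, as discussed below.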
Figure 8 shows $\sigma_{0}$ for the disk sample as a function of interaction stage. The star-forming disks that survive after coalescence have a median $\sigma_{0}$ $\sim 4$ times higher than that of the progenitor disks before first passage. Even during the strong interaction phases, when the dynamics of the star-forming gas is dominated by the bulk motion of the two nuclei rather than by coherent rotation, the measured $\sigma_{0}$ can be significantly higher than during earlier interaction stages. This implies that if the disk samples identified by IFS surveys contain misidentified mergers or merger remnants, this population may also contribute high $\sigma_{0}$ values. 6.3. Limitations of this work Unlike optical imaging surveys, kinematic studies based on IFS observations often require pre-selection of the observed samples (e.g., by optical and near-infrared colors), and this may introduce biases when converting the observed merger fractions to overall galaxy merger rates. To obtain merger recovery rates for arbitrary sample selections, it is important to extend the kinematic analysis conducted in this work to a large binary merger simulation library or to cosmological simulations, which would provide a means to test various sample selections mimicking those used in the IFS surveys. However, the paucity of strong merger-induced starbursts in state-of-the-art large-volume cosmological simulations (Sparre et al., 2015) suggests that such simulations may not yet sufficiently resolve the nuclear regions of galaxy mergers. High-resolution zoom-in cosmological simulations (e.g., Hopkins et al., 2014) can partially overcome this drawback, but they are computationally expensive, making it challenging to assemble a large sample of interacting galaxy simulations with this technique. Consequently, suites of idealized merger simulations will likely remain the best tool for studies such as the present one for some time. 
Although we attempt to address the applicability of our results to $z\gtrsim 2$ IFS studies by using gas-rich disk progenitors and SMG-type progenitors, a possible caveat is that the gas properties assumed in our hydrodynamic simulations may not be comparable to those of high$-z$ star-forming galaxies. For instance, Bournaud et al. (2011) show that interactions between clumpy disks can lead to more chaotic kinematics compared to progenitors with a stabilized ISM. However, it is unclear whether the drastic differences shown by the entire gas content (Figures 3 & 4 in Bournaud et al., 2011) are visible with only the dense, star-forming gas. We perform a test run of the M3M2e and M3M3e simulations with an extreme initial gas fraction (0.8) and a soft effective equation of state ($q_{\rm EOS}=0.05$); these parameters can lead to a highly unstable disk within several hundred Myr after the start of the simulations and to large star-forming clumps similar to those of some $z\sim 1-3$ star-forming galaxies (Springel et al., 2005). Yet without a continuous replenishment of gas in these simulations, the gas fractions decrease to only $0.2-0.3$ during the strong interaction phase, and thus the galaxy kinematics at this stage is consistent with the other simulation runs with lower initial gas fractions. Finally, we use gas particles with SFR$>0$ (i.e., $n\gtrsim 0.1$ cm${}^{-3}$) as a proxy for star-forming gas throughout this analysis, yet such a simple approximation does not account for the possible impact of dust attenuation or optical depth. Future application of radiative transfer codes such as sunrise (Jonsson et al., 2010) to the kinematic analysis will allow us to explore the effects of dust extinction. The mock IFS datacubes will also allow us to include observational effects such as skylines in near-infrared observations. 7. 
Conclusions We study the dynamics of star-forming gas in interacting galaxies using a set of hydrodynamic simulations with stellar mass ratios of 1:1 and 1:4. Using the SPH gas particles with SFR$>0$ as a proxy for star-forming gas, we construct two-dimensional velocity and velocity dispersion maps throughout the interaction sequence. We quantify the disturbance in the kinematic maps based on measurements of the kinematic asymmetry ($K_{asym}$), and we define galaxies as observable mergers when their $K_{asym}$ is significantly elevated above the values of isolated disk galaxies. Our conclusions are summarized as follows: 1. The evolution of $K_{asym}$ mirrors that of the morphological asymmetry ($A$) in both equal- and unequal-mass galaxy mergers (our M3M3e and M3M2e simulations), in which both deviate most significantly from the isolated disk simulations during the strong interaction stage. 2. When defining mergers as snapshots having $K_{asym}$ higher than 95% of the isolated disk simulations, the merger observable times (i.e., the time durations over which merger signatures are detectable) are 0.22$\pm$0.04 Gyr for the M3M2e simulations and 0.36$\pm$0.06 Gyr for the M3M3e simulations. These observable times are typically $0.2-0.4$ Gyr across simulations with various orbital parameters, initial disk orientations, and gas fractions. 3. The 1:1 and 1:4 galaxy mergers with SMG-type progenitors (our b6b6e and b6b5e simulations) show a similar evolution in $K_{asym}$ to the $z\sim 0$ mergers, in which $K_{asym}$ only becomes significantly elevated after the second passage. However, the merger observable time of b6b6e is approximately twice as long as that of M3M3e because the SFR of b6b6e declines more gradually after black hole coalescence. 4. The merger observable times are sensitive to the spatial resolution used to construct the kinematic maps. 
In our test with the M3M2e simulations, the observable time decreases from 0.22 Gyr to 0.14 Gyr when using 1 kpc $\times$ 1 kpc instead of 0.5 kpc $\times$ 0.5 kpc grids. 5. We find that the merger observable probability shows a strong trend with interaction stage. The measured merger recovery rates are typically below 20% before second passage. The recovery rates increase to $40-80\%$ during the strong interaction stages, and the scatter is even larger after black hole coalescence, depending on whether the disk structures survive the interactions. 6. We derive the intrinsic velocity dispersion ($\sigma_{0}$) of galaxies consistent with isolated disks (in $K_{asym}$) for the M3M2e simulations. We find that the disks surviving after coalescence have a median $\sigma_{0}$ $\sim 4$ times higher than the progenitor disks. Enhanced $\sigma_{0}$ is also measured during the strong interaction phases, even when the systems are not in fact rotating disks. C-LH, HAS, MLNA, and JRM-G wish to acknowledge partial funding support from NASA grants NNX14AJ61G and NNX15AE56G. CCH is grateful to the Gordon and Betty Moore Foundation for financial support. The computations in this paper were run on the Odyssey cluster supported by the FAS Division of Science, Research Computing Group at Harvard University. References Aalto et al. (1999) Aalto, S., Hüttemeister, S., Scoville, N. Z., & Thaddeus, P. 1999, ApJ, 522, 165 Abraham et al. (1996) Abraham, R. G., van den Bergh, S., Glazebrook, K., et al. 1996, ApJS, 107, 1 Abraham et al. (2003) Abraham, R. G., van den Bergh, S., & Nair, P. 2003, ApJ, 588, 218 Abraham et al. (2007) Abraham, R. G., Nair, P., McCarthy, P. J., et al. 2007, ApJ, 669, 184 Agertz et al. (2007) Agertz, O., Moore, B., Stadel, J., et al. 2007, MNRAS, 380, 963 Barnes & Hut (1986) Barnes, J., & Hut, P. 1986, Nature, 324, 446 Barnes (2002) Barnes, J. E. 2002, MNRAS, 333, 481 Barnes & Hernquist (1992) Barnes, J. E., & Hernquist, L. 
1992, ARA&A, 30, 705 Barnes & Hernquist (1991) Barnes, J. E., & Hernquist, L. E. 1991, ApJ, 370, L65 Barnes & Hibbard (2009) Barnes, J. E., & Hibbard, J. E. 2009, AJ, 137, 3071 Barrera-Ballesteros et al. (2015) Barrera-Ballesteros, J. K., García-Lorenzo, B., Falcón-Barroso, J., et al. 2015, ArXiv e-prints, arXiv:1506.03819 Bellocchi et al. (2012) Bellocchi, E., Arribas, S., & Colina, L. 2012, A&A, 542, A54 Bellocchi et al. (2013) Bellocchi, E., Arribas, S., Colina, L., & Miralles-Caballero, D. 2013, A&A, 557, A59 Bendo & Barnes (2000) Bendo, G. J., & Barnes, J. E. 2000, MNRAS, 316, 315 Bershady et al. (2000) Bershady, M. A., Jangren, A., & Conselice, C. J. 2000, AJ, 119, 2645 Bertin & Arnouts (1996) Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393 Bournaud et al. (2007) Bournaud, F., Elmegreen, B. G., & Elmegreen, D. M. 2007, ApJ, 670, 237 Bournaud et al. (2011) Bournaud, F., Chapon, D., Teyssier, R., et al. 2011, ApJ, 730, 4 Cappellari & Copin (2003) Cappellari, M., & Copin, Y. 2003, MNRAS, 342, 345 Casey et al. (2014) Casey, C. M., Narayanan, D., & Cooray, A. 2014, Phys. Rep., 541, 45 Ceverino et al. (2010) Ceverino, D., Dekel, A., & Bournaud, F. 2010, MNRAS, 404, 2151 Ceverino et al. (2012) Ceverino, D., Dekel, A., Mandelker, N., et al. 2012, MNRAS, 420, 3490 Colina et al. (2005) Colina, L., Arribas, S., & Monreal-Ibero, A. 2005, ApJ, 621, 725 Conselice (2003) Conselice, C. J. 2003, ApJS, 147, 1 Conselice (2006) —. 2006, ApJ, 638, 686 Conselice et al. (2000) Conselice, C. J., Bershady, M. A., & Jangren, A. 2000, ApJ, 529, 886 Conselice et al. (2008) Conselice, C. J., Rajgor, S., & Myers, R. 2008, MNRAS, 386, 909 Cortese et al. (2014) Cortese, L., Fogarty, L. M. R., Ho, I.-T., et al. 2014, ApJ, 795, L37 Cox et al. (2006a) Cox, T. J., Dutta, S. N., Di Matteo, T., et al. 2006a, ApJ, 650, 791 Cox et al. (2006b) Cox, T. J., Jonsson, P., Primack, J. R., & Somerville, R. S. 2006b, MNRAS, 373, 1013 Cox et al. (2008) Cox, T. J., Jonsson, P., Somerville, R. S., Primack, J. 
R., & Dekel, A. 2008, MNRAS, 384, 386 Daddi et al. (2010) Daddi, E., Bournaud, F., Walter, F., et al. 2010, ApJ, 713, 686 Daigle et al. (2006) Daigle, O., Carignan, C., Amram, P., et al. 2006, MNRAS, 367, 469 Davis et al. (2011) Davis, T. A., Alatalo, K., Sarzi, M., et al. 2011, MNRAS, 417, 882 de Zeeuw et al. (2002) de Zeeuw, P. T., Bureau, M., Emsellem, E., et al. 2002, MNRAS, 329, 513 Dekel et al. (2009a) Dekel, A., Sari, R., & Ceverino, D. 2009a, ApJ, 703, 785 Dekel et al. (2009b) Dekel, A., Birnboim, Y., Engel, G., et al. 2009b, Nature, 457, 451 Dicaire et al. (2008) Dicaire, I., Carignan, C., Amram, P., et al. 2008, MNRAS, 385, 553 Downes & Solomon (1998) Downes, D., & Solomon, P. M. 1998, ApJ, 507, 615 Draine & Li (2007) Draine, B. T., & Li, A. 2007, ApJ, 657, 810 Elmegreen et al. (2004) Elmegreen, D. M., Elmegreen, B. G., & Hirst, A. C. 2004, ApJ, 604, L21 Elmegreen et al. (2007) Elmegreen, D. M., Elmegreen, B. G., Ravindranath, S., & Coe, D. A. 2007, ApJ, 658, 763 Engel et al. (2010) Engel, H., Tacconi, L. J., Davies, R. I., et al. 2010, ApJ, 724, 233 Epinat et al. (2012) Epinat, B., Tasca, L., Amram, P., et al. 2012, A&A, 539, A92 Fisher et al. (2014) Fisher, D. B., Glazebrook, K., Bolatto, A., et al. 2014, ApJ, 790, L30 Flores et al. (2006) Flores, H., Hammer, F., Puech, M., Amram, P., & Balkowski, C. 2006, A&A, 455, 107 Förster Schreiber et al. (2009) Förster Schreiber, N. M., Genzel, R., Bouché, N., et al. 2009, ApJ, 706, 1364 Freeman et al. (2013) Freeman, P. E., Izbicki, R., Lee, A. B., et al. 2013, MNRAS, 434, 282 Genel et al. (2008) Genel, S., Genzel, R., Bouché, N., et al. 2008, ApJ, 688, 789 Genzel et al. (2011) Genzel, R., Newman, S., Jones, T., et al. 2011, ApJ, 733, 101 Gingold & Monaghan (1977) Gingold, R. A., & Monaghan, J. J. 1977, MNRAS, 181, 375 Glazebrook (2013) Glazebrook, K. 2013, PASA, 30, 56 Gnerucci et al. (2011) Gnerucci, A., Marconi, A., Cresci, G., et al. 2011, A&A, 528, A88 Gonçalves et al. (2010) Gonçalves, T. 
S., Basu-Zych, A., Overzier, R., et al. 2010, ApJ, 724, 1373 Guo et al. (2015) Guo, Y., Ferguson, H. C., Bell, E. F., et al. 2015, ApJ, 800, 39 Haan et al. (2011) Haan, S., Surace, J. A., Armus, L., et al. 2011, AJ, 141, 100 Hammer et al. (2009) Hammer, F., Flores, H., Puech, M., et al. 2009, A&A, 507, 1313 Hayward et al. (2012) Hayward, C. C., Jonsson, P., Kereš, D., et al. 2012, MNRAS, 424, 951 Hayward et al. (2011) Hayward, C. C., Kereš, D., Jonsson, P., et al. 2011, ApJ, 743, 159 Hayward et al. (2013) Hayward, C. C., Narayanan, D., Kereš, D., et al. 2013, MNRAS, 428, 2529 Hayward et al. (2014a) Hayward, C. C., Torrey, P., Springel, V., Hernquist, L., & Vogelsberger, M. 2014a, MNRAS, 442, 1992 Hayward et al. (2014b) Hayward, C. C., Lanz, L., Ashby, M. L. N., et al. 2014b, MNRAS, 445, 1598 Helfer et al. (2003) Helfer, T. T., Thornley, M. D., Regan, M. W., et al. 2003, ApJS, 145, 259 Hibbard & Vacca (1997) Hibbard, J. E., & Vacca, W. D. 1997, AJ, 114, 1741 Hopkins et al. (2009) Hopkins, P. F., Cox, T. J., Younger, J. D., & Hernquist, L. 2009, ApJ, 691, 1168 Hopkins et al. (2014) Hopkins, P. F., Kereš, D., Oñorbe, J., et al. 2014, MNRAS, 445, 581 Hopkins et al. (2007) Hopkins, P. F., Richards, G. T., & Hernquist, L. 2007, ApJ, 654, 731 Hopkins et al. (2006) Hopkins, P. F., Somerville, R. S., Hernquist, L., et al. 2006, ApJ, 652, 864 Hopkins et al. (2010) Hopkins, P. F., Croton, D., Bundy, K., et al. 2010, ApJ, 724, 915 Hung et al. (2013) Hung, C.-L., Sanders, D. B., Casey, C. M., et al. 2013, ApJ, 778, 129 Hung et al. (2014) —. 2014, ApJ, 791, 63 Hung et al. (2015) Hung, C.-L., Rich, J. A., Yuan, T., et al. 2015, ApJ, 803, 62 Husemann et al. (2013) Husemann, B., Jahnke, K., Sánchez, S. F., et al. 2013, A&A, 549, A87 Jesseit et al. (2007) Jesseit, R., Naab, T., Peletier, R. F., & Burkert, A. 2007, MNRAS, 376, 997 Jonsson (2006) Jonsson, P. 2006, MNRAS, 372, 2 Jonsson et al. (2006) Jonsson, P., Cox, T. J., Primack, J. R., & Somerville, R. S. 
2006, ApJ, 637, 255 Jonsson et al. (2010) Jonsson, P., Groves, B. A., & Cox, T. J. 2010, MNRAS, 403, 17 Kartaltepe et al. (2014) Kartaltepe, J. S., Mozena, M., Kocevski, D., et al. 2014, ArXiv e-prints, arXiv:1401.2455 Kassin et al. (2014) Kassin, S. A., Brooks, A., Governato, F., Weiner, B. J., & Gardner, J. P. 2014, ApJ, 790, 89 Kassin et al. (2012) Kassin, S. A., Weiner, B. J., Faber, S. M., et al. 2012, ApJ, 758, 106 Kennicutt (1998) Kennicutt, Jr., R. C. 1998, ARA&A, 36, 189 Khochfar & Burkert (2006) Khochfar, S., & Burkert, A. 2006, A&A, 445, 403 Kim et al. (2002) Kim, D.-C., Veilleux, S., & Sanders, D. B. 2002, ApJS, 143, 277 Kitzbichler & White (2008) Kitzbichler, M. G., & White, S. D. M. 2008, MNRAS, 391, 1489 Krajnović et al. (2006) Krajnović, D., Cappellari, M., de Zeeuw, P. T., & Copin, Y. 2006, MNRAS, 366, 787 Lanz et al. (2014) Lanz, L., Hayward, C. C., Zezas, A., et al. 2014, ApJ, 785, 39 Law et al. (2009) Law, D. R., Steidel, C. C., Erb, D. K., et al. 2009, ApJ, 697, 2057 Law et al. (2015) Law, D. R., Yan, R., Bershady, M. A., et al. 2015, ArXiv e-prints, arXiv:1505.04285 Leitherer et al. (1999) Leitherer, C., Schaerer, D., Goldader, J. D., et al. 1999, ApJS, 123, 3 Lin et al. (2004) Lin, L., Koo, D. C., Willmer, C. N. A., et al. 2004, ApJ, 617, L9 Livermore et al. (2015) Livermore, R. C., Jones, T. A., Richard, J., et al. 2015, MNRAS, 450, 1812 López-Sanjuan et al. (2013) López-Sanjuan, C., Le Fèvre, O., Tasca, L. A. M., et al. 2013, A&A, 553, A78 Lotz et al. (2011) Lotz, J. M., Jonsson, P., Cox, T. J., et al. 2011, ApJ, 742, 103 Lotz et al. (2008) Lotz, J. M., Jonsson, P., Cox, T. J., & Primack, J. R. 2008, MNRAS, 391, 1137 Lotz et al. (2010a) —. 2010a, MNRAS, 404, 590 Lotz et al. (2010b) —. 2010b, MNRAS, 404, 575 Lotz et al. (2004) Lotz, J. M., Primack, J., & Madau, P. 2004, AJ, 128, 163 Lucy (1977) Lucy, L. B. 1977, AJ, 82, 1013 Man et al. (2012) Man, A. W. S., Toft, S., Zirm, A. W., Wuyts, S., & van der Wel, A. 
2012, ApJ, 744, 85 Martínez-Galarza et al. (2014) Martínez-Galarza, J. R., Smith, H. A., Lanz, L., et al. 2014, ArXiv e-prints, arXiv:1412.2760 Michałowski et al. (2012) Michałowski, M. J., Dunlop, J. S., Cirasuolo, M., et al. 2012, A&A, 541, A85 Mihos & Bothun (1998) Mihos, J. C., & Bothun, G. D. 1998, ApJ, 500, 619 Miralles-Caballero et al. (2011) Miralles-Caballero, D., Colina, L., Arribas, S., & Duc, P.-A. 2011, AJ, 142, 79 Moreno et al. (2015) Moreno, J., Torrey, P., Ellison, S. L., et al. 2015, MNRAS, 448, 1107 Naab et al. (2014) Naab, T., Oser, L., Emsellem, E., et al. 2014, MNRAS, 444, 3357 Narayanan et al. (2009) Narayanan, D., Cox, T. J., Hayward, C. C., Younger, J. D., & Hernquist, L. 2009, MNRAS, 400, 1919 Overzier et al. (2010) Overzier, R. A., Heckman, T. M., Schiminovich, D., et al. 2010, ApJ, 710, 979 Petty et al. (2014) Petty, S. M., Armus, L., Charmandaris, V., et al. 2014, AJ, 148, 111 Privon et al. (2013) Privon, G. C., Barnes, J. E., Evans, A. S., et al. 2013, ApJ, 771, 120 Robertson et al. (2006a) Robertson, B., Bullock, J. S., Cox, T. J., et al. 2006a, ApJ, 645, 986 Robertson et al. (2006b) Robertson, B., Hernquist, L., Cox, T. J., et al. 2006b, ApJ, 641, 90 Robertson & Bullock (2008) Robertson, B. E., & Bullock, J. S. 2008, ApJ, 685, L27 Rothberg & Joseph (2004) Rothberg, B., & Joseph, R. D. 2004, AJ, 128, 2098 Sanders et al. (1991) Sanders, D. B., Scoville, N. Z., & Soifer, B. T. 1991, ApJ, 370, 158 Schmidt (1959) Schmidt, M. 1959, ApJ, 129, 243 Scudder et al. (2012) Scudder, J. M., Ellison, S. L., Torrey, P., Patton, D. R., & Mendel, J. T. 2012, MNRAS, 426, 549 Shapiro et al. (2008) Shapiro, K. L., Genzel, R., Förster Schreiber, N. M., et al. 2008, ApJ, 682, 231 Snyder et al. (2015) Snyder, G. F., Lotz, J., Moody, C., et al. 2015, MNRAS, 451, 4290 Sparre et al. (2015) Sparre, M., Hayward, C. C., Springel, V., et al. 2015, MNRAS, 447, 3548 Springel (2005) Springel, V. 2005, MNRAS, 364, 1105 Springel et al. 
(2005) Springel, V., Di Matteo, T., & Hernquist, L. 2005, MNRAS, 361, 776 Springel & Hernquist (2003) Springel, V., & Hernquist, L. 2003, MNRAS, 339, 289 Springel & Hernquist (2005) —. 2005, ApJ, 622, L9 Stickley & Canalizo (2014) Stickley, N. R., & Canalizo, G. 2014, ApJ, 786, 12 Swinbank et al. (2006) Swinbank, A. M., Chapman, S. C., Smail, I., et al. 2006, MNRAS, 371, 465 Tacconi et al. (2006) Tacconi, L. J., Neri, R., Chapman, S. C., et al. 2006, ApJ, 640, 228 Toomre & Toomre (1972) Toomre, A., & Toomre, J. 1972, ApJ, 178, 623 Ueda et al. (2014) Ueda, J., Iono, D., Yun, M. S., et al. 2014, ApJS, 214, 1 van der Wel et al. (2012) van der Wel, A., Bell, E. F., Häussler, B., et al. 2012, ApJS, 203, 24 Walter et al. (2008) Walter, F., Brinks, E., de Blok, W. J. G., et al. 2008, AJ, 136, 2563 Wisnioski et al. (2015) Wisnioski, E., Förster Schreiber, N. M., Wuyts, S., et al. 2015, ApJ, 799, 209 Yang et al. (2008) Yang, Y., Flores, H., Hammer, F., et al. 2008, A&A, 477, 789 Yuan et al. (2011) Yuan, T.-T., Kewley, L. J., Swinbank, A. M., Richard, J., & Livermore, R. C. 2011, ApJ, 732, L14 Yun et al. (1994) Yun, M. S., Ho, P. T. P., & Lo, K. Y. 1994, Nature, 372, 530
On mutations of selfinjective quivers with potential Yuya Mizuno Graduate School of Mathematics Nagoya University Furocho, Chikusaku, Nagoya 464-8602 Japan [email protected] Abstract. We study silting mutations (Okuyama-Rickard complexes) for selfinjective algebras given by quivers with potential (QPs). We show that silting mutation is compatible with QP mutation. As an application, we obtain a family of derived equivalences of Jacobian algebras. The author is supported by Grant-in-Aid for JSPS Fellowships No. 23.5593. 1. Introduction Derived categories are nowadays considered an essential tool in the study of many areas of mathematics. In the representation theory of algebras, derived equivalences of algebras have been one of the central themes and are extensively investigated. It is well-known that endomorphism algebras of tilting complexes are derived equivalent to the original algebra [R1]. It is therefore an important problem to give concrete methods for calculating endomorphism algebras of tilting complexes. In this paper, we focus on one of the fundamental classes of tilting complexes over selfinjective algebras, known as Okuyama-Rickard complexes, which play an important role in the study of Broué’s abelian defect group conjecture. From a categorical viewpoint, they are nowadays interpreted as a special case of silting mutation [AI]. We provide a method to determine the quivers with relations of the endomorphism algebras of Okuyama-Rickard complexes when the selfinjective algebras are given by quivers with potential (QPs for short). The notion of QPs was introduced in [DWZ] and plays a significant role in the study of cluster algebras (we refer to [K2]). Recently it has been discovered that mutations of QPs (Definition 2.2) give rise to derived equivalences [BIRS, KeY, M, V]. The aim of this paper is to give a similar (but different) type of derived equivalence by comparing QP mutation with silting mutation (Definition 2.4). 
Our main result is the following (see Sections 2 and 3 for unexplained notions). Theorem 1.1. (Proposition LABEL:tilt_self, Theorem LABEL:main1, Corollary LABEL:cor and Lemma LABEL:leftright) Let $(Q,W)$ be a selfinjective QP (Definition 2.1) and $\Lambda:=\mathcal{P}(Q,W)$. For a set of vertices $I\subset Q_{0}$, we assume the following conditions. $\bullet$ No vertex in $I$ lies on a 2-cycle in $Q$. $\bullet$ There are no arrows between vertices in $I$. (a) We have an algebra isomorphism $$\mathop{\mathrm{End}}\nolimits_{\mathsf{K}^{\rm{b}}(\operatorname{proj}\nolimits{\Lambda})}(\mu_{I}(\Lambda))\cong\mathcal{P}(\mu_{I}(Q,W)),$$ where $\mu_{I}(\Lambda)$ is the left (or right) silting mutation and $\mu_{I}(Q,W)$ is the QP mutation. (b) If $\sigma I=I$ for the Nakayama permutation $\sigma$ of $\Lambda$, then $\mu_{I}(\Lambda)$ is a tilting complex. In particular, $\Lambda$ and $\mathcal{P}(\mu_{I}(Q,W))$ are derived equivalent. Since selfinjective algebras are closed under derived equivalence, we conclude from (b) above that the new QP is also a selfinjective QP, which recovers a result given in [HI, Theorem 4.2]. We can then apply our result to the new QP again, and these processes provide a family of derived equivalences. We note that Keller-Yang [KeY] proved that, for two QPs related by QP mutation, the Ginzburg dg algebras, which are certain enhancements of Jacobian algebras, are derived equivalent, though the Jacobian algebras themselves are in general far from being derived equivalent. On the other hand, Theorem 1.1 tells us that the Jacobian algebras are already derived equivalent in our setting. Notations Let $K$ be an algebraically closed field and $D:=\mathop{\mathrm{Hom}}\nolimits_{K}(-,K)$. All modules are left modules. 
For a finite dimensional algebra $\Lambda$, we denote by $\mathop{\mathrm{mod}}\nolimits\Lambda$ the category of finitely generated $\Lambda$-modules and by $\mathop{\mathrm{add}}\nolimits M$ the subcategory of $\mathop{\mathrm{mod}}\nolimits\Lambda$ consisting of direct summands of finite direct sums of copies of $M\in\mathop{\mathrm{mod}}\nolimits\Lambda$. The composition $fg$ means first $f$, then $g$. For a quiver $Q$, we denote by $Q_{0}$ the set of vertices and by $Q_{1}$ the set of arrows of $Q$, and we write $a:s(a)\to e(a)$ for the start and end vertices of an arrow or path $a$. For a finite dimensional algebra $KQ/(R)$, we denote by $P_{i}$ the indecomposable projective $KQ/(R)$-module corresponding to the vertex $i\in Q_{0}$. Acknowledgement. First and foremost, the author would like to thank Osamu Iyama for his support and patient guidance. He would like to thank Hideto Asashiba for stimulating discussions and questions. He is grateful to Martin Herschend, who kindly explained the construction of selfinjective QPs, and to Kota Yamaura and Takahide Adachi for their valuable comments and advice. 2. Preliminaries 2.1. Quivers with potential We recall the definition of quivers with potential, following [DWZ]. $\bullet$ Let $Q$ be a finite connected quiver without loops. We denote by $KQ_{i}$ the $K$-vector space with basis the paths of length $i$ in $Q$, and by $KQ_{i,cyc}$ the subspace of $KQ_{i}$ spanned by all cycles. We denote the complete path algebra by $$\widehat{KQ}=\prod_{i\geq 0}KQ_{i}$$ and by $J_{\widehat{KQ}}$ the Jacobson radical of $\widehat{KQ}$. A quiver with potential (QP) is a pair $(Q,W)$ consisting of a finite connected quiver $Q$ without loops and an element $W\in\prod_{i\geq 2}KQ_{i,{\rm cyc}}$, called a potential. For each arrow $a$ in $Q$, the cyclic derivative $\partial_{a}:\widehat{KQ}_{cyc}\to\widehat{KQ}$ is defined as the continuous linear map satisfying $\partial_{a}(a_{1}\cdots a_{d})=\sum_{a_{i}=a}a_{i+1}\cdots a_{d}a_{1}\cdots a_{i-1}$ for a cycle $a_{1}\cdots a_{d}$. 
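As a small worked example of the cyclic derivative (our own illustration, not taken from [DWZ]): let $Q$ have vertices $1,2,3$ and arrows $a:1\to 2$, $b:2\to 3$, $c:3\to 1$, and take the potential $W=abc$. The definition above gives $$\partial_{a}W=bc,\qquad\partial_{b}W=ca,\qquad\partial_{c}W=ab,$$ so the Jacobian algebra defined below is $\widehat{KQ}/\overline{\langle bc,\,ca,\,ab\rangle}$, in which every path of length two vanishes. Note also that $\partial_{a}(abc)=\partial_{a}(bca)=\partial_{a}(cab)=bc$, reflecting that cyclic derivatives depend only on cycles up to cyclic permutation.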
For a QP $(Q,W)$, we define the Jacobian algebra by $$\mathcal{P}(Q,W)=\widehat{KQ}/{\mathcal{J}}(W),$$ where ${\mathcal{J}}(W)=\overline{\langle\partial_{a}W\mid a\in Q_{1}\rangle}$ is the closure of the ideal generated by the $\partial_{a}W$ with respect to the $J_{\widehat{KQ}}$-adic topology. $\bullet$ A QP $(Q,W)$ is called trivial if $W$ is a linear combination of cycles of length 2 and $\mathcal{P}(Q,W)$ is isomorphic to the semisimple algebra $\widehat{KQ_{0}}$. It is called reduced if $W\in\prod_{i\geq 3}KQ_{i,{\rm cyc}}$. Following [HI], we use the following terminology. Definition 2.1. We call a QP $(Q,W)$ selfinjective if $\mathcal{P}(Q,W)$ is a finite dimensional selfinjective algebra. Next we recall the definition of mutation of QPs. Definition 2.2. For each vertex $k$ in $Q$ not lying on a 2-cycle, we define a new QP $\widetilde{\mu}_{k}(Q,W):=(Q^{\prime},W^{\prime})$ as follows. (a) $Q^{\prime}$ is the quiver obtained from $Q$ by the following changes. $\bullet$ Replace each arrow $a:k\to v$ in $Q$ by a new arrow $a^{*}:v\to k$. $\bullet$ Replace each arrow $b:u\to k$ in $Q$ by a new arrow $b^{*}:k\to u$. $\bullet$ For each pair of arrows $u\overset{b}{\to}k\overset{a}{\to}v$, add a new arrow $[ba]:u\to v$. (b) $W^{\prime}=[W]+\Delta$ is defined as follows. $\bullet$ $[W]$ is obtained from the potential $W$ by replacing all compositions $ba$ by the new arrows $[ba]$ for each pair of arrows $u\overset{b}{\to}k\overset{a}{\to}v$. $\bullet$ $\Delta={\displaystyle\sum_{\begin{smallmatrix}a,b\in Q_{1}\\ e(b)=k=s(a)\end{smallmatrix}}}[ba]a^{*}b^{*}$. Then the mutation ${\mu}_{k}(Q,W)$ is defined as the reduced part of $\widetilde{\mu}_{k}(Q,W)$ (we refer to [DWZ]). 2.2. Silting mutation The notion of silting objects was introduced in [KV] as a generalization of tilting objects. Recently the theory has been rapidly developed and many connections have been discovered; see, for example, [BRT, AI, G, KoY]. In this section, we briefly recall the relevant definitions and properties. 
Now let $\Lambda$ be a finite dimensional algebra and $\operatorname{\mathcal{T}}\nolimits:=\mathsf{K}^{\rm{b}}(\operatorname{proj}\nolimits{\Lambda})$ be the homotopy category of bounded complexes of finitely generated projective $\Lambda$-modules. Definition 2.3. Let $T$ be an object of $\operatorname{\mathcal{T}}\nolimits$. We call $T$ silting (respectively, tilting) if $\mathop{\mathrm{Hom}}\nolimits_{\operatorname{\mathcal{T}}\nolimits}(T,T[i])=0$ for every integer $i>0$ (respectively, every integer $i\neq 0$) and $\operatorname{\mathcal{T}}\nolimits=\mathsf{thick}\,T$, where $\mathsf{thick}\,T$ denotes the smallest thick subcategory of $\operatorname{\mathcal{T}}\nolimits$ containing $T$. We call a morphism $f:X\to Y$ left minimal if any morphism $g:Y\to Y$ satisfying $fg=f$ is an isomorphism. For an object $M\in\operatorname{\mathcal{T}}\nolimits$, we call a morphism $f:X\to M^{\prime}$ a left $(\mathop{\mathrm{add}}\nolimits{M})$-approximation of $X$ if $M^{\prime}$ belongs to $\mathop{\mathrm{add}}\nolimits{M}$ and $\mathop{\mathrm{Hom}}\nolimits_{\operatorname{\mathcal{T}}\nolimits}(f,M^{\prime\prime})$ is surjective for every object $M^{\prime\prime}$ in $\mathop{\mathrm{add}}\nolimits{M}$. Dually, we define a right minimal morphism and a right $(\mathop{\mathrm{add}}\nolimits{M})$-approximation. Definition 2.4. Let $T$ be a basic silting object in $\operatorname{\mathcal{T}}\nolimits$ and take an arbitrary decomposition $T=X\oplus M$. We take a minimal left $(\mathop{\mathrm{add}}\nolimits{M})$-approximation $f:X\to M^{\prime}$ of $X$ and a triangle
Observation of the $D_{sJ}(2317)$ and $D_{sJ}(2457)$ in $B$ decays P. Krokovny Budker Institute of Nuclear Physics, Novosibirsk    K. Abe High Energy Accelerator Research Organization (KEK), Tsukuba    K. Abe Tohoku Gakuin University, Tagajo    T. Abe High Energy Accelerator Research Organization (KEK), Tsukuba    I. Adachi High Energy Accelerator Research Organization (KEK), Tsukuba    H. Aihara Department of Physics, University of Tokyo, Tokyo    K. Akai High Energy Accelerator Research Organization (KEK), Tsukuba    M. Akatsu Nagoya University, Nagoya    M. Akemoto High Energy Accelerator Research Organization (KEK), Tsukuba    Y. Asano University of Tsukuba, Tsukuba    T. Aso Toyama National College of Maritime Technology, Toyama    T. Aushev Institute for Theoretical and Experimental Physics, Moscow    A. M. Bakich University of Sydney, Sydney NSW    I. Bedny Budker Institute of Nuclear Physics, Novosibirsk    P. K. Behera Utkal University, Bhubaneswer    I. Bizjak J. Stefan Institute, Ljubljana    A. Bondar Budker Institute of Nuclear Physics, Novosibirsk    M. Bračko University of Maribor, Maribor J. Stefan Institute, Ljubljana    T. E. Browder University of Hawaii, Honolulu, Hawaii 96822    B. C. K. Casey University of Hawaii, Honolulu, Hawaii 96822    Y. Chao Department of Physics, National Taiwan University, Taipei    B. G. Cheon Sungkyunkwan University, Suwon    R. Chistov Institute for Theoretical and Experimental Physics, Moscow    S.-K. Choi Gyeongsang National University, Chinju    Y. Choi Sungkyunkwan University, Suwon    Y. K. Choi Sungkyunkwan University, Suwon    A. Chuvikov Princeton University, Princeton, New Jersey 08545    L. Y. Dong Institute of High Energy Physics, Chinese Academy of Sciences, Beijing    J. Dragic University of Melbourne, Victoria    S. Eidelman Budker Institute of Nuclear Physics, Novosibirsk    V. Eiges Institute for Theoretical and Experimental Physics, Moscow    Y. Enari Nagoya University, Nagoya    J. 
Flanagan High Energy Accelerator Research Organization (KEK), Tsukuba    N. Gabyshev High Energy Accelerator Research Organization (KEK), Tsukuba    A. Garmash Budker Institute of Nuclear Physics, Novosibirsk High Energy Accelerator Research Organization (KEK), Tsukuba    T. Gershon High Energy Accelerator Research Organization (KEK), Tsukuba    B. Golob University of Ljubljana, Ljubljana J. Stefan Institute, Ljubljana    R. Guo National Kaohsiung Normal University, Kaohsiung    C. Hagner Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061    F. Handa Tohoku University, Sendai    N. C. Hastings High Energy Accelerator Research Organization (KEK), Tsukuba    H. Hayashii Nara Women’s University, Nara    M. Hazumi High Energy Accelerator Research Organization (KEK), Tsukuba    L. Hinz Institut de Physique des Hautes Énergies, Université de Lausanne, Lausanne    T. Hokuue Nagoya University, Nagoya    Y. Hoshi Tohoku Gakuin University, Tagajo    W.-S. Hou Department of Physics, National Taiwan University, Taipei    H.-C. Huang Department of Physics, National Taiwan University, Taipei    Y. Igarashi High Energy Accelerator Research Organization (KEK), Tsukuba    H. Ikeda High Energy Accelerator Research Organization (KEK), Tsukuba    A. Ishikawa Nagoya University, Nagoya    R. Itoh High Energy Accelerator Research Organization (KEK), Tsukuba    H. Iwasaki High Energy Accelerator Research Organization (KEK), Tsukuba    M. Iwasaki Department of Physics, University of Tokyo, Tokyo    H. K. Jang Seoul National University, Seoul    T. Kamitani High Energy Accelerator Research Organization (KEK), Tsukuba    J. H. Kang Yonsei University, Seoul    N. Katayama High Energy Accelerator Research Organization (KEK), Tsukuba    H. Kawai Chiba University, Chiba    T. Kawasaki Niigata University, Niigata    H. Kichimi High Energy Accelerator Research Organization (KEK), Tsukuba    E. Kikutani High Energy Accelerator Research Organization (KEK), Tsukuba    D. 
W. Kim Sungkyunkwan University, Suwon    H. J. Kim Yonsei University, Seoul    Hyunwoo Kim Korea University, Seoul    J. H. Kim Sungkyunkwan University, Suwon    K. Kinoshita University of Cincinnati, Cincinnati, Ohio 45221    H. Koiso High Energy Accelerator Research Organization (KEK), Tsukuba    P. Koppenburg High Energy Accelerator Research Organization (KEK), Tsukuba    S. Korpar University of Maribor, Maribor J. Stefan Institute, Ljubljana    P. Križan University of Ljubljana, Ljubljana J. Stefan Institute, Ljubljana    A. Kuzmin Budker Institute of Nuclear Physics, Novosibirsk    Y.-J. Kwon Yonsei University, Seoul    J. S. Lange University of Frankfurt, Frankfurt RIKEN BNL Research Center, Upton, New York 11973    S. H. Lee Seoul National University, Seoul    T. Lesiak H. Niewodniczanski Institute of Nuclear Physics, Krakow    A. Limosani University of Melbourne, Victoria    S.-W. Lin Department of Physics, National Taiwan University, Taipei    J. MacNaughton Institute of High Energy Physics, Vienna    G. Majumder Tata Institute of Fundamental Research, Bombay    F. Mandl Institute of High Energy Physics, Vienna    M. Masuzawa High Energy Accelerator Research Organization (KEK), Tsukuba    T. Matsumoto Tokyo Metropolitan University, Tokyo    S. Michizono High Energy Accelerator Research Organization (KEK), Tsukuba    Y. Mikami Tohoku University, Sendai    W. Mitaroff Institute of High Energy Physics, Vienna    H. Miyata Niigata University, Niigata    D. Mohapatra Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061    G. R. Moloney University of Melbourne, Victoria    T. Nagamine Tohoku University, Sendai    Y. Nagasaka Hiroshima Institute of Technology, Hiroshima    T. Nakadaira Department of Physics, University of Tokyo, Tokyo    T. T. Nakamura High Energy Accelerator Research Organization (KEK), Tsukuba    E. Nakano Osaka City University, Osaka    M. Nakao High Energy Accelerator Research Organization (KEK), Tsukuba    H. 
Nakazawa High Energy Accelerator Research Organization (KEK), Tsukuba    J. W. Nam Sungkyunkwan University, Suwon    Z. Natkaniec H. Niewodniczanski Institute of Nuclear Physics, Krakow    S. Nishida High Energy Accelerator Research Organization (KEK), Tsukuba    O. Nitoh Tokyo University of Agriculture and Technology, Tokyo    T. Nozaki High Energy Accelerator Research Organization (KEK), Tsukuba    S. Ogawa Toho University, Funabashi    Y. Ogawa High Energy Accelerator Research Organization (KEK), Tsukuba    Y. Ohnishi High Energy Accelerator Research Organization (KEK), Tsukuba    T. Ohshima Nagoya University, Nagoya    N. Ohuchi High Energy Accelerator Research Organization (KEK), Tsukuba    K. Oide High Energy Accelerator Research Organization (KEK), Tsukuba    T. Okabe Nagoya University, Nagoya    S. Okuno Kanagawa University, Yokohama    S. L. Olsen University of Hawaii, Honolulu, Hawaii 96822    W. Ostrowicz H. Niewodniczanski Institute of Nuclear Physics, Krakow    H. Ozaki High Energy Accelerator Research Organization (KEK), Tsukuba    P. Pakhlov Institute for Theoretical and Experimental Physics, Moscow    H. Palka H. Niewodniczanski Institute of Nuclear Physics, Krakow    C. W. Park Korea University, Seoul    H. Park Kyungpook National University, Taegu    K. S. Park Sungkyunkwan University, Suwon    N. Parslow University of Sydney, Sydney NSW    L. E. Piilonen Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061    N. Root Budker Institute of Nuclear Physics, Novosibirsk    M. Rozanska H. Niewodniczanski Institute of Nuclear Physics, Krakow    H. Sagawa High Energy Accelerator Research Organization (KEK), Tsukuba    S. Saitoh High Energy Accelerator Research Organization (KEK), Tsukuba    Y. Sakai High Energy Accelerator Research Organization (KEK), Tsukuba    T. R. Sarangi Utkal University, Bhubaneswer    A. 
Satpathy High Energy Accelerator Research Organization (KEK), Tsukuba University of Cincinnati, Cincinnati, Ohio 45221    O. Schneider Institut de Physique des Hautes Énergies, Université de Lausanne, Lausanne    C. Schwanda High Energy Accelerator Research Organization (KEK), Tsukuba Institute of High Energy Physics, Vienna    A. J. Schwartz University of Cincinnati, Cincinnati, Ohio 45221    S. Semenov Institute for Theoretical and Experimental Physics, Moscow    M. E. Sevior University of Melbourne, Victoria    H. Shibuya Toho University, Funabashi    T. Shidara High Energy Accelerator Research Organization (KEK), Tsukuba    V. Sidorov Budker Institute of Nuclear Physics, Novosibirsk    J. B. Singh Panjab University, Chandigarh    N. Soni Panjab University, Chandigarh    S. Stanič University of Tsukuba, Tsukuba    A. Sugi Nagoya University, Nagoya    K. Sumisawa High Energy Accelerator Research Organization (KEK), Tsukuba    T. Sumiyoshi Tokyo Metropolitan University, Tokyo    S. Suzuki Yokkaichi University, Yokkaichi    F. Takasaki High Energy Accelerator Research Organization (KEK), Tsukuba    K. Tamai High Energy Accelerator Research Organization (KEK), Tsukuba    N. Tamura Niigata University, Niigata    J. Tanaka Department of Physics, University of Tokyo, Tokyo    M. Tanaka High Energy Accelerator Research Organization (KEK), Tsukuba    M. Tawada High Energy Accelerator Research Organization (KEK), Tsukuba    Y. Teramoto Osaka City University, Osaka    T. Tomura Department of Physics, University of Tokyo, Tokyo    K. Trabelsi University of Hawaii, Honolulu, Hawaii 96822    T. Tsuboyama High Energy Accelerator Research Organization (KEK), Tsukuba    T. Tsukamoto High Energy Accelerator Research Organization (KEK), Tsukuba    S. Uehara High Energy Accelerator Research Organization (KEK), Tsukuba    K. E. Varvell University of Sydney, Sydney NSW    C. H. Wang National Lien-Ho Institute of Technology, Miao Li    Y. 
Watanabe Tokyo Institute of Technology, Tokyo    E. Won Korea University, Seoul    B. D. Yabsley Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061    Y. Yamada High Energy Accelerator Research Organization (KEK), Tsukuba    A. Yamaguchi Tohoku University, Sendai    N. Yamamoto High Energy Accelerator Research Organization (KEK), Tsukuba    Y. Yamashita Nihon Dental College, Niigata    M. Yamauchi High Energy Accelerator Research Organization (KEK), Tsukuba    H. Yanai Niigata University, Niigata    Y. Yuan Institute of High Energy Physics, Chinese Academy of Sciences, Beijing    C. C. Zhang Institute of High Energy Physics, Chinese Academy of Sciences, Beijing    Z. P. Zhang University of Science and Technology of China, Hefei    V. Zhilich Budker Institute of Nuclear Physics, Novosibirsk    D. Žontar University of Ljubljana, Ljubljana J. Stefan Institute, Ljubljana Abstract We report the first observation of the $B\to\bar{D}D_{sJ}(2317)$ and $B\to\bar{D}D_{sJ}(2457)$ decays based on $123.8\times 10^{6}$ $B{\bar{B}}$ events collected with the Belle detector at KEKB. We observe the $D_{sJ}(2317)$ decay to $D_{s}\pi^{0}$ and the $D_{sJ}(2457)$ decay to the $D_{s}^{*}\pi^{0}$ and $D_{s}\gamma$ final states. We also set 90% CL upper limits for the decays $D_{sJ}(2317)\to D_{s}^{*}\gamma$, $D_{sJ}(2457)\to D_{s}^{*}\gamma$, $D_{sJ}(2457)\to D_{s}\pi^{0}$ and $D_{sJ}(2457)\to D_{s}\pi^{+}\pi^{-}$. PACS numbers: 13.25.Hw, 14.40.Lb (S. Stanič is on leave from Nova Gorica Polytechnic, Nova Gorica.) The Belle Collaboration Recently a new $D_{s}\pi^{0}$ resonance with a mass of 2317 MeV$/c^{2}$ and a very narrow width was observed by the BaBar collaboration [1]. A natural interpretation is that this is a $P$-wave $c\bar{s}$ quark state that is below the $DK$ threshold, which accounts for the small width [2]. 
This interpretation is supported by the observation of a $D_{s}^{*}\pi^{0}$ resonance [3] by the CLEO collaboration [4] and the Belle collaboration [5]. All groups observe these states in inclusive $e^{+}e^{-}$ processes. The mass difference between the two observed states is consistent with the expected hyperfine splitting of the $P$-wave $D_{s}$ meson doublet with total light-quark angular momentum $j=1/2$ [2]. However, the masses of these states are considerably below potential model expectations [6], and are nearly the same as those of the corresponding $c\bar{u}$ states recently measured by Belle [7]. The low mass values have caused speculation that these states may be more exotic than a simple $q\bar{q}$ meson system [8, 9, 10, 11, 12, 13]. To clarify the nature of these states, it is necessary to determine their quantum numbers and decay branching fractions, particularly those for radiative decays. In this context it is useful to search for these states, which we refer to as $D_{sJ}$, in exclusive $B$ meson decay processes. We search for decays of the type $B\to\bar{D}D_{sJ}$, which are expected to be the dominant exclusive $D_{sJ}$ production mechanism in $B$ decays. Because of the known properties of the parent $B$ meson, angular analyses of these decays can unambiguously determine the $D_{sJ}$ quantum numbers. Moreover, since QCD sum rules in HQET predict that $P$-wave mesons with $j=1/2$ should be more readily produced in $B$ decays than mesons with $j=3/2$ [14], the observation of $B\to\bar{D}D_{sJ}$ would provide additional support for the $P$-wave nature of these states as well as serving as a check of these predictions. In this Letter we report on a search for the $B\to\bar{D}D_{sJ}(2317)$ and $B\to\bar{D}D_{sJ}(2457)$ decays based on a sample of $123.8\times 10^{6}$ $B{\bar{B}}$ pairs produced at the KEKB asymmetric energy $e^{+}e^{-}$ collider [15]. 
The inclusion of charge conjugate states is implicit throughout this report. The Belle detector has been described elsewhere [16]. Charged tracks are selected with a set of requirements based on the average hit residual and impact parameter relative to the interaction point (IP). A transverse momentum of at least 0.05 GeV$/c$ is required for each track in order to reduce the combinatorial background. For charged particle identification (PID), the combined information from specific ionization in the central drift chamber ($dE/dx$), time-of-flight scintillation counters and aerogel Čerenkov counters is used. Charged kaons are selected with PID criteria that have an efficiency of 88%, a pion misidentification probability of 8%, and negligible contamination from protons. All charged tracks with PID responses consistent with a pion hypothesis that are not positively identified as electrons are considered as pion candidates. Neutral kaons are reconstructed via the decay $K_{S}^{0}\to\pi^{+}\pi^{-}$ with no PID requirements for the daughter pions. The two-pion invariant mass is required to be within 9 MeV$/c^{2}$ ($\sim 3\sigma$) of the $K^{0}$ mass and the displacement of the $\pi^{+}\pi^{-}$ vertex from the IP in the transverse ($r-\varphi$) plane is required to be between 0.2 cm and 20 cm. The direction in the $r-\varphi$ plane from the IP to the $\pi^{+}\pi^{-}$ vertex is required to agree within 0.2 radians with the combined momentum of the two pions. Photon candidates are selected from calorimeter showers not associated with charged tracks. An energy deposition of at least 30 MeV and a photon-like shape are required for each candidate. A pair of photons with an invariant mass within 12 MeV$/c^{2}$ ($\sim 2.5\sigma$) of the $\pi^{0}$ mass is considered as a $\pi^{0}$ candidate. 
We reconstruct $\bar{D}^{0}(D^{-})$ mesons in the $K^{+}\pi^{-}$, $K^{+}\pi^{-}\pi^{-}\pi^{+}$ and $K^{+}\pi^{-}\pi^{0}$ $(K^{+}\pi^{-}\pi^{-})$ decay channels and require the invariant mass to be within 12 MeV$/c^{2}$ ($1.5\sigma$ for $K^{+}\pi^{-}\pi^{0}$ and $2.5\sigma$ for other modes) of the $\bar{D}^{0}(D^{-})$ mass. For the $\pi^{0}$ from the $\bar{D}^{0}\to K^{+}\pi^{-}\pi^{0}$ decay, we require that the $\pi^{0}$ momentum in the $\Upsilon(4S)$ center-of-mass (CM) frame be greater than 0.4 GeV$/c$ in order to reduce combinatorial backgrounds. We reconstruct $D_{s}^{+}$ mesons in the $\phi\pi^{+}$, $\bar{K}^{*0}K^{+}$ and $K_{S}^{0}K^{+}$ decay channels. $\phi$ mesons are reconstructed from $K^{+}K^{-}$ pairs with an invariant mass within 10 MeV$/c^{2}$ ($2.5\Gamma$) of the $\phi$ mass. $\bar{K}^{*0}$ mesons are reconstructed from $K^{-}\pi^{+}$ pairs with an invariant mass within 75 MeV$/c^{2}$ ($1.5\Gamma$) of the $\bar{K}^{*0}$ mass. After calculating the invariant mass of the corresponding set of particles, we define the $D_{s}^{+}$ signal region as being within 12 MeV$/c^{2}$ ($\sim 2.5\sigma$) of the $D_{s}$ mass. $D_{s}^{*}$ mesons are reconstructed in the $D_{s}^{*}\to D_{s}\gamma$ decay channel. The mass difference between $D_{s}^{*}$ and $D_{s}$ candidates is required to be within 8 MeV$/c^{2}$ of its nominal value ($\sim 2.5\sigma$). The $D_{sJ}$ candidates are reconstructed from $D_{s}^{(*)}$ mesons and a $\pi^{0}$, $\gamma$, or $\pi^{+}\pi^{-}$ pair. The mass difference $M(D_{sJ})-M(D_{s}^{(*)})$ is used to select $D_{sJ}$ candidates. We use central mass values of 2317 MeV$/c^{2}$ and 2460 MeV$/c^{2}$ for $D_{sJ}(2317)$ and $D_{sJ}(2457)$ respectively and define signal regions within 12 MeV$/c^{2}$ for the corresponding mass difference. We combine $\bar{D}$ and $D_{sJ}$ candidates to form $B$ mesons. 
Candidate events are identified by their CM energy difference, $\Delta E=(\sum_{i}E_{i})-E_{\rm beam}$, and the beam constrained mass, $M_{\rm bc}=\sqrt{E_{\rm beam}^{2}-(\sum_{i}\vec{p}_{i})^{2}}$, where $E_{\rm beam}$ is the beam energy and $\vec{p}_{i}$ and $E_{i}$ are the momenta and energies of the decay products of the $B$ meson in the CM frame. We select events with $5.272$ GeV$/c^{2}<M_{\rm bc}<5.288$ GeV$/c^{2}$ and $|\Delta E|<0.2$ GeV, and define a $B$ signal region of $|\Delta E|<0.03$ GeV. In cases with more than one candidate in an event, the one with $D$ and $D_{s}^{(*)+}$ masses closest to the nominal values is chosen. We use a Monte Carlo (MC) simulation to model the response of the detector and determine the efficiency [17]. Variables that characterize the event topology are used to suppress background from the two-jet-like $e^{+}e^{-}\to q{\bar{q}}$ continuum process. We require $|\cos\theta_{\rm thr}|<0.80$, where $\theta_{\rm thr}$ is the angle between the thrust axis of the $B$ candidate and that of the rest of the event; this eliminates 77% of the continuum background while retaining 78% of the signal events. To suppress combinatorial background we apply a restriction on the invariant mass of the $D$ meson and the $\pi^{0}$ or $\gamma$ from $D_{sJ}$ decay: $M(D\pi^{0})>2.3$ GeV$/c^{2}$, $M(D\gamma)>2.2$ GeV$/c^{2}$. The $\Delta E$ and $D_{sJ}$ candidate’s invariant mass ($M(D_{sJ})$) distributions for $B\to\bar{D}D_{sJ}$ candidates are presented in Fig. 1, where all $\bar{D}^{0}$ and $D^{-}$ decay modes are combined. Each distribution is the projection of the signal region of the other parameter; distributions for events in the $M(D_{sJ})$ and $\Delta E$ sidebands are shown as crosshatched histograms. Clear signals are observed for the $DD_{sJ}(2317)[D_{s}\pi^{0}]$ and $DD_{sJ}(2457)[D_{s}^{*}\pi^{0},D_{s}\gamma]$ final states. 
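The kinematic variables defined above can be sketched numerically. A minimal illustration follows; the four-vector used is a made-up toy value, not Belle data:

```python
import math

def delta_e_mbc(e_beam, particles):
    """Compute (Delta E, M_bc) from CM-frame four-vectors (E, px, py, pz).

    Units are GeV; `particles` are the decay products of one B candidate.
    Illustrative only -- the numbers below are invented, not Belle data.
    """
    e_tot = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    delta_e = e_tot - e_beam                                    # CM energy difference
    m_bc = math.sqrt(e_beam**2 - (px**2 + py**2 + pz**2))       # beam-constrained mass
    return delta_e, m_bc

# A toy B candidate of mass ~5.279 GeV/c^2 carrying 0.3 GeV/c of CM momentum:
e = math.sqrt(5.279**2 + 0.3**2)
de, mbc = delta_e_mbc(5.29, [(e, 0.3, 0.0, 0.0)])
```

With these toy inputs the candidate falls inside the quoted signal region ($5.272<M_{\rm bc}<5.288$ GeV$/c^{2}$ and $|\Delta E|<0.03$ GeV), which is why $M_{\rm bc}$ and $\Delta E$ give a nearly background-free selection of fully reconstructed $B$ decays.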
The measured masses for the $D_{sJ}(2317)$ and $D_{sJ}(2457)$ are $(2319.8\pm 2.1\pm 2.0)$ MeV$/c^{2}$ and $(2459.2\pm 1.6\pm 2.0)$ MeV$/c^{2}$ respectively. The fitted widths are consistent with those expected for $D_{sJ}$ mesons of zero intrinsic width. The systematic error in the $D_{sJ}$ mass is expected to come mainly from the photon energy scale. We also study the helicity distribution for the $D_{sJ}(2457)\to D_{s}\gamma$ decay. The helicity angle $\theta_{D_{s}\gamma}$ is defined as the angle between the $D_{sJ}(2457)$ momentum in the $B$ meson rest frame and the $D_{s}$ momentum in the $D_{sJ}(2457)$ rest frame. The $\theta_{D_{s}\gamma}$ distribution in the data (Fig. 2) is consistent with MC expectations for the $J=1$ hypothesis for the $D_{sJ}(2457)$ ($\chi^{2}/$n.d.f.$=5/6$), and contradicts the $J=2$ hypothesis ($\chi^{2}/$n.d.f.$=44/6$). The $J=0$ hypothesis is already ruled out by the conservation of angular momentum and parity in the $D_{sJ}(2457)\to D_{s}\gamma$ decay. For each decay channel, the $\Delta E$ distribution is fitted with a Gaussian signal and a linear background function. The Gaussian mean value and width are fixed to the values from an MC simulation of signal events. The region $\Delta E<-0.07$ GeV is excluded from the fit to avoid contributions from other $B$ decays of the type $B\to\bar{D}D_{sJ}X$, where $X$ denotes an additional particle that is not reconstructed. The $M(D_{sJ})$ distribution is fitted by the sum of a Gaussian for the signal and a linear function for the background. The Gaussian width is fixed to the value found in the MC (6–7 MeV$/c^{2}$ depending on the decay mode). The fit results are given in Table 1, where the listed efficiencies include intermediate branching fractions. We use the $\Delta E$ distribution to calculate the branching fractions. 
The statistical significance of the signal quoted in Table 1 is defined as $\sqrt{-2\ln({\cal L}_{0}/{\cal L}_{max})}$, where ${\cal L}_{max}$ and ${\cal L}_{0}$ denote the maximum likelihood with the nominal and with zero signal yield, respectively. The results of combined fits of the $B^{+}\to\bar{D}^{0}D_{sJ}^{+}$ and $B^{0}\to D^{-}D_{sJ}^{+}$ modes assuming isospin invariance are shown in Table 2. The normalization of the background in each sub-mode is allowed to float while the signal yields are required to satisfy the constraint $N_{i}=N_{B{\bar{B}}}\cdot{\cal B}(B\to\bar{D}D_{sJ})\cdot\varepsilon_{i}\,,$ where the branching fraction ${\cal B}(B\to\bar{D}D_{sJ})$ is a fit parameter, $N_{B{\bar{B}}}$ is the number of $B{\bar{B}}$ pairs and $\varepsilon_{i}$ is the efficiency, which includes all intermediate branching fractions. From the two $B\to\bar{D}D_{sJ}(2457)$ branching fraction measurements, we determine the ratio ${\cal B}(D_{sJ}(2457)\to D_{s}\gamma)/{\cal B}(D_{sJ}(2457)\to D_{s}^{*}\pi^{0})=0.38\pm 0.11\pm 0.04$. The signals for the $B\to\bar{D}D_{sJ}(2317)[D_{s}\pi^{0}]$ and $B\to\bar{D}D_{sJ}(2457)[D_{s}^{*}\pi^{0},D_{s}\gamma]$ channels have greater than $5\sigma$ statistical significance. Figure 3 shows the $\Delta E$ distributions for the other channels, where significant signals are not seen. We set 90% confidence level (CL) upper limits for these modes. We study the possible feed-across between all studied $D_{sJ}$ decay modes using MC. We also analyze an MC sample of generic $B{\bar{B}}$ events corresponding to our data sample. No peaking background is found. As a check, we apply a similar procedure to decay chains with similar final states: $B\to\bar{D}^{(*)}D_{s}^{(*)}$. For each mode, we measure branching fractions that are consistent with the world average values [18]. 
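The two fit ingredients just described, the isospin-constrained yield and the likelihood-ratio significance, can be sketched as follows (the numbers in the usage note are illustrative, not the measured values):

```python
import math

N_BB = 123.8e6   # number of B Bbar pairs in the sample

def expected_yield(branching_fraction, efficiency):
    """Yield constraint N_i = N_BB * B(B -> Dbar D_sJ) * eps_i, where the
    efficiency eps_i includes all intermediate branching fractions."""
    return N_BB * branching_fraction * efficiency

def significance(l_max, l_0):
    """Statistical significance sqrt(-2 ln(L0 / Lmax)), where l_max and l_0
    are the maximum likelihoods with nominal and zero signal yield."""
    return math.sqrt(-2.0 * math.log(l_0 / l_max))
```

For example, a hypothetical branching fraction of $10^{-3}$ with a total efficiency of $10^{-4}$ would imply about 12 signal events in this sample, and a likelihood ratio ${\cal L}_{0}/{\cal L}_{max}=e^{-12.5}$ corresponds to a $5\sigma$ significance.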
The following sources of systematic errors are considered: tracking efficiency (1–2% per track), kaon identification efficiency (1%), $\pi^{0}$ efficiency (6%), $K^{0}_{S}$ reconstruction efficiency (6%), $D$ branching fraction uncertainties (2–6%), signal and background shape parameterization (4%) and MC statistics (3%). The uncertainty in the tracking efficiency is estimated using partially reconstructed $D^{*+}\to D^{0}[K_{S}^{0}\pi^{+}\pi^{-}]\pi^{+}$ decays. The kaon identification uncertainty is determined from $D^{*+}\to D^{0}[K^{-}\pi^{+}]\pi^{+}$ decays. The $\pi^{0}$ reconstruction uncertainty is obtained using $D^{0}$ decays to $K^{-}\pi^{+}$ and $K^{-}\pi^{+}\pi^{0}$. We assume equal production rates for $B^{+}B^{-}$ and $B^{0}\bar{B}^{0}$ pairs and do not include the uncertainty related to this assumption in the total systematic error. For the calculation of the branching fractions, the errors in the $D_{s}$ meson branching fractions are taken into account. These uncertainties are dominated by the 25% error on the $D_{s}\to\phi\pi^{+}$ branching ratio [18]. The overall systematic uncertainty is 30%. In summary, we report the first observation of $B\to\bar{D}D_{sJ}(2317)$ and $B\to\bar{D}D_{sJ}(2457)$ decays. The measured branching fractions with the corresponding statistical significances are presented in Table 2. The observation of the $D_{sJ}(2457)\to D_{s}\gamma$ decay rules out the spin-zero hypothesis for the $D_{sJ}(2457)$. The angular analysis of this decay supports the hypothesis that the $D_{sJ}(2457)$ is a $1^{+}$ state. We wish to thank the KEKB accelerator group for the excellent operation of the KEKB accelerator. We acknowledge support from the Ministry of Education, Culture, Sports, Science, and Technology of Japan and the Japan Society for the Promotion of Science; the Australian Research Council and the Australian Department of Education, Science and Training; the National Science Foundation of China under contract No. 
10175071; the Department of Science and Technology of India; the BK21 program of the Ministry of Education of Korea and the CHEP SRC program of the Korea Science and Engineering Foundation; the Polish State Committee for Scientific Research under contract No. 2P03B 01324; the Ministry of Science and Technology of the Russian Federation; the Ministry of Education, Science and Sport of the Republic of Slovenia; the National Science Council and the Ministry of Education of Taiwan; and the U.S. Department of Energy. References (1) BaBar Collaboration, B. Aubert et al., Phys. Rev. Lett. 90, 242001 (2003). (2) W. Bardeen, E. Eichten and C. Hill, Phys. Rev. D 68, 054024 (2003). (3) In the heavy $c$-quark mass limit, one expects two doublets of $c\bar{s}$ states with quantum numbers $J^{P}=0^{+}$, $1^{+}$ and $1^{+}$, $2^{+}$. The second one has been observed in $D^{(*)}K$ decays. (4) CLEO Collaboration, D. Besson et al., Phys. Rev. D 68, 032002 (2003). (5) Belle Collaboration, K. Abe et al., EPS contribution paper, BELLE-CONF-0340, hep-ex/0307052. (6) J. Bartelt and S. Shukla, Ann. Rev. Nucl. Part. Sci. 45, 133 (1995). (7) Belle Collaboration, K. Abe et al., hep-ex/0307021, submitted to Phys. Rev. D. (8) R. Cahn and J. D. Jackson, Phys. Rev. D 68, 037502 (2003). (9) T. Barnes, F. Close and H. Lipkin, Phys. Rev. D 68, 054006 (2003). (10) E. van Beveren and G. Rupp, Phys. Rev. Lett. 91, 012003 (2003). (11) H. Cheng and W. Hou, Phys. Lett. B 566, 193 (2003). (12) P. Colangelo and F. De Fazio, Phys. Lett. B 570, 180 (2003). (13) S. Godfrey, Phys. Lett. B 568, 254 (2003). (14) A. Le Yaouanc et al., Phys. Lett. B 520, 59 (2001). (15) S. Kurokawa and E. Kikutani, Nucl. Instr. and Meth. A 499, 1 (2003). (16) Belle Collaboration, A. Abashian et al., Nucl. Instr. and Meth. A 479, 117 (2002). (17) R. Brun et al., GEANT 3.21, CERN DD/EE/84-1, 1984. (18) K. Hagiwara et al. (Particle Data Group), Phys. Rev. D 66, 010001 (2002).
Floquet topological phase transitions in a kicked Haldane-Chern insulator Tridev Mishra [email protected]    Anurag Pallaprolu [email protected]    Tapomoy Guha Sarkar [email protected]    Jayendra N. Bandyopadhyay [email protected] Department of Physics, Birla Institute of Technology and Science, Pilani 333031, India. Abstract We consider a periodically $\delta$-kicked Haldane type Chern insulator with the kicking applied in the $\hat{z}$ direction. This is known to behave as an inversion symmetry breaking perturbation, since it introduces a time-dependent staggered sub-lattice potential. We study here the effects of such driving on the topological phase diagram of the original Haldane model of a Hall effect in the absence of a net magnetic field. The resultant Floquet band topology is again that of a Chern insulator, with the driving parameters (frequency and amplitude) influencing the inversion-breaking mass $M$ of the undriven Haldane model. A family of such periodically related ‘Semenoff masses’ is observed to occur, which supports a periodic repetition of Haldane-like phase diagrams along the inversion-breaking axis of the phase plots. Among these it is possible to identify two inequivalent masses in the reduced zone scheme of the Floquet quasienergies, which form the centres of two inequivalent phase diagrams. Further, variation of the driving amplitude alone is shown to affect the topological properties by linearly shifting the phase diagram of the driven model about the position of the undriven case, a phenomenon that allows the study of Floquet topological phase transitions in the system. Finally, we also discuss some issues regarding the modifications to Haldane’s condition for preventing band overlaps at the Dirac point touchings in the Brillouin zone in the presence of kicking. 
I Introduction Topology and notions intrinsic to it were introduced into the band theory of solids through the work of Thouless, Halperin and others Thouless et al. (1982); Halperin (1982); Avron et al. (1983); Niu et al. (1985); Hatsugai (1993) while theoretically exploring the remarkable phenomenon of the Integer Quantum Hall Effect (IQHE) Klitzing et al. (1980); Laughlin (1981). Many such exotic features were predicted and identified for other associated phenomena, which went beyond conventional time-reversal symmetry breaking, such as the Quantum Spin Hall Effect (QSHE), in graphene and other topological materials Bernevig and Zhang (2006); Bernevig et al. (2006); Kane and Mele (2005a, b). Experimentally, this has sparked off a flurry of activity directed towards the synthesis of materials and nanostructures which exhibit such novel features Fu et al. (2007); Moore (2010); Chen et al. (2009); Hasan et al. (2014); Hasan and Kane (2010); Qi and Zhang (2011), thereby shaping the field of ‘Topological Insulators’. From a theoretical perspective, the broad objective has been to achieve a comprehensive classification scheme for these insulators Altland and Zirnbauer (1997); Schnyder et al. (2008); Kitaev (2009). Graphene, beyond its much touted mechanical and transport properties Castro Neto et al. (2009); Das Sarma et al. (2011); Goerbig (2011), has shown itself to be rich in topological features Delplace et al. (2011); Hatsugai et al. (2006), and various topological aspects of the honeycomb lattice have been investigated in cold-atom and photonic-crystal setups Zhang et al. (2005); Koghee et al. (2012); Tarruell et al. (2012); Rechtsman et al. (2013a); Jotzu et al. (2014). Studies of graphene irradiated or periodically driven by circularly polarized light have revealed rich topological textures beyond those seen in the undriven case Oka and Aoki (2009); Kitagawa et al. (2011); Gu et al. (2011); Suárez Morell and Foa Torres (2012); Iadecola et al. 
(2013); Delplace et al. (2013); Perez-Piskunow et al. (2014); Usaj et al. (2014); Perez-Piskunow et al. (2015); Sentef et al. (2015). An entire sub-domain of “Floquet Topological Insulators” Cayssol et al. (2013) has emerged as a result, which has offered unprecedented control and freedom to engineer new topological phases and edge-state (in some cases Majorana mode) behaviors Katan and Podolsky (2013); Wang et al. (2013a); Tong et al. (2013); Grushin et al. (2014); He and Zhang (2014); Zhou et al. (2014); Anisimovas et al. (2015); Benito and Platero (2015); Farrell and Pereg-Barnea (2016); Zhou et al. (2016); Xiong et al. (2016); Saha (2016); Inoue and Tanaka (2010); Dóra et al. (2012); Lindner et al. (2013); Kundu et al. (2014); Ho and Gong (2012); Dehghani et al. (2014); Dal Lago et al. (2015); Titum et al. (2016) as well as a knob to study topological phase transitions in cold-atom or photonic crystal setups Rechtsman et al. (2013b); Zheng and Zhai (2014); Reichl and Mueller (2014); Yan et al. (2015); Verdeny and Mintert (2015); Leykam et al. (2016); Račiūnas et al. (2016). The theoretical classification of Floquet topological insulators and the identification of valid topological invariants that correctly characterize the bulk-edge correspondence for these systems is an on-going effort Kitagawa et al. (2010); Gómez-León and Platero (2013); Rudner et al. (2013); Carpentier et al. (2015); Nathan and Rudner (2015); Fulga and Maksymenko (2016); Fruchart (2016); Roy and Harper (2016). Of late, the use of delta-function kicks has also been shown to impart interesting topological properties in the form of new Floquet topological phases such as semi-metallic phases in Harper models Bomantara et al. (2016), chiral edge modes in Quantum Hall systems Lababidi et al. (2014), appearance of unexpected topological equivalence between spectrally distinct Hamiltonians Wang et al. (2013b) as well as generation of Majorana end modes in 1-D systems Thakurathi et al. (2013).
This has led to interest in studying Dirac systems, especially graphene, its nano-ribbons and other hexagonal lattice models such as the Kitaev model, under periodic driving or kicking Babajanov et al. (2014); Bhattacharya et al. (2016); Agarwala et al. (2016). In this work we consider a form of kicking which is found to introduce a Semenoff-like mass, and hence no topological nontrivialities (in the absence of time-reversal symmetry breaking) in the spectrum of planar graphene, but which shows some promise as far as manipulating the topology of Haldane-like Chern insulators is concerned. The recent success in realizing the Haldane model experimentally Jotzu et al. (2014) within the framework of ultracold atoms in optical lattices has opened a doorway to engineering various kinds of Chern insulators, using the paradigm of shaken optical lattices and the Floquet formalism, and studying topological transitions in them Verdeny and Mintert (2015); Račiūnas et al. (2016); Plekhanov et al. (2017). These realizations offer an appreciable degree of tunability and provide an encouraging platform for the study of Haldane systems under periodic driving. We consider these setups as possible avenues for realizing the kind of delta-kicked Haldane model which is the centerpiece of our study. Beyond the cold atom setups, an interesting recent experiment Sentef et al. (2015) drives graphene itself using ultrafast, short-duration, low-frequency laser pulses of circularly polarized light which open local gaps in the Floquet quasi-energies of the irradiated graphene. This procedure hints at the creation of local Haldane-like band structures, but their topological classification has issues that need to be addressed.
A more viable candidate for an actual material realization of the Haldane model is presented in Wright (2013) where a honeycomb lattice, specifically Silicene, with out-of-plane staggering of sublattice sites, effectively realizes Haldane’s prescription of a staggered magnetic field upon being subjected to an in-plane magnetic field which could be made very weak. Other interesting proposals exist that realize Chern insulators either using electron correlations at low dimensions, in say double perovskite hetero-structures Cook and Paramekanti (2014), or the notion of in-plane magnetic fields, such as in perovskite monolayers Cook (2016) and laterally patterned $p$-type semiconductor hetero-structures with low-symmetry interfaces Li and Sushkov (2016). The proposal in Wright (2013), along with the suggestions in Agarwala et al. (2016) that outline methods to implement a kicking using hexagonal boron nitride over graphene Jung et al. (2015); Ortix et al. (2012); Weinberg et al. (2016), provides the broad experimental context in which our system has some hope of being realized. This motivates our academic interest in the study undertaken here. In this paper, we begin with an overview of various features of the Haldane model, broadly describing its spectral and topological aspects in Sec. II. This is followed by the description and analysis of our choice of a kicked Haldane model in Sec. II.2 and a brief introduction to the computation of the Chern topological invariant in Sec. III. A detailed exposition of the various properties and behavior of the kicked model is provided in Sec. IV, on results and discussion.
We find that the kicking scheme enters the effective Hamiltonian in a way that provides a means of manipulating the inversion symmetry breaking parameter of the Haldane model, which is essentially the staggered offset to the on-site energies at the two closest neighboring sites of the two interpenetrating triangular sub-lattices $A$ and $B$. This and various other aspects are discussed therein, followed by a conclusion comparing our work to related studies. The kicking protocol studied in this paper has the effect of breaking inversion symmetry and opening a gap in graphene. This, however, does not lead to the appearance of any non-trivial topological features in graphene, as the equal gap/mass term at the two Dirac points is of the Semenoff kind. We observe that in our present problem the general effect of the driving is to modify the inversion breaking energy of the Haldane model. II The Kicked Haldane Model II.1 The Undriven Haldane Model The Haldane model Haldane (1988) is a perfectly 2-dimensional Quantum Hall insulator with the unique property of exhibiting Quantum Hall behavior in the absence of any net magnetic field through any of its unit cells. It consists of a 2-D hexagonal lattice of atoms, with a single tight-binding orbital at each of the two lattice sites within a unit cell. These are the two distinct sites belonging to the $A$ and $B$ triangular sub-lattices, shown in Fig. (1) by filled and hollow points respectively. Normally, such a lattice shows a semimetallic band structure, which is well known from graphene. However, to realize an insulator, the degeneracies at the Dirac points in the 2-D Brillouin zone need to be lifted by breaking the inversion and time-reversal symmetries in the system. In the Haldane model these are broken to ensure Quantum Hall behavior.
The inversion symmetry is broken by giving an off-set to the on-site energies at the two inequivalent nearest-neighbor sites $A$ and $B$ by amounts $-M$ and $+M$ respectively. Breaking inversion symmetry opens a gap at the band touchings in the Brillouin zone and makes the system a semiconductor/normal insulator. In order to get a topological insulator it is further required to break time-reversal symmetry, which is done here by making the hoppings to the next-to-nearest-neighbor sites complex valued, $t_{2}e^{\pm i\phi}$, $t_{2}$ being real. The nearest-neighbor hoppings $t_{1}$, on the other hand, remain real valued. An ingenious choice of magnetic field ensures this by making the overall magnetic flux through any of the hexagons (unit cells) of the lattice zero, and hence realizes a globally vanishing magnetic field while at the same time breaking time-reversal symmetry. This does, though, require the local existence of a spatially periodic magnetic field, applied everywhere perpendicular to the lattice plane, giving rise to a flux arrangement that collectively disappears over a unit cell. One such choice is illustrated in Fig. (1), where the condition $(\phi_{a}+\phi_{b})=0$ fulfills this requirement. Several such choices are permitted by gauge freedom, and since travelling along the sides of any hexagon encloses zero flux, the $t_{1}$ hoppings acquire no phase contribution. The hopping term $t_{2}$ for the next-to-nearest-neighbor sites acquires phases in hops around triangular cells which enclose non-zero flux. For the case in Fig. (1) this phase $\phi$ comes out to be $2\pi(2\phi_{a}+\phi_{b})/\phi_{0}$, expressed in units of the flux quantum $\phi_{0}$. The need to break time-reversal invariance arises from the familiar requirement encountered in the IQHE Klitzing et al. (1980); Laughlin (1981); Thouless et al.
(1982); Halperin (1982) that for a non-zero quantized transverse conductance $\sigma_{xy}$, time-reversal invariance must be absent in the system, as otherwise $\sigma_{xy}$ is an odd function and amounts to zero. It is the behavior of the gap that opens at the Dirac points, also called the mass term from the low-energy (2+1)-D relativistic linearization approximation, that crucially determines the existence of the Hall conductance. In the presence of just broken inversion symmetry this mass term ($M$) is a Semenoff mass which has the same sign at both Dirac points and yields $\sigma_{xy}=0$, as follows from the definition of the Chern invariant in this case. However, if time-reversal invariance is absent, the mass term ($\phi$ dependent) has opposite signs at these points and leads to a non-zero $\sigma_{xy}$. When both parameters $M$ and $\phi$ are zero the bands touch at points in the Brillouin zone called Dirac points, owing to the linear dispersion in the vicinity of these degeneracies. These are high symmetry points in the Brillouin zone in addition to the band center. In this situation the system is semi-metallic and allows a two-dimensional representation at these high symmetry points. When the symmetry breaking parameters take on other combinations of values the system is found to belong to insulating regions with the Chern number $\mathcal{C}$ for the valence band (lower band, with the Fermi energy in the gap at zero temperature) taking values $\pm 1,0$ depending on the relative strengths of the two parameters. These regions of different conductance values $\sigma_{xy}=\mathcal{C}e^{2}/h$ are separated by a boundary where the gap closes at either one of the Dirac points in the Brillouin zone. These touchings are the transition band configurations where the $\mathcal{C}$’s for the two bands can rearrange themselves by assuming values that add up to zero, thereby ensuring the standard requirement that the total band bundle remains topologically trivial Avron et al.
(1983). These and other properties of the model follow from its Hamiltonian and the linear approximation to it at the band touchings. We now take a closer look at this Hamiltonian, which will serve as the target system for the intended driving scheme. The two-dimensional Haldane Hamiltonian in reciprocal space, as obtained from its real-space tight-binding form, is $$H(\mathbf{k})=2\mathbf{I}t_{2}\cos\phi\sum_{i}\cos(\mathbf{b_{i}}\cdot\mathbf{k})+t_{1}\left[\sum_{i}\{\sigma^{x}\cos(\mathbf{a_{i}}\cdot\mathbf{k})+\sigma^{y}\sin(\mathbf{a_{i}}\cdot\mathbf{k})\}\right]+\sigma^{z}\left[M-2t_{2}\sin(\phi)\sum_{i}\sin(\mathbf{b_{i}}\cdot\mathbf{k})\right]$$ (1) Here, the quasi-momentum $\mathbf{k}$ is a good quantum number since the choice of magnetic field preserves the original translation symmetry of the lattice, $\mathbf{I}$ is the $2\times 2$ identity element and $\sigma^{x},\sigma^{y}$ and $\sigma^{z}$ are the Pauli matrices. From Fig. (1) the vectors $\mathbf{a_{1}}\equiv\left(\tfrac{\sqrt{3}a}{2},\tfrac{a}{2}\right)$, $\mathbf{a_{2}}\equiv\left(\tfrac{-\sqrt{3}a}{2},\tfrac{a}{2}\right)$ and $\mathbf{a_{3}}\equiv(0,-a)$ are the vectors from an $A$ sub-lattice site to the nearest neighboring $B$ sub-lattice sites, where $a$ stands for the length of the bond joining nearby $A$ and $B$ sites. This choice is a matter of convention here: the vectors form a closed right-handed system, with the cross product of any two in increasing sequence of the indices pointing out of the plane in the direction of positive $\hat{z}$. As seen in the same figure, the vectors to the next-nearest-neighbor sites are chosen as $\mathbf{b_{1}}=\mathbf{a_{2}}-\mathbf{a_{3}}$, $\mathbf{b_{2}}=\mathbf{a_{3}}-\mathbf{a_{1}}$ and $\mathbf{b_{3}}=\mathbf{a_{1}}-\mathbf{a_{2}}$. Thus the summation index in the above Hamiltonian extends over these three possibilities for both kinds of vectors.
The reciprocal space lattice for this system is also hexagonal and therefore the first Brillouin zone (FBZ) is a hexagon with band touchings occurring at the zone corners. The FBZ comprises two inequivalent band touchings or Dirac points $\mathbf{K}$ and $\mathbf{K^{\prime}}$. It is possible to rearrange this hexagonal FBZ into an equivalent rhomboidal one by shifting regions of the former by reciprocal lattice vectors. Within this description the Dirac points lie inside the FBZ and are given by $\mathbf{K}=\left(\tfrac{2\pi}{3\sqrt{3}a},\tfrac{2\pi}{3a}\right)$ and $\mathbf{K^{\prime}}=\left(\tfrac{4\pi}{3\sqrt{3}a},0\right)$. The two-band energy dispersion that follows from the above Hamiltonian is $$\begin{split}&\displaystyle E^{\rm H}_{\pm}(\mathbf{k})=2t_{2}\cos(\phi)\left[2\cos\left(\frac{3ak_{y}}{2}\right)\cos\left(\frac{\sqrt{3}ak_{x}}{2}\right)+\cos(\sqrt{3}ak_{x})\right]\pm\Biggl\{t_{1}^{2}\left[2\cos\left(\frac{ak_{y}}{2}\right)\cos\left(\frac{\sqrt{3}ak_{x}}{2}\right)+\cos(ak_{y})\right]^{2}\\ &\displaystyle+t_{1}^{2}\left[2\sin\left(\frac{ak_{y}}{2}\right)\cos\left(\frac{\sqrt{3}ak_{x}}{2}\right)-\sin(ak_{y})\right]^{2}+\Biggl[M-2t_{2}\sin(\phi)\times\biggl(-2\cos\left(\frac{3ak_{y}}{2}\right)\sin\left(\frac{\sqrt{3}ak_{x}}{2}\right)+\sin(\sqrt{3}ak_{x})\biggr)\Biggr]^{2}\Biggr\}^{\frac{1}{2}}\end{split}$$ (2) On substituting the coordinates of either Dirac point $\mathbf{K}$ or $\mathbf{K^{\prime}}$, the mass term (the coefficient of $\sigma^{z}$) reduces to $M-3\sqrt{3}t_{2}\sin(\phi)$ and $M+3\sqrt{3}t_{2}\sin(\phi)$ respectively. From this we arrive at the condition for the bands to touch at these points as $M=3\sqrt{3}\nu t_{2}\sin(\phi)$, where $\nu=\pm 1$ depending on the particular Dirac point under consideration. Touching at both points occurs only when both inversion and time-reversal symmetries are preserved, i.e. both $M$ and $t_{2}\sin(\phi)$ are zero.
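The band-touching condition $M=3\sqrt{3}\nu t_{2}\sin(\phi)$ is easy to check numerically by evaluating the gap $2|\mathbf{h}(\mathbf{k})|$ of the Pauli part of Eq. (1) directly at the two Dirac points. The following sketch is our own illustration (not code from the paper), with the lattice constant set to $a=1$:

```python
# Sketch: evaluate the Haldane gap at the Dirac points from Eq. (1),
# using a = 1 and the nearest/next-nearest neighbour vectors of the text.
import math

SQ3 = math.sqrt(3.0)
A = [(SQ3 / 2, 0.5), (-SQ3 / 2, 0.5), (0.0, -1.0)]    # a_1, a_2, a_3
B = [(-SQ3 / 2, 1.5), (-SQ3 / 2, -1.5), (SQ3, 0.0)]   # b_1, b_2, b_3

def h_vector(kx, ky, t1, t2, M, phi):
    """Coefficients (h_x, h_y, h_z) of the Pauli matrices in Eq. (1)."""
    hx = t1 * sum(math.cos(ax * kx + ay * ky) for ax, ay in A)
    hy = t1 * sum(math.sin(ax * kx + ay * ky) for ax, ay in A)
    hz = M - 2 * t2 * math.sin(phi) * sum(math.sin(bx * kx + by * ky)
                                          for bx, by in B)
    return hx, hy, hz

def gap(kx, ky, t1, t2, M, phi):
    """Band gap 2|h(k)| of the sigma part of H(k)."""
    hx, hy, hz = h_vector(kx, ky, t1, t2, M, phi)
    return 2 * math.sqrt(hx * hx + hy * hy + hz * hz)

K  = (2 * math.pi / (3 * SQ3), 2 * math.pi / 3)   # Dirac point K
Kp = (4 * math.pi / (3 * SQ3), 0.0)               # Dirac point K'
```

With $M=3\sqrt{3}t_{2}\sin\phi$ the gap closes at $\mathbf{K}$ while staying open (of size $2|M+3\sqrt{3}t_{2}\sin\phi|$) at $\mathbf{K^{\prime}}$, as stated above.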
The touchings at individual Dirac points occur in Haldane’s Chern number phase diagram at the transition boundaries where $\mathcal{C}$ undergoes a discrete step in its value. An important aspect of the model is the constraint on the relative strengths of the hopping parameters, $|t_{2}/t_{1}|<1/3$, which ensures that the bands of the model do not overlap. This is useful for a clear observation of the band touchings in any physical realization of the model, as it ensures that the upper and lower bands are always well separated by a gap unless they touch, the energies at these touchings being extremal points (maxima, if one considers the lower band). We will discuss this condition in the context of kicking later on to see how it gets modified for the kicked system, and also ascertain how it may be used to define a magnitude scale for the strength of the driving. Now, we move on to the model of interest in the present work, which is the Haldane Hamiltonian under kicking. II.2 Driven Haldane Model The choice of driving the Haldane model using a periodic train of delta function kicks allows an exact Floquet treatment of the stroboscopic kind without recourse to a high-frequency approximation of the kind used in Inoue and Tanaka (2010); Wang et al. (2012). Central to such approaches, and the marked rise of interest in Floquet topological insulators, is the possibility of having a controllable parameter whose variation helps to tune the system from a normal to a topological insulator, or through different topological phases. Thus a system may be designed where, by sweeping an experimentally controllable parameter across a prescribed range of values, one could transition the total Chern number of the filled bands of the system between trivial and non-trivial values, much like the different quantized conductance values assumed by the system in the IQHE when the magnetic field is swept adiabatically.
The added advantage driving has to offer here is that it achieves all this in relatively simple, non-interacting effective static Hamiltonians. Since, in general, topological characteristics are robust features of a system and are unaffected by perturbations to a large extent, having systems which do show transitions from normal to topological insulators (and vice-versa) in a discrete manner is of considerable interest. This is so because interesting properties of the valence band Bloch functions are known to occur at the transitions, such as lack of a maximally localized Wannier representation in the Chern insulating phase and anomalous localization behavior of the wavefunctions Thonhauser and Vanderbilt (2006); Soluyanov and Vanderbilt (2011). Thus the transitions merit some attention in various systems where they can be realized in a manner which permits a simpler analytical/numerical approach to their study. Our kicked model belongs to this category of systems. Prior to expressing the Hamiltonian in the presence of kicking it would be useful to adopt some notation to denote terms in Eqs. (1) and (2). The structure of the Hamiltonian in Eq. (1) is of the general form $H(\mathbf{k})=h_{0}(\mathbf{k})\mathbf{I}+\mathbf{h}(\mathbf{k})\cdot\bm{\sigma}$, where $\bm{\sigma}=(\sigma^{x},\sigma^{y},\sigma^{z})$ is the vector of Pauli matrices and $$h_{0}(\mathbf{k})=2t_{2}\cos(\phi)\left[2\cos\left(\tfrac{3ak_{y}}{2}\right)\cos\left(\tfrac{\sqrt{3}ak_{x}}{2}\right)+\cos(\sqrt{3}ak_{x})\right].$$ The $\mathbf{h}(\mathbf{k})$ here is the vector $[t_{1}L(\mathbf{k}),t_{1}F(\mathbf{k}),M-2t_{2}\sin(\phi)N(\mathbf{k})]$ with $$L(\mathbf{k})=2\cos\left(\tfrac{ak_{y}}{2}\right)\cos\left(\tfrac{\sqrt{3}ak_{x}}{2}\right)+\cos(ak_{y})$$ $$F(\mathbf{k})=2\sin\left(\tfrac{ak_{y}}{2}\right)\cos\left(\tfrac{\sqrt{3}ak_{x}}{2}\right)-\sin(ak_{y})$$ $$N(\mathbf{k})=-2\cos\left(\tfrac{3ak_{y}}{2}\right)\sin\left(\tfrac{\sqrt{3}ak_{x}}{2}\right)+\sin(\sqrt{3}ak_{x})$$
It follows that $$|\mathbf{h}(\mathbf{k})|=\sqrt{t_{1}^{2}L^{2}(\mathbf{k})+t_{1}^{2}F^{2}(\mathbf{k})+(M-2t_{2}\sin(\phi)N(\mathbf{k}))^{2}}.$$ The driving scheme is chosen to be a train of delta function kicks separated by a fixed time interval $T$. Such a scheme was introduced in the context of driving a hexagonal lattice, in particular graphene, as a platform for synthesizing novel dispersion relations and wave packet dynamics Agarwala et al. (2016). This work proposes using a kicking which is applied as the following perturbing term to the Hamiltonian $$\mathcal{H}_{kick,\mathbf{k}}(t)=(\alpha_{x}\sigma^{x}+\alpha_{y}\sigma^{y}+\alpha_{z}\sigma^{z})\sum_{m=-\infty}^{m=\infty}\delta(t-mT)$$ (3) and represents a general $2\times 2$ kicking protocol with the $SU(2)$ pseudo-spin structure of the 2-dimensional Haldane Hamiltonian. The $\alpha_{x}$, $\alpha_{y}$ and $\alpha_{z}$ stand for kicking amplitudes in the respective directions. Since we are consistently expressing the Hamiltonian and the perturbation to it in $\mathbf{k}$-space, the kicking is applied uniformly to every unit cell of the lattice to have the reciprocal space representation of the above form. The dynamics of the system over a period $T$, under such a perturbation, are governed by an evolution operator $U_{XYZ}=U_{kick}U_{static}=e^{-i\bm{\alpha}\cdot\bm{\sigma}}e^{-i\mathcal{H}(\mathbf{k})T}$ where $U_{XYZ}=e^{-i\mathcal{H}_{XYZ}(\mathbf{k})T}$ with $\mathcal{H}_{XYZ}(\mathbf{k})$ as the Floquet Hamiltonian and, $\bm{\alpha}$ and $\bm{\sigma}$ are $(\alpha_{x},\alpha_{y},\alpha_{z})$ and $(\sigma^{x},\sigma^{y},\sigma^{z})$ respectively. Using the algebra of Pauli matrices and some standard results associated with them, it is possible (as illustrated in Agarwala et al. (2016)) to obtain the exact form of $\mathcal{H}_{XYZ}(\mathbf{k})$.
In particular, we are interested in a kicking scheme where $\alpha_{z}\neq 0$ while $\alpha_{x}=\alpha_{y}=0$, and henceforth assume these parameter values in the perturbing Hamiltonian in Eq. (3). Thus we are interested in the $\hat{z}$-kicked Haldane model, whose Hamiltonian we denote by $\mathcal{H}_{Z}(\mathbf{k})$, obtained from $\mathcal{H}_{XYZ}(\mathbf{k})$ by putting in the requisite conditions. The calculation of $\mathcal{H}_{XYZ}(\mathbf{k})$ in the manner outlined in Agarwala et al. (2016) will involve considering only the vector $\mathbf{h}(\mathbf{k})$ projected along the Pauli matrices. The diagonal part due to $h_{0}$ remains unmodified and finally shows up in the expression for $\mathcal{H}_{Z}(\mathbf{k})$, which is again of the structure $h_{0}(\mathbf{k})\mathbf{I}+\epsilon_{z}(\mathbf{k})\mathbf{h^{\prime}}(\mathbf{k})\cdot\bm{\sigma}$. The vector $\mathbf{h^{\prime}}(\mathbf{k})$ is represented by components $(h^{\prime}_{x}(\mathbf{k}),h^{\prime}_{y}(\mathbf{k}),h^{\prime}_{z}(\mathbf{k}))$ which are $$\displaystyle h^{\prime}_{x}(\mathbf{k})=\frac{1}{\sin(T\epsilon_{z})}\Biggl[\frac{-t_{1}L(\mathbf{k})}{|\mathbf{h}(\mathbf{k})|}\sin(T|\mathbf{h}(\mathbf{k})|)\cos(\alpha_{z})+{\rm sgn}(\alpha_{z})\frac{t_{1}F(\mathbf{k})}{|\mathbf{h}(\mathbf{k})|}\sin(\alpha_{z})\sin(T|\mathbf{h}(\mathbf{k})|)\Biggr]$$ $$\displaystyle h^{\prime}_{y}(\mathbf{k})=\frac{1}{\sin(T\epsilon_{z})}\Biggl[\frac{-t_{1}F(\mathbf{k})}{|\mathbf{h}(\mathbf{k})|}\sin(T|\mathbf{h}(\mathbf{k})|)\cos(\alpha_{z})-{\rm sgn}(\alpha_{z})\frac{t_{1}L(\mathbf{k})}{|\mathbf{h}(\mathbf{k})|}\sin(\alpha_{z})\sin(T|\mathbf{h}(\mathbf{k})|)\Biggr]$$ (4) $$\displaystyle h^{\prime}_{z}(\mathbf{k})=\frac{1}{\sin(T\epsilon_{z})}\Biggl[-{\rm sgn}(\alpha_{z})\sin(\alpha_{z})\cos(T|\mathbf{h}(\mathbf{k})|)-\frac{M-2t_{2}\sin(\phi)N(\mathbf{k})}{|\mathbf{h}(\mathbf{k})|}\sin(T|\mathbf{h}(\mathbf{k})|)\cos(\alpha_{z})\Biggr]$$ The energy eigenvalues of
$\mathcal{H}_{Z}(\mathbf{k})$, i.e. the $\hat{z}$-kicked Haldane model, without the offset due to the $h_{0}(\mathbf{k})\mathbf{I}$ term of the undriven Haldane model, denoted by $\epsilon_{z}$, are given by $$\epsilon_{z}(\mathbf{k})=\pm\frac{1}{T}\cos^{-1}\Biggl[\cos(\alpha_{z})\cos(T|\mathbf{h}(\mathbf{k})|)-\frac{{\rm sgn}(\alpha_{z})}{|\mathbf{h}(\mathbf{k})|}(M-2t_{2}\sin(\phi)N(\mathbf{k}))\sin(\alpha_{z})\sin(T|\mathbf{h}(\mathbf{k})|)\Biggr]$$ (5) where ${\rm sgn}(\alpha_{z})$ in both the equations above denotes the sign of $\alpha_{z}$. This completes a description of the model Hamiltonian we are interested in. We now give a brief overview of the mathematical formalism that shall be used to compute the topological invariant for this model. III Computing the Chern Invariant and Hall conductance The Chern invariant or Chern number for 2-D systems is the topological invariant that captures and quantifies the topological non-trivialities associated with the bands of a periodic system. The general definition involves treating the Bloch functions of the filled bands in any solid as defining a principal fibre bundle over the FBZ, which is a torus. The Chern invariant is then calculated for any given band as the integral of the Berry curvature, which may be obtained from the Berry connection defined on this bundle over the FBZ Berry (1984); Thouless et al. (1982); Kohmoto (1985). This integral may be written in the following manner $$\mathcal{C}=\frac{1}{2\pi}\int_{\rm BZ}\mathcal{F}_{k_{x},k_{y}}(\mathbf{k})\,dk_{x}\wedge dk_{y}$$ (6) where $\mathcal{F}_{k_{x},k_{y}}$ is an antisymmetric tensor denoting a curvature 2-form, the Berry curvature or field. Haldane’s work Haldane (1988) suggests a simplified route to calculating the Chern number for the various topological phases by an effective linearization of the spectrum at a Dirac point, where the gap acts as a mass term, the coefficient of the $\sigma^{z}$ matrix in the linearized Hamiltonian around this point.
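The stroboscopic construction above is straightforward to verify numerically: build $U_{Z}=e^{-i\alpha_{z}\sigma^{z}}e^{-iT\,\mathbf{h}\cdot\bm{\sigma}}$ for the traceless part of the Hamiltonian and read the quasienergy off its trace, $\cos(T\epsilon_{z})=\tfrac{1}{2}{\rm Re\,Tr}\,U_{Z}$. The following is our own minimal sketch (not the authors' code), assuming $\alpha_{z}>0$ so that ${\rm sgn}(\alpha_{z})\sin(\alpha_{z})=\sin(\alpha_{z})$ in Eq. (5):

```python
# Sketch: one-period evolution operator U_Z = exp(-i a_z s_z) exp(-i T h.s)
# for the traceless part h.sigma, compared against the closed form of Eq. (5).
import math, cmath

def mat_mul(P, Q):
    """Product of two 2x2 complex matrices stored as nested lists."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def floquet_quasienergy(hx, hy, hz, alpha_z, T):
    """Quasienergy from the trace of the stroboscopic operator."""
    r = math.sqrt(hx * hx + hy * hy + hz * hz)
    c, s = math.cos(T * r), math.sin(T * r) / r
    # exp(-i T h.sigma) = cos(Tr) I - i sin(Tr) (h.sigma)/r
    U_stat = [[c - 1j * s * hz, -1j * s * (hx - 1j * hy)],
              [-1j * s * (hx + 1j * hy), c + 1j * s * hz]]
    U_kick = [[cmath.exp(-1j * alpha_z), 0], [0, cmath.exp(1j * alpha_z)]]
    U = mat_mul(U_kick, U_stat)
    return math.acos((U[0][0] + U[1][1]).real / 2) / T

def quasienergy_closed_form(hx, hy, hz, alpha_z, T):
    """Eq. (5) for alpha_z > 0, with h_z playing M - 2 t2 sin(phi) N(k)."""
    r = math.sqrt(hx * hx + hy * hy + hz * hz)
    arg = (math.cos(alpha_z) * math.cos(T * r)
           - (hz / r) * math.sin(alpha_z) * math.sin(T * r))
    return math.acos(arg) / T
```

For any choice of $(h_x,h_y,h_z)$, with $h_z$ identified with $M-2t_{2}\sin(\phi)N(\mathbf{k})$, the two routes agree to machine precision.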
The total Chern number for the lower band is then given by the signs of the masses at the two inequivalent Dirac points in the FBZ as $$\mathcal{C}=\frac{1}{2}\displaystyle\sum_{\nu=\pm 1}\nu\,{\rm sgn}(m_{\nu}),$$ (7) where $m_{\nu}$ is the mass term at the corresponding Dirac point indexed by $\nu$. Both expressions are demonstrably equivalent and one can in principle derive Eq. (7) from Eq. (6). In our calculations we use both methods to develop the Chern number phase diagram in the presence of driving. The integration is performed numerically to validate the Hall conductivity quantization expected from the second definition. We intend here to give a brief overview of the mathematical formalism adopted by us to compute the Berry curvature required in the above integral. This formalism is based on the concept of Bargmann invariants Mukunda and Simon (1993a, b); Rabei et al. (1999). It essentially involves the use of $U(1)$-invariant pure state density matrices $\rho=|\psi\rangle\langle\psi|$, which denote physical states or rays in a complex projective ray space. The Bargmann invariants are then products of these density matrices, $\rho_{1}\rho_{2}\cdots\rho_{j}$, with the $j$ states forming the vertices of a $j$-sided polygon in ray space. In more explicit terms, a Bargmann invariant of order $j$ for a set of as many normalized states $|\psi_{j}\rangle$ such that $\langle\psi_{j}|\psi_{j+1}\rangle\neq 0$, is $$\mathcal{B}^{j}(\psi_{1},\cdots,\psi_{j})=\langle\psi_{1}|\psi_{2}\rangle\langle\psi_{2}|\psi_{3}\rangle\cdots\langle\psi_{j-1}|\psi_{j}\rangle\langle\psi_{j}|\psi_{1}\rangle$$ (8) In the limit of an infinitesimally small polygon, the phase of the Bargmann invariant in Eq. (8) yields the Berry curvature $$\mathcal{F}_{\alpha\beta}(\mathbf{x})=\frac{1}{2i}{\rm Tr}\bigl(\rho(\mathbf{x})\bigl[\partial_{\alpha}\rho(\mathbf{x}),\partial_{\beta}\rho(\mathbf{x})\bigr]\bigr)$$ (9)
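Eq. (7) is simple enough to implement directly for the undriven model, where Sec. II.1 gives the mass at the Dirac point labelled $\nu$ as $m_{\nu}=M-3\sqrt{3}\nu t_{2}\sin(\phi)$. A short sketch of our own (the overall sign convention, i.e. which $\nu$ is attached to $\mathbf{K}$, is a choice):

```python
# Sketch of Eq. (7): lower-band Chern number of the undriven Haldane model
# from the signs of the masses m_nu at the two Dirac points.
import math

def chern_from_masses(M, t2, phi):
    total = 0.0
    for nu in (+1, -1):
        m_nu = M - 3 * math.sqrt(3) * nu * t2 * math.sin(phi)
        total += nu * math.copysign(1.0, m_nu)   # nu * sgn(m_nu)
    return int(total / 2)
```

This reproduces the Haldane phase diagram: $|\mathcal{C}|=1$ inside the lobes $|M|<3\sqrt{3}t_{2}|\sin\phi|$ and $\mathcal{C}=0$ outside; on the boundaries the gap closes and $\mathcal{C}$ is ill-defined.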
The $\mathbf{x}=(x_{1},x_{2},\cdots,x_{2N-2})$ denotes coordinates of points in ray space under some suitable parametrization, ray space being $(2N-2)$-dimensional for an $N$-level quantum system. In the case of lattice systems and Bloch functions these coordinates are $\mathbf{k}$-space coordinates $(k_{x},k_{y},\cdots)$. The indices $\alpha$ and $\beta$ run over the ray space dimensions. It is interesting to note that one can recover the customary expression for the Berry curvature, over the Brillouin zone, for $2\times 2$ systems with translational invariance of the kind $H(\mathbf{k})=\bm{\sigma}\cdot\hat{n}(\mathbf{k})$, which is in general given by $$\mathbf{\Omega}(\mathbf{k})=\frac{1}{2|\hat{n}(\mathbf{k})|^{3}}\hat{n}(\mathbf{k})\cdot[\partial_{k_{x}}\hat{n}(\mathbf{k})\times\partial_{k_{y}}\hat{n}(\mathbf{k})]$$ upon making the substitution $\rho(\mathbf{k})=\frac{1}{2}(1+\bm{\sigma}\cdot\hat{n}(\mathbf{k}))$ in Eq. (9), with $\mathbf{k}$ serving the role of $\mathbf{x}$. This is drawn from a general analogy to the spin-$\frac{1}{2}$ Bloch sphere construction for $2$-level systems with Dirac structure. We use Eq. (9) with the same analogy for our $\hat{z}$-kicked Haldane Hamiltonian $\mathcal{H}_{Z}(\mathbf{k})$. IV Results and Discussion We shall now take up the discussion on (1) the range of driving parameters and their effects on the band structure, (2) the effects of periodic kicking on the topological properties of the Haldane model, and (3) the modification to Haldane’s overlap criterion due to kicking. IV.1 Range of Driving Parameters and Effects on Band structure The Floquet Hamiltonian we have calculated is obtained stroboscopically in an exact manner. Hence, there is in principle no restriction on the chosen driving frequency. However, there are still bounds as to how low one can go.
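Validating the quantization numerically does not require differentiating anything: the curvature integral of Eq. (6) can be discretized with the lattice field-strength (link-variable) method of Fukui, Hatsugai and Suzuki, which returns exact integers on modest $\mathbf{k}$-grids. The sketch below is our own implementation for the lower band of the undriven model, assuming $a=1$, the conventional $1/2\pi$ normalization, and a periodic gauge obtained by factoring $e^{-i\mathbf{a_{3}}\cdot\mathbf{k}}$ out of the nearest-neighbour sum (a $\mathbf{k}$-dependent unitary that leaves the Chern number unchanged):

```python
# Sketch: lower-band Chern number of the undriven Haldane model by the
# Fukui-Hatsugai-Suzuki lattice discretization of the integral in Eq. (6).
import cmath, math

SQ3 = math.sqrt(3.0)

def h_periodic(kx, ky, t1, t2, M, phi):
    """Pauli coefficients of Eq. (1) in a gauge periodic under reciprocal
    vectors: the t1 sum is multiplied by exp(-i a_3.k), so its phases are
    Bravais vectors a_1 - a_3 and a_2 - a_3."""
    f = 1 + cmath.exp(1j * (SQ3 / 2 * kx + 1.5 * ky)) \
          + cmath.exp(1j * (-SQ3 / 2 * kx + 1.5 * ky))
    B = [(-SQ3 / 2, 1.5), (-SQ3 / 2, -1.5), (SQ3, 0.0)]   # b_i (a = 1)
    hz = M - 2 * t2 * math.sin(phi) * sum(math.sin(bx * kx + by * ky)
                                          for bx, by in B)
    return t1 * f.real, t1 * f.imag, hz

def lower_state(hx, hy, hz):
    """Normalized lower-band eigenvector of h.sigma; the branch is chosen
    so its norm never vanishes while the band stays gapped."""
    r = math.sqrt(hx * hx + hy * hy + hz * hz)
    v = (-(hx - 1j * hy), hz + r) if hz >= 0 else (hz - r, hx + 1j * hy)
    n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / n, v[1] / n)

def chern_number(t1, t2, M, phi, N=30):
    # Reciprocal vectors dual to R1 = (sqrt(3), 0), R2 = (sqrt(3)/2, 3/2).
    G1 = (2 * math.pi / SQ3, -2 * math.pi / 3)
    G2 = (0.0, 4 * math.pi / 3)
    def psi(i, j):
        kx = (i / N) * G1[0] + (j / N) * G2[0]
        ky = (i / N) * G1[1] + (j / N) * G2[1]
        return lower_state(*h_periodic(kx, ky, t1, t2, M, phi))
    def link(p, q):                      # U(1) link variable <p|q>/|<p|q>|
        z = p[0].conjugate() * q[0] + p[1].conjugate() * q[1]
        return z / abs(z)
    flux = 0.0
    for i in range(N):
        for j in range(N):               # field strength per plaquette
            flux += cmath.phase(link(psi(i, j), psi(i + 1, j))
                                * link(psi(i + 1, j), psi(i + 1, j + 1))
                                * link(psi(i + 1, j + 1), psi(i, j + 1))
                                * link(psi(i, j + 1), psi(i, j)))
    return round(flux / (2 * math.pi))
```

For $t_{1}=1$, $t_{2}=0.15$ (satisfying $|t_{2}/t_{1}|<1/3$) this gives $|\mathcal{C}|=1$ at $M=0$, $\phi=\pi/2$, and $\mathcal{C}=0$ once $|M|>3\sqrt{3}t_{2}\sin\phi$, in agreement with Eq. (7).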
The behavior of the band structure of the driven model requires this lower limit to be set by the convergence of the spectrum of the driven model to the undriven Haldane spectrum in the limit $\alpha_{z}\rightarrow 0$ (i.e. taking the driving to zero). We observe that one can go down to a driving frequency of the order of the energy scale $t_{1}$ if the undriven Haldane model has parameter values $t_{2}=1$ and $t_{1}=3$. This choice of parameters satisfies the overlap prevention requirement. To put this lower limit in perspective we note that when $M=0$ and $t_{2}\sin(\phi)=0$, the bandwidth of the Haldane model is $\approx 6t_{1}$ and hence one can work with a frequency up to this order. In this situation neither inversion nor time-reversal symmetry is violated and the system allows bands to touch at both Dirac points in the FBZ. The presence of $M$ alters the bandwidth but is of no substantial influence if it is smaller than the nearest-neighbor hopping $t_{1}$. For larger $M$ ($M>3.5$), there are overlaps of the ground state Floquet bands with the Floquet sidebands for a driving period $\approx 1/t_{1}$. In this case it is observed that an upper limit to the driving period of $T=1/(2t_{1})$ resolves this issue for all choices of $M$. The issue with larger $M$ in the $T\approx 1/t_{1}$ case can be resolved at non-zero driving amplitudes, which remove the overlap with the sidebands, but this does not hold true when one goes all the way down to zero driving amplitude, thereby making $1/(2t_{1})$ the more favourable choice of upper limit for the period. These features are illustrated in Fig. 2. So, in a driving scheme based on periodic kicking we are able to free the analysis of the constraint of limiting the driving to high frequencies and instead go to comparatively lower values.
This feature is absent in schemes involving continuous drives, such as circularly polarized light, that require the photons of the driving radiation to be of energies larger than the bandwidth Inoue and Tanaka (2010). This brings us to the question of how the amplitude of driving influences the features of the driven system. We restrict ourselves to a discussion of how the driving amplitude affects the band structure for a fixed choice of the hopping energies and at some particular choice of $M$ and $\phi$. The driving accentuates the inversion-symmetry breaking, and the gap that opens in the spectrum increases as the amplitude is increased. There are, however, effects on the band curvature. It is known that when the kicking is applied to graphene, it leads to flat band structures at driving amplitudes of magnitude $\alpha_{z}=\pi/2$ Agarwala et al. (2016). In the Haldane model one of the crucial differences in the band structure from that of ordinary graphene is the absence of particle-hole symmetry (due to the next-nearest-neighbor hoppings governed by $t_{2}$). This feature is loosely understood in terms of the greater number of $B$ sites than $A$ sites in any finite bounded version of the system. Thus, for this model, when the amplitude of kicking is similarly increased, the band structure does not become completely flat, especially for the valence band. The conduction band does show nearly perfect flatness when the hopping energies are in a ratio satisfying $|t_{2}/t_{1}|<1/3$. The choice of $\phi$ here is kept fixed at $0$ and $M$ could be non-zero but within the range that shows topological behavior in the undriven case, i.e. $[-3\sqrt{3},3\sqrt{3}]$. One may be cautioned, though, that in going up to this magnitude of driving the undriven overlap condition begins to break down in favor of a newer one hinted at earlier; the signatures of flatness, however, can be observed well before this threshold is reached.
On going beyond the $\alpha_{z}=\pm\pi/2$ limit, the band structure is found to invert its curvature, and as one increases the driving further to $\alpha_{z}=\pi$, the conduction and valence bands exchange their structure relative to what is seen near zero driving. The interplay of the magnitudes of $M$ and $\alpha_{z}$ is found to affect the degree of flatness of the bands, especially the conduction band. These features are illustrated in Fig. 3. We will see that, due to the periodicity in the mass term stemming from the nature of the kicking, changing the magnitude of the driving causes the system to undergo transitions in and out of topological phases in a periodic manner. In order to observe the full array of non-trivial topological behavior it suffices to work in the driving amplitude range $\alpha_{z}\in[-(2n+1)\pi/2,(2n+1)\pi/2]$; further, within this range, the original condition $|t_{2}/t_{1}|<1/3$ for avoiding band overlaps when touchings occur remains valid roughly within $\alpha_{z}\in[-1,1]$. This range is sufficient to observe the competition between $M$ and the driving in terms of influencing the topological phase, for a fixed choice of hoppings satisfying the above criterion. However, to maintain sufficient generality in our discussion, we will also look at topological behavior at large driving amplitudes and at the new overlap condition that comes into play in these regimes. A point to note here is that although we fix the hopping values while discussing the topological properties at large drivings (thereby falling outside the criterion for avoiding band overlaps at these large driving amplitudes), this effect may be ignored so far as the understanding of the topological phases is concerned. If one is indeed interested in a realization of the driven model at high-amplitude kicking and in observing the band touchings in the spectra, an adjustment of the choice of hoppings, especially $t_{2}$, is necessary. 
Speaking in these terms necessarily assumes that one is working with a system where parameters such as the hopping energies and the site energies are free to be controlled and varied. This seems possible only in optical-lattice setups, where lattice depths and occupation densities of the ultracold atoms can be manipulated. IV.2 Topological features of the kicked model IV.2.1 Analytical Deductions We now come to a discussion of the topological properties of the driven Haldane model. Here, we analyze the effects of periodic kicking on the topological phase diagram for the Hamiltonian in eq. (1) Haldane (1988). We look at the mass term of our driven model, which is the coefficient of the $\sigma^{z}$ matrix in 2D systems, to identify the various topological phases the system can exhibit. To this end we make use of the definition of the Chern number $\mathcal{C}$ given in eq. (7). To apply it we consider $\epsilon_{z}(\mathbf{k})h^{\prime}_{z}(\mathbf{k})$ from eq. (4), which is the coefficient of $\sigma^{z}$ in the driven Haldane Hamiltonian. The technique requires one to consider the gap at the Dirac points $\mathbf{K}$ and $\mathbf{K^{\prime}}$ and to look at the sign of $h^{\prime}_{z}(\mathbf{k})$ in the vicinity of these points. 
On doing so, $\mathcal{C}$ is given by the expression $$\mathcal{C}=\frac{1}{2}\sum_{\nu=\pm 1}\nu\,{\rm sgn}\Biggl[\frac{\epsilon_{z}(\mathbf{k})}{\sin(\gamma_{\nu})}\Bigl(-{\rm sgn}(\alpha_{z})\sin\alpha_{z}\cos(T|M-3\sqrt{3}\nu t_{2}\sin\phi|)-{\rm sgn}(M-3\sqrt{3}\nu t_{2}\sin\phi)\sin(T|M-3\sqrt{3}\nu t_{2}\sin\phi|)\cos\alpha_{z}\Bigr)\Biggr]$$ (10) where $$\gamma_{\nu}=\cos^{-1}\Bigl[\cos\alpha_{z}\cos(T|M-3\sqrt{3}\nu t_{2}\sin\phi|)-{\rm sgn}(\alpha_{z})\,{\rm sgn}(M-3\sqrt{3}\nu t_{2}\sin\phi)\sin\alpha_{z}\sin(T|M-3\sqrt{3}\nu t_{2}\sin\phi|)\Bigr]$$ (11) The denominator in the above expression for $\mathcal{C}$ goes to zero for certain values of the driving $(\alpha_{z},T)$ and of the Haldane model parameters $(M,\phi,t_{1},t_{2})$. Of these, the hopping parameters will usually be considered fixed for a given realization of the model. Here, we are interested in the general conditions that can be deduced from the form of the Chern number and from the behavior of the mass term at the Dirac points under various choices of the driving and model parameters. The condition for the denominator to go to zero, the mass terms to vanish, and hence the Berry curvature to diverge at either of the Dirac points, is $\gamma_{\nu}=n\pi$, with $n=0,\pm 1,\pm 2,\pm 3,\dotsc$. This essentially reduces to the condition $\cos\bigl(|\alpha_{z}|+T\left(M-3\sqrt{3}\nu t_{2}\sin\phi\right)\bigr)=\pm 1$, which implies $|\alpha_{z}|+T\left(M-3\sqrt{3}\nu t_{2}\sin\phi\right)=n\pi$. The numerator of the expression for $\mathcal{C}$ (see eq. (10)), apart from the factor $\epsilon_{z}(\mathbf{k})$, which does not play a role in determining the sign of the term (at the locations of the two Dirac points once one has chosen the valence band for calculating $\mathcal{C}$), goes to zero for $\sin\bigl(|\alpha_{z}|+T\left(M-3\sqrt{3}\nu t_{2}\sin\phi\right)\bigr)=0$. 
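The sign formula of eqs. (10)-(11) is simple enough to evaluate directly. The sketch below is a hypothetical helper of our own (the $\epsilon_{z}(\mathbf{k})$ prefactor is dropped, since per the text it does not influence the sign); for small $T$ and $\alpha_{z}=0$ it reproduces the undriven Haldane criterion $|M|<3\sqrt{3}\,t_{2}|\sin\phi|$ for $|\mathcal{C}|=1$:

```python
import numpy as np

def chern_analytic(M, phi, alpha_z, T, t2=1.0):
    """Evaluate the sign formula for the Chern number, eqs. (10)-(11).

    The epsilon_z(k) prefactor is dropped: it does not influence the
    sign. Valid away from the phase boundaries, where sin(gamma_nu) = 0
    and the formula becomes indeterminate.
    """
    C = 0.0
    for nu in (+1, -1):
        m = M - 3*np.sqrt(3)*nu*t2*np.sin(phi)   # mass at Dirac point nu
        num = (-np.sign(alpha_z)*np.sin(alpha_z)*np.cos(T*abs(m))
               - np.sign(m)*np.sin(T*abs(m))*np.cos(alpha_z))
        gamma = np.arccos(np.cos(alpha_z)*np.cos(T*abs(m))
                          - np.sign(alpha_z)*np.sign(m)
                            *np.sin(alpha_z)*np.sin(T*abs(m)))
        C += 0.5*nu*np.sign(num/np.sin(gamma))
    return C

# Undriven limit: topological iff |M| < 3*sqrt(3)*t2*|sin(phi)|
print(chern_analytic(0.0, np.pi/2, 0.0, 0.1),    # inside the lobe: |C| = 1
      chern_analytic(10.0, np.pi/2, 0.0, 0.1))   # outside the lobe: C = 0
```

Evaluating the same helper at $M$ shifted by $\pi/T$ gives $|\mathcal{C}|=1$ again, with the opposite sign: a periodic copy of the lobes, reflected in $\phi$.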
The appearance of the indeterminate $0/0$ form which seems to occur is regulated in a limiting manner, by the presence of the $\epsilon_{z}(\mathbf{k})$ in the numerator. Thus what we have obtained is the condition for the bands to touch at either one of the Dirac points depending on the value of $\nu$ ($\pm 1$) in the equation $|\alpha_{z}|+T\left(M-3\sqrt{3}\nu t_{2}\sin\phi\right)=n\pi$. This is the modified condition for the boundary sinusoids which enclose the topologically non-trivial phases in the case of the Haldane model under kicking. A couple of features become apparent from this condition. We observe, that the periodic kicking has the effect of modifying the inversion breaking parameter $M$ to $M-\tfrac{(n\pi-|\alpha_{z}|)}{T}$ which depends on the driving parameters $\alpha_{z}$ and $T$. Thus for different values of $n$, there is a specific set of values for $(M,\alpha_{z},T)$ which would satisfy phase boundary conditions similar to the conditions satisfied by the Chern number in the Haldane model. In this case we have a periodic recurrence of the phase diagram plotted between $M/t_{2}$ and $\phi$ along the $M/t_{2}$ axis, as manifested in repeated copies of the original Chern diagram for the undriven model on moving along this axis. Thus, the broad topological behavior of the undriven model is preserved in the driven model but now extends to newer regions of $M$ values for a fixed choice of $t_{2}$. The system under driving begins to explore a larger space of parameters in terms of the occurrence of topological phases. Another feature that comes across is that the new condition for the phase boundaries depends on the magnitude of the driving $|\alpha_{z}|$ and is independent of its sign. In fact, the modification to the inversion breaking factor is such that it depends on the ratio $\alpha_{z}/T$ which encapsulates the complete effect of the driving. 
The appearance of this ratio indicates that the amplitude of the driving can be scaled linearly with the frequency to obtain a class of driven models with identical topological behavior. There is even the possibility of letting the amplitude of the kicking increase gradually and linearly in time, on a scale adiabatic compared with the driving, so that it is effectively constant over several driving periods. With this one may realize a linear-in-time variation of the inversion-breaking term and hence travel from a topologically non-trivial to a topologically trivial phase. This could be of use in schemes looking to quench Chern insulators across a topological phase boundary with a normal insulator in order to study various properties of dynamical topological phase transitions at the quantum critical point Bhattacharya and Dutta (2017). The effect of increasing the driving amplitude from zero (in either the positive or the negative sense), i.e., from the undriven situation, is to shift the Haldane Chern number phases (the pair of lobes bounded by the intersecting sinusoidal phase boundaries) vertically downwards along the $M/t_{2}$ axis from their undriven position. This effect applies to all the periodic copies of the phase diagram along this axis. Let $M^{\prime}$ denote the new effective inversion-breaking parameter in the presence of driving. What we are effectively witnessing is thus a renormalization of the ‘Semenoff mass’ component $M$ in the Haldane mass. In the undriven case there was a unique inversion-breaking site energy $M$, with the phase diagram centered at $(M=0,\phi=0)$; this corresponded to a graphene-like semi-metallic band structure with touchings at both Dirac points. In the driven model this admits multiple values, as seen from $M^{\prime}\equiv M-\tfrac{(n\pi-|\alpha_{z}|)}{T}$, and hence multiple semi-metallic centres $\bigl(M-\tfrac{(n\pi-|\alpha_{z}|)}{T}=0,\ \phi=0\bigr)$ for the different $n$ and $\alpha_{z}$ values. 
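The bookkeeping of these renormalized masses is elementary; the sketch below (a hypothetical helper of our own, following $M^{\prime}=M-(n\pi-|\alpha_{z}|)/T$) lists a few semi-metallic centres and checks two limits quoted in the text:

```python
import numpy as np

def m_eff(M, alpha_z, T, n):
    """Driving-renormalized Semenoff mass M' = M - (n*pi - |alpha_z|)/T."""
    return M - (n*np.pi - abs(alpha_z))/T

T = 1/7.0                      # e.g. T = 1/(2*t1) with t1 = 3.5
# Undriven Semenoff masses M that become semi-metallic centres (M' = 0)
# at zero driving amplitude:
centres = [(n*np.pi - 0.0)/T for n in range(-2, 3)]
print(centres)                 # spaced by pi/T along the M axis
```

Note that at $\alpha_{z}=\pm\pi$ and $n=1$ the helper returns $M^{\prime}=M$, which is the "exact inversion" situation discussed below in the text.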
The $n$ values define a set of several ‘Semenoff masses’ at a given non-zero kicking, all of which are valid choices around which topological phases can manifest. There is now a multiplicity of possible undriven Semenoff mass choices $M$ which yield $M^{\prime}=0$. The period of driving $T$, which we fix through a specific $t_{1}$, decides the separation between the centres for a given driving. Thus the zero-driving case does not collapse to a topological phase structure with a single $M$ value (the original Haldane model) but still shows a multitude of such phase diagrams, which may be regarded as a consequence of the folding, or periodicity, in the Floquet quasienergies. This hints that the topological phase diagrams repeat identically at a separation of $2\pi/T$ in the $M$ values, which is exactly the width of a quasienergy Brillouin zone. Varying the driving $\alpha_{z}$, on the other hand, for a fixed choice of $n$ and $M$, is more physically plausible and interesting, as it would take a chosen undriven model $(M,\phi)$ through a topological transition. This is very much like quantum Hall plateau transitions with an adiabatically varying magnetic field. An interesting feature that shows up is that, for a given kicking amplitude, at $M$ values $\tfrac{(n\pi-|\alpha_{z}|)}{T}$ for different $n$, say $0$ and $1$, the Chern number phases are reflected about the $\phi=0$ line in the phase diagram. This is of more significance when one varies the driving amplitude $\alpha_{z}$ to the relatively high regime of $\pi$ or $-\pi$. Then the Semenoff mass $M^{\prime}$ after driving is equal to the undriven one $M$ for $n=1$, which is clear from the relation. So the Chern number phase diagram with its phases reflected about $\phi=0$ now occupies the region of the phase diagram where the undriven Haldane Chern number diagram was valid earlier, and thus in this extreme driving condition the topological phases undergo an exact inversion. 
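The shift of the phases with $|\alpha_{z}|$ can also be checked numerically. The following sketch is our own construction, not taken from the text: it assumes the one-period Floquet operator $U(\mathbf{k})=e^{-i\alpha_{z}\sigma^{z}}\,e^{-iH(\mathbf{k})T}$ for the kicked model (with a standard Bloch form of the Haldane Hamiltonian, nearest-neighbor distance 1) and computes the Chern number of the Floquet valence band with the gauge-invariant Fukui-Hatsugai lattice method:

```python
import numpy as np

# Honeycomb NN (A) and NNN (B) vectors, nearest-neighbor distance = 1
A = np.array([[0.0, 1.0], [-np.sqrt(3)/2, -0.5], [np.sqrt(3)/2, -0.5]])
B = np.array([A[1] - A[2], A[2] - A[0], A[0] - A[1]])
SX = np.array([[0, 1], [1, 0]], complex)
SY = np.array([[0, -1j], [1j, 0]], complex)
SZ = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2, dtype=complex)

def floquet_U(k, t1, t2, M, phi, alpha_z, T):
    """One-period evolution: free Haldane evolution followed by the kick."""
    h0 = 2*t2*np.cos(phi)*np.sum(np.cos(B @ k))
    hv = np.array([t1*np.sum(np.cos(A @ k)),
                   t1*np.sum(np.sin(A @ k)),
                   M - 2*t2*np.sin(phi)*np.sum(np.sin(B @ k))])
    r = np.linalg.norm(hv)
    # closed-form exp(-i T (h0 I + h.sigma)) for a 2x2 Hamiltonian
    if r > 1e-12:
        hs = hv[0]*SX + hv[1]*SY + hv[2]*SZ
        UH = np.exp(-1j*T*h0)*(np.cos(T*r)*I2 - 1j*(np.sin(T*r)/r)*hs)
    else:
        UH = np.exp(-1j*T*h0)*I2
    kick = np.diag([np.exp(-1j*alpha_z), np.exp(1j*alpha_z)])
    return kick @ UH

def chern_floquet(t1, t2, M, phi, alpha_z, T, N=24):
    """Fukui-Hatsugai Chern number of the Floquet valence band."""
    L = np.array([A[0] - A[1], A[0] - A[2]])     # two Bravais vectors
    G = 2*np.pi*np.linalg.inv(L).T               # rows: reciprocal vectors
    u = np.empty((N + 1, N + 1, 2), complex)
    for i in range(N + 1):
        for j in range(N + 1):
            k = (i/N)*G[0] + (j/N)*G[1]
            lam, V = np.linalg.eig(floquet_U(k, t1, t2, M, phi, alpha_z, T))
            band = np.argmin(-np.angle(lam)/T)   # lower quasienergy branch
            u[i, j] = V[:, band]/np.linalg.norm(V[:, band])
    F = 0.0
    for i in range(N):
        for j in range(N):
            # gauge-invariant Berry flux through one plaquette
            F += np.angle(np.vdot(u[i, j], u[i+1, j])
                          * np.vdot(u[i+1, j], u[i+1, j+1])
                          * np.vdot(u[i+1, j+1], u[i, j+1])
                          * np.vdot(u[i, j+1], u[i, j]))
    return F/(2*np.pi)

t1, t2, T = 3.5, 1.0, 1/7.0      # T = 1/(2*t1), no quasienergy folding here
print(chern_floquet(t1, t2, 0.0, np.pi/2, 0.0, T),      # undriven: |C| = 1
      chern_floquet(t1, t2, 0.0, np.pi/2, np.pi/2, T))  # kicked: C = 0
```

With these parameters the undriven point $M=0$, $\phi=\pi/2$ gives $|\mathcal{C}|=1$, while a kick of $\alpha_{z}=\pi/2$ drives the same point trivial, consistent with the boundary condition $|\alpha_{z}|+T(M-3\sqrt{3}\nu t_{2}\sin\phi)=n\pi$ being crossed at $|\alpha_{z}|\approx 0.74$.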
This indicates that even inversion breaking, taken to a certain extreme, may alter the band topology of a Chern insulator, at least in the presence of driving. However, physically there are issues with such large kicking amplitudes, some of which have been discussed earlier. Again one has to exercise some caution here, on account of the folding of the quasienergies. There is always the possibility of band touchings occurring at the extreme ends of the spectrum (the quasienergy Brillouin zone boundaries), besides the conventional ones at the middle of the spectrum which occur in both the undriven and driven cases. This could cause the Chern numbers of the two bands to invert. Indeed, what we see here is that the inversion in phases is due to these band touchings at the $\pm\pi/T$ limits of the folded spectrum, and hence to the gap closing at the edges of the quasienergy Brillouin zone. These arguments sit well with the previous discussion of the appearance of flat-band behavior in the conduction band as the driving amplitude is increased, since the conduction band starts to acquire the curvature characteristics which the valence band possesses at zero driving. Thereby an exact reversal of structure occurs between the valence and conduction bands, and the Chern numbers flip due to this new closing-opening transition. The band structure is shown in Fig. 3(b). IV.2.2 Evidence from Phase Diagrams To illustrate the various aspects of the topological phase diagram for the driven Haldane model we refer to Fig. 4. These figures are for parameter values $t_{1}=3.5$ and $t_{2}=1$, which satisfy the band overlap prevention condition. Again, we caution that this condition is modified in the presence of driving, as hinted at on several earlier occasions, so $t_{2}$ would have to be changed beyond a certain driving-amplitude regime; here this is ignored, as the broad topological behavior is unaffected by it. The driving period is fixed at $T=\tfrac{1}{2t_{1}}$. 
This choice, as stated earlier, ensures that the limits of the Floquet quasienergy Brillouin zone remain beyond the bandwidth of the undriven model, and it manifests in the phase diagrams as avoided overlaps between the different replicas of the intersecting sinusoids that are seen one below the other in Fig. 4. Other previously discussed features also become apparent. For instance, one can look at the plots in Figs. 4(a) and (b), which are for $\alpha_{z}=0$ and $\alpha_{z}=\pi$ respectively, and note that when the driving is taken to such extremes the band-topology inversion spoken of earlier occurs. Additionally, although $\alpha_{z}$ is zero for plot (a), and one does indeed see the undriven Haldane model phase diagram around the $(M=0,\phi=0)$ centre, there are still copies of similar non-trivial topological phases along the $M/t_{2}$ axis which are absent in the original Haldane model. This indicates that the stroboscopic Floquet Hamiltonian does not converge to the unperturbed Hamiltonian simply by taking the driving amplitude to zero. One also has to take the limit of the driving period becoming very small, ideally going to zero. It is in this limit that one recovers the undriven model, and this holds for the phases in plot (a) of Fig. 4, as the other topological phase regions get pushed out to infinity and one obtains Haldane’s original phase diagram. This observation is consistent with the fact that, in the limit of infinitely large driving frequency, one is left precisely with the undriven Hamiltonian as the exact description of the system. This is so because the separation between two pairs of intersecting sinusoids that delineate two topological phase regions is decided by the corresponding driving-renormalized Semenoff masses, and the difference between these masses depends on the driving frequency. 
Thus one can easily see that the effect of varying the driving frequency, say decreasing it in our model, would be to bring the adjacent topological regions, enclosed between their respective pairs of sinusoids, nearer to one another. Eventually, at the lower driving-frequency limit of which we have spoken earlier, the Haldane-like topological phase diagram copies are close enough for the sinusoids of adjacent diagrams to just touch each other. Going lower in frequency would take one into the forbidden limit where these non-trivial regions begin to overlap. Further, if one looks at plots (c) and (d) of Fig. 4, we see that driving amplitudes of $\alpha_{z}=\pi/4$ and $\pi/2$, respectively, have the effect of shifting the topological phases away from the parameter regions which were topologically non-trivial in the undriven situation. Thus in plot (c) one can clearly see the new topological region shifted with respect to the phase boundary of the undriven model, which is the pair of sinusoids intersecting at the origin of the phase plane. In particular, the upper half of the region enclosed between the undriven model’s phase boundaries is now topologically trivial. Thus, increasing the driving shifts the phases in a linear fashion. One may consider some choice of undriven model parameters $M$ and $\phi$ for which the system is in a topological phase; after a certain magnitude of driving, the model enters a topologically trivial phase. The change in the driving amplitude can thus, as discussed earlier, bring about a plateau transition in the Chern number. This effect is more pronounced in plot (d), where the entire parameter range which was topological in the undriven case is now trivial. Hence the driving does offer a path to transition between non-zero and zero Chern numbers, and may therefore be used to study the normal-to-Chern-insulator transition in such simple non-interacting systems. 
Fig. 5 illustrates the topological phases of the kicked model when viewed in different cross-sections of the solid three-dimensional structure that would result if the various phase plots for the $\alpha_{z}$ values, such as those in Fig. 4, were stacked in proper sequence, one above the other, along an out-of-plane $\alpha_{z}$ axis. In this figure, all the parameter values needed to obtain the plots are chosen to be the same as those used for Fig. 4. Plot (a) in the figure depicts the behavior of the topological regions for a $\phi$ value fixed at $\pi/3$, with $M/t_{2}$ and $\alpha_{z}$ being varied. The linear variation of the phases in this picture reveals the linear shift, with changing driving, of the sinusoidal lobes seen in the plots of Fig. 4. Additionally, the sharp turn in slope, as if a reflection, of these linear phase regions (which are basically tubes with sinusoidal cross-sections) at $\alpha_{z}=0$ indicates that the driving dependence is purely on the magnitude, i.e., on $|\alpha_{z}|$. Once this picture is established it becomes easier to interpret the other two plots, (b) and (c), which show the $\phi$–$\alpha_{z}$ phase plane for $M/t_{2}$ values of $2$ and $10$, respectively. A constant $M/t_{2}$ can be understood as a plane that slices a kind of pan-flute structure formed by the tubes of intersecting sinusoids. On this plane one thus expects to obtain the projections of the tubes that are cut, which naturally depends on where one chooses to slice. Where such flutes of different inclination meet, which is at the turning point $\alpha_{z}=0$ or, if one considers the full periodicity, $n\pi$, they form an intersecting sinusoidal edge. If the slice is chosen such that it cuts above or below the exact centre of this ridge, i.e., $M\neq 0$, then the projection on the corresponding $\phi$–$\alpha_{z}$ plane will have a pair of non-touching sinusoids at the centre. This is what shows up in the middle of plot (b). 
Of course, the slice may also be chosen so that it lies outside this intersecting sinusoidal edge, in which case it cuts the nearest sloping flute tubes and results in a projection with touching sinusoids, as is the case in plot (c). Due to the inherent periodicity of the phase diagram structure, as one goes through a complete period of the $M/t_{2}$ choices the projections begin to show the underlying periodicity. IV.3 Modifications to Haldane’s overlap criterion due to kicking This broadly concludes our discussion of the topological features of the kicked Haldane model. We now turn our attention to the issue of avoiding band overlap in the presence of driving, a concern which has been expressed repeatedly at various points in the above discussion in different contexts. The prime consideration is to have the bands touch in a way that the spectrum allows these touchings to be detected without ambiguity. This imposes a relation on the hopping parameters, since the relative magnitude of $t_{1}$ and $t_{2}$ influences the degree of particle-hole symmetry breaking in the system and hence the nature of the touchings. Along lines similar to the arguments for Haldane’s criterion, we obtain the following condition that needs to be satisfied: $$9t_{2}<\cos^{-1}\biggl[\cos(\alpha_{z})\cos\bigl(T\sqrt{9t_{1}^{2}+M^{2}}\bigr)-\frac{M\sin(\alpha_{z})\sin\bigl(T\sqrt{9t_{1}^{2}+M^{2}}\bigr)}{\sqrt{9t_{1}^{2}+M^{2}}}\biggr]-\cos^{-1}\biggl[\cos(\alpha_{z})\cos(T|M|)-{\rm sgn}(M)\sin(\alpha_{z})\sin(T|M|)\biggr]$$ (12) This inequality imposes a condition on the suitable values of $t_{2}$ once $t_{1}$ has been chosen, with the driving also playing a role in determining this value: both the driving amplitude $\alpha_{z}$ and the driving period $T$ appear in the above expression. 
As in our earlier analysis, $T$ can be taken to depend in an appropriate way on the nearest-neighbor hopping $t_{1}$. The mass $M$ can be written in terms of the driving amplitude using the previously derived expressions for the new Semenoff masses $M^{\prime}$, depending on which $n$-th order semi-metallic centre one is looking at to observe the band touchings, by setting that particular choice of $M^{\prime}$ to zero or $n\pi$. Thus the condition can be reduced to depend solely on $\alpha_{z}$ and $t_{1}$. Another feature of this condition is that, unlike the ordinary one given by Haldane, which is a simple reciprocal relationship between the two hopping energies, the above relation is not easily invertible to the case where one fixes $t_{1}$ and calculates the condition on $t_{2}$. In the context of varying $\alpha_{z}$ for a fixed $n$ in the choice of $M^{\prime}$, or of changing $n$ for fixed $\alpha_{z}$, the variation in the choice of $t_{2}$ will have the effect of altering the boundary sinusoids of the corresponding phase diagrams in the parameter space. Thus, if one were to enforce this condition rigorously, which we have ignored for now in the phase diagrams of Fig. 4, where $t_{2}$ is fixed at unity, we would observe a flattening or broadening of the pairs of intersecting sinusoids. This follows from the fact that changing $t_{2}$, say in the diagram of a given $\alpha_{z}$ for different $M$ and hence $n$ values, would rescale the vertical axis of the diagram. We would like to point out that adjusting $t_{2}$ is a freedom available only in certain realizations, as mentioned earlier; hence, if one is interested in driving the system across a topological transition, it would be reasonable to do so in the previously suggested range $\alpha_{z}\in[-1,1]$, since within this domain the ordinary Haldane condition is a workable choice and one need not be overly concerned about the effects of driving in this regard. 
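To get a feel for the numbers, the sketch below evaluates the right-hand side of eq. (12) as printed (a hypothetical helper of our own; `bound_9t2` returns the upper bound on $9t_{2}$ for given $\alpha_{z}$, $T$, $M$, $t_{1}$). At $\alpha_{z}=0$ and small enough $T$ the bound collapses to $T\bigl(\sqrt{9t_{1}^{2}+M^{2}}-|M|\bigr)$, i.e., to a Haldane-like linear relation between the hoppings:

```python
import numpy as np

def bound_9t2(alpha_z, T, M, t1):
    """Right-hand side of eq. (12): the upper bound on 9*t2."""
    w = np.sqrt(9*t1**2 + M**2)
    a = np.arccos(np.cos(alpha_z)*np.cos(T*w)
                  - M*np.sin(alpha_z)*np.sin(T*w)/w)
    b = np.arccos(np.cos(alpha_z)*np.cos(T*abs(M))
                  - np.sign(M)*np.sin(alpha_z)*np.sin(T*abs(M)))
    return a - b

t1, T = 3.5, 1/(2*3.5)
print(bound_9t2(0.0, T, 1.0, t1))      # undriven bound at M = 1
print(bound_9t2(np.pi/4, T, 1.0, t1))  # how the kick modifies the bound
```

Both arccos arguments are of the form $\cos x\cos y - c\sin x\sin y$ with $|c|\le 1$, so they always lie in $[-1,1]$ and the helper is well defined for any driving.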
V Conclusion We have considered a $\hat{z}$-kicked Haldane model and examined the topological properties of this system. The effects of driving on the topological phase diagram of Haldane’s originally proposed model are illustrated. We find that, besides introducing a periodicity in the phase diagram whereby the Haldane phase diagram is repeated at regular intervals along the inversion-breaking axis $M/t_{2}$ (a signature of the periodicity of the Floquet quasienergy spectrum), the driving magnitude is solely responsible for a linear shift of the topological phases of the driven model relative to their undriven counterparts. This suggests the use of this driven model to study Floquet topological phase transitions. This is different from the optically driven Haldane models of Inoue and Tanaka (2010); Wang et al. (2012), where the tunable parameter acts on the time-reversal symmetry breaking by modifying those terms of the effective Hamiltonian which depend on the phase of the complex-valued next-nearest-neighbor hoppings of the undriven Haldane model. Although the overall effect is still to traverse between topological and non-topological regions of the Haldane Chern number phase plot drawn against the symmetry-breaking parameters, this is brought about in a different manner. To be precise, the distinction becomes fully apparent when one considers the effective Hamiltonian after driving in the vicinity of a Dirac point, which would usually be gapped in the given case. Also, at sufficiently large amplitudes the driving causes a modification of the band overlap avoidance criterion originally suggested by Haldane for his model. Finally, we would like to mention that this kicking scheme could also be applied to the Kane-Mele model for spin-orbit coupling in hexagonal lattices Kane and Mele (2005a, b), to study the effect of driving on the $Z_{2}$ topological index which characterizes the topology of such QSHE systems. 
This is proposed as a future work that we intend to undertake. VI Acknowledgements T.M. thanks UGC, India for funding through a SRF and would like to acknowledge discussions with Prof. Diptiman Sen, Prof. Amit Dutta and Dr. Utso Bhattacharya in meetings at ICTS. A.P. would like to thank Dr. Adhip Agarwala for valuable inputs. T.G.S and J.N.B thank DST-SERB, India for Project No. EMR/2016/003289. They would also like to thank Prof. T. Oka for discussions and Prof. N. Mukunda for providing his lecture notes on various aspects of geometric phase. References Thouless et al. (1982) D. J. Thouless, M. Kohmoto, M. P. Nightingale,  and M. den Nijs, Phys. Rev. Lett. 49, 405 (1982). Halperin (1982) B. I. Halperin, Phys. Rev. B 25, 2185 (1982). Avron et al. (1983) J. E. Avron, R. Seiler,  and B. Simon, Phys. Rev. Lett. 51, 51 (1983). Niu et al. (1985) Q. Niu, D. J. Thouless,  and Y.-S. Wu, Phys. Rev. B 31, 3372 (1985). Hatsugai (1993) Y. Hatsugai, Phys. Rev. Lett. 71, 3697 (1993). Klitzing et al. (1980) K. v. Klitzing, G. Dorda,  and M. Pepper, Phys. Rev. Lett. 45, 494 (1980). Laughlin (1981) R. B. Laughlin, Phys. Rev. B 23, 5632 (1981). Bernevig and Zhang (2006) B. A. Bernevig and S.-C. Zhang, Phys. Rev. Lett. 96, 106802 (2006). Bernevig et al. (2006) B. A. Bernevig, T. L. Hughes,  and S.-C. Zhang, Science 314, 1757 (2006), http://science.sciencemag.org/content/314/5806/1757.full.pdf . Kane and Mele (2005a) C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005a). Kane and Mele (2005b) C. Kane and E. Mele, Phys. Rev. Lett. 95, 146802 (2005b). Fu et al. (2007) L. Fu, C. L. Kane,  and E. J. Mele, Phys. Rev. Lett. 98, 106803 (2007). Moore (2010) J. E. Moore, Nature 464, 194 (2010). Chen et al. (2009) Y. Chen, J. Analytis, J.-H. Chu, Z. Liu, S.-K. Mo, X.-L. Qi, H. Zhang, D. Lu, X. Dai, Z. Fang, et al., Science 325, 178 (2009). Hasan et al. (2014) M. Z. Hasan, S.-Y. Xu, D. Hsieh, L. A. Wray,  and Y. Xia, arXiv preprint arXiv:1401.0848  (2014). Hasan and Kane (2010) M. Z. 
Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010). Qi and Zhang (2011) X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011). Altland and Zirnbauer (1997) A. Altland and M. R. Zirnbauer, Phys. Rev. B 55, 1142 (1997). Schnyder et al. (2008) A. P. Schnyder, S. Ryu, A. Furusaki,  and A. W. W. Ludwig, Phys. Rev. B 78, 195125 (2008). Kitaev (2009) A. Kitaev, in AIP Conference Proceedings, Vol. 1134 (2009). Castro Neto et al. (2009) A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov,  and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009). Das Sarma et al. (2011) S. Das Sarma, S. Adam, E. H. Hwang,  and E. Rossi, Rev. Mod. Phys. 83, 407 (2011). Goerbig (2011) M. O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011). Delplace et al. (2011) P. Delplace, D. Ullmo,  and G. Montambaux, Phys. Rev. B 84, 195452 (2011). Hatsugai et al. (2006) Y. Hatsugai, T. Fukui,  and H. Aoki, Phys. Rev. B 74, 205414 (2006). Zhang et al. (2005) Y. Zhang, Y.-W. Tan, H. L. Stormer,  and P. Kim, Nature 438, 201 (2005). Koghee et al. (2012) S. Koghee, L.-K. Lim, M. O. Goerbig,  and C. M. Smith, Phys. Rev. A 85, 023637 (2012). Tarruell et al. (2012) L. Tarruell, D. Greif, T. Uehlinger, G. Jotzu,  and T. Esslinger, Nature 483, 302 (2012). Rechtsman et al. (2013a) M. C. Rechtsman, Y. Plotnik, J. M. Zeuner, D. Song, Z. Chen, A. Szameit,  and M. Segev, Phys. Rev. Lett. 111, 103901 (2013a). Jotzu et al. (2014) G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif,  and T. Esslinger, Nature 515, 237 (2014). Oka and Aoki (2009) T. Oka and H. Aoki, Phys. Rev. B 79, 081406 (2009). Kitagawa et al. (2011) T. Kitagawa, T. Oka, A. Brataas, L. Fu,  and E. Demler, Phys. Rev. B 84, 235108 (2011). Gu et al. (2011) Z. Gu, H. A. Fertig, D. P. Arovas,  and A. Auerbach, Phys. Rev. Lett. 107, 216601 (2011). Suárez Morell and Foa Torres (2012) E. Suárez Morell and L. E. F. Foa Torres, Phys. Rev. B 86, 125449 (2012). Iadecola et al. (2013) T. Iadecola, D. Campbell, C. Chamon, C.-Y. Hou, R. Jackiw, S.-Y. 
A brief survey on singularities of geodesic flows in smooth signature changing metrics on 2-surfaces N.G. Pavlova (Department of Nonlinear Analysis and Optimization, RUDN University, Moscow, Russia. Email: [email protected]) and A.O. Remizov (CMAP, École Polytechnique, CNRS, Palaiseau, France. Email: [email protected]) Abstract We present a survey on generic singularities of geodesic flows in smooth signature changing metrics (often called pseudo-Riemannian) in dimension 2. Generically, a pseudo-Riemannian metric on a 2-manifold $S$ changes its signature (degenerates) along a curve $S_{0}$, which locally separates $S$ into a Riemannian ($R$) and a Lorentzian ($L$) domain. The geodesic flow does not have singularities over $R$ and $L$, and for any point $q\in R\cup L$ and every tangential direction $p\in{\mathbb{R}}{\mathbb{P}}$ there exists a unique geodesic passing through the point $q$ with the direction $p$. On the contrary, geodesics cannot pass through a point $q\in S_{0}$ in arbitrary tangential directions, but only in some admissible directions; the number of admissible directions is 1, 2, or 3. We study this phenomenon and the local properties of geodesics near $q\in S_{0}$. 2010 Mathematics Subject Classification: 53C22, 53B30, 34C05. Key words and phrases: pseudo-Riemannian metrics, geodesics, singular points, normal forms. 1 Introduction Let $S$ be a real smooth manifold, $\operatorname{dim}S=n\geq 2$. By a metric on $S$ we mean a symmetric covariant tensor field of order two on the tangent bundle $TS$, not necessarily positive definite. Metrics whose signature differs at different points of $S$ are of special interest. For instance, in the quantum theory of gravitation and in general relativity, two types of signature changing metrics are considered: • Smooth.
The metric is degenerate on a hypersurface $S_{0}\subset S$ that divides the Riemannian region $R\subset S$ with signature $(+\cdots++)$ from the Lorentzian region $L\subset S$ with signature $(+\cdots+-)$. Example: $ds^{2}=dx_{1}^{2}+\cdots+dx_{n-1}^{2}+x_{n}dx_{n}^{2}$. • Discontinuous. The metric is smooth and non-degenerate everywhere except for a hypersurface $S_{0}\subset S$ (which separates $R$ and $L$, defined as above), where it fails to be continuous. Example: $ds^{2}=dx_{1}^{2}+\cdots+dx_{n-1}^{2}+\frac{1}{x_{n}}dx_{n}^{2}$. In the paper [27], the Russian physicist A.D. Sakharov conjectured that there exist states of the physical continuum which include regions with different signatures of the metric; the observed Universe and an infinite number of other Universes arose as a result of quantum transitions with a change in the signature of the metric. This concept is exemplified by Fig. 1. In his cosmological model, Sakharov used discontinuous metrics. However, some other authors consider models with smooth signature changing metrics; see e.g. [1, 17, 18, 19] and the references therein. From a physical viewpoint, the difference between smooth and discontinuous signature changing metrics corresponds to different physical proposals, in particular, to different solutions of the Einstein equation. Euclidean–Lorentzian transitions (junctions) between the domains $R$ and $L$ play an important role, both in the smooth and in the discontinuous models. The term Euclidean is used in the sense of Riemannian, as is typical in the physics literature, see e.g. [2]. Similarly, the term Lorentzian refers to non-degenerate indefinite metrics. In this paper, we discuss a purely mathematical problem connected with smooth signature changing metrics (further called pseudo-Riemannian): the local behavior of geodesics in a neighborhood of the points where the metric has a generic degeneracy.
Such points are singular points of the geodesic flow, and the standard existence and uniqueness theorem for ordinary differential equations is not applicable. This leads to an interesting geometric phenomenon: geodesics cannot pass through a degenerate point in arbitrary tangential directions, but only in certain directions said to be admissible. A study of this phenomenon for two-dimensional pseudo-Riemannian metrics was started in [13, 24, 25, 26]; similar results in the three-dimensional case were announced in [22]. In these works, mainly the local properties of geodesics and geodesic flows were considered; some global properties of geodesics of pseudo-Riemannian metrics with differentiable groups of symmetries are investigated in [25]. This allows one, in particular, to obtain the phase portraits of geodesics on surfaces of revolution (sphere, torus, etc.) embedded in three-dimensional Minkowski space. Various other aspects of pseudo-Riemannian metrics (including the Gauss–Bonnet formula) are treated by many authors, see e.g. [12, 16, 18, 19, 20, 21, 28] and the references therein. However, there exist a number of unsolved problems connected with the degeneracy of metrics. To our knowledge, the problem of local geodesic equivalence of pseudo-Riemannian metrics at degenerate points has not been studied yet, although it is well studied for Riemannian and Lorentzian metrics, see e.g. [7] (in this paper, the authors call pseudo-Riemannian what we call Lorentzian, i.e., non-degenerate indefinite metrics). From now on we always assume that $\operatorname{dim}S=2$. Just as Riemannian metrics naturally appear on surfaces embedded in Euclidean space, pseudo-Riemannian metrics are naturally generated on surfaces embedded in pseudo-Euclidean space. Let $S$ be a smooth surface embedded in 3D Minkowski space $(X,Y,Z)$ with the pseudo-Euclidean metric $dX^{2}+dY^{2}-dZ^{2}$. Then the pseudo-Euclidean metric in the ambient $(X,Y,Z)$-space induces a pseudo-Riemannian metric on $S$.
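This induced metric can be computed explicitly for a concrete surface. Below is a minimal symbolic sketch with sympy for the unit sphere (the spherical parametrization and all variable names are our choice for illustration), locating the degeneracy on two parallels $Z=\pm 1/\sqrt{2}$:

```python
import sympy as sp

th, ph = sp.symbols('theta phi', real=True)

# Unit sphere X^2 + Y^2 + Z^2 = 1 in standard spherical coordinates
X, Y, Z = sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)

# Pull back the Minkowski metric dX^2 + dY^2 - dZ^2 to the sphere:
# induced metric a dtheta^2 + 2b dtheta dphi + c dphi^2
a = sp.simplify(sp.diff(X, th)**2 + sp.diff(Y, th)**2 - sp.diff(Z, th)**2)
b = sp.simplify(sp.diff(X, th)*sp.diff(X, ph)
                + sp.diff(Y, th)*sp.diff(Y, ph)
                - sp.diff(Z, th)*sp.diff(Z, ph))
c = sp.simplify(sp.diff(X, ph)**2 + sp.diff(Y, ph)**2 - sp.diff(Z, ph)**2)

Delta = sp.simplify(a*c - b**2)   # discriminant of the induced metric

# Delta = cos(2*theta)*sin(theta)^2 vanishes on the parallels
# Z = cos(theta) = +/- 1/sqrt(2), i.e. theta = pi/4 and 3*pi/4
print(sp.simplify(Delta - sp.cos(2*th)*sp.sin(th)**2))      # 0
print(Delta.subs(th, sp.pi/4), Delta.subs(th, 3*sp.pi/4))   # 0 0
print(Delta.subs(th, sp.pi/6) > 0)   # True: Riemannian (North region)
print(Delta.subs(th, sp.pi/2) < 0)   # True: Lorentzian (equatorial belt)
```

The signs of $\Delta$ reproduce the three regions of constant signature discussed next.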
For instance, let $S$ be the standard Euclidean sphere $$X^{2}+Y^{2}+Z^{2}=1.$$ The metric induced on the sphere $S$ degenerates on two parallels $Z=\pm 1/{\sqrt{2}}$, which separate $S$ into three regions, where the metric has constant signatures. The North $\bigl{(}Z>1/{\sqrt{2}}\bigr{)}$ and the South $\bigl{(}Z<-1/{\sqrt{2}}\bigr{)}$ regions are Riemannian, while the equatorial region $|Z|<1/{\sqrt{2}}$ is Lorentzian; see Fig. 2 (left). Whether a point $q\in S$ belongs to $R$, $S_{0}$, or $L$ depends on the mutual position of the tangent plane $T_{q}S$ and the isotropic (light) cone $$dX^{2}+dY^{2}-dZ^{2}=0;$$ see Fig. 2 (right). 2 Definition of geodesics Consider a two-dimensional manifold (surface) $S$ with pseudo-Riemannian metric $$ds^{2}=a(x,y)\,dx^{2}+2b(x,y)\,dxdy+c(x,y)\,dy^{2},$$ (1) whose coefficients are smooth (i.e., $C^{\infty}$). Geodesics in the metric (1) can be defined via variational principles similarly to the Riemannian case, with some additional nuances. For instance, the arc-length parametrization is not defined for isotropic lines (also called lightlike lines or null curves). Moreover, the Lagrangian of the length functional $$J_{l}(\gamma)=\int\limits_{\gamma}\sqrt{a\dot{x}^{2}+2b\dot{x}\dot{y}+c\dot{y}^{2}}\,dt\ \to\ {\rm extr},$$ where the dot means differentiation with respect to the parameter $t$, fails to be differentiable on the isotropic surface $\mathscr{F}$: $$a(x,y)\,dx^{2}+2b(x,y)\,dxdy+c(x,y)\,dy^{2}=0,$$ (2) and the Euler–Lagrange equation for the length functional is not defined on $\mathscr{F}$. Note that equation (2) defines the isotropic surface $\mathscr{F}$ in the complement of the zero section of $TS$ or, equivalently, in the projectivized tangent bundle $PTS$. Binary differential equation (2) defines a direction field on $\mathscr{F}$, whose integral curves correspond to isotropic lines in the metric (1).
This equation plays an important role for understanding the behavior of geodesics, and we consider it in more detail below. As already mentioned above, the Euler–Lagrange equation for the length functional $J_{l}$ does not allow one to define extremals on $\mathscr{F}$. However, this problem does not arise if we define geodesics as extremals of the action functional $$J_{a}(\gamma)=\int\limits_{\gamma}(a\dot{x}^{2}+2b\dot{x}\dot{y}+c\dot{y}^{2})\,dt\ \to\ {\rm extr}.$$ The corresponding Euler–Lagrange system reads $$\left\{\ \begin{aligned} &\displaystyle 2(a\ddot{x}+b\ddot{y})=(c_{x}-2b_{y})\dot{y}^{2}-2a_{y}\dot{x}\dot{y}-a_{x}\dot{x}^{2},\\ &\displaystyle 2(b\ddot{x}+c\ddot{y})=(a_{y}-2b_{x})\dot{x}^{2}-2c_{x}\dot{x}\dot{y}-c_{y}\dot{y}^{2},\\ \end{aligned}\right.$$ (3) and the corresponding parametrization is called natural or canonical. Obviously, the definition of geodesics as auto-parallel curves in the Levi–Civita connection generated by the metric (1) leads to the same equation (3). The natural parametrization is well defined for all types of geodesics, including isotropic ones. For non-isotropic geodesics it coincides with the arc-length parametrization (of course, the length here may be real or imaginary). The functionals $J_{l}$ (length) and $J_{a}$ (action) define the corresponding fields of extremals: $\chi_{l}$ on $PTS$ away from $\mathscr{F}$ and $\chi_{a}$ on the complement of the zero section of $TS$ (including $\mathscr{F}$). The relationship between the fields $\chi_{l}$ and $\chi_{a}$ is as follows (see also Fig. 3).
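System (3) can be reproduced mechanically from the action functional. The sketch below (sympy; the concrete coefficients $a,b,c$ are an arbitrary choice of ours, purely for illustration) derives the Euler–Lagrange equations of $J_{a}$ and checks them against the right-hand sides of (3) term by term:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, u, v = sp.symbols('t u v')
x, y = sp.Function('x')(t), sp.Function('y')(t)

# an arbitrary concrete metric a du^2 + 2b du dv + c dv^2 (illustration only)
A, B, C = 1 + u*v, u - v**2, 2 + sp.exp(u)
a, b, c = (f.subs({u: x, v: y}) for f in (A, B, C))

xd, yd = sp.diff(x, t), sp.diff(y, t)
xdd, ydd = sp.diff(x, t, 2), sp.diff(y, t, 2)

# Euler-Lagrange equations of the action functional J_a
L = a*xd**2 + 2*b*xd*yd + c*yd**2
el_x, el_y = euler_equations(L, [x, y], t)

# system (3), written with the partial derivatives of the coefficients
ax, ay, bx, by, cx, cy = (sp.diff(f, w).subs({u: x, v: y})
                          for f in (A, B, C) for w in (u, v))
eq1 = 2*(a*xdd + b*ydd) - ((cx - 2*by)*yd**2 - 2*ay*xd*yd - ax*xd**2)
eq2 = 2*(b*xdd + c*ydd) - ((ay - 2*bx)*xd**2 - 2*cx*xd*yd - cy*yd**2)

# sympy returns Eq(dL/dx - d/dt dL/dxdot, 0), i.e. minus our eq1, eq2
print(sp.simplify(el_x.lhs + eq1), sp.simplify(el_y.lhs + eq2))
```

Both printed residuals vanish, confirming that (3) agrees with the Euler–Lagrange equations of $J_{a}$ up to the overall sign convention.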
The natural projectivization $\Pi\colon TS\to PTS$ sends the field $\chi_{a}$ to a direction field on $PTS$, which is parallel to the vector field $$\vec{V}=2\Delta\biggl{(}\frac{\partial}{\partial x}+p\frac{\partial}{\partial y}\biggr{)}+M\frac{\partial}{\partial p},\ \quad p=\frac{dy}{dx},$$ (4) where $$\Delta(x,y)=ac-b^{2},\ \ \ M(x,y,p)=\sum\limits_{i=0}^{3}\mu_{i}(x,y)p^{i},$$ with the coefficients $$\displaystyle\mu_{0}=a(a_{y}-2b_{x})+a_{x}b,$$ (5) $$\displaystyle\mu_{1}=b(3a_{y}-2b_{x})+a_{x}c-2ac_{x},$$ $$\displaystyle\mu_{2}=b(2b_{y}-3c_{x})+2a_{y}c-ac_{y},$$ $$\displaystyle\mu_{3}=c(2b_{y}-c_{x})-bc_{y}.$$ The vector field $\vec{V}$ given by (4) is defined and smooth at all points of $PTS$, including the isotropic surface $\mathscr{F}$. It is worth observing that the direction field $\chi_{l}$ is parallel to (4) at all points where $\chi_{l}$ is defined, i.e., at all points away from the surface $\mathscr{F}$. One can interpret the direction field given by (4) as a natural extension of $\chi_{l}$ to $\mathscr{F}$. This brings us to the following definition: the projections of integral curves of the field (4) from $PTS$ to $S$ that are different from single points are non-parametrized geodesics in the pseudo-Riemannian metric (1). Moreover, let $\vec{W}$ be the vector field on $PTS$ (determined uniquely up to multiplication by a non-vanishing scalar factor) that corresponds to the length functional $J_{l}$. Since the length functional is invariant with respect to reparametrizations, one can put $t=x$ and take as $\vec{W}$ the vector field corresponding to the Euler–Lagrange equation with the Lagrangian $\sqrt{F}$, where $F(x,y,p)=a(x,y)+2b(x,y)p+c(x,y)p^{2}$.
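Formulas (4)–(5) are easy to transcribe into a small routine. The sketch below (sympy; the helper name is ours) computes $\Delta$ and the cubic $M$ for the metric $ds^{2}=dx^{2}-y\,dy^{2}$, which is used again in Example 1:

```python
import sympy as sp

x, y, p = sp.symbols('x y p')

def Delta_and_M(a, b, c):
    """Discriminant Delta and cubic M of the field (4), per formulas (5)."""
    ax, ay = sp.diff(a, x), sp.diff(a, y)
    bx, by = sp.diff(b, x), sp.diff(b, y)
    cx, cy = sp.diff(c, x), sp.diff(c, y)
    mu0 = a*(ay - 2*bx) + ax*b
    mu1 = b*(3*ay - 2*bx) + ax*c - 2*a*cx
    mu2 = b*(2*by - 3*cx) + 2*ay*c - a*cy
    mu3 = c*(2*by - cx) - b*cy
    return sp.simplify(a*c - b**2), sp.expand(mu0 + mu1*p + mu2*p**2 + mu3*p**3)

# the metric ds^2 = dx^2 - y dy^2
Delta, M = Delta_and_M(sp.Integer(1), sp.Integer(0), -y)
print(Delta)  # -y: the discriminant curve S0 is the x-axis
print(M)      # p**2: the finite admissible direction p = 0 is a double root
```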
A straightforward calculation (see [25]) shows that $$\vec{W}=\frac{1}{2F^{\frac{3}{2}}}\vec{V}\quad\textrm{and}\quad\operatorname{div}\vec{W}=0\ \ \textrm{at all points where}\ \ F\neq 0.$$ (6) The field $\vec{W}$ is divergence-free, since it comes directly from an Euler–Lagrange equation, while $\vec{V}$ is not, since it is obtained via an additional procedure, the projectivization $\Pi\colon TS\to PTS$. The property (6) plays an important role, due to the following general fact: Theorem 1 ([13]) Let $\vec{V}(\xi)$, $\xi\in\mathbb{R}^{n}$, be a smooth vector field, let $f(\xi)$ be a smooth scalar function such that the hypersurface $\mathscr{F}=\{\xi:f(\xi)=0\}$ is regular, and let $r$ be a positive real number. Suppose that the field $\vec{W}(\xi)=f^{-r}(\xi)\vec{V}(\xi)$ is divergence-free at all points where it is defined, i.e., at all points $\xi\notin\mathscr{F}$. Then $\mathscr{F}$ is an invariant hypersurface of the field $\vec{V}$. Moreover, let $\xi_{*}\in\mathscr{F}$ be a singular point of $\vec{V}$ and let $\lambda_{1},\ldots,\lambda_{n}$ be the eigenvalues of the linearization of $\vec{V}$ at $\xi_{*}$. Then $\lambda_{1}+\cdots+\lambda_{n}=r\lambda_{j}$ for at least one $j$. By Theorem 1, we have the following assertions: • The isotropic surface $\mathscr{F}$ is an invariant surface of the field (4), and all isotropic lines are geodesics (with identically zero length). (The first assertion is valid for any $\operatorname{dim}S\geq 2$, while the second assertion, about isotropic lines, is valid for $\operatorname{dim}S=2$ only. Indeed, in the case $\operatorname{dim}S>2$ there exist isotropic lines that are not geodesics; see the example in [25].) • Geodesics do not change their type (timelike, spacelike, isotropic) away from degenerate points. This statement follows from the previous one. 3 Equation of isotropic lines Suppose that the set $$S_{0}=\{q=(x,y)\in S\colon\Delta(x,y)=0\}$$ is a regular curve.
It is called the degenerate or discriminant curve of the metric (1), and points $q\in S_{0}$ are called degenerate points of the metric. Then the coefficients $a,b,c$ do not vanish simultaneously, and the isotropic direction $$p_{0}(q)=-\frac{a}{b}(q)=-\frac{b}{c}(q),\quad q\in S_{0},$$ (7) is defined and unique at every point $q\in S_{0}$. The projectivization $\Pi\colon TS\to PTS$ transforms binary differential equation (2) into the implicit differential equation $$F(x,y,p)=0,\quad\textrm{where}\quad F=a(x,y)+2b(x,y)p+c(x,y)p^{2}.$$ (8) In the space $PTS$, the surface $\mathscr{F}$ forms a two-sheeted covering of the Lorentzian domain of $S$ ($\Delta<0$) with branching along the discriminant curve $S_{0}$; the surface $\mathscr{F}$ does not pass over the Riemannian domain ($\Delta>0$). See Fig. 4. A well-known geometrical approach to studying implicit equation (8) consists in lifting the multivalued direction field on $S$ to a single-valued direction field $X$ on the surface $\mathscr{F}$. (This approach is applicable to implicit differential equations $F(x,y,p)=0$ with a smooth function $F$, not necessarily quadratic in $p$. The idea goes back to H. Poincaré and A. Clebsch, see [23] for details.) The field $X$ is the intersection of the contact planes $dy=p\,dx$ with the tangent planes to the surface $\mathscr{F}$, that is, $X$ is defined by the vector field $$\dot{x}=F_{p},\ \ \ \dot{y}=pF_{p},\ \ \ \dot{p}=-(F_{x}+pF_{y}),$$ (9) whose integral curves become isotropic lines of the metric (1) after the projection $\pi:\mathscr{F}\to S$ along the $p$-direction. Further we shall call this direction vertical in the space $PTS$. The singular locus of the projection $\pi:\mathscr{F}\to S$ (given by the equations $F=F_{p}=0$) is called the criminant of equation (8). It is not hard to see that the criminant consists of the points $(q,p_{0}(q))$, $q\in S_{0}$ (see formula (7)).
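A useful consistency check on the lift: the function $F$ itself is a first integral of the field (9), so integral curves starting on the surface $\mathscr{F}=\{F=0\}$ stay on it. This can be verified symbolically for an abstract $F$ in a few lines of sympy:

```python
import sympy as sp

x, y, p = sp.symbols('x y p')
F = sp.Function('F')(x, y, p)   # abstract F: the identity below is general

# the lifted field (9)
xdot = sp.diff(F, p)
ydot = p*sp.diff(F, p)
pdot = -(sp.diff(F, x) + p*sp.diff(F, y))

# dF/dt along (9): F_x*xdot + F_y*ydot + F_p*pdot cancels identically
dF = sp.diff(F, x)*xdot + sp.diff(F, y)*ydot + sp.diff(F, p)*pdot
print(sp.expand(dF))  # 0: the surface {F = 0} is invariant under (9)
```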
Since $\mathscr{F}$ is an invariant surface of the field (4) and both fields (4) and (9) are tangent to the contact planes $dy=p\,dx$, the restriction of (4) to the invariant surface $\mathscr{F}$ is parallel to (9). Moreover, the restriction of the field (4) to $\mathscr{F}$ is equal to the field (9) multiplied by a smooth scalar function vanishing along the criminant (see [13]). Generically, there are two possible cases: • The case $C$: the isotropic direction $p_{0}(q)$ is transversal to $S_{0}$. Then the field (9) at the point $(q,p_{0}(q))$, $q\in S_{0}$, is non-singular, and binary equation (2) has the Cibrario normal form $dx^{2}=y\,dy^{2}$. See Fig. 5 (left). • The case $D$: the isotropic direction $p_{0}(q)$ is tangent to $S_{0}$. The field (9) at $(q,p_{0}(q))$, $q\in S_{0}$, has a non-degenerate singular point: a saddle, node, or focus (subcases $D_{s},D_{n},D_{f}$, respectively). Under certain additional conditions (formulated below), binary equation (2) has the Dara–Davydov normal form $$dy^{2}=(y-\varepsilon x^{2})\,dx^{2},$$ (10) where $\varepsilon<0$ (saddle), $0<\varepsilon<\tfrac{1}{16}$ (node), or $\varepsilon>\tfrac{1}{16}$ (focus). See Fig. 5. The normal form $dx^{2}=y\,dy^{2}$ is named after the Italian mathematician Maria Cibrario, who first established it in the $C^{\omega}$ (real analytic) category when studying second-order linear partial differential equations of mixed type [8]. Later on, a general (and rather simple) proof of the Cibrario normal form (in the $C^{\omega}$ and $C^{\infty}$ categories) was presented in Arnold's famous book [4]. The normal form (10) was first conjectured by the Brazilian mathematician Lak Dara [9] and then proved by A.A. Davydov [10] under the following genericity conditions. Let $\alpha_{1,2}$ be the eigenvalues of the linearization of the vector field (9) at the singular point considered.
Then $\alpha_{1,2}$ are the roots of the characteristic equation $\alpha^{2}-\alpha+4\varepsilon=0$, and the excluded values $\varepsilon=0$ and $\varepsilon=\tfrac{1}{16}$ correspond to a degenerate singular point (a saddle-node or a degenerate node, respectively). The additional conditions required for the normal form (10) are the following. First, the ratio of $\alpha_{1,2}$ is different from $\pm 1$, and the eigendirections are not tangent to the criminant. Second, the germ of the vector field (9) is $C^{\infty}$-linearizable, i.e., it is $C^{\infty}$-smoothly equivalent to its linear part. The $C^{\infty}$-linearizability condition holds true, for instance, if between the eigenvalues $\alpha_{1,2}$ there are no resonant relations $\alpha_{i}=n_{1}\alpha_{1}+n_{2}\alpha_{2}$ with integers $n_{1,2}\geq 0$, $n_{1}+n_{2}\geq 2$ (Sternberg–Chen theorem, see e.g. [5, 14]). The proof presented in [10] is done in the $C^{\infty}$ category, but it is valid in $C^{\omega}$ as well (the requirement of $C^{\infty}$-linearizability should be replaced with $C^{\omega}$-linearizability); see also the recent paper [6]. 4 Singular points of the geodesic flow In addition to the isotropic surface $\mathscr{F}$, the vector field $\vec{V}$ given by (4) has one more evident invariant surface, the vertical surface $$\overline{S}_{0}=\{(q,p):\ q=(x,y)\in S_{0},\ \ p\in{\mathbb{R}}{\mathbb{P}}\}.$$ The restriction of the field (4) to $\overline{S}_{0}$ is vertical at almost all points (except for the points where $M=0$ and the field vanishes). Hence the surface $\overline{S}_{0}$ is filled with vertical integral curves of the field (4) and its singular points. Singular points of the field (4) are given by two equations: $$\Delta(x,y)=0\ \ \ \textrm{and}\ \ \ M(x,y,p)=0,$$ (11) and consequently, they are not isolated, but form a curve (or curves) in $PTS$.
Algebraically, this property can be expressed in the following form: all components of the vector field (4) belong to the ideal $I$ (in the ring of smooth functions) generated by two of them, namely, $I=\langle\Delta,M\rangle$. Remark 1 The fact that the horizontal generator $\Delta(x,y)$ of the field (4) does not depend on $p$, while the vertical generator $M(x,y,p)$ is a cubic polynomial in $p$, plays a crucial role in a general geometrical context, e.g., in the framework of Cartan's theory of the projective connection [3, 4]. Let us list those properties of the field (4) that we are going to use: • Singular points of the field (4) are given by equations (11) and form a curve (or several curves) in $PTS$. • The spectrum of the linearization of the field (4) at every singular point contains one zero eigenvalue and two real eigenvalues $\lambda_{1,2}$, which vanish (simultaneously) at those points where the cubic polynomial $M(q,p)$ has a double root $p$. The latter condition is equivalent to the direction $p$ being tangent to $S_{0}$ at the point $q$. • For every point $q\in S_{0}$ and any $p\in{\mathbb{R}}{\mathbb{P}}$ such that $M(q,p)\neq 0$ there exists a unique integral curve of the field (4) that passes through the point $(q,p)$ – a vertical straight line, whose projection on $S$ is not a geodesic. Consequently, the vertical surface $\overline{S}_{0}$ is an invariant surface of (4). • Geodesics cannot enter a point $q\in S_{0}$ in arbitrary tangential directions, but only in admissible directions $p$ that satisfy the condition $M(q,p)=0$. • The isotropic direction $p_{0}(q)$ given by formula (7) is admissible at every point $q\in S_{0}$, i.e., $M(q,p_{0}(q))=0$ for all $q\in S_{0}$. Depending on the roots of the cubic polynomial $M$ (see Fig.
6), we have four cases: • $C_{1}$: the isotropic direction $p_{0}$ is a unique real root of $M$; • $C_{2}$: $M$ has a simple root $p_{0}$ and a double non-isotropic real root $p_{1}=p_{2}$; • $C_{3}$: $M$ has three simple real roots: the isotropic root $p_{0}$ and non-isotropic roots $p_{1},p_{2}$; • $D$: $M$ has the isotropic double root $p_{0}=p_{1}$ and a simple non-isotropic root $p_{2}$. If $\operatorname{Re}\lambda_{1,2}\neq 0$, the set $W$ of singular points is the center manifold of the field, and the restriction of the field to $W$ is identically zero. Hence in a neighborhood of every singular point where $\operatorname{Re}\lambda_{1,2}\neq 0$, the phase portrait of the field has a very simple topological structure. Indeed, the reduction principle [5, 15] asserts that the germ of the field is orbitally topologically equivalent to the direct product of the standard 2-dimensional node (if $\operatorname{Re}\lambda_{1,2}$ have the same sign) or saddle (if $\operatorname{Re}\lambda_{1,2}$ have different signs) and the 1-dimensional zero vector field. However, the topological classification is not enough. The paper [23] presents finite-smooth local normal forms of such fields, and [26] contains a brief survey (Appendix A) of the smooth and $C^{\omega}$ classifications. These results allow one to establish smooth local normal forms of the field (4) at all singular points $(q,p_{i})$, $q\in S_{0}$, where $p_{i}$ is a simple real root of $M(q,p)$. This gives a description of the geodesics that enter a degenerate point with all possible admissible directions in the cases $C_{1}$, $C_{3}$. To study geodesics with the isotropic admissible direction in the cases $C_{2}$ and $D$, one can use a blow-up procedure. Choosing appropriate local coordinates, we shall further assume that in a neighborhood of the point $q\in S_{0}$, equation (2) has the form $dx^{2}=y\,dy^{2}$ in the case $C$ and the form (10) in the case $D$.
Consequently, the discriminant curve $S_{0}$ is the axis $y=0$ in the case $C$ and the parabola $y-\varepsilon x^{2}=0$ in the case $D$. Since multiplication of the metric by the factor $-1$ does not change the geodesic flow, without loss of generality we assume that $y>\varepsilon x^{2}$ and $y<\varepsilon x^{2}$ (including the case $\varepsilon=0$) are the Lorentzian and Riemannian domains, respectively. From now on, we shall consider geodesics outgoing from a degenerate point $q\in S_{0}$ with the isotropic admissible direction $p_{0}(q)$ as semitrajectories starting from $q$. We distinguish geodesics outgoing into the Lorentzian (resp. Riemannian) domain using the superscript $+$ (resp. $-$). Let us clarify this with the following example. Example 1 For the metric $ds^{2}=dx^{2}-y\,dy^{2}$, the discriminant curve $S_{0}=\{y=0\}$ divides the plane into the Lorentzian ($y>0$) and Riemannian ($y<0$) domains. Formula (5) yields $M(q,p)=p^{2}$, and we have the case $C_{2}$. At every degenerate point $q\in S_{0}$ there exist two admissible directions: $p_{1}=0$ (non-isotropic, a double root) and $p_{0}=\infty$ (isotropic). To see that the direction $p_{0}=\infty$ is admissible, it is convenient to interchange $x$ and $y$. In the new coordinates $\bar{x}=y$, $\bar{y}=x$, $\bar{p}=1/p$, the polynomial $M(q,\bar{p})=-\bar{p}$ has the root $\bar{p}=0$. The corresponding field (4) has a unique integral curve, $y=0$, that passes through every point $q\in S_{0}$ with the tangential direction $p_{1}=0$. Substituting $y=0$ directly in (3), one can see that $y=0$ is an extremal of the action functional and its natural parametrization is given by the equation $\ddot{x}=0$. Moreover, for a given degenerate point $q\in S_{0}$ there exists a one-parameter family of geodesics outgoing from $q$ with the tangential direction $p=\infty$. For instance, consider the family $\Gamma_{0}$ of geodesics $\gamma_{\alpha}$, $\alpha\in\mathbb{R}$, outgoing from the origin; see Fig. 8 (right).
They can be written, in the convention agreed upon above, as $$\gamma_{\alpha}=\left\{\ \begin{aligned} \displaystyle\gamma_{\alpha}^{+}:&\displaystyle\ x=\alpha y^{\frac{3}{2}},\ \ \ \ \ \,y\geq 0,\\ \displaystyle\gamma_{\alpha}^{-}:&\displaystyle\ x=\alpha(-y)^{\frac{3}{2}},\ y\leq 0.\\ \end{aligned}\right.$$ (12) 4.1 The case $C$ The linearization of the field (4) at every singular point $(q,p_{i})$, $q\in S_{0}$, $i=0,1,2$, has the spectrum $(\lambda_{1},\lambda_{2},0)$ with non-zero real eigenvalues $\lambda_{1,2}$. Moreover, at a singular point $(q,p_{0})$ corresponding to the isotropic admissible direction the resonant relation $\lambda_{1}=2\lambda_{2}$ holds. On the other hand, at a singular point $(q,p_{i})$, $i=1,2$, corresponding to a non-isotropic admissible direction the resonant relation $\lambda_{1}+\lambda_{2}=0$ holds. (The relation $\lambda_{1}=2\lambda_{2}$ is a corollary of $\lambda_{1}+\lambda_{2}+\lambda_{3}=r\lambda_{1}$ with $r=\tfrac{3}{2}$ and $\lambda_{3}=0$, see Theorem 1 and formula (6). The relation $\lambda_{1}+\lambda_{2}=0$ follows from the fact that the field $\vec{W}$ is divergence-free and the function $F$ does not vanish in a neighborhood of $(q,p_{i})$, $i=1,2$.) Using the smooth classification of vector fields with non-isolated singular points (see e.g. [26], Appendix A), we have the following results. The germ of the field (4) at any point $(q,p_{0})$, $q\in S_{0}$, has the $C^{\infty}$ orbital normal form $$2\xi\frac{\partial}{\partial\xi}+\eta\frac{\partial}{\partial\eta}+0\cdot\frac{\partial}{\partial\zeta}$$ (13) with the first integrals $I_{1}=\xi/\eta^{2}$ and $I_{2}=\zeta$. The germ of the field (4) at any point $(q,p_{i})$, $q\in S_{0}$, $i=1,2$, has the $C^{\infty}$ orbital normal form $$\xi\frac{\partial}{\partial\xi}-\eta\frac{\partial}{\partial\eta}+\xi\eta\frac{\partial}{\partial\zeta}$$ (14) with the first integral $I=\xi\eta$.
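The first integrals quoted for the normal forms (13) and (14) can be checked directly: the derivative of each along the corresponding field must vanish identically. A short sympy verification (the helper name is ours):

```python
import sympy as sp

xi, eta, zeta = sp.symbols('xi eta zeta')
coords = (xi, eta, zeta)

def is_first_integral(I, field):
    """True iff the derivative of I along the field vanishes identically."""
    dI = sum(sp.diff(I, q)*f for q, f in zip(coords, field))
    return sp.simplify(dI) == 0

V13 = (2*xi, eta, 0)        # normal form (13)
V14 = (xi, -eta, xi*eta)    # normal form (14)

print(is_first_integral(xi/eta**2, V13),   # True: I1 = xi/eta^2
      is_first_integral(zeta, V13),        # True: I2 = zeta
      is_first_integral(xi*eta, V14))      # True: I = xi*eta
```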
One can see that to every singular point of the field (13) there corresponds a one-parameter family of integral curves passing through this point, while to every singular point of the field (14) there correspond only two integral curves. Projecting the integral curves down, we obtain the following results. Theorem 2 ([24, 25]) Suppose that the case $C$ holds true. Then to the isotropic direction $p_{0}$ there corresponds a one-parameter family $\Gamma_{0}$ of geodesics outgoing from the point $q$. There exist smooth local coordinates centered at $q$ such that the discriminant curve $S_{0}$ coincides with the $x$-axis, the isotropic direction is $p_{0}(q)=\infty$, and the geodesics $\gamma_{\alpha}^{\pm}\in\Gamma_{0}$ are semicubic parabolas $$x=\alpha\tau^{3}X_{\alpha}^{\pm}(\tau),\quad y=\tau^{2}Y_{\alpha}^{\pm}(\tau),\quad\alpha\geq 0,$$ (15) where $X_{\alpha}^{\pm},Y_{\alpha}^{\pm}$ are smooth functions, $X_{\alpha}^{\pm}(0)=1$, $Y_{\alpha}^{\pm}(0)=\pm 1$. Theorem 3 ([24, 25]) Suppose that the case $C_{3}$ holds true. Then to each admissible direction $p_{i}$, $i=1,2$, there corresponds a unique geodesic passing through the point $q$. Both these geodesics are smooth and timelike. In the left panel of Fig. 7 we present the invariant foliations of the field (4) in a neighborhood of the point $(q,p_{0})$, $q\in S_{0}$, that correspond to the first integrals $I_{1}=\xi/\eta^{2}$ (left) and $I_{2}=\zeta$ (right) of the normal form (13). The intersection of these foliations gives the family of integral curves of (4). The family $\Gamma_{0}$ of the geodesics (15) is obtained (by the projection $PTS\to S$) from the family of integral curves of the field (4) that pass through its singular point $(q,p_{0})$. The subfamily $\Gamma_{0}^{+}\subset\Gamma_{0}$ of the geodesics (15) outgoing into the Lorentzian semiplane contains timelike, spacelike, and isotropic geodesics. In the right panel of Fig.
7 we present those leaves of the invariant foliation of the field (4) in a neighborhood of the point $(q,p_{i})$, $q\in S_{0}$, $i=1,2$, that pass through $(q,p_{i})$. This foliation corresponds to the first integral $I=\xi\eta$ in the normal form (14); the leaves passing through $(q,p_{i})$ coincide with the planes $\xi=0$ and $\eta=0$, while none of the remaining leaves contains singular points of (4). One of these leaves coincides with the vertical surface $\overline{S}_{0}$ filled with vertical integral curves whose projections on $S$ are points of $S_{0}$. The other invariant surface is filled with non-vertical integral curves; through every point $(q,p_{i})$, $q\in S_{0}$, there passes exactly one such curve. Example 2 To illustrate the above, let us return to Example 1. In the coordinates $\bar{x}=y$, $\bar{y}=x$, $\bar{p}=1/p$, the equation of isotropic lines coincides with the Cibrario normal form. After multiplication by $-1$, the corresponding vector field (4) reads $$\vec{V}=2\bar{x}\biggl{(}\frac{\partial}{\partial\bar{x}}+\bar{p}\frac{\partial}{\partial\bar{y}}\biggr{)}+\bar{p}\frac{\partial}{\partial\bar{p}}.$$ (16) It is easy to check that the field (16) possesses the invariant foliation $\bar{x}=c\bar{p}^{2}$, which includes, in particular, the vertical surface $\overline{S}_{0}$ (for $c=0$) and the isotropic surface (for $c=1$). This foliation is presented in the left side of the left panel of Fig. 7. The restriction of the field (16) to every invariant leaf $\bar{x}=c\bar{p}^{2}$ reads $2c\bar{p}^{3}\frac{\partial}{\partial\bar{y}}+\bar{p}\frac{\partial}{\partial\bar{p}}$. Canceling the factor $\bar{p}$, we obtain the non-singular field $2c\bar{p}^{2}\frac{\partial}{\partial\bar{y}}+\frac{\partial}{\partial\bar{p}}$, whose integral curves are presented in Fig. 5 (left).
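Both claims about the field (16), the invariant foliation $\bar{x}=c\bar{p}^{2}$ and the structure of its singular points, are easy to verify symbolically; the linearization at a singular point also exhibits the resonant spectrum $(2,1,0)$, i.e. $\lambda_{1}=2\lambda_{2}$, in agreement with the beginning of Section 4.1. A sympy sketch (variable names are ours):

```python
import sympy as sp

xb, yb, pb, c = sp.symbols('xbar ybar pbar c')

# the field (16): 2*xbar*(d/dxbar + pbar*d/dybar) + pbar*d/dpbar
V = sp.Matrix([2*xb, 2*xb*pb, pb])

# invariance of the leaves xbar = c*pbar^2: the derivative of
# leaf = xbar - c*pbar^2 along (16) equals 2*leaf, so it vanishes on the leaf
leaf = xb - c*pb**2
dleaf = sum(sp.diff(leaf, q)*v for q, v in zip((xb, yb, pb), V))
print(sp.expand(dleaf - 2*leaf))  # 0

# linearization at the singular point (0, ybar, 0): eigenvalues 2, 1, 0
J = V.jacobian([xb, yb, pb]).subs({xb: 0, pb: 0})
print(J.eigenvals())  # {2: 1, 1: 1, 0: 1}
```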
Fixing a degenerate point $q\in S_{0}$, going through all invariant leaves $\bar{x}=c\bar{p}^{2}$ and projecting down, we obtain the family (12) of geodesics $\gamma_{\alpha}^{+}$ (for $c>0$) and $\gamma_{\alpha}^{-}$ (for $c<0$) presented in Fig. 8 (right).\footnote{The attentive reader may remark that this invariant foliation also contains the leaf $\bar{p}=0$, which can be considered as the limiting case $c\to\infty$. This leaf is filled with integral curves of (16) parallel to the $\bar{x}$-axis. This gives the family of geodesics $x={\rm{const}}$, which are the limiting case of the semi-cubic parabolas (15): the two branches are glued together.} Remark 2 If the pseudo-Riemannian metric on the surface $S$ is induced by the pseudo-Euclidean metric $dX^{2}+dY^{2}-dZ^{2}$ of the ambient space (see the example above), the difference between the cases $C_{1}$ and $C_{3}$ has a graphical interpretation. Namely, $C_{1}$ and $C_{3}$ correspond to positive and negative Gaussian curvature of the surface $S$, respectively, calculated in the Euclidean metric $dX^{2}+dY^{2}+dZ^{2}$. Theorem 4 Suppose that $C_{2}$ holds true. Generically, the point $q$ locally separates the curve $S_{0}$ into two parts, filled with $C_{1}$ and $C_{3}$ points, respectively, and there exist smooth local coordinates centered at $q$ such that the metric has the form $$ds^{2}=a(x,y)\,dx^{2}+ye(x,y)\,dy^{2},\ \ a(0)\neq 0,\ e(0)\neq 0,\ a_{y}(0)=0,\ a_{xy}(0)\neq 0.$$ (17) Then to the double admissible direction $p_{1}=p_{2}$ corresponds a unique geodesic passing through the point $q$: a semicubic parabola with branches outgoing from $q$ into the Lorentzian and Riemannian domains (depicted as a long-dashed line in Fig. 10, center). The proof is not published yet. In Example 1 considered above, we deal with a non-generic case $C_{2}$, since the condition $a_{xy}(0)\neq 0$ in (17) does not hold true. This leads to the geodesic $y=0$ instead of the semicubic parabola mentioned in Theorem 4.
4.2 The case $D$ The cubic polynomial $M$ at $q\in S_{0}$ has the isotropic double root $p_{0}=p_{1}$ and a simple non-isotropic root $p_{2}$. For the admissible direction $p_{2}$, an assertion analogous to Theorem 3 holds true: the germ of the field (4) at $(q,p_{2})$ has the $C^{\infty}$ normal form (14), and to the direction $p_{2}$ corresponds a unique smooth geodesic passing through the point $q$. However, the study of geodesics with the isotropic direction is more complicated. A special feature of the case $D$ is that the linear part of the germ (4) at $(q,p_{0})$, $q\in S_{0}$, has three zero eigenvalues. This precludes obtaining a normal form similar to (13) in Theorem 2 or to (14) in Theorem 3. Moreover, in this case even the reduction principle does not allow one to establish the topological normal form of this field, since the center subspace\footnote{The center subspace $T_{c}$ of a vector field $\vec{V}$ at its singular point $0$ is spanned by the generalized eigenvectors of the linearization of $\vec{V}$ at $0$ corresponding to the eigenvalues $\lambda$ with $\operatorname{Re}\lambda=0$.} of the germ (4) at $(q,p_{0})$ coincides with the whole tangent space, see [5]. However, using an appropriate blow-up procedure, one can reduce the germ (4) at $(q,p_{0})$ to a smooth vector field with non-zero spectrum and study the resulting field by standard methods. Further we always assume that in the cases $D_{s}$ and $D_{n}$ the following genericity condition holds true: there are no non-trivial integer relations $$n_{1}\alpha_{1}+n_{2}\alpha_{2}+n_{3}\alpha_{3}=\alpha_{j},\ \ \,n_{1}+n_{2}+n_{3}\geq 1,\ \ n_{i}\in\mathbb{Z}_{+},\ \ \,j=1,2,3,$$ where $\alpha_{1,2}$ are the eigenvalues of the linearization of the vector field (9) at $(q,p_{0})$ and $\alpha_{3}=2$. This condition implies that the germ of a vector field obtained from (4) by the blow-up procedure is linearizable, as well as the germ of the field (9).
4.2.1 The cases $D_{n}$ and $D_{f}$ In a neighborhood of the considered point $(q,p_{0})$, $q\in S_{0}$, the field (4) above the Lorentzian domain has an invariant foliation $\{\mathscr{F}_{\alpha}\}$ presented in the left panel of Fig. 11. Here the invariant leaf $\mathscr{F}_{0}$ coincides with the isotropic surface $\mathscr{F}$. The invariant leaves above the Riemannian domain are not depicted, since they contain no integral curves that pass through $(q,p_{0})$. The linear part of the restriction of the field (4) to every invariant leaf $\mathscr{F}_{\alpha}$ at its singular point $(q,p_{0})$ is equal to (9) multiplied by a smooth scalar function $\sigma_{\alpha}$ vanishing along the criminant. Therefore, the restriction of the field (4) to every invariant leaf $\mathscr{F}_{\alpha}$ has a local phase portrait of the same type: node or focus. See Fig. 11 (right panel). Going through all invariant leaves $\mathscr{F}_{\alpha}$ and projecting the integral curves down, we obtain the following result. Theorem 5 ([26]) Let the case $D_{n}$ or $D_{f}$ hold true. Then to the isotropic direction $p_{0}$ corresponds a two-parameter family $\Gamma_{0}$ of $C^{2}$-smooth geodesics $\gamma_{\alpha,\beta}^{+}$ outgoing from $q$ into the Lorentzian domain, while there are no geodesics outgoing from $q$ into the Riemannian domain. The geodesics $\gamma_{\alpha,\beta}^{+}\in\Gamma_{0}$ with fixed $\alpha$ and varying $\beta$ are projections of the integral curves from the leaf $\mathscr{F}_{\alpha}$; see Fig. 11, center for $D_{n}$ and right for $D_{f}$. The geodesics $\gamma_{\alpha,\beta}^{+}\in\Gamma_{0}$ are timelike if $\alpha<0$, spacelike if $\alpha>0$, and isotropic if $\alpha=0$. 4.2.2 The case $D_{s}$ In a neighborhood of the considered point $(q,p_{0})$, $q\in S_{0}$, the field (4) above the Lorentzian domain has an invariant foliation $\{\mathscr{F}_{\alpha}\}$ presented in the left panel of Fig. 12.
Here the invariant leaf $\mathscr{F}_{0}$ coincides with the isotropic surface $\mathscr{F}$. The invariant leaves above the Riemannian domain are not depicted, since they contain no integral curves that pass through $(q,p_{0})$. The linear part of the restriction of the field (4) to every invariant leaf $\mathscr{F}_{\alpha}$ at its singular point $(q,p_{0})$ is equal to (9) multiplied by a smooth scalar function $\sigma_{\alpha}$ vanishing along the criminant. Therefore, the restriction of the field (4) to every invariant leaf $\mathscr{F}_{\alpha}$ has a saddle at $(q,p_{0})$. See Fig. 12 (right panel). Theorem 6 ([26]) Let the case $D_{s}$ hold true. Then to the isotropic direction $p_{0}$ corresponds a one-parameter family $\Gamma_{0}$ of $C^{2}$-smooth geodesics outgoing from $q$ into the Lorentzian domain, while there are no geodesics outgoing from $q$ into the Riemannian domain. There exist smooth local coordinates centered at $q$ such that $S_{0}$ is the parabola $y=\varepsilon x^{2}$ and the geodesics $\gamma_{\alpha}^{+}\in\Gamma_{0}$ outgoing from $q$ have the form $$y=\frac{\varepsilon_{1}}{2}x^{2}+Y_{\alpha}(x),\ \ \,Y_{\alpha}(x)=o(x^{2}),\ \ \alpha\in\mathbb{R},$$ (18) together with one additional isotropic geodesic $$y=\frac{\varepsilon_{2}}{2}x^{2}+Y(x),\ \ \,Y(x)=o(x^{2}),$$ (19) where $\varepsilon_{1}\varepsilon_{2}=\varepsilon$, $\varepsilon_{1}+\varepsilon_{2}=\frac{1}{2}$, $\varepsilon_{1}>\tfrac{1}{2}$, $\varepsilon_{2}<0$. Geodesics (18) are timelike if $\alpha<0$, spacelike if $\alpha>0$, and isotropic if $\alpha=0$; see Fig. 12, right. It is interesting to note that the invariant foliations in the cases $D_{n}$, $D_{f}$ and $D_{s}$ have different topological structures (compare the left panels of Figures 11 and 12).
In the cases $D_{n}$, $D_{f}$ all invariant leaves intersect on the criminant only, while in the case $D_{s}$ they intersect on the criminant (dotted line) and on the double line, whose projection is the isotropic geodesic (19). 4.3 Example: Clairaut type It is of interest to observe an important difference between the families $\Gamma_{0}$ in the cases $C_{1}$, $C_{3}$ and $D$. In the cases $C_{1}$, $C_{3}$, the family $\Gamma_{0}$ is symmetric with respect to $S_{0}$ in the following sense: it contains an infinite number of geodesics $\gamma_{\alpha}^{+}\in\Gamma_{0}$ outgoing into the Lorentzian domain and an infinite number of geodesics $\gamma_{\alpha}^{-}\in\Gamma_{0}$ outgoing into the Riemannian domain. On the contrary, in the case $D$, the family $\Gamma_{0}$ is non-symmetric: it contains an infinite number of geodesics $\gamma_{\alpha}^{+}\in\Gamma_{0}$ outgoing into the Lorentzian domain and no geodesics $\gamma_{\alpha}^{-}\in\Gamma_{0}$ outgoing into the Riemannian domain. To understand this phenomenon better, consider the case when the isotropic direction $p_{0}$ is tangent to the curve $S_{0}$ at all points $q\in S_{0}$, for instance, the metric $dy^{2}+(\varepsilon x^{2}-y)dx^{2}$. For $\varepsilon=0$, the equation of geodesics in the metric $ds^{2}=dy^{2}-ydx^{2}$ can be studied using qualitative methods, see [25] (Section 3). The Lagrangian of the length functional $L=\sqrt{p^{2}-y}$ does not depend on the variable $x$, hence the field (4) possesses the energy integral $H=L-pL_{p}$. After evident transformations, the equation $H={\rm{const}}$ can be reduced to $$p^{2}=y-\alpha y^{2},\ \ \,\alpha\in\mathbb{R},$$ (20) which is a family of implicit differential equations of Clairaut type [11]. Every (unparametrized) geodesic in the metric $ds^{2}=dy^{2}-ydx^{2}$ is a solution of equation (20).
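The "evident transformations" leading to (20) can be made explicit (a short computation; writing $\alpha$ in terms of the energy level is our bookkeeping). From $L=\sqrt{p^{2}-y}$ we get $L_{p}=p/\sqrt{p^{2}-y}$, hence $$H=L-pL_{p}=\frac{(p^{2}-y)-p^{2}}{\sqrt{p^{2}-y}}=\frac{-y}{\sqrt{p^{2}-y}}.$$ Squaring the relation $H=h={\rm{const}}$ gives $y^{2}=h^{2}(p^{2}-y)$, that is, $p^{2}=y+y^{2}/h^{2}$, which is (20) with $\alpha=-h^{-2}$. Both signs of $\alpha$ occur, since $p^{2}-y$, and with it $h^{2}$, takes either sign along spacelike and timelike curves.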
Conversely, every solution of (20) is a geodesic except the horizontal lines $y\equiv{\rm{const}}$, each of which is the envelope of the family of integral curves of (20) for a given $\alpha$ (see [25]). For instance, the value $\alpha=0$ corresponds to the isotropic surface $p^{2}=y$ (a parabolic cylinder) and gives, in particular, the isotropic geodesic $y=\frac{1}{4}x^{2}$ passing through the origin. For determining non-isotropic geodesics, observe that every invariant surface (20) is a cylinder whose generatrices are parallel to the $x$-axis and whose base is an ellipse (if $\alpha>0$) or a hyperbola (if $\alpha<0$). In the latter case, the hyperbolic cylinder $p^{2}=y-\alpha y^{2}$ consists of two connected components: a positive and a negative one, lying in the domains $y\geq 0$ and $y\leq\alpha^{-1}$, respectively. The positive components of the hyperbolic cylinders ($\alpha<0$) together with all other cylinders ($\alpha\geq 0$) form an invariant foliation over the Lorentzian domain $y>0$. The negative components of the hyperbolic cylinders form an invariant foliation over the Riemannian domain $y<0$; they do not intersect the plane $y=0$, and consequently do not contain integral curves whose projections to the $(x,y)$-plane are geodesics passing through the $x$-axis. See Fig. 14 (left). Thus to every $\alpha\geq 0$ corresponds a geodesic $\gamma_{\alpha}^{+}\in\Gamma_{0}$ which is timelike if $\alpha>0$ or isotropic if $\alpha=0$. To every $\alpha<0$ corresponds a spacelike geodesic $\gamma_{\alpha}^{+}\in\Gamma_{0}$, whose lift belongs to the positive component of the hyperbolic cylinder $p^{2}=y-\alpha y^{2}$. In contrast to this, the negative component of the same cylinder is filled with integral curves of the field (4) whose projections on the $(x,y)$-plane are separated from the $x$-axis by the horizontal strip $\alpha^{-1}<y<0$. Therefore, there are no geodesics outgoing into the Riemannian domain. See Fig. 14, right. Acknowledgement.
The publication was supported by the Russian Foundation for Basic Research (research projects 16-01-00766, 17-01-00849). References [1] Aguirre, E., Fernandez, V., Lafuente, J., On the conformal geometry of transverse Riemann–Lorentz manifolds. J. Geometry and Physics 57 (2007), 1541–1547. [2] Al’tshuler, B. L., Barvinsky, A. O., Quantum cosmology and physics of transitions with a change of spacetime signature. Uspekhi Fiz. Nauk 166:5 (1996), 459–492; English transl. in Physics-Uspekhi 39, 429. [3] Aminova, A. V., Projective transformations and symmetries of differential equations. Mat. Sb., 186:12 (1995), 21–36. [4] Arnol’d, V. I., Geometrical methods in the theory of ordinary differential equations, Springer-Verlag, New York, 1988. [5] Anosov, D. V., Arnold, V. I. (eds.), Dynamical systems I. Ordinary differential equations and smooth dynamical systems. Encyclopaedia of Mathematical Sciences 1. Springer-Verlag (1988). [6] Bogaevsky, I. A., Implicit ordinary differential equations: bifurcations and sharpening of equivalence. Izvestiya: Mathematics, 78:6 (2014), 1063–1078. [7] Bolsinov, A. V., Matveev, V. S., Local normal forms for geodesically equivalent pseudo-Riemannian metrics. Trans. Amer. Math. Soc., 367 (2015), 6719–6749. [8] Cibrario, M., Sulla riduzione a forma canonica delle equazioni lineari alle derivate parziali di secondo ordine di tipo misto. Accademia di Scienze e Lettere, Istituto Lombardo, Rendiconti 65 (1932), 889–906. [9] Dara, L., Singularités génériques des équations différentielles multiformes. Bol. Soc. Bras. Math. 6, n. 2 (1975), 95–128. [10] Davydov, A. A., The normal form of a differential equation, that is not solved with respect to the derivative, in the neighborhood of its singular point. Funktsional. Anal. i Prilozhen. 19 (1985), 1–10. [11] Davydov, A. A., Ishikawa, G., Izumiya, S., Sun, W.-Z., Generic singularities of implicit systems of first order differential equations on the plane. Jpn. J. Math. 3 (2008), 93–119.
[12] Genin, D., Khesin, B., Tabachnikov, S., Geodesics on an ellipsoid in Minkowski space. Enseign. Math. 53 (2007), 307–331. [13] Ghezzi, R., Remizov, A. O., On a class of vector fields with discontinuities of divide-by-zero type and its applications to geodesics in singular metrics. J. Dyn. Control Syst., 18 (2012), 135–158. [14] Hartman, Ph., Ordinary differential equations. Birkhauser, Boston, Mass., 1982. [15] Hirsch M. W., Pugh C. C., Shub M., Invariant manifolds. Lecture Notes in Mathematics, Vol. 583. Springer-Verlag, Berlin-New York, 1977. [16] Khesin, B., Tabachnikov, S., Pseudo-Riemannian geodesics and billiards. Adv. Math. 221 (2009), 1364–1396 [17] Kossowski, M., Kriele, M., Smooth and discontinuous signature type change in general relativity. Class. Quantum Grav. 10, 2363–2371 (1993). [18] Kossowski, M., Kriele, M., Transverse, type changing, pseudo-Riemannian metrics and the extendability of geodesics. Proc. Roy. Soc. Lond. Ser. A Math. Phys. 444:1921, 297–306 (1994). [19] Kossowski, M., Kriele, M., The Einstein equation for signature type changing spacetimes. Proc. Roy. Soc. Lond. Ser. A Math. Phys. 446:1926, 115–126 (1994). [20] Kossowski, M., Pseudo-Riemannian metrics singularities and the extendability of parallel transport. Proc. Amer. Math. Soc. 99 (1987), 147–154. [21] Miernowski, T., Formes normales d’une métrique mixte analytique réelle générique. Ann. Fac. Sci. Toulouse Math. 16 (2007), 923–946. [22] Pavlova, N. G., Remizov, A. O., Geodesics on hypersurfaces in the Minkowski space: singularities of signature change. Russian Math. Surveys 66 (2011), 1201–1203. [23] Remizov, A. O., Multidimensional Poincaré construction and singularities of lifted fields for implicit differential equations. J. Math. Sci. (N.Y.) 151:6 (2008), 3561–3602. [24] Remizov, A. O., Geodesics on 2-surfaces with pseudo-Riemannian metric: singularities of changes of signature. Mat. Sb., 200:3 (2009), 75–94. [25] Remizov, A. 
O., On the local and global properties of geodesics in pseudo-Riemannian metrics. Differential Geometry and its Applications, 39 (2015), 36–58. [26] Remizov, A. O., Tari, F., Singularities of the geodesic flow on surfaces with pseudo-Riemannian metrics. Geometriae Dedicata 185 (2016), no. 1, pp. 131–153. [27] Sakharov, A.D., Cosmological transitions with changes in the signature of the metric. Zh. Eksper. Teor. Fiz. 87:2 (8) (1984), 375–383. English transl. in Soviet Phys. JETP 60 (2), August 1984, 214–218. [28] Steller, M., A Gauss-Bonnet formula for metrics with varying signature. Z. Anal. Anwend. 25 (2006), pp. 143–162.
Global existence of weak solutions to the FENE dumbbell model of polymeric flows Nader Masmoudi Courant Institute, New York University 251 Mercer St, New York NY 10012 email:[email protected] Key words: Nonlinear Fokker-Planck equations, Navier-Stokes equations, FENE model, micro-macro interactions, defect measure, global existence. AMS subject classification: 35Q30, 82C31, 76A05. Abstract Systems coupling fluids and polymers are of great interest in many branches of science. One of the models used to describe them is the FENE (Finite Extensible Nonlinear Elastic) dumbbell model. We prove global existence of weak solutions to the FENE dumbbell model of polymeric flows for a very general class of potentials. The main problem is the passage to the limit in a nonlinear term that has no obvious compactness properties. The proof uses many weak convergence techniques. In particular it is based on the control of the propagation of strong convergence of some well chosen quantity by studying a transport equation for its defect measure. 1. Introduction Systems coupling fluids and polymers are of great interest in many branches of applied physics, chemistry and biology. They are of course used in many industrial and medical applications such as food processing and blood flows. Although a polymer molecule may be a very complicated object, there are simple theories to model it. One of these models is the FENE (Finite Extensible Nonlinear Elastic) dumbbell model. In this model, a polymer is idealized as an “elastic dumbbell” consisting of two “beads” joined by a spring which can be represented by a vector $R$ (see Bird, Curtiss, Armstrong and Hassager [7, 8] and Doi and Edwards [18] for a physical introduction to the model, Öttinger [55] for a more mathematical treatment of it (in particular the stochastic point of view), and Owens and Phillips [57] for the computational aspect). In the FENE model (1), the polymer elongation $R$ cannot exceed a limit $R_{0}$.
This yields some nice mathematical problems near the boundary, namely when $|R|$ approaches $R_{0}$. At the level of the polymeric liquid, we get a system coupling the Navier-Stokes equation for the fluid velocity with a Fokker-Planck equation describing the evolution of the polymer density. This density depends on $t,x$ and $R$. The coupling comes from an extra stress term in the fluid equation due to the microscopic effect of the polymers. This is the micro-macro interaction. There is also a drift term in the Fokker-Planck equation that depends on the spatial gradient of the velocity. This is a macro-micro term. The coupling is such that the free energy dissipates, which is important from the physical point of view. Mathematically, this is also important to get uniform bounds and hence prove global existence of weak solutions. The system obtained attempts to describe the behavior of this complex mixture of polymers and fluid, and as such, it presents numerous challenges, simultaneously at the level of its derivation [15], its numerical simulation [57, 34], its physical properties (rheology) and its mathematical treatment (see references below). In this paper we concentrate on the mathematical treatment, and more precisely on the global existence of weak solutions to the FENE dumbbell model (1). These solutions are the generalization to the FENE model of the Leray weak solutions [43, 42] of the incompressible Navier-Stokes system. An approximate closure of the linear Fokker-Planck equation reduces the description to a closed viscoelastic equation for the added stresses themselves. This leads to well-known non-Newtonian fluid models such as the Oldroyd-B model or the FENE-P model (see for instance [19, 15]). These models have been studied extensively.
Guillopé and Saut [26, 27] proved the existence of local strong solutions, and Fernández-Cara, Guillén and Ortega [22], [21] and [23] proved local well-posedness in Sobolev spaces. In Chemin and Masmoudi [9] local and global well-posedness in critical Besov spaces was given. For global existence of weak solutions, we refer to Lions and Masmoudi [48]. We also mention Lin, Liu and Zhang [45], where a formulation based on the deformation tensor is used to study the Oldroyd-B model. Global existence for small data was also proved in [41, 39]. At the micro-macro level, there are also several works. Indeed, from the mathematical point of view, the FENE model and some simplifications of it were studied by several authors. In particular Renardy [58] proved the local existence in Sobolev spaces, where the potential ${\mathcal{U}}$ is given by ${\mathcal{U}}(R)=(1-|R|^{2})^{1-\sigma}$ for some $\sigma>1$. W. E, Li and Zhang [20] proved local existence when $R$ is taken in the whole space and under some growth condition on the potential. Also, Jourdain, Lelievre and Le Bris [33] proved local existence in the case $b=2k>6$ for a Couette flow by solving a stochastic differential equation (see also [31] for the use of entropy inequality methods to prove exponential convergence to equilibrium). Zhang and Zhang [61] proved local well-posedness for the FENE model when $b>76$. Local well-posedness was also proved in [51] when $b=2k>0$ (see also [36]). One of the main ingredients of [51] is the use of Hardy-type inequalities to control the extra stress tensor by the $H^{1}$ norm in $R$, which comes from the diffusion in $R$. In particular no regularity in $R$ is necessary for the initial data. Moreover, Lin, Liu and Zhang [46] proved global existence near equilibrium under some restrictions on the potential (see also the related work [39]). Recently many other works have dealt with different aspects of the system.
In particular the problem in a thin film was considered in [11], the problem of the long-time behavior was considered in [60, 30, 1], the problem of global existence in smooth spaces in 2D for some simplified models (when there is a bound on $\tau$ in $L^{\infty}$) was considered in [13, 47, 14, 54], the problem of a non-blow-up criterion was considered in [40], the problem of stationary solutions was considered in [11, 10], and the study of the boundary condition at $\partial B$ was considered in [28, 50]. More related to this paper, the construction of global weak solutions for simplified models was considered in [3, 4, 5, 62, 60, 6] in the case where the system is regularized by some diffusion in the space variable or by a microscopic cut-off. The case of the co-rotational model was considered in [49]. The co-rotational model preserves some of the compactness difficulties of the full model. It allows one to get more integrability on $\psi$, which makes the compactness analysis much simpler. We end this introduction by mentioning other micro-macro models. Indeed, a principle based on an energy dissipation balance was proposed in [12], where the regularity of nonlinear Fokker-Planck systems coupled with Stokes equations in 3D was also proved. In particular the Doi model (or rigid model) was considered in [56], where the linear Fokker-Planck system is coupled with a stationary Stokes system. The nonlinear Fokker-Planck equation driven by a time-averaged Navier-Stokes system in 2D was studied in [13] (see also [14]). Recently, there have been many review papers dealing with different mathematical aspects of these models [59, 44, 38]. In particular we refer to [38] for an exhaustive list of references dealing with the numerical point of view. 1.1. The FENE model A macro-molecule is idealized as an “elastic dumbbell” consisting of two “beads” joined by a spring which can be modeled by a vector $R$ (see [8]).
Before writing our main system (1), let us discuss the main physical assumptions that lead to it: • The polymers are described by their density at each time $t$, position $x$ and elongation $R$. This is a kinetic description of the polymers. • The inertia of the polymers is neglected and hence the sum of the forces applied on each polymer vanishes. We refer to [16], where inertia is taken into account; moreover, the limit where the mass $m$ of the beads goes to zero is studied there. • The polymer solution is supposed to be dilute and hence the interaction between different polymers is neglected. This is why we get a linear Fokker-Planck equation. Let us also mention that there are models for polymer melts such as the reptation model (see for instance [55]). • The polymer is described by one vector $R$ in $B(0,R_{0})$. Let us mention that there are models where each polymer is described by one vector $R$ such that $|R|=1$ (the rigid case, see [14]) or by $K$ vectors $R_{i}$, $1\leq i\leq K$ (see [6]). Usually the difference between these models comes from the length of the polymers as well as their electric properties. • In the Fokker-Planck equation an upper-convected derivative is used. This can be seen as the most physical one. Other derivatives in use are the lower-convected and the co-rotational ones (see [7, 8]). The co-rotational one has the mathematical advantage that one has better a priori estimates (see [49]). • We neglect the diffusion in $x$ in the Fokker-Planck equation. Indeed, this diffusion is much smaller than the diffusion in $R$. Including it would actually make the mathematical problem much simpler.
Under these assumptions, the micro-macro approach consists in writing a coupled multi-scale system: (1) $$\left\{\begin{array}[]{l}{\partial_{t}u}+(u\cdot\nabla)u-\nu\Delta u+\nabla p={{\rm div}}\tau,\quad{{\rm div}}u=0,\\ \\ \partial_{t}\psi+u.\nabla\psi={\rm div}_{R}\Big{[}-\nabla u\,R\psi+{\beta}\nabla\psi+\nabla{\mathcal{U}}\psi\Big{]}\\ \\ \tau_{ij}=\int_{B}(R_{i}\otimes\nabla_{j}{\mathcal{U}})\psi(t,x,R)dR\,\quad\quad(\nabla{\mathcal{U}}\psi+{\beta}\nabla\psi).n=0\;\hbox{on}\;\partial B(0,R_{0}).\end{array}\right.$$ In (1), $\psi(t,x,R)$ denotes the distribution function for the internal configuration and $F(R)=\nabla_{R}{\mathcal{U}}$ is the spring force, which derives from a potential ${\mathcal{U}}$ with ${\mathcal{U}}(R)=-{k}{\rm log}(1-|R|^{2}/|R_{0}|^{2})$ for some constant $k>0$. Besides, $\beta$ is related to the temperature of the system and $\nu>0$ is the viscosity of the fluid. In the sequel, we will take $\beta=1$. Here, $R$ lies in a bounded ball $B(0,R_{0})$ of radius $R_{0}$, which means that the extensibility of the polymers is finite, and $x\in\Omega$, where $\Omega$ is a bounded domain of ${\mathbb{R}}^{D}$ with $D\geq 2$, or $\Omega={\mathbb{T}}^{D}$, or $\Omega={\mathbb{R}}^{D}$. In the case where $\Omega$ has a boundary, we add the Dirichlet boundary condition $u=0$ on $\partial\Omega$. We also have to add a boundary condition to ensure the conservation of $\psi$, namely $(-\nabla uR\psi+\nabla{\mathcal{U}}\psi+{\beta}\nabla\psi).n=0$ on $\partial B(0,R_{0})$.
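The conservation enforced by this boundary condition can be made explicit (a one-line formal computation, ignoring regularity issues at $\partial B$): integrating the Fokker-Planck equation of (1) over $B$ and using the divergence theorem together with the no-flux condition just stated gives $$\partial_{t}\rho+u.\nabla_{x}\rho=0,\qquad\rho(t,x)=\int_{B}\psi(t,x,R)\,dR,$$ so the polymer density $\rho$ is transported along the velocity field $u$. In particular, if $\rho(0,\cdot)\equiv 1$, then $\rho(t,\cdot)\equiv 1$ for all times.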
The boundary condition on $\partial B(0,R_{0})$ ensures the conservation of the polymer density and should be understood in the weak sense, namely for any function $g(R)\in C^{1}(B)$, we have (2) $$\partial_{t}\int_{B}g\psi dR+u.\nabla_{x}\int_{B}g\psi dR=-\int_{B}\nabla_{R}g\Big{[}-\nabla u\,R\,\psi+{\beta}\nabla\psi+\nabla{\mathcal{U}}\psi\Big{]}dR.$$ Notice in particular that it implies that $\psi=0$ on $\partial B(0,R_{0})$ and that if initially $\int\psi(t=0,x,R)dR=1$, then for all $t$ and $x$, we have $\int\psi(t,x,R)dR=1$. We will see later another way of understanding this singular boundary condition. When doing numerical simulations of the FENE model, it is usually better to think of the distribution function $\psi$ as the density of a random variable $R$ which solves (see [55]) (3) $$dR+u.\nabla Rdt=(\nabla uR-\nabla_{R}{\mathcal{U}}(R))dt+\sqrt{2}dW_{t}$$ where the stochastic process $W_{t}$ is the standard Brownian motion in ${\mathbb{R}}^{N}$ and the additional stress tensor is given by the following expectation $\tau={\mathbb{E}}(R_{i}\otimes\nabla_{j}{\mathcal{U}})$. Of course, we may need to add a boundary condition for (3) if $R$ reaches the boundary of $B$. This is done by requiring that $R$ stays in $\overline{B}$ (see [32]). Using this stochastic formulation has the advantage of replacing the second equation of (1), which has $2D+1$ variables, by (3). Of course one has to solve (3) several times to get the expectation $\tau$, which is the only information needed in the fluid equation. This strategy was used for instance by Keunings [35] (see also [24]) and by Öttinger [55] (see also [25]). In the sequel, we will only deal with the FENE model and we will take $\beta=1$ and $R_{0}=1$. 2. Statement of the results This paper is devoted to the proof of global existence of free-energy weak solutions to the FENE model.
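For orientation, the formal a priori estimate behind the term “free-energy weak solutions” has, in the notation of (1) and (4) below and with $\beta=1$, the standard form (a sketch only; the precise version used in this paper is its estimate (32), whose normalization of constants may differ): $$\frac{d}{dt}\left(\int_{\Omega}\frac{|u|^{2}}{2}\,dx+\int_{\Omega}\int_{B}\psi\log\frac{\psi}{\psi_{\infty}}\,dRdx\right)+\nu\int_{\Omega}|\nabla u|^{2}\,dx+4\int_{\Omega}\int_{B}\psi_{\infty}\Big{|}\nabla_{R}\sqrt{\frac{\psi}{\psi_{\infty}}}\Big{|}^{2}dRdx=0.$$ The contribution of the drift term $\nabla u\,R\psi$ in the Fokker-Planck equation cancels against the work of the extra stress ${\rm div}\,\tau$ in the fluid equation, which is exactly the micro-macro structure mentioned in the introduction; the last term is the dissipation in $R$ that is exploited through Hardy-type inequalities.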
The main difficulty of the construction is the passage to the limit in the nonlinear term $\nabla u^{n}\psi^{n}$ of an approximate system. Indeed, we only have a uniform bound on $\nabla u^{n}$ in $L^{2}((0,T)\times\Omega)$ and on $\psi^{n}$ in $L^{\infty}((0,T)\times\Omega;L^{1}(B))$ for all $T>0$, and so, assuming that $u^{n}$ and $\psi^{n}$ converge weakly to $u$ and $\psi$, it is not clear how to deduce that $\nabla u^{n}\psi^{n}$ converges weakly to $\nabla u\psi$. Before stating our main result, let us recall that the construction of global weak solutions to simplified models was considered in [4, 5, 60, 49, 62]. In particular in [4] a diffusion in the space variable is added in the $\psi$ equation. Mathematically this yields a bound on $\nabla_{x}\sqrt{\psi}$ in $L^{2}((0,T)\times\Omega\times B)$, and hence one can easily pass to the limit in the product $\nabla u^{n}\psi^{n}$ using the Lions-Aubin lemma. This extra diffusion term is physically justifiable, but it is much smaller than the diffusion in the $R$ variable, and this is why we did not include it here. Recently, Barrett and Süli [6] extended their results to the case of bead-spring chain models where each polymer is described by $K$ springs $R^{i}$, $1\leq i\leq K$, again with diffusion in the $x$ variable. Also, in [49], the co-rotational model was considered. It allowed us to get more a priori estimates on $\psi^{n}$, namely that $\psi^{n}$ is bounded in all $L^{p}$ spaces. An argument based on propagation of compactness similar to the one used in [48] allowed us to conclude. Here, we consider the more physical model (1). The system (1) has to be complemented with initial data $u(t=0)=u_{0}$ and $\psi(t=0)=\psi_{0}$. Notice that $(u=0,\psi_{\infty})$, where (4) $$\psi_{\infty}(R)=\frac{e^{-{\mathcal{U}}(R)}}{\int_{B}e^{-{\mathcal{U}}(R^{\prime})}dR^{\prime}},$$ defines a stationary solution of (1). To state our result, we first impose some conditions on the initial data.
We take $u_{0}(x)\in L^{2}(\Omega)$, div$(u_{0})=0$ and $\psi_{0}(x,R)\geq 0$ such that $\rho_{0}(x)=\int\psi_{0}dR\in L^{\infty}(\Omega)$. Here $\rho_{0}(x)$ is the initial density of polymers at the position $x$. We also assume the following entropy bound: $\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}\in L\log L(\Omega\times B,\rho_{0}(x)\psi_{\infty}dRdx)$, namely (5) $$\|\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}\|_{L\log L(\Omega\times B,{\rho_{0}(x)\psi_{\infty}dRdx})}=\int\int_{\Omega\times B}(\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}\log\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}-\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}+1)\rho_{0}(x)\psi_{\infty}dRdx<\infty.$$ Finally, we also assume the following $L^{1/2}_{x}L\log^{2}L$ bound, which we will call the “$\log^{2}$” bound: (6) $$\int_{\Omega}\frac{\int_{B}\psi_{0}\log^{2}\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}}{1+\left[\int_{B}\psi_{0}\log^{2}\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}\right]^{1/2}}dx<\infty.$$ Notice that interpolating (6) with the $L^{\infty}$ bound on $\rho_{0}$, we can deduce the $L\log L$ bound (5). Theorem 2.1. Take a divergence-free field $u_{0}(x)\in L^{2}(\Omega)$ and $\psi_{0}(x,R)\geq 0$ such that $\rho_{0}(x)=\int\psi_{0}dR\in L^{\infty}(\Omega)$ and (5) and (6) hold. Then, (1) has a global weak solution $(u,\psi)$ such that $u\in L^{\infty}({\mathbb{R}}_{+};L^{2})\cap L^{2}({\mathbb{R}}_{+};\dot{H}^{1})$, $\frac{\psi}{\rho\psi_{\infty}}\in L^{\infty}({\mathbb{R}}_{+};L\log L(\Omega\times B,\rho(x)\psi_{\infty}dRdx))$ where $\rho(x)=\int_{B}\psi dR$ and $\sqrt{\frac{\psi}{\psi_{\infty}}}\in L^{2}({\mathbb{R}}_{+};L^{2}(\Omega;\dot{H}^{1}_{R}({\psi_{\infty}}dR)))$, and (32) holds with an inequality $\leq$ instead of the equality and (42) holds (with $\Omega$ replaced by any compact set $K$ of $\Omega$ in the whole-space case). Remark 2.2. 1) Of course $u$ and $\psi$ also have some time regularity in some negative Sobolev spaces in $x$ and $R$.
This allows us to give a meaning to the initial data (see [48] for more details). 2) By $f\in L\log L(\Omega\times B,\rho(x)\psi_{\infty}dRdx)$ we mean that $\int\int_{\Omega\times B}(f\log f-f+1)\rho(x)\psi_{\infty}dRdx<\infty$. Notice that (5) does not really define a norm. One can of course define a norm using Orlicz spaces; however, we do not need to do so here. 3) If the domain $\Omega$ has finite measure (bounded domain or torus), then the extra bound (6) reduces to $\int_{\Omega}\left[\int_{B}\psi_{0}\log^{2}\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}\right]^{1/2}dx<\infty.$ This extra bound on the initial data allows us to prove the extra bound (42) on the solution, which is useful to get some sort of equi-integrability of the extra stress tensor. Of course this is a very mild extra assumption, but it would be nice to see whether one can prove the same result without it. Moreover, due to the local character of the weak compactness proof, the assumption (6) can be weakened by assuming the bound to hold locally in space, namely $\int_{K}\left[\int_{B}\psi_{0}\log^{2}\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}\right]^{1/2}dx<\infty$ for any compact set $K$ of $\Omega$. 4) For simplicity of presentation, the proof will be given in the case where $\rho_{0}(x)$ is constant equal to $1$ and $\Omega$ has finite measure. We will also indicate the necessary changes to be made in the general case. The paper is organized as follows. In the next section, we give some preliminaries where we prove some Hardy type inequalities. In section 4, we derive some a priori estimates for the full model (1). In particular we recall the free energy estimate as well as a new “$\log^{2}$” a priori estimate which is useful in controlling the transport of the defect measures. In section 5, we prove the main theorem 2.1.
As is classical when proving global existence of weak solutions, the only nontrivial part is the proof of the weak compactness of a sequence of global solutions satisfying the a priori estimates, and we will only detail this part of the proof. In section 6, we present one way of approximating the system. In section 7 we present some concluding remarks and some open problems. 3. Preliminaries 3.1. Hardy type inequalities The dissipation term in the free energy estimate (32) measures the distance between $\psi$ and the equilibrium $\psi_{\infty}$. We would like to use that bound to control the extra stress tensor in $L^{2}$. This will be done using the following Hardy [29] type inequality. Lemma 3.1. If $k>1$, then we have (7) $$\int_{0}^{1}\frac{\psi}{x^{2}}\leq C\int_{0}^{1}x^{k}\left|\left(\sqrt{\frac{\psi}{x^{k}}}\right)^{\prime}\right|^{2}+\psi.$$ For $k>0$, we have (8) $$\left(\int_{0}^{1}\frac{\psi}{x}\right)^{2}\leq C\left(\int_{0}^{1}\psi\right)\ \left(\int_{0}^{1}x^{k}\left|\left(\sqrt{\frac{\psi}{x^{k}}}\right)^{\prime}\right|^{2}+\psi\right).$$ For $-1\leq\beta<k\leq 1$, we have (9) $$\left(\int_{0}^{1}\frac{\psi}{x^{1+\beta}}\right)\leq C\left(\int_{0}^{1}\psi\right)^{1-\beta\over 2}\ \left(\int_{0}^{1}x^{k}\left|\left(\sqrt{\frac{\psi}{x^{k}}}\right)^{\prime}\right|^{2}+\psi\right)^{1+\beta\over 2},\quad\hbox{and more generally, for all $\gamma\geq 0$,}$$ (10) $$\left(\int_{0}^{1}\frac{\psi\log^{\gamma}\Big{(}C+\frac{\psi}{x^{k}}\Big{)}}{x^{1+\beta}}\right)\leq C\left(\int_{0}^{1}\psi\log^{2\gamma\over 1-\beta}\Big{(}C+\frac{\psi}{x^{k}}\Big{)}\right)^{1-\beta\over 2}\ \left(\int_{0}^{1}x^{k}\left|\left(\sqrt{\frac{\psi}{x^{k}}}\right)^{\prime}\right|^{2}+\psi\right)^{1+\beta\over 2}.$$ Remark 3.2. Before giving the proof, let us mention that this lemma should be compared to the results of section 3.2 of [51]. In particular, Proposition 3.1 there was used to control the extra stress tensor.
However, the main difference is that the results of section 3.2 of [51] are set in an $L^{2}$ framework, since we were dealing with strong solutions there, whereas the results of Lemma 3.1 are set in an $L^{1}$ framework, since here we only have a control on the free energy and its dissipation. Inequality (7) for $k>1$ is just the Hardy inequality. Notice that there is no requirement on the boundary data since $k>1$. To prove it, we make the change of variable $y=x^{1-k}$ and set $h(y)=\sqrt{\frac{\psi(x)}{x^{k}}}$. Hence, to prove (7), it is enough to prove that (11) $$\int_{1}^{\infty}\frac{h^{2}}{y^{2}}dy\leq C\int_{1}^{\infty}h^{\prime}(y)^{2}+\frac{h^{2}}{y^{2\alpha}}dy$$ where $\alpha=\frac{k}{k-1}>1$. To prove (11), we integrate by parts in (12) $$\int_{1}^{A}\frac{h\,h^{\prime}}{y}dy=\int_{1}^{A}\frac{h^{2}}{2y^{2}}dy\,+\frac{h(A)^{2}}{2A}-\frac{h(1)^{2}}{2}$$ for each $A>1$. The left hand side is bounded by $C\left(\int_{1}^{A}\frac{h^{2}}{y^{2}}dy\right)^{1/2}\left(\int_{1}^{A}{h^{\prime}(y)^{2}}dy\right)^{1/2}.$ To bound $h(1)^{2}$ by the right hand side of (11), we use that $h(y)\leq C\sqrt{y}$, since $\int_{1}^{\infty}{h^{\prime}(y)^{2}}dy<\infty$; hence $\frac{h^{2}}{y^{\alpha}}$ goes to zero when $y$ goes to infinity. This yields (13) $$h^{2}(1)=-\int_{1}^{\infty}\left(\frac{h^{2}}{y^{\alpha}}\right)^{\prime}dy=-\int_{1}^{\infty}2\frac{h}{y^{\alpha}}h^{\prime}-\alpha\frac{h^{2}}{y^{\alpha+1}}dy,$$ which is controlled by the right hand side of (11) using Cauchy-Schwarz and the fact that $\alpha>1$. Letting $A$ go to infinity, we get the result. The proof of (8) when $k>1$ follows by interpolation. In the case $0<k\leq 1$, (7) only holds if we add a vanishing boundary condition at $x=0$. However, we can still prove that (8) holds without any extra condition.
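Before turning to the case $0<k\leq 1$, let us record, for the reader's convenience, the elementary bookkeeping behind the change of variables used above for $k>1$ (a routine computation, spelled out here for completeness). With $y=x^{1-k}$ and $\psi(x)=x^{k}h(y)^{2}$, one has $dx=\frac{x^{k}}{k-1}\,dy$, $x^{2k-2}=y^{-2}$ and $x^{2k}=y^{-2\alpha}$ with $\alpha=\frac{k}{k-1}$, so that $$\int_{0}^{1}\frac{\psi}{x^{2}}dx=\frac{1}{k-1}\int_{1}^{\infty}\frac{h^{2}}{y^{2}}dy,\qquad\int_{0}^{1}\psi\,dx=\frac{1}{k-1}\int_{1}^{\infty}\frac{h^{2}}{y^{2\alpha}}dy,\qquad\int_{0}^{1}x^{k}\left|\left(\sqrt{\frac{\psi}{x^{k}}}\right)^{\prime}\right|^{2}dx=(k-1)\int_{1}^{\infty}h^{\prime}(y)^{2}dy,$$ which is exactly why (7) reduces to (11), up to constants depending only on $k$.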
Indeed, making the change of variables $y=x^{1-k}$ (when $k<1$) and denoting $h(y)=\sqrt{\frac{\psi(x)}{x^{k}}}$, we see that (8) is equivalent to (14) $$\left(\int_{0}^{1}y^{\alpha-1}h^{2}dy\right)^{2}\leq C\left(\int_{0}^{1}y^{2\alpha}h^{2}\,dy\right)\ \left(\int_{0}^{1}h^{\prime}(y)^{2}+y^{2\alpha}h^{2}\right)$$ where $\alpha={k\over 1-k}$. To prove (14), we integrate by parts in the following integral: (15) $$\int_{0}^{1}y^{\alpha}h\ h^{\prime}dy=-\frac{\alpha}{2}\int_{0}^{1}y^{\alpha-1}h^{2}+\frac{h^{2}(1)}{2},$$ and notice that the left hand side is bounded by $\left(\int_{0}^{1}y^{2\alpha}h^{2}\,dy\ \int_{0}^{1}h^{\prime}(y)^{2}dy\right)^{1/2}$ by the Cauchy-Schwarz inequality. Moreover, we have (16) $$h(1)^{2}=\int_{0}^{1}(y^{2\alpha+1}h^{2})^{\prime}dy=\int_{0}^{1}2y^{2\alpha+1}h\,h^{\prime}+(2\alpha+1)y^{2\alpha}h^{2}\leq C\left(\int_{0}^{1}h^{\prime}(y)^{2}+y^{2\alpha}h^{2}\right)^{1/2}\left(\int_{0}^{1}y^{2\alpha}h^{2}\right)^{1/2}.$$ Hence, (14) follows. When $k=1$, we make the change of variable $y=-\log x$, and hence (8) is equivalent to (17) $$\left(\int_{0}^{\infty}e^{-y}h^{2}dy\right)^{2}\leq C\left(\int_{0}^{\infty}e^{-2y}h^{2}\,dy\right)\ \left(\int_{0}^{\infty}h^{\prime}(y)^{2}+e^{-2y}h^{2}\right),$$ and the proof of (17) can be done in a similar way to that of (14). To prove (9), we first notice that if $-1\leq\beta\leq 0$, then the inequality can be easily deduced from (8) by interpolation. When $\beta>0$, (9) is equivalent (in the case $k<1$) to (18) $$\left(\int_{0}^{1}y^{\alpha_{\beta}-1}h^{2}dy\right)\leq C\left(\int_{0}^{1}y^{2\alpha}h^{2}\,dy\right)^{1-\beta\over 2}\ \left(\int_{0}^{1}h^{\prime}(y)^{2}+y^{2\alpha}h^{2}\right)^{1+\beta\over 2}$$ where $\alpha_{\beta}={k-\beta\over 1-k}$ and $\alpha={k\over 1-k}$.
Applying (14) with $\alpha$ replaced by $\alpha_{\beta}$, we get (19) $$\left(\int_{0}^{1}y^{\alpha_{\beta}-1}h^{2}dy\right)\leq C\left(\int_{0}^{1}y^{2\alpha_{\beta}}h^{2}\,dy\right)^{1/2}\ \left(\int_{0}^{1}h^{\prime}(y)^{2}+y^{2\alpha}h^{2}\right)^{1/2}.$$ Notice that we kept $\alpha$ in the last term instead of putting $\alpha_{\beta}$. Indeed, the last integral comes from the estimate of $h^{2}(1)$, and we can keep $\alpha={k\over 1-k}$ in (16). Now, we can apply (19) with $\alpha_{\beta}-1$ replaced by $2\alpha_{\beta}$, and we get (20) $$\left(\int_{0}^{1}y^{2\alpha_{\beta}}h^{2}dy\right)\leq C\left(\int_{0}^{1}y^{2(2\alpha_{\beta}+1)}h^{2}\,dy\right)^{1/2}\ \left(\int_{0}^{1}h^{\prime}(y)^{2}+y^{2\alpha}h^{2}\right)^{1/2}.$$ We can iterate this, replacing $\alpha_{\beta}-1$ by $2\alpha_{\beta}$, $2(2\alpha_{\beta}+1)$, … in (19) until we get an index greater than $2\alpha=2{k\over 1-k}$. Interpolating with the last inequality yields (9). In the case $k=1$, (9) is equivalent to (21) $$\left(\int_{0}^{\infty}e^{-(1-\beta)y}h^{2}dy\right)\leq C\left(\int_{0}^{\infty}e^{-2y}h^{2}\,dy\right)^{1-\beta\over 2}\ \left(\int_{0}^{\infty}h^{\prime}(y)^{2}+e^{-2y}h^{2}\ dy\right)^{1+\beta\over 2}.$$ The proof of (21) is similar and is left to the reader. For the proof of (10), we use that it is equivalent (in the case $k<1$) to (22) $$\left(\int_{0}^{1}y^{\alpha_{\beta}-1}h^{2}\log^{\gamma}(h^{2})dy\right)\leq C\left(\int_{0}^{1}y^{2\alpha}h^{2}\log^{2\gamma\over 1-\beta}(h^{2})\,dy\right)^{1-\beta\over 2}\ \left(\int_{0}^{1}h^{\prime}(y)^{2}+y^{2\alpha}h^{2}\right)^{1+\beta\over 2}.$$ Again, one can prove (22) in the case $\beta=0$ by an integration by parts similar to the one used in (14). The case $-1\leq\beta\leq 0$ can be deduced by interpolation from the case $\beta=0$, and the case $0<\beta<k$ can be deduced by a bootstrap argument like the one used in the proof of (9). 3.2.
Control of the stress tensor We recall that $\psi_{\infty}(R)=\frac{e^{-{\mathcal{U}}(R)}}{\int_{B}e^{-{\mathcal{U}}(R^{\prime})}dR^{\prime}}=(1-|R|^{2})^{k/\beta}/\int_{B}(1-|R^{\prime}|^{2})^{k/\beta}\ dR^{\prime}$ and, since $\beta=1$, $\psi_{\infty}(R)$ behaves like $(1-|R|)^{k}$ when $|R|$ goes to 1. In particular, we will apply Lemma 3.1 with $x=1-|R|$. Using the inequality (8) in the radial variable with $x=1-|R|$, we get Corollary 3.3. There exists a constant $C$ such that we have the following bound (23) $$|\tau(\psi)|^{2}\leq C\left(\int_{B}\psi dR\right)\int_{B}\left|\nabla_{R}\sqrt{\psi\over\psi_{\infty}}\right|^{2}\psi_{\infty}\,dR.$$ This corollary can be seen as the $L^{1}$ version of Proposition 3.1 of [51]. It will allow us to control the extra stress tensor by the free energy dissipation. 3.3. Weighted Sobolev inequality In subsection 5.1, we have to prove the equi-integrability of $N^{n}_{2}$. This will require the control of some higher $L^{p}$ norm of $\sqrt{\psi\over\psi_{\infty}}$. We have the following proposition. Proposition 3.4. There exists $p>2$ and a constant $C$ such that we have the following bound (24) $$\left(\int_{B}\left|\sqrt{\psi\over\psi_{\infty}}\right|^{p}\psi_{\infty}\right)^{1/p}\leq C\left(\int_{B}\left|\nabla_{R}\sqrt{\psi\over\psi_{\infty}}\right|^{2}\psi_{\infty}+\psi\,dR\right)^{1/2}.$$ For the proof, we first notice that the only difficulty comes from the weight, and hence we can restrict to the region where $|R|>\frac{1}{2}$. We also use spherical coordinates, namely $R=(1-x)\omega$ where $\omega\in{\mathbb{S}}^{D-1}$ and $0<x<\frac{1}{2}$.
The square of the right hand side of (24) can be written as the sum of a radial part (25) $$\int_{{\mathbb{S}}^{D-1}}\left(\int_{0}^{1/2}\left[\left|\partial_{x}\sqrt{\psi\over\psi_{\infty}}\right|^{2}+\left|\sqrt{\psi\over\psi_{\infty}}\right|^{2}\right]\,x^{k}\,dx\right)d\omega$$ and an angular part (26) $$\int_{0}^{1/2}\left(\int_{{\mathbb{S}}^{D-1}}\left[\left|\partial_{\omega}\sqrt{\psi\over\psi_{\infty}}\right|^{2}+\left|\sqrt{\psi\over\psi_{\infty}}\right|^{2}\right]d\omega\right)\,x^{k}\,dx.$$ We recall the following 1D weighted $L^{p}-L^{q}$ Hardy inequality (one can also call it a weighted Sobolev inequality): (27) $$\left(\int_{0}^{1/2}|F(x)|^{q}x^{k}\,dx\right)^{1/q}\leq C\left(\int_{0}^{1/2}|F^{\prime}(x)|^{2}x^{k}\,dx\right)^{1/2}.$$ This inequality can be easily deduced from Theorem 6 of [37], taking $u(x)=v(x)=x^{k}$, for any $q<\infty$ if $k\leq 1$ and for $q\leq\frac{2(k+1)}{k-1}$ if $k>1$. Indeed, Theorem 6 of [37] states that (27) holds for any $F$ with $F(\frac{1}{2})=0$ if $$\sup_{0<r<\frac{1}{2}}\Big{(}\int_{0}^{r}x^{k}dx\Big{)}^{1/q}\Big{(}\int_{r}^{1\over 2}(x^{k})^{-1}dx\Big{)}^{1/2}<\infty.$$ Indeed, for $k>1$ the first factor behaves like $r^{(k+1)/q}$ and the second like $r^{(1-k)/2}$ as $r\to 0$, so the supremum is finite precisely when $q\leq\frac{2(k+1)}{k-1}$, while for $k\leq 1$ the second factor is bounded (or grows only logarithmically when $k=1$) and any finite $q$ works. Hence, we get a control of $\sqrt{\psi\over\psi_{\infty}}$ in the space $L^{2}({\mathbb{S}}^{D-1};L^{q}((0,\frac{1}{2}),x^{k}dx))$ using the radial part (25) of the norm. On the other hand, we can use the classical Sobolev inequality in dimension $D-1$ to control $\sqrt{\psi\over\psi_{\infty}}$ in the space $L^{2}((0,\frac{1}{2}),x^{k}dx;L^{s}({\mathbb{S}}^{D-1}))$ where $s=\frac{2(D-1)}{(D-1)-2}$ if $D>3$, $s<\infty$ if $D=3$ and $s\leq\infty$ if $D=2$. Interpolating between the two spaces $L^{2}_{\omega}L^{q}_{x}$ and $L^{2}_{x}L^{s}_{\omega}$, we deduce the existence of some $p>2$ such that (24) holds. 3.4. Young measures and Chacon limit We recall here two important weak convergence objects used in this paper, namely the Young measure and Chacon’s biting lemma. Actually, these two notions are closely related, as was observed by Ball and Murat [2].
Proposition 3.5. (Young measures) If $f^{n}$ is a sequence of functions bounded in $L^{1}(U;{\mathbb{R}}^{m})$, where $U$ is an open set of ${\mathbb{R}}^{N}$, then there exist a family $(\nu_{x})_{x\in U}$ of probability measures on ${\mathbb{R}}^{m}$ (the Young measures), depending measurably on $x$, and a subsequence, still denoted $f^{n}$, with the following property: if $g:{\mathbb{R}}^{m}\,\to\,{\mathbb{R}}$ is continuous, $A\subset U$ is measurable, and $$g(f^{n})\rightharpoonup z(x)\quad\hbox{weakly in}\ L^{1}(A;{\mathbb{R}}),$$ then $g(\cdot)\in L^{1}({\mathbb{R}}^{m};\nu_{x})$ for a.e. $x\in A$ and $$z(x)=\int_{{\mathbb{R}}^{m}}g(\lambda)d\nu_{x}(\lambda)\quad a.e.\quad x\in A.$$ In the case where $f^{n}$ is bounded in $L^{p}(U;{\mathbb{R}}^{m})$ for some $p>1$ (or when $f^{n}$ is equi-integrable), we can always take $A=U$ and we have (extracting a subsequence) $$g(f^{n})\rightharpoonup\int_{{\mathbb{R}}^{m}}g(\lambda)d\nu_{x}(\lambda).$$ Proposition 3.6. (Chacon limit) If $f^{n}$ is a sequence of functions bounded in $L^{1}(U;{\mathbb{R}}^{m})$, where $U$ is an open set of ${\mathbb{R}}^{N}$, then there exist a function $f\in L^{1}(U;{\mathbb{R}}^{m})$, a subsequence, still denoted $f^{n}$, and a non-increasing sequence of measurable sets $E_{k}$ of $U$ with $\lim_{k\to\infty}{\mathcal{L}}_{N}(E_{k})=0$ (where ${\mathcal{L}}_{N}$ is the Lebesgue measure on ${\mathbb{R}}^{N}$) such that for all $k\in{\mathbb{N}}$, $f^{n}\,\rightharpoonup\,f$ weakly in $L^{1}(U-E_{k};{\mathbb{R}}^{m})$ as $n$ goes to infinity. The function $f$ is called the Chacon limit of $f^{n}$. It is easy to see that if $f^{n}$ is equi-integrable, then the Chacon limit of $f^{n}$ is equal to the weak limit of $f^{n}$ in the sense of distributions.
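To illustrate the difference between the Chacon limit and the weak limit, consider the classical concentration example (included here only as an illustration): $$f^{n}=n\,{\bf 1}_{(0,1/n)}\quad\hbox{on}\quad U=(0,1).$$ The sequence is bounded in $L^{1}(U)$ but not equi-integrable, and it has no weak limit in $L^{1}(U)$. Taking $E_{k}=(0,1/k)$, we have $f^{n}=0$ on $U-E_{k}$ for all $n\geq k$, so the Chacon limit is $f=0$: the unit mass concentrating near the origin is invisible to the Chacon limit. Consistently, $f^{n}\to 0$ a.e., so the Young measure is $\nu_{x}=\delta_{0}$ for a.e. $x\in U$.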
If we consider continuous functions $g_{k}:{\mathbb{R}}^{m}\,\to\,{\mathbb{R}}^{m}$, $k\in{\mathbb{N}}$, satisfying the conditions: (a) $g_{k}(\lambda)\to\lambda$ when $k\to\infty$, for each $\lambda\in{\mathbb{R}}^{m}$, (b) $|g_{k}(\lambda)|\leq C(1+|\lambda|)$, for all $k\in{\mathbb{N}}$ and $\lambda\in{\mathbb{R}}^{m}$, (c) $\lim_{|\lambda|\to\infty}|\lambda|^{-1}|g_{k}(\lambda)|=0$ for each $k$, then, under the hypotheses of Proposition 3.5, for each fixed $k$, the sequence of functions $g_{k}(f^{n})$ is equi-integrable and hence (extracting a subsequence) converges weakly in $L^{1}(U;{\mathbb{R}}^{m})$ to some $f_{k}$. Applying a diagonal process, as $k$ goes to infinity, the sequence $f_{k}$ converges strongly to some $f$ in $L^{1}(U;{\mathbb{R}}^{m})$. The limit $f$ is the Chacon limit of the subsequence $f^{n}$, and it is given by $$f(x)=\int_{{\mathbb{R}}^{m}}\lambda d\nu_{x}(\lambda)\quad a.e.\quad x\in U.$$ This gives another possible definition of the Chacon limit, which is equivalent to the one given in Proposition 3.6. For the proof of these results we refer to [2]. 4. A priori estimates 4.1. Free energy The second equation of (1) can be written as (28) $$\partial_{t}\psi+u.\nabla\psi={\rm div}_{R}\Big{[}-\nabla u\cdot R\psi\Big{]}+{\rm div}_{R}\Big{[}\psi_{\infty}\nabla_{R}{\psi\over\psi_{\infty}}\Big{]}.$$ We define $\rho(t,x)=\int_{B}\psi dR$. Integrating (28) in $R$, we get the transport of $\rho$, namely $\partial_{t}\rho+u.\nabla\rho=0.$ Multiplying (28) by $\log\frac{\psi}{\rho\psi_{\infty}}$ and integrating in $R$ and $x$, we get (29) $$\partial_{t}\int_{\Omega}\int_{B}\psi\log({\psi\over\rho\psi_{\infty}})-\psi+\rho\psi_{\infty}=\int_{\Omega}\int_{B}\nabla u\cdot R\,\nabla_{R}{\mathcal{U}}\psi-4\int_{\Omega}\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\psi\over\psi_{\infty}}\right|^{2}dR,$$ where we have used that $\nabla_{R}\psi_{\infty}=-\psi_{\infty}\nabla_{R}{\mathcal{U}}$.
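The factor $4$ in the dissipation term comes from the elementary identity $|\nabla_{R}\sqrt{\varphi}|^{2}=|\nabla_{R}\varphi|^{2}/(4\varphi)$; for completeness, integrating by parts in $R$ (the boundary terms vanish since $\psi_{\infty}$ vanishes on $\partial B$), $$\int_{B}{\rm div}_{R}\Big{[}\psi_{\infty}\nabla_{R}{\psi\over\psi_{\infty}}\Big{]}\log{\psi\over\rho\psi_{\infty}}\,dR=-\int_{B}\psi_{\infty}\frac{\left|\nabla_{R}{\psi\over\psi_{\infty}}\right|^{2}}{{\psi/\psi_{\infty}}}\,dR=-4\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\psi\over\psi_{\infty}}\right|^{2}dR,$$ where we used that $\nabla_{R}\log{\psi\over\rho\psi_{\infty}}=\nabla_{R}\log{\psi\over\psi_{\infty}}$ since $\rho$ does not depend on $R$.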
The first equation of (1) yields the classical energy estimate for the Navier-Stokes equation (30) $$\partial_{t}\int_{\Omega}\frac{|u|^{2}}{2}=-\int_{\Omega}\nabla u:\tau-\nu\int_{\Omega}|\nabla u|^{2}.$$ Adding (29) and (30) yields the following decay of the free energy (31) $$\partial_{t}\int_{\Omega}\Big{(}\int_{B}[\psi\log({\psi\over\rho\psi_{\infty}})-\psi+\rho\psi_{\infty}]+\frac{|u|^{2}}{2}\Big{)}=-\nu\int_{\Omega}|\nabla u|^{2}-4\int_{\Omega}\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\psi\over\psi_{\infty}}\right|^{2}.$$ Integrating in time, we get the following uniform bound for all $t>0$: (32) $$\int_{\Omega}\Big{(}\int_{B}[\psi\log({\psi\over\rho\psi_{\infty}})-\psi+\rho\psi_{\infty}]+\frac{|u|^{2}}{2}\Big{)}(t)+\int_{0}^{t}\nu\int_{\Omega}|\nabla u|^{2}+4\int_{\Omega}\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\psi\over\psi_{\infty}}\right|^{2}=C_{0}.$$ To simplify the notation in the rest of this section, we will assume that $\rho_{0}(x)=1$. The proof in the general case is identical, and we will indicate the changes to be made at the end. The general idea is the following: when proving a priori estimates, one just has to replace $\psi_{\infty}$ by $\rho(t,x)\psi_{\infty}$ and take advantage of the fact that $\rho$ is just transported by the flow. When proving weak compactness, one can use that $\rho^{n}$ converges strongly to $\rho$ in all $L^{p}((0,T)\times\Omega)$ spaces and use $\rho^{n}(t,x)\psi_{\infty}$. Due to the local character of the proof of weak compactness, a simpler way is to just use $\psi_{\infty}$, and so the calculations given in section 5 hold even when $\rho_{0}$ is not constant. 4.2. $\log^{2}$ estimate The free energy only gives an $L\log L(\psi_{\infty}dR)$ bound on $\frac{\psi}{\psi_{\infty}}$. For some technical reasons, we will need to control a slightly higher growth of $\psi$ in the $R$ variable. We introduce $\tilde{\psi}=\psi+a\psi_{\infty}$ for some $a>1$.
This is done to ensure that $\log\frac{\tilde{\psi}}{\psi_{\infty}}$ does not take negative values. It will also add a new term to the equation, which will not present any extra difficulties. Hence, $\tilde{\psi}$ solves (33) $$\partial_{t}\tilde{\psi}+u.\nabla\tilde{\psi}={\rm div}_{R}\Big{[}-\nabla u\cdot R\tilde{\psi}\Big{]}+{\rm div}_{R}\Big{[}\psi_{\infty}\nabla_{R}{\tilde{\psi}\over\psi_{\infty}}\Big{]}-a\nabla u\cdot R\cdot\nabla_{R}{\mathcal{U}}\,\psi_{\infty}.$$ We first derive this extra bound in the case where the domain $\Omega$ is bounded, and then discuss the modification of the argument in the whole space case. 4.2.1. Case of a bounded domain Multiplying (33) by $\log^{2}\frac{\tilde{\psi}}{\psi_{\infty}}$ and integrating by parts in $R$, we get (34) $$(\partial_{t}+u.\nabla_{x})\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]=-2ak\nabla_{i}u_{j}\int_{B}\frac{R_{i}R_{j}}{1-|R|^{2}}\psi_{\infty}\log^{2}\frac{\tilde{\psi}}{\psi_{\infty}}+\int_{B}\nabla u\cdot R\tilde{\psi}\,2\log({\tilde{\psi}\over\psi_{\infty}}){\psi_{\infty}\over\tilde{\psi}}\nabla_{R}{\tilde{\psi}\over\psi_{\infty}}-8\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\tilde{\psi}\over\psi_{\infty}}\right|^{2}\log({\tilde{\psi}\over\psi_{\infty}}).$$ The second term on the right hand side of (34) can be rewritten as $$2\int_{B}\nabla u\cdot R\,\psi_{\infty}\,\nabla_{R}\left({\tilde{\psi}\over\psi_{\infty}}\log({\tilde{\psi}\over\psi_{\infty}})-{\tilde{\psi}\over\psi_{\infty}}\right)=2\int_{B}\nabla u\cdot R\,\nabla_{R}{\mathcal{U}}\,\tilde{\psi}\left(\log({\tilde{\psi}\over\psi_{\infty}})-1\right).$$ Taking the square root of (34), we get (35) $$(\partial_{t}+u.\nabla_{x})\left(\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]\right)^{1/2}=\frac{-ak\nabla_{i}u_{j}\int_{B}\frac{R_{i}R_{j}}{1-|R|^{2}}\psi_{\infty}\log^{2}\frac{\tilde{\psi}}{\psi_{\infty}}}{\left(\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]\right)^{1/2}}+\frac{\int_{B}\nabla u\cdot R\tilde{\psi}\,\log({\tilde{\psi}\over\psi_{\infty}}){\psi_{\infty}\over\tilde{\psi}}\nabla_{R}{\tilde{\psi}\over\psi_{\infty}}}{\left(\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]\right)^{1/2}}-4\frac{\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\tilde{\psi}\over\psi_{\infty}}\right|^{2}\log({\tilde{\psi}\over\psi_{\infty}})}{\left(\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]\right)^{1/2}}=I_{1}+I_{2}+I_{3}.$$ Let us introduce the notation (36) $$N_{2}=\left(\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]\right)^{1/2}.$$ To bound $I_{1}$, we use that $\psi_{\infty}\log^{2}\frac{\tilde{\psi}}{\psi_{\infty}}\leq C\tilde{\psi}$. Hence, the numerator of $I_{1}$ is bounded by $C|\nabla u|\int\frac{\tilde{\psi}}{1-|R|^{2}}dR$, which is clearly in $L^{1}((0,T)\times\Omega)$.
Indeed, by using (8) and Corollary 3.3, we see that (37) $$\int\frac{\psi}{1-|R|^{2}}dR\leq C\left(\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\psi\over\psi_{\infty}}\right|^{2}dR\right)^{1/2}.$$ To bound the second term on the right hand side of (35), we use that the numerator can be bounded by (38) $$\left|\int_{B}\nabla u\cdot R\tilde{\psi}\,\log({\tilde{\psi}\over\psi_{\infty}}){\psi_{\infty}\over\tilde{\psi}}\nabla_{R}{\tilde{\psi}\over\psi_{\infty}}\right|\leq C|\nabla u|\left(\int_{B}\psi_{\infty}|\log({\tilde{\psi}\over\psi_{\infty}})|\left|\nabla_{R}\sqrt{\tilde{\psi}\over\psi_{\infty}}\right|^{2}\right)^{1/2}\left(\int_{B}\psi_{\infty}|\log({\tilde{\psi}\over\psi_{\infty}})|{\tilde{\psi}\over\psi_{\infty}}\right)^{1/2}$$ (39) $$\leq C|\nabla u|^{2}\left(\int_{B}\tilde{\psi}|\log({\tilde{\psi}\over\psi_{\infty}})|\right)+\left(\int_{B}\psi_{\infty}|\log({\tilde{\psi}\over\psi_{\infty}})|\left|\nabla_{R}\sqrt{\tilde{\psi}\over\psi_{\infty}}\right|^{2}\right)$$ (40) $$\leq C|\nabla u|^{2}(1+a)^{1/2}\left(\int_{B}\tilde{\psi}\log^{2}({\tilde{\psi}\over\psi_{\infty}})\right)^{1/2}+\left(\int_{B}\psi_{\infty}|\log({\tilde{\psi}\over\psi_{\infty}})|\left|\nabla_{R}\sqrt{\tilde{\psi}\over\psi_{\infty}}\right|^{2}\right).$$ Dividing (38) by $N_{2}$, we deduce that (41) $$I_{2}\leq C|\nabla u|^{2}-\frac{1}{4}I_{3}.$$ Integrating (35) in time and space and using the fact that $I_{3}$ has a sign, we deduce the following a priori bound (42) $$\int_{\Omega}\left(\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]\right)^{1/2}(t)+\int_{0}^{T}\int_{\Omega}\frac{\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\tilde{\psi}\over\psi_{\infty}}\right|^{2}\log({\tilde{\psi}\over\psi_{\infty}})}{\left(\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]\right)^{1/2}}\leq C_{T}$$ for $0\leq t\leq T$, if the initial condition satisfies $\int_{\Omega}\left(\int_{B}\tilde{\psi}_{0}[\log^{2}({\tilde{\psi}_{0}\over\psi_{\infty}})-2\log({\tilde{\psi}_{0}\over\psi_{\infty}})+2]\right)^{1/2}\leq C_{0}.$ Hence, we see that (35) can be written as (43) $$(\partial_{t}+u.\nabla)N_{2}=F_{2}$$ where $F_{2}$ is in $L^{1}((0,T)\times\Omega)$. It turns out that passing to the limit in the bound (42) is not straightforward. Actually, one can find sequences of functions $\tilde{\psi}^{n}$ such that (42) holds and the weak limit does not satisfy (42). This is the reason why we prefer to write the second bound as (44) $$\int_{\Omega}\left(\int_{B}g^{2}\log(g^{2})\psi_{\infty}dR\right)^{1/2}dx+\int_{0}^{T}\int_{\Omega}\int_{B}\frac{\psi_{\infty}\left|\nabla_{R}g\right|^{2}\ dR}{\left(\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]\right)^{1/2}}\leq C_{T}$$ where $g$ is given by $g=\sqrt{{\tilde{\psi}\over\psi_{\infty}}}\log^{1/2}({\tilde{\psi}\over\psi_{\infty}})$. 4.2.2. Case of an unbounded domain In the case $\Omega={\mathbb{R}}^{D}$, we first take $c_{1}$ and $c_{2}$ to be the two constants such that the function $\phi(x)=x[\log^{2}x-2\log x+c_{1}]+c_{2}$ satisfies $\phi(1+a)=\phi^{\prime}(1+a)=0$. This is achieved by taking $c_{1}=2-\log^{2}(1+a)$ and $c_{2}=2(1+a)[\log(1+a)-1]$. Notice also that the function $\phi(x)$ is nonnegative for $x\geq a$, since $a$ is taken big enough.
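As a quick check of these constants (a two-line computation, recorded for completeness): $$\phi^{\prime}(x)=\log^{2}x-2\log x+c_{1}+x\Big{(}\frac{2\log x}{x}-\frac{2}{x}\Big{)}=\log^{2}x+c_{1}-2,$$ so $\phi^{\prime}(1+a)=0$ forces $c_{1}=2-\log^{2}(1+a)$, and then $\phi(1+a)=(1+a)[2-2\log(1+a)]+c_{2}=0$ gives $c_{2}=2(1+a)[\log(1+a)-1]$.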
It is clear that the extra bound (6) implies that (45) $$\int_{\Omega}\frac{\int_{B}\phi({\tilde{\psi}_{0}\over\psi_{\infty}})dR}{1+\left[\int_{B}\phi({\tilde{\psi}_{0}\over\psi_{\infty}})dR\right]^{1/2}}dx\leq C_{0},$$ and hence we can perform the same calculations as in (34) and (35), with $\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]dR$ replaced by $\int_{B}\phi({\tilde{\psi}\over\psi_{\infty}})dR$ and with the function $s\to\sqrt{s}$, used to go from (34) to (35), replaced by $s\to\frac{s}{1+\sqrt{s}}$, which behaves like $\phi_{1}(s)=\min(\sqrt{s},s)$. The rest of the proof is identical. 4.2.3. Case where $\rho$ is not constant In the case where $\rho$ is not constant and we are in a bounded domain, we have to modify (34) slightly and multiply by $\log^{2}\frac{\tilde{\psi}}{\rho\psi_{\infty}}$ instead. In the case where we are also in an unbounded domain, we have to replace $\int_{B}\tilde{\psi}[\log^{2}({\tilde{\psi}\over\psi_{\infty}})-2\log({\tilde{\psi}\over\psi_{\infty}})+2]dR$ by $\int_{B}\phi({(1+a)\tilde{\psi}\over(\rho+a)\psi_{\infty}})dR$. The extra factor $\frac{1+a}{\rho+a}$ is used to ensure that when $\tilde{\psi}$ is at microscopic equilibrium, namely $\tilde{\psi}=(\rho+a)\psi_{\infty}$, the integrand reduces to $\phi(1+a)$. The rest of the proof is identical and yields at the end the following bound instead of (42): (46) $$\int_{\Omega}\phi_{1}\left(\int_{B}\phi({(1+a)\tilde{\psi}\over(\rho+a)\psi_{\infty}})\right)(t)+\int_{0}^{T}\int_{\Omega}\frac{\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\tilde{\psi}\over\psi_{\infty}}\right|^{2}\log({(1+a)\tilde{\psi}\over(\rho+a)\psi_{\infty}})}{1+\left(\int_{B}\phi({(1+a)\tilde{\psi}\over(\rho+a)\psi_{\infty}})\right)^{1/2}}\leq C_{T}.$$ One can then deduce from (46) that (42) and (44) hold with the integration set $\Omega$ replaced by any compact $K$ of ${\mathbb{R}}^{D}$. 5.
Weak compactness As is classical when proving global existence of weak solutions, it is enough to prove the weak compactness of a sequence of weak solutions satisfying the a priori estimates of the previous section. In the next section, we present one way of approximating the system. We consider $(u^{n},\psi^{n})$ a sequence of weak solutions to (1) satisfying, uniformly in $n$, the free energy bound (32) and the $\log^{2}$ bound (42), with initial data $(u^{n}_{0},\psi^{n}_{0})$ such that $(u^{n}_{0},\psi^{n}_{0})$ converges strongly to $(u_{0},\psi_{0})$ in $L^{2}(\Omega)\times L^{1}_{loc}(\Omega;L^{1}(B))$ and $\psi^{n}_{0}\log\frac{\psi^{n}_{0}}{\rho^{n}_{0}\psi_{\infty}}-\psi^{n}_{0}+\rho^{n}_{0}\psi_{\infty}$ converges strongly to $\psi_{0}\log\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}-\psi_{0}+\rho_{0}\psi_{\infty}$ in $L^{1}(\Omega\times B)$. We also assume that $(u^{n},\psi^{n})$ has some extra regularity, with bounds that depend on $n$, such that we can perform all the following calculations. We extract a subsequence such that $u^{n}$ converges weakly to $u$ in $L^{p}((0,T);L^{2}(\Omega))\cap L^{2}((0,T);H^{1}_{0}(\Omega))$ and $\psi^{n}$ converges weakly to $\psi$ in $L^{p}((0,T);L^{1}_{loc}(\Omega\times B))$ for each $p<\infty$. We would like to prove that $(u,\psi)$ is still a solution of (1). The main difficulty is to pass to the limit in the nonlinear term $\nabla u^{n}R\psi^{n}$ appearing in the second equation of (1). We introduce $g^{n}=\sqrt{\frac{\tilde{\psi}^{n}}{\psi_{\infty}}}\log^{1/2}({\tilde{\psi}^{n}\over\psi_{\infty}})$ and $f^{n}=\sqrt{\frac{\tilde{\psi}^{n}}{\psi_{\infty}}}$, where $\tilde{\psi}^{n}={\psi^{n}+a\psi_{\infty}}$ and $a>1$ is any constant. We also assume, extracting a subsequence if necessary, that $g^{n}$ and $f^{n}$ converge weakly to some $g$ and $f$ in $L^{p}((0,T);L^{2}_{loc}(\Omega\times B,\,dx\psi_{\infty}dR))$ for each $p<\infty$.
To prove that $(u,\psi)$ is a solution of (1), it will be enough to prove that $(g^{n})^{2}=\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\log({\tilde{\psi}^{n}\over\psi_{\infty}})$ converges weakly to $g^{2}=\frac{\tilde{\psi}}{\psi_{\infty}}\log({\tilde{\psi}\over\psi_{\infty}})$, namely that $g^{n}$ converges strongly to $g$ in $L^{2}((0,T);L^{2}(\Omega\times B,\,dx\psi_{\infty}dR))$. First, it is clear that $u$, $\tilde{\psi}$ and $g$ satisfy the same a priori estimates as the sequence $u^{n}$, $\tilde{\psi}^{n}$ and $g^{n}$, since all those functionals have good convexity properties. In particular, it is clear that $u,\psi$ satisfy (32) with an inequality. We just point out that, to pass to the limit in the last term on the left hand side of (32), we can use the fact that the function $\phi_{2}(x,y)=\frac{x^{2}}{y}$ is convex. To pass to the limit in (44), we also use the fact that $\phi_{2}(x,y)$ is convex. Hence, we deduce that (47) $$\sup_{0\leq t\leq T}\int_{\Omega}\left(\Big{(}\int_{B}g^{2}\log(g^{2})\psi_{\infty}dR\Big{)}^{1/2}+\overline{N^{n}_{2}}\right)dx(t)+\int_{0}^{T}\int_{\Omega}\int_{B}\frac{\psi_{\infty}\left|\nabla_{R}g\right|^{2}}{\overline{N^{n}_{2}}}\leq C_{T}$$ where $\overline{N^{n}_{2}}$ is the weak limit of $\left(\int_{B}\tilde{\psi}^{n}[\log^{2}({\tilde{\psi}^{n}\over\psi_{\infty}})-2\log({\tilde{\psi}^{n}\over\psi_{\infty}})+2]\right)^{1/2}$.
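The convexity of $\phi_{2}(x,y)=x^{2}/y$ on $\{y>0\}$, used twice above, can be checked directly (a routine verification, recorded for completeness): its Hessian $$D^{2}\phi_{2}(x,y)=\begin{pmatrix}\frac{2}{y}&-\frac{2x}{y^{2}}\\ -\frac{2x}{y^{2}}&\frac{2x^{2}}{y^{3}}\end{pmatrix}$$ has nonnegative trace and vanishing determinant, hence is positive semi-definite; alternatively, $\phi_{2}(x,y)=\sup_{t\in{\mathbb{R}}}\,(2tx-t^{2}y)$ is a supremum of affine functions of $(x,y)$, which is why the corresponding functionals are weakly lower semi-continuous.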
Dividing (33) by $\psi_{\infty}$, we get (48) $$\partial_{t}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}+u^{n}.\nabla\frac{\tilde{\psi}^{n}}{\psi_{\infty}}={\rm div}_{R}\Big{[}-\nabla u^{n}\cdot R\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Big{]}+\nabla_{R}{\mathcal{U}}.\nabla u^{n}R\frac{\tilde{\psi}^{n}}{\psi_{\infty}}+\Delta_{R}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}-\nabla_{R}{\mathcal{U}}.\nabla_{R}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}-a\nabla u^{n}\cdot R.\nabla_{R}{\mathcal{U}}.$$ From (48), we deduce that, for any smooth function $\Theta$ from $(0,\infty)$ to ${\mathbb{R}}$, we have (49) $$\begin{split}\partial_{t}\Theta(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})+u^{n}.\nabla\Theta(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})&=-\nabla u^{n}R\cdot\nabla_{R}\Theta(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})+\nabla_{R}{\mathcal{U}}.\nabla u^{n}R\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})\\ &\quad+\Delta_{R}\Theta(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})-\nabla_{R}{\mathcal{U}}\cdot\nabla_{R}\Theta(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})-\Theta^{\prime\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})|\nabla_{R}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}|^{2}\\ &\quad-2ak\nabla_{i}u^{n}_{j}\frac{R_{i}R_{j}}{1-|R|^{2}}\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}}).\end{split}$$ We take $\Theta(t)=t^{1/2}\log^{1/2}(t)$ and recall that $g^{n}=\Theta(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})$.
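For this choice of $\Theta$, the derivatives used below can be computed directly (we record the computation for completeness; note the cancellation of the $\log^{-1/2}$ terms in the second derivative): writing $L=\log t$, $$\Theta^{\prime}(t)=\frac{1}{2}t^{-1/2}L^{1/2}+\frac{1}{2}t^{-1/2}L^{-1/2}=\frac{1}{2}t^{-1/2}\big{(}L^{1/2}+L^{-1/2}\big{)},$$ and differentiating once more, $$\Theta^{\prime\prime}(t)=-\frac{1}{4}t^{-3/2}\big{(}L^{1/2}+L^{-1/2}\big{)}+\frac{1}{4}t^{-3/2}\big{(}L^{-1/2}-L^{-3/2}\big{)}=-\frac{1}{4}t^{-3/2}\big{(}L^{1/2}+L^{-3/2}\big{)}.$$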
We introduce the following defect measures $\gamma_{ij},\gamma^{\prime}_{ij}$ and $\beta_{ij}$ such that (50) $$\begin{split}\displaystyle\nabla u^{n}g^{n}\to\nabla ug+\gamma,\quad\quad% \nabla u^{n}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Theta^{\prime}(\frac{\tilde% {\psi}^{n}}{\psi_{\infty}})\to\nabla u\overline{\frac{\tilde{\psi}^{n}}{\psi_{% \infty}}\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}+\gamma^{% \prime}\\ \displaystyle\nabla u^{n}\tilde{\psi}^{n}\to\nabla u\tilde{\psi}+\beta\end{split}$$ where $\gamma,\gamma^{\prime}\in L^{2}((0,T)\times\Omega\times B)$ and $\beta\in L^{2}((0,T)\times\Omega;L^{1}(B))$ are matrix valued. On one hand, passing to the limit in (49) with $\Theta(t)=t^{1/2}\log^{1/2}(t)$, we get (51) $$\begin{split}\displaystyle\partial_{t}g+u.\nabla g&\displaystyle={\rm div}_{R}% \Big{[}-\nabla_{i}u_{j}R_{j}g-\gamma_{ij}R_{j}\Big{]}\\ &\displaystyle\quad+\nabla_{R}{\mathcal{U}}.\nabla uR\overline{\frac{\tilde{% \psi}^{n}}{\psi_{\infty}}\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}% })}+\nabla_{R}{\mathcal{U}}R:\gamma^{\prime}\\ &\displaystyle\quad+\frac{1}{\psi_{\infty}}{\rm div}_{R}\Big{[}\psi_{\infty}% \nabla_{R}g\Big{]}+\overline{\frac{|\nabla_{R}f^{n}|^{2}(\log^{1/2}+\log^{-3/2% })(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}{f^{n}}}\\ &\displaystyle\quad-ak\,\overline{\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi% _{\infty}})\nabla_{i}u^{n}_{j}}\frac{2R_{i}R_{j}}{1-|R|^{2}}\end{split}$$ where, here and below, $\overline{F_{n}}$ denotes the weak limit of $F_{n}$ and where we have used that (52) $$\left\{\begin{split}\displaystyle\Theta^{\prime}(s)&\displaystyle=\frac{1}{2}s% ^{-1/2}(\log^{1/2}(s)+\log^{-1/2}(s))\\ \displaystyle\Theta^{\prime\prime}(s)&\displaystyle=-\frac{1}{4}s^{-3/2}(\log^% {1/2}(s)+\log^{-3/2}(s)).\end{split}\right.$$ Multiplying by $g$, we get (53) $$\begin{split}\displaystyle\partial_{t}g^{2}+u.\nabla g^{2}&\displaystyle={\rm div% }_{R}\Big{[}-\nabla uRg^{2}\Big{]}+\nabla 
uR\cdot\nabla_{R}{\mathcal{U}}% \overline{\Big{(}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Theta^{\prime}(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}})\Big{)}}2g\\ &\displaystyle\quad-{\rm div}_{R}(\gamma_{ij}R_{j})2g+\nabla_{R}{\mathcal{U}}R% :\gamma^{\prime}2g\\ &\displaystyle\quad+\frac{1}{\psi_{\infty}}{\rm div}_{R}\Big{[}\psi_{\infty}% \nabla_{R}g^{2}\Big{]}-2|\nabla_{R}g|^{2}\\ &\displaystyle\quad+\overline{\frac{|\nabla_{R}f^{n}|^{2}(\log^{1/2}+\log^{-3/% 2})(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})|^{2}}{f^{n}}}2g-ak\,\overline{% \Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})\nabla_{i}u_{j}^{n}}% \frac{4R_{i}R_{j}}{1-|R|^{2}}g\end{split}$$ Multiplying (53) by ${\psi_{\infty}}$ and integrating in $R$ yields (54) $$\begin{split}&\displaystyle(\partial_{t}+u.\nabla)\int_{B}\psi_{\infty}g^{2}=-% \nabla u:\tau\left(\psi_{\infty}\Big{(}g^{2}-2g\overline{\frac{\tilde{\psi}^{n% }}{\psi_{\infty}}\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}\Big{% )}\right)\\ &\displaystyle\quad-\int_{B}{\rm div}_{R}(\gamma_{ij}R_{j})2g\psi_{\infty}+% \nabla_{R}{\mathcal{U}}R:\gamma^{\prime}2g\psi_{\infty}\\ &\displaystyle\quad+\int_{B}\psi_{\infty}\overline{\frac{|\nabla_{R}f^{n}|^{2}% (\log^{1/2}+\log^{-3/2})(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})|^{2}}{f^{n}}}% 2g-2\psi_{\infty}|\nabla_{R}g|^{2}\\ &\displaystyle\quad-\int_{B}\psi_{\infty}ak\,\overline{\Theta^{\prime}(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}})\nabla_{i}u_{j}^{n}}\frac{4R_{i}R_{j}}{1-|R|^% {2}}g\end{split}$$ where we recall that $\tau_{ij}(\psi)=2k\int_{B}\psi\frac{R_{i}R_{j}}{1-|R|^{2}}dR$. Here, there is a small problem of definition: The terms on the second line of the right hand side are not well defined in the sense of distribution and we need some further analysis to make sense of them. Also, the transport term is not well defined even if we write it as div$(u\int_{B}\psi_{\infty}g^{2})$. Actually, as we will see later, we will not use (54) but a renormalized form of it. 
Indeed, we will construct in the next subsection a renormalizing factor $N$ that satisfies $(\partial_{t}+u.\nabla)\frac{1}{N}=0$ and we will make sense of (54) after dividing each term by $N^{4}$. On the other hand, passing to the limit in the equation satisfied by $\tilde{\psi}_{n}$, we get (55) $$\partial_{t}\tilde{\psi}+u.\nabla\tilde{\psi}={\rm div}_{R}\Big{[}-\nabla u% \cdot R\tilde{\psi}-\beta_{ij}R_{j}\Big{]}+div_{R}\Big{[}\psi_{\infty}\nabla_{% R}{\tilde{\psi}\over\psi_{\infty}}\Big{]}-2ak\nabla u:\psi_{\infty}\frac{R_{i}% R_{j}}{1-|R|^{2}}.$$ Besides, $\tilde{\psi}^{n}\log(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})$ satisfies (56) $$\begin{split}&\displaystyle(\partial_{t}+u^{n}.\nabla)\left[\int_{B}\tilde{% \psi}^{n}\log(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})\right]=\nabla u^{n}:\tau% (\tilde{\psi}^{n})\\ &\displaystyle\quad\quad\quad-4\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{% \frac{\tilde{\psi}^{n}}{{\psi_{\infty}}}}\right|^{2}-2ak\int_{B}\nabla u^{n}% \frac{R_{i}R_{j}}{1-|R|^{2}}\psi_{\infty}\log(\frac{\tilde{\psi}^{n}}{\psi_{% \infty}}).\end{split}$$ We would like to pass to the limit weakly in (56) and deduce that (57) $$\begin{split}\displaystyle(\partial_{t}+u.\nabla)\int_{B}\overline{\tilde{\psi% }^{n}\log(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}&\displaystyle=\nabla u:\tau% (\tilde{\psi})+\int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R^{2}|}\\ &\displaystyle-\int_{B}\psi_{\infty}\overline{\left|\nabla_{R}\sqrt{\frac{% \tilde{\psi}^{n}}{{\psi_{\infty}}}}\right|^{2}}-2ak\int_{B}\overline{\log(% \frac{\tilde{\psi}^{n}}{\psi_{\infty}})\nabla u^{n}}\frac{R_{i}R_{j}}{1-|R|^{2% }}\psi_{\infty}.\end{split}$$ However, we can not use (50) to pass to the limit in $\nabla u^{n}:\tau(\tilde{\psi}^{n})=\int_{B}\nabla u^{n}\frac{R_{i}R_{j}}{1-|R% |^{2}}\tilde{\psi}^{n}$ and deduce that (58) $$\nabla u^{n}:\tau(\tilde{\psi}^{n})\rightharpoonup\nabla u:\tau(\tilde{\psi})+% \int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R^{2}|}$$ since $\nabla 
u^{n}\frac{R_{i}R_{j}}{1-|R|^{2}}\tilde{\psi}^{n}$ is only bounded in $L^{1}(dt\,dx\,dR)$. Besides, we can not pass to the limit in the transport term even if we write it in divergence form. To overcome these difficulties, we will divide (56) by $1+\delta N_{2}^{n}$ where $N_{2}^{n}$ solves (43) before passing to the limit. Then, we will send $\delta$ to zero. To be able to deal with the limit $\delta$ to zero, we need to renormalize (56) too. We denote $N_{1}^{n}=\int_{B}\tilde{\psi}^{n}\log(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})$, $N_{2}^{n}=\left(\int_{B}\tilde{\psi}^{n}[\log^{2}({\tilde{\psi}^{n}\over\psi_{% \infty}})-2\log({\tilde{\psi}^{n}\over\psi_{\infty}})+2]\right)^{1/2}$ and $\theta_{\kappa}(s)=\frac{s}{1+\kappa s}$. We first multiply (56) by $\theta_{\kappa}^{\prime}(N_{1}^{n})$ and get an equation for $\theta_{\kappa}(N_{1}^{n})$. Dividing the resulting equation by $1+\delta N_{2}^{n}$, using (43) and taking the weak limit when $n$ goes to infinity (extracting a subsequence if necessary), we get for $\kappa,\delta>0$ $$\displaystyle(\partial_{t}+u.\nabla)\overline{\frac{\theta_{\kappa}(N_{1}^{n})% }{{1+\delta N_{2}^{n}}}}=$$ $$\displaystyle\overline{\nabla u^{n}:\frac{\tau(\tilde{\psi}^{n})}{(1+\delta N_% {2}^{n})(1+\kappa N_{1}^{n})^{2}}}-\overline{\frac{1}{(1+\delta N_{2}^{n})(1+% \kappa N_{1}^{n})^{2}}\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\frac{\tilde{% \psi}^{n}}{{\psi_{\infty}}}}\right|^{2}}$$ (59) $$\displaystyle-\overline{\frac{2ak}{(1+\delta N_{2}^{n})(1+\kappa N_{1}^{n})^{2% }}\int_{B}\nabla u^{n}\frac{R_{i}R_{j}}{1-|R|^{2}}\psi_{\infty}\log(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}})}$$ $$\displaystyle-\overline{\frac{\delta F^{n}}{(1+\delta N_{2}^{n})^{2}}{\theta_{% \kappa}(N_{1}^{n})}}.$$ Now, we can send $\delta$ to zero. Notice that due to the fact that ${\theta_{\kappa}(N_{1}^{n})}$ is bounded and that $F^{n}$ is bounded in $L^{1}$, we deduce that the last term goes to zero when $\delta$ goes to zero. 
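For the reader's convenience, this last step can be quantified by a one-line estimate using only the bounds already stated: since $0\leq\theta_{\kappa}(s)\leq\frac{1}{\kappa}$ and $(1+\delta N_{2}^{n})^{-2}\leq 1$,

```latex
\left\|\frac{\delta F^{n}}{(1+\delta N_{2}^{n})^{2}}\,\theta_{\kappa}(N_{1}^{n})\right\|_{L^{1}}
\;\leq\;\frac{\delta}{\kappa}\,\sup_{n}\|F^{n}\|_{L^{1}}
\;\xrightarrow[\;\delta\to 0\;]{}\;0,
```

uniformly in $n$, for each fixed $\kappa>0$.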
Then, we send $\kappa$ to zero and recover at the limit $$\displaystyle(\partial_{t}+u.\nabla)\theta=$$ $$\displaystyle\overline{\nabla u^{n}:{\tau(\tilde{\psi}^{n})}}^{\delta,\kappa}-% \overline{\int_{B}\psi_{\infty}\left|\nabla_{R}\sqrt{\frac{\tilde{\psi}^{n}}{{% \psi_{\infty}}}}\right|^{2}}^{\delta,\kappa}$$ (60) $$\displaystyle-\overline{{2ak}\int_{B}\nabla u^{n}\frac{R_{i}R_{j}}{1-|R|^{2}}% \psi_{\infty}\log(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}^{\delta,\kappa}.$$ where $\theta=\lim_{\kappa\to 0}\lim_{\delta\to 0}\overline{\frac{\theta_{\kappa}(N_{% 1}^{n})}{{1+\delta N_{2}^{n}}}}=\lim_{\kappa\to 0}\overline{{\theta_{\kappa}(N% _{1}^{n})}}$ is the Chacon limit of $N_{1}^{n}$ and $$\overline{F_{n}}^{\delta,\kappa}=\lim_{\kappa\to 0}\lim_{\delta\to 0}\overline% {\frac{F_{n}}{(1+\delta N_{2}^{n})(1+\kappa N_{1}^{n})^{2}}}$$ for any sequence $F_{n}$ bounded in $L^{1}$. We will prove in the next subsection that $N^{n}_{1}$ is equiintegrable and hence that $\theta$ is also the weak limit of $N^{n}_{1}$. We also notice that if $F_{n}$ is equi-integrable then $\overline{F_{n}}^{\delta,\kappa}=\overline{F_{n}}$. To be really precise the term $u.\nabla\theta={\rm div}(\theta u)$ on the left hand side of (60) is not well defined since $\theta$ is only in $L^{\infty}_{t}L^{1}_{x}$ and $u$ is in $L^{2}_{t}\dot{H}^{1}_{x}$. To give sense to (60) we need to use a renormalizing factor. Actually, we made the same remark for the transport term in the equation (54). Recall that at the end, we would like to take the difference between (60) and (54) after dividing it by the factor $N^{4}$ that we are going to define now. The fact of dividing by $N$ will insure that $\frac{\theta}{N}$ is bounded and that all the terms will make sense. So the point is to divide (59) by $N^{4}$ and then send $\delta$ and $\kappa$ to zero. 5.1. 
The renormalizing factor $N$

We recall that, from (43), $\beta_{M}(N_{2}^{n})$ solves (61) $$(\partial_{t}+u^{n}.\nabla)\beta_{M}(N_{2}^{n})=\beta_{M}^{\prime}(N_{2}^{n})F^{n},$$ where $\beta_{M}(s)=\theta_{1/M}(s)=\frac{s\,M}{M+s}$. Passing to the limit in (61), we get (62) $$(\partial_{t}+u.\nabla)\overline{\beta_{M}(N_{2}^{n})}=\overline{\beta_{M}^{\prime}(N_{2}^{n})F^{n}}.$$ This equation does not seem to be very useful since the right hand side is a measure. To overcome this problem, we first introduce the unique a.e. flow $X^{n}$ of $u^{n}$ in the sense of DiPerna and Lions [17], solution of (63) $$\partial_{t}X^{n}(t,x)=u^{n}(t,X^{n}(t,x))\quad\quad X^{n}(t=0,x)=x.$$ We also denote by $X$ the a.e. flow of $u$. Let $Q^{n}$ be the solution of (43) with $F^{n}$ replaced by $|F^{n}|$ and taking the same initial data as $N^{n}_{2}$ at $t=0$. Hence, (64) $$\frac{d[\beta_{M}(Q^{n})(t,X^{n}(t,x))]}{dt}=\beta_{M}^{\prime}(Q^{n})|F^{n}(t,X^{n}(t,x))|$$ where the equation holds in the sense of distributions. Passing to the limit weakly in (64), we get (65) $$\frac{d\overline{[\beta_{M}(Q^{n})(t,X^{n}(t,x))]}}{dt}=\overline{\beta_{M}^{\prime}(Q^{n})|F^{n}(t,X^{n}(t,x))|}.$$ From the stability of the notion of a.e. flow with respect to the weak convergence of $u^{n}$ to $u$, we know that $X^{n}(t,x)$ converges to $X(t,x)$ in $L^{1}_{loc}$ and also that $(X^{n}(t)^{-1})(x)$ converges to $(X(t)^{-1})(x)$ in $L^{1}_{loc}$. This allows us to get the following equality of the weak limits (66) $$\overline{[\beta_{M}(Q^{n})(t,X^{n}(t,x))]}=\overline{[\beta_{M}(Q^{n})(t,X(t,x))]}.$$ Now, sending $M$ to infinity in (65), we deduce that (67) $$\frac{d[Q(t,X(t,x))]}{dt}=F$$ where $Q=\lim_{M\to\infty}\overline{[\beta_{M}(Q^{n})]}$ is the Chacon limit of $Q^{n}$ and $F=\lim_{M\to\infty}\overline{\beta_{M}^{\prime}(Q^{n})|F^{n}(t,X^{n}(t,x))|}$. It is easy to see that $Q\in L^{\infty}(0,T;L^{1}(\Omega))$ and that $F\in{\mathcal{M}}((0,T)\times\Omega)$.
Integrating in $t$, we deduce that a.e. in $x\in\Omega$, we have (68) $$Q(t,X(t,x))=Q(0,x)+\int_{0}^{t}F(s,X(s,x))\,ds$$ for a.e. $t\in(0,T)$. Since $F$ is nonnegative, we deduce that $Q(t,X(t,x))$ is nondecreasing in time. We define the renormalizing factor $N$ by (69) $$N(t,X(t,x))=Q(T_{0},X(T_{0},x))=Q(0,x)+\int_{0}^{T_{0}}F(s,X(s,x))\,ds$$ for $t\in(0,T_{0})$, where $T_{0}<T$ is a fixed time. In the sequel, we will denote $T=T_{0}$ and will not make the distinction between these two times. Notice that $N$ is constant along the characteristics of $u$, that $N$ is in $L^{\infty}(0,T;L^{1}(\Omega))$ and that $N(t,X(t,x))$ is in $L^{1}(\Omega;L^{\infty}(0,T))$. Moreover, it is bounded from below by $1$. Hence, it solves $$(\partial_{t}+u.\nabla)\frac{1}{N}=0.$$ Also, the following two inequalities hold (70) $$\overline{\beta_{M}(N_{2}^{n})}\leq\overline{\beta_{M}(Q^{n})}\leq Q\leq N$$ and hence the weak limit of $N_{2}^{n}$, which is equal to the Chacon limit of $N_{2}^{n}$, is bounded by $N$. The fact that the weak limit of $N_{2}^{n}$ is equal to its Chacon limit comes from the fact that the sequence $N_{2}^{n}$ is equi-integrable. This is a simple consequence of the dissipation of the free energy and the weighted Sobolev inequality (24). Indeed, from (24), we deduce that $\sqrt{\psi^{n}\over\psi_{\infty}}$ is bounded in $L^{2}((0,T)\times\Omega;L^{p}(\psi_{\infty}dR))$; on the other hand, from the conservation of mass, we know that $\sqrt{\psi^{n}\over\psi_{\infty}}$ is bounded in $L^{\infty}((0,T)\times\Omega;L^{2}(\psi_{\infty}dR))$. Interpolating between these two bounds, we easily deduce that $\sqrt{\psi^{n}\over\psi_{\infty}}$ is bounded in $L^{r}((0,T)\times\Omega\times B,\,dt\,dx\,\psi_{\infty}dR)$ for some $r>2$ and hence $N_{2}^{n}$ is equi-integrable. We also get that $N^{n}_{1}$ is equi-integrable, and hence $\theta$, which is the Chacon limit of $N^{n}_{1}$, is equal to the weak limit of $N^{n}_{1}$. 5.2.
The term $\overline{\nabla u^{n}:{\tau(\tilde{\psi}^{n})}}^{\delta,\kappa}$

In this subsection, we will prove that $\overline{\nabla u^{n}:{\tau(\tilde{\psi}^{n})}}^{\delta,\kappa}=\nabla u:\tau(\psi)+\int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R^{2}|}.$ This will follow from the following two lemmas.

Lemma 5.1. (71) $$\overline{\frac{\nabla u^{n}:{\tau(\tilde{\psi}^{n})}}{(1+\delta N_{2}^{n})(1+\kappa N_{1}^{n})^{2}}}=\int_{B}z^{\delta,\kappa}\frac{R_{i}R_{j}}{1-|R^{2}|}$$ where $z^{\delta,\kappa}=\overline{\frac{{\tilde{\psi}^{n}}\,\nabla u^{n}}{(1+\delta N_{2}^{n})(1+\kappa N_{1}^{n})^{2}}}$.

Lemma 5.2. $z^{\delta,\kappa}$ converges strongly to $\overline{\nabla u^{n}\tilde{\psi}^{n}}=\nabla u\tilde{\psi}+\beta$ in $L^{1}((0,T)\times\Omega\times B;dt\,dx\,\frac{dR}{1-|R|})$ when $\delta$ goes to zero and then $\kappa$ goes to zero.

Denoting $\tau^{n,\delta,\kappa}=\frac{{\tau(\tilde{\psi}^{n})}}{(1+\delta N_{2}^{n})(1+\kappa N_{1}^{n})^{2}}$, we get the following corollary.

Corollary 5.3. (72) $$\overline{\nabla u^{n}:\tau(\tilde{\psi}^{n})}^{\delta,\kappa}=\lim_{\kappa\to 0}\lim_{\delta\to 0}\overline{\nabla u^{n}:\tau^{n,\delta,\kappa}}=\nabla u:\tau(\psi)+\int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R^{2}|}dR.$$

Proof of Lemma 5.1. The proof of (71) follows from the fact that $z^{n,\delta,\kappa}=\frac{\nabla u^{n}\tilde{\psi}^{n}}{\psi_{\infty}(1+\delta N_{2}^{n})(1+\kappa N_{1}^{n})^{2}}$ is equi-integrable in $L^{1}((0,T)\times\Omega\times B;dt\,dx\,\frac{\psi_{\infty}dR}{1-|R|})$ for $\delta,\kappa$ fixed. Indeed, consider the real-valued function $\Phi(x)=x\log(1+x)+1$. It is enough to prove that $\Phi(z^{n,\delta,\kappa})=\Phi\left(\frac{\nabla u^{n}\tilde{\psi}^{n}}{\psi_{\infty}(1+\delta N_{2}^{n})(1+\kappa N_{1}^{n})^{2}}\right)$ is bounded in $X=L^{1}((0,T)\times\Omega\times B;dt\,dx\,\frac{\psi_{\infty}dR}{1-|R|})$. To simplify notation, we denote $N^{n}=(1+\delta N_{2}^{n})(1+\kappa N_{1}^{n})^{2}$.
Hence, it is enough to bound (73) $$\frac{\nabla u^{n}}{N^{n}}\left[\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\log\left(\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\right)+\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\log\left(\frac{\nabla u^{n}}{N^{n}}\right)\right]$$ in $X$ (see definition above). To bound the first term appearing in (73), we use the Hardy-type inequality (8) to get that (74) $$\displaystyle\frac{\nabla u^{n}}{N^{n}}\int_{B}\tilde{\psi}^{n}\log\left(\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\right)\frac{1}{1-|R|}\ dR$$ (75) $$\displaystyle\quad\lesssim\frac{\nabla u^{n}}{N^{n}}\left[\int_{B}\tilde{\psi}^{n}\log\left(\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\right)\right]^{1/2}\left[\int_{B}\psi_{\infty}\left|\nabla\Big{(}\sqrt{\frac{\tilde{\psi}^{n}}{\psi_{\infty}}}\log^{1/2}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Big{)}\right|^{2}\right]^{1/2}$$ (76) $$\displaystyle\quad\lesssim|\nabla u^{n}|^{2}+\frac{1}{(N^{n})^{2}}\left[\int_{B}\tilde{\psi}^{n}\log\left(\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\right)\right]\left[\int_{B}\psi_{\infty}\left|\nabla\Big{(}\sqrt{\frac{\tilde{\psi}^{n}}{\psi_{\infty}}}\log^{1/2}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Big{)}\right|^{2}\right]$$ and using the a priori bound (42), we see that the last term is uniformly bounded in $L^{1}((0,T)\times\Omega)$. To bound the second term in (73), we first use the inequality $x\,y\leq C(x^{2}\log^{2}(x)+\frac{y^{2}}{\log^{2}y})$ for $x,y\geq 2$ and then apply Jensen's inequality.
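The elementary inequality just invoked can also be probed numerically. The sketch below (ours, not part of the proof) samples a logarithmic grid on $[2,\sim 10^{7}]$ and checks that the constant $C=2$ suffices there; this is a sanity check under the stated range assumption, not a substitute for the elementary proof.

```python
import math

def ratio(x, y):
    # ratio of x*y to x^2 log^2(x) + y^2 / log^2(y); the inequality
    # x*y <= C * (x^2 log^2 x + y^2 / log^2 y) holds at (x, y) iff ratio <= C
    lhs = x * y
    rhs = x ** 2 * math.log(x) ** 2 + y ** 2 / math.log(y) ** 2
    return lhs / rhs

# logarithmic grid covering [2, ~1.5e7]
grid = [2.0 * 1.15 ** k for k in range(114)]
worst = max(ratio(x, y) for x in grid for y in grid)
print(worst)
```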
Hence, (77) $$\displaystyle\frac{\nabla u^{n}}{N^{n}}\log\left(\frac{\nabla u^{n}}{N^{n}}% \right)\int_{B}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\frac{\psi_{\infty}}{1-|R% |}\ dR$$ (78) $$\displaystyle\quad\lesssim|\nabla u^{n}|^{2}+\frac{1}{(N^{n})^{2}}\left[\int_{% B}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\frac{\psi_{\infty}}{1-|R|}\ dR\right]% ^{2}\log^{2}\left[\int_{B}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\frac{\psi_{% \infty}}{1-|R|}\ dR\right]$$ (79) $$\displaystyle\quad\lesssim|\nabla u^{n}|^{2}+\frac{1}{(N^{n})^{2}}\left[\int_{% B}\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\log\left(\frac{\tilde{\psi}^{n}}{\psi% _{\infty}}\right)\frac{\psi_{\infty}}{1-|R|}\ dR\right]^{2}$$ and the last term can be bounded as in (74). We notice here that the last inequality implies in particular that $|\tau^{n,\delta,\kappa}|^{2}$ is equi-integrable in $L^{1}$ for fixed $\delta$ and $\kappa$. This is actually a very important fact that will be used again later. Proof of Lemma 5.2. To prove this lemma, we use dominated convergence and monotone convergence. Indeed, $|z^{n,\delta,\kappa}|$ is decreasing in $\delta,\kappa$, namely for $0<\delta\leq\delta^{\prime}$ and $0<\kappa\leq\kappa^{\prime}$, we have (80) $$|z^{n,\delta^{\prime},\kappa^{\prime}}|\leq|z^{n,\delta,\kappa}|\leq|\nabla u^% {n}{(\tilde{\psi}^{n})}|.$$ Passing to the limit weakly in $n$, we deduce that (81) $$\overline{|z^{n,\delta^{\prime},\kappa^{\prime}}|}\leq\overline{|z^{n,\delta,% \kappa}|}\leq\overline{|\nabla u^{n}{(\tilde{\psi}^{n})}|}$$ and by monotone convergence, we deduce that $G=\overline{|z^{n,\delta,\kappa}|}^{\delta,\kappa}\in X$ and that for all $0<\delta$ and $0<\kappa$, we have $|z^{\delta,\kappa}|\leq G$. Moreover, we have (82) $$|z^{\delta,\kappa}-z^{\delta^{\prime},\kappa^{\prime}}|\leq\left|\overline{|z^% {n,\delta,\kappa}|}-\overline{|z^{n,\delta^{\prime},\kappa^{\prime}}|}\right|.$$ Hence, there exists $g\in X$ such that $z^{\delta,\kappa}$ converges strongly to $g$ in $X$. 
Now, we would like to prove that the limit $g$ is equal to $\overline{\nabla u^{n}\tilde{\psi}^{n}}$. This follows from the fact that $\nabla u^{n}\tilde{\psi}^{n}$ is equi-integrable in $L^{1}((0,T)\times\Omega\times B;dt\,dx\,dR)$ (without the weight). Indeed, denoting $\Phi(x)=|x|\log^{1/2}(1+|x|)$, we have (83) $$\displaystyle\Phi(|\nabla u^{n}\tilde{\psi}^{n}|)$$ $$\displaystyle\lesssim|\nabla u^{n}\tilde{\psi}^{n}|(\log(1+\tilde{\psi}^{n})+\log(1+|\nabla u^{n}|))$$ (84) $$\displaystyle\lesssim\tilde{\psi}^{n}[|\nabla u^{n}|^{2}+\log(\tilde{\psi}^{n})]$$ which is clearly bounded in $L^{1}(dt\,dx\,dR)$.

5.3. Identification of $\int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R^{2}|}dR$.

In this subsection, we give a relation between $\beta$ and some defect measure related to the lack of strong convergence of $\nabla u^{n}$ in $L^{2}$. To state the main proposition of this subsection, we introduce a few notations. Let $u^{n}=v^{n}+w^{n}$ where $v^{n}$ and $w^{n}$ solve (85) $$\displaystyle\left\{\begin{array}[]{l}\partial_{t}v^{n}-\Delta v^{n}+\nabla p_{1}^{n}=\nabla.\tau^{n}\\ v^{n}(t=0)=0\end{array}\right.$$ (86) $$\displaystyle\left\{\begin{array}[]{l}\partial_{t}w^{n}-\Delta w^{n}+\nabla p_{2}^{n}=-u^{n}.\nabla u^{n}\\ w^{n}(t=0)=u^{n}(t=0).\end{array}\right.$$ We further split $w^{n}$ into $w^{n}_{1}+w^{n}_{2}$ where $w^{n}_{1}$ is the solution with zero initial data and $w^{n}_{2}$ is the solution with zero right hand side. In the rest of this subsection we will use ${\bm{\delta}}$ to denote $\delta,\kappa$.
We define $v^{n,{\bm{\delta}}}=v^{n,\delta,\kappa}$ the solution of (87) $$\displaystyle\left\{\begin{array}[]{l}\partial_{t}v^{n,{\bm{\delta}}}-\Delta v% ^{n,{\bm{\delta}}}+\nabla p_{1}^{n,{\bm{\delta}}}=\nabla.\tau^{n,{\bm{\delta}}% }\\ v^{n,{\bm{\delta}}}(t=0)=0\end{array}\right.$$ Extracting a subsequence, we assume that $(\tau^{n,{\bm{\delta}}},\nabla v^{n,{\bm{\delta}}},\nabla v^{n},\nabla w^{n})$ converges weakly in $L^{2}$ to some $(\tau^{{\bm{\delta}}},\nabla v^{{\bm{\delta}}},\nabla v,\nabla w)$ and that (88) $$\overline{|\nabla v^{n,{\bm{\delta}}}|^{2}}=|\nabla v^{{\bm{\delta}}}|^{2}+\mu% ^{{\bm{\delta}}}$$ for some defect measure $\mu^{{\bm{\delta}}}\in{\mathcal{M}}((0,T)\times\Omega)$. We also denote $\mu$ the limit of $\mu^{{\bm{\delta}}}$ when $\delta$ and then $\kappa$ go to zero (extracting a subsequence), namely (89) $$\mu=\lim_{\kappa\to 0}\lim_{\delta\to 0}\mu^{{\bm{\delta}}}=\lim_{{\bm{\delta}% }\to 0}\mu^{{\bm{\delta}}}.$$ Proposition 5.4. We have (90) $$\mu=-\int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R^{2}|}dR.$$ Proof of Proposition 5.4. We introduce the following weak limits (91) $$\displaystyle\overline{\tau^{n,{\bm{\delta}}}:\nabla v^{n,{\bm{\delta}}}}$$ $$\displaystyle=W^{{\bm{\delta}}{\bm{\delta}}}$$ (92) $$\displaystyle\overline{\tau^{n,{\bm{\delta}}}:\nabla v^{n}}$$ $$\displaystyle=W^{\bm{\delta}}.$$ Step 1: First, we would like to prove that $W^{{\bm{\delta}}{\bm{\delta}}}$ and $W^{\bm{\delta}}$ have the same limit $W$ when ${\bm{\delta}}$ goes to zero and that this limit is in $L^{1}$. 
To prove this, we introduce for $M>0$, the following weak limits (93) $$\displaystyle\overline{\tau^{n,{\bm{\delta}}}1_{|\tau^{n,{\bm{\delta}}}|\leq M% }:\nabla v^{n,{\bm{\delta}}}}$$ $$\displaystyle=W^{{\bm{\delta}}{\bm{\delta}}}_{M}$$ (94) $$\displaystyle\overline{\tau^{n,{\bm{\delta}}}1_{|\tau^{n,{\bm{\delta}}}|>M}:% \nabla v^{n,{\bm{\delta}}}}$$ $$\displaystyle=W^{{\bm{\delta}}{\bm{\delta}}}-W^{{\bm{\delta}}{\bm{\delta}}}_{M}$$ (95) $$\displaystyle\overline{\tau^{n,{\bm{\delta}}}1_{|\tau^{n,{\bm{\delta}}}|\leq M% }:\nabla v^{n}}$$ $$\displaystyle=W^{\bm{\delta}}_{M}$$ and (96) $$\displaystyle\overline{|\tau^{n,{\bm{\delta}}}1_{|\tau^{n,{\bm{\delta}}}|\leq M% }|^{2}}=G^{{\bm{\delta}}}_{M}\quad\quad\overline{|\tau^{n,{\bm{\delta}}}|^{2}}% =G^{{\bm{\delta}}}.$$ Since for a fixed ${\bm{\delta}}$, $|\tau^{n,{\bm{\delta}}}|^{2}$ is equi-integrable, we deduce that $G^{{\bm{\delta}}}_{M}$ converges to $G^{{\bm{\delta}}}$ in $L^{1}$ when $M$ goes to infinity and is monotone in $M$. Also, by monotone convergence, we deduce that there exists $G\in L^{1}$ such that $G^{{\bm{\delta}}}$ converges to $G$ in $L^{1}$ when ${\bm{\delta}}$ goes to zero. Actually, $G$ is the weak limit of $|\tau^{n}|^{2}$ in the sense of Chacon. Let us fix $\varepsilon>0$. We choose ${\bm{\delta}}_{0}$ and $M_{0}$ such that for ${\bm{\delta}}<{\bm{\delta}}_{0}$ and $M>M_{0}$, we have $\|G-G^{\bm{\delta}}\|_{L^{1}}+\|G-G^{\bm{\delta}}_{M}\|_{L^{1}}\leq\varepsilon$. 
We have (97) $$\displaystyle\overline{|\tau^{n,{\bm{\delta}}}|^{2}}$$ $$\displaystyle=\overline{|\tau^{n,{\bm{\delta}}}1_{|\tau^{n,{\bm{\delta}}}|\leq M% }|^{2}}+\overline{|\tau^{n,{\bm{\delta}}}1_{|\tau^{n,{\bm{\delta}}}|>M}|^{2}}$$ (98) $$\displaystyle=G^{{\bm{\delta}}}_{M}+\quad(G^{\bm{\delta}}-G^{{\bm{\delta}}}_{M% }).$$ Hence, we deduce that for ${\bm{\delta}}<{\bm{\delta}}_{0}$ and $M>M_{0}$, we have for all $n$, $\|\tau^{n,{\bm{\delta}}}1_{|\tau^{n,{\bm{\delta}}}|>M}\|_{L^{2}}^{2}\leq\varepsilon$ and hence, by Cauchy-Schwarz we deduce that $\|W^{{\bm{\delta}}{\bm{\delta}}}-W^{{\bm{\delta}}{\bm{\delta}}}_{M}\|_{L^{1}}% \leq C\sqrt{\varepsilon}$ and that $\|W^{{\bm{\delta}}}-W^{{\bm{\delta}}}_{M}\|_{L^{1}}\leq C\sqrt{\varepsilon}$. Hence to prove that $\lim_{{\bm{\delta}}}W^{{\bm{\delta}}{\bm{\delta}}}=\lim_{{\bm{\delta}}}W^{{\bm% {\delta}}}$, it is enough to prove it for the $M$ approximation, namely that (99) $$\lim_{{\bm{\delta}}}W^{{\bm{\delta}}{\bm{\delta}}}_{M}=\lim_{{\bm{\delta}}}W^{% {\bm{\delta}}}_{M}.$$ To prove (99), we first notice that $\tau^{n,{\bm{\delta}}}-\tau^{n}$ goes to zero in $L^{p}$ for $p<2$ when ${\bm{\delta}}$ goes to zero uniformly in $n$. Then, by parabolic regularity of the Stokes system, we deduce that $\|\nabla v^{n,{\bm{\delta}}}-\nabla v^{n}\|_{L^{p}((0,T)\times\Omega)}$ goes to zero when ${\bm{\delta}}$ goes to zero uniformly in $n$ for $p<2$. Hence, (99) holds. Step 2: In this second step, we will compare the local energy identity of the weak limit of (87) with the weak limit of the local energy identity of (87). 
On one hand, passing to the limit in (87) and multiplying by $v^{\bm{\delta}}$, we deduce that (100) $$\partial_{t}\frac{|v^{{\bm{\delta}}}|^{2}}{2}-\Delta\frac{|v^{{\bm{\delta}}}|^% {2}}{2}+|\nabla v^{{\bm{\delta}}}|^{2}+{\rm div}(p_{1}^{{\bm{\delta}}}v^{{\bm{% \delta}}})={\rm div}(v^{\bm{\delta}}.\tau^{{\bm{\delta}}})-\nabla v^{\bm{% \delta}}:\tau^{\bm{\delta}}$$ On the other hand, reversing the order, we get (101) $$\partial_{t}\frac{|v^{{\bm{\delta}}}|^{2}}{2}-\Delta\frac{|v^{{\bm{\delta}}}|^% {2}}{2}+|\nabla v^{{\bm{\delta}}}|^{2}+\mu_{\bm{\delta}}+{\rm div}(p_{1}^{{\bm% {\delta}}}v^{{\bm{\delta}}})={\rm div}(v^{\bm{\delta}}.\tau^{{\bm{\delta}}})-W% ^{{\bm{\delta}}{\bm{\delta}}}.$$ For a justification of these two calculations, we refer to [48]. Comparing (100) and (101), we deduce that $W^{{\bm{\delta}}{\bm{\delta}}}=\nabla v^{\bm{\delta}}:\tau^{\bm{\delta}}-\mu_{% \bm{\delta}}$. We would like now to send ${\bm{\delta}}$ to zero. First, it is clear that $\tau^{\bm{\delta}}$ converges strongly to $\tau$ in $L^{2}$ when ${\bm{\delta}}$ goes to zero. Hence, $\nabla v^{\bm{\delta}}$ also converges to $\nabla v$ in $L^{2}$. Besides, from the energy estimate, we recall that $u^{n}$ is bounded in $L^{\infty}((0,T);L^{2}(\Omega))\cap L^{2}((0,T);\dot{H}^{1}(\Omega))$ and hence by Sobolev embeddings that $u^{n}$ is bounded in $L^{\frac{2(D+2)}{D}}((0,T)\times\Omega)$ and that $u^{n}\nabla u^{n}$ is bounded in $L^{\frac{D+2}{D+1}}((0,T)\times\Omega)$. By parabolic regularity of the Stokes operator applied to (86) with zero initial data, we deduce that $\nabla w^{n}_{1}$ is bounded in $L^{\frac{D+2}{D+1}}((0,T);W^{1,\frac{D+2}{D+1}}\Omega)$ and that $\partial_{t}w^{n}_{1}$ is bounded in $L^{\frac{D+2}{D+1}}((0,T)\times\Omega)$. Since $\tau^{n}$ is bounded in $L^{2}$, we deduce from (85) that $\nabla v^{n}$ is also bounded in $L^{2}((0,T)\times\Omega)$ and hence $\nabla w^{n}$ is also bounded in $L^{2}((0,T)\times\Omega)$. 
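The exponents quoted above follow from standard interpolation and Hölder bookkeeping; we record the computation for $D\geq 3$ (for $D=2$ an analogous computation with any finite Sobolev exponent applies). Interpolating $L^{\infty}_{t}L^{2}_{x}$ with $L^{2}_{t}L^{2^{*}}_{x}$, $2^{*}=\frac{2D}{D-2}$, and asking for a single space-time exponent $q$ gives

```latex
\frac{1}{q}=\frac{\theta}{2}=\frac{1-\theta}{2}+\theta\,\frac{D-2}{2D}
\;\Longrightarrow\;
\theta=\frac{D}{D+2},\qquad q=\frac{2(D+2)}{D},
```

and then Hölder's inequality for the product $u^{n}\nabla u^{n}$ yields $\frac{1}{q}+\frac{1}{2}=\frac{D}{2(D+2)}+\frac{1}{2}=\frac{D+1}{D+2}$, which is the exponent $\frac{D+2}{D+1}$ used above.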
Moreover, it is clear that $\nabla w^{n}_{2}$ is compact in $L^{2}((0,T)\times\Omega)$ and hence $\nabla w^{n}_{1}$ is also bounded in $L^{2}$. From the previous bounds on $\nabla w^{n}_{1}$, we deduce that $\nabla w^{n}_{1}$ is compact in $L^{p}((0,T)\times\Omega)$ for $p<2$. Hence, we deduce that $\overline{\nabla w^{n}:\tau(\tilde{\psi}^{n})}^{\delta,\kappa}=\nabla w:\tau(\psi)$ (where we have used that $\tau^{n,{\bm{\delta}}}$ is equi-integrable for each fixed ${\bm{\delta}}$) and from Corollary 5.3 that $\lim_{{\bm{\delta}}}W^{{\bm{\delta}}{\bm{\delta}}}=\lim_{{\bm{\delta}}}W^{\bm{\delta}}=\overline{\nabla v^{n}:\tau(\tilde{\psi}^{n})}^{\delta,\kappa}=\nabla v:\tau(\psi)+\int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R^{2}|}dR.$ Finally, we deduce that $\mu=\lim_{{\bm{\delta}}\to 0}\mu^{\bm{\delta}}=-\int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R^{2}|}dR.$

5.4. Gronwall along the characteristics

Taking the difference between (60) and (54) and dividing by $N^{4}$, we get (to be more precise, we have to take the difference between (59) and (54), divide by $N^{4}$ and then send ${\bm{\delta}}$ to zero): $$\displaystyle(\partial_{t}+u.\nabla)\frac{\eta}{N^{4}}$$ $$\displaystyle\quad=\frac{1}{N^{4}}\left[\overline{\nabla u^{n}:{\tau(\tilde{\psi}^{n})}}^{\delta,\kappa}+\nabla u:\tau\left(\psi_{\infty}\Big{(}g^{2}-2g\overline{\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}\Big{)}\right)\right]$$ (102) $$\displaystyle\quad\quad-\frac{1}{N^{4}}\int_{B}\psi_{\infty}\left[4\overline{\left|\nabla_{R}f^{n}\right|^{2}}^{\delta,\kappa}-2|\nabla_{R}g|^{2}+\overline{\frac{|\nabla_{R}f^{n}|^{2}(\log^{1/2}+\log^{-3/2})(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}{f^{n}}}2g\right]$$ $$\displaystyle\quad\quad-\frac{2ak}{N^{4}}\int_{B}\left(\overline{\nabla u^{n}\Big{(}\log(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})+1\Big{)}}^{\delta,\kappa}-\Big{(}\overline{2\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})
\nabla_{i}u^{n}_{j}}\Big{)}g\right)\frac{R_{i}R_{j}}{1-|R|^{2}}\psi_{\infty}$$ $$\displaystyle\quad\quad-\frac{2}{N^{4}}\int\psi_{\infty}\left[\gamma_{ij}R_{j}% \nabla g-\nabla_{R}{\mathcal{U}}R:(\gamma-\gamma^{\prime})g\right]$$ $$\displaystyle\quad=-\sum_{i=1}^{4}A_{i}$$ where we denote the 4 terms appearing on the right hand side by $A_{i},1\leq i\leq 4$ and we also denote $\eta=\overline{N^{n}_{1}}^{\delta,\kappa}-\int_{B}g^{2}\psi_{\infty}=\int_{B}[% \overline{(g^{n})^{2}}-g^{2}]\psi_{\infty}dR$. It measures the lack of strong convergence of $g^{n}$ to $g$ in $L^{2}(dtdx\psi_{\infty}dR)$. Notice that by the choice of the normalizing factor $N$, the defect measure $\frac{\eta}{N}$ is in $L^{\infty}$. First, we prove that $A_{2}$ is nonnegative, namely we have the following lemma Lemma 5.5. We have (103) $$A_{2}\geq\frac{c}{N^{4}}\int_{B}\psi_{\infty}\overline{\left|\nabla_{R}(f^{n}-% f)\right|^{2}}^{\delta,\kappa}=\frac{c}{N^{4}}\int_{B}\psi_{\infty}\varpi$$ for some constant $c$. For the proof, we rewrite $|\nabla_{R}g|^{2}$ as (104) $$\displaystyle|\nabla_{R}g|^{2}$$ $$\displaystyle=\left|\overline{\nabla_{R}f^{n}(\log^{1/2}(f^{n})^{2}+\log^{-1/2% }(f^{n})^{2})}\right|^{2}$$ (105) $$\displaystyle=\left|\overline{\nabla_{R}f^{n}(\log^{1/2}(f^{n})^{2})}\right|^{% 2}+\left|\overline{\nabla_{R}f^{n}(\log^{-1/2}(f^{n})^{2})}\right|^{2}$$ (106) $$\displaystyle\quad+2\overline{\nabla_{R}f^{n}(\log^{1/2}(f^{n})^{2})}\cdot% \overline{\nabla_{R}f^{n}(\log^{-1/2}(f^{n})^{2})}.$$ Hence, we deduce that (107) $$A_{2}=\frac{1}{N^{4}}\int_{B}\psi_{\infty}(\bm{\alpha}+\bm{\beta}+\bm{\gamma})$$ where $\bm{\alpha},\bm{\beta}$ and $\bm{\gamma}$ are given by (108) $$\frac{\bm{\alpha}}{2}=\overline{\frac{|\nabla_{R}f^{n}|^{2}(\log^{1/2}(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}}))}{f^{n}}}\overline{f^{n}\log^{1/2}(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}})}-\overline{(\nabla f^{n})\log^{1/2}(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}})}^{2}$$ (109) 
$$\frac{\bm{\beta}}{2}=\overline{\frac{|\nabla_{R}f^{n}|^{2}(\log^{-3/2}(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}}))}{f^{n}}}\overline{f^{n}\log^{1/2}(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}})}-\overline{(\nabla f^{n})\log^{-1/2}(\frac{% \tilde{\psi}^{n}}{\psi_{\infty}})}^{2}$$ (110) $$\frac{\bm{\gamma}}{2}=2\overline{|\nabla f^{n}|^{2}}^{\delta,\kappa}-2% \overline{(\nabla f^{n})\log^{1/2}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}% \quad\overline{(\nabla f^{n})\log^{-1/2}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}% })}$$ We introduce the Young measure $\nu_{t,x,R}(\Lambda,\lambda)$ associated to the sequence $(\nabla f^{n},f^{n})$ where $\Lambda\in{\mathbb{R}}^{D}$ and $\lambda\in{\mathbb{R}}$. Hence, the defect measure $\overline{\left|\nabla_{R}(f^{n}-f)\right|^{2}}^{\delta,\kappa}$ satisfies : (111) $$\displaystyle\overline{\left|\nabla_{R}(f^{n}-f)\right|^{2}}\geq\overline{% \left|\nabla_{R}(f^{n}-f)\right|^{2}}^{\delta,\kappa}$$ $$\displaystyle\geq\int|\Lambda-\int\Lambda^{\prime}\nu_{t,x,R}(\Lambda^{\prime}% ,\lambda^{\prime})|^{2}\nu_{t,x,R}(\Lambda,\lambda)$$ (112) $$\displaystyle=\frac{1}{2}\int\int|\Lambda-\Lambda^{\prime}|^{2}\nu_{t,x,R}(% \Lambda^{\prime},\lambda^{\prime})\nu_{t,x,R}(\Lambda,\lambda)$$ Indeed, it is easy to see that $\overline{\left|\nabla_{R}(f^{n}-f)\right|^{2}}^{\delta,\kappa}$ is bounded from above by the weak limit and from below by the Chacon limit of $\left|\nabla_{R}(f^{n}-f)\right|^{2}$. In the sequel, we will drop the $t,x$ and $R$ dependence of $\nu$ and will denote $\nu^{\prime}=\nu(\Lambda^{\prime},\lambda^{\prime})$ and $\nu=\nu(\Lambda,\lambda)$. 
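The last equality in (112) is the standard "variance" identity for the Young measure $\nu=\nu_{t,x,R}$, which is a probability measure in $(\Lambda,\lambda)$: writing $\bar{\Lambda}=\int\Lambda^{\prime}\,d\nu(\Lambda^{\prime},\lambda^{\prime})$ and expanding the square,

```latex
\frac{1}{2}\iint|\Lambda-\Lambda^{\prime}|^{2}\,d\nu\,d\nu^{\prime}
=\int|\Lambda|^{2}\,d\nu-\Big|\int\Lambda\,d\nu\Big|^{2}
=\int\big|\Lambda-\bar{\Lambda}\big|^{2}\,d\nu,
```

since the cross term satisfies $\iint\Lambda\cdot\Lambda^{\prime}\,d\nu\,d\nu^{\prime}=|\bar{\Lambda}|^{2}$.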
Besides, $\bm{\alpha},\bm{\beta}$ and $\bm{\gamma}$ satisfy (113) $$\displaystyle\bm{\alpha}\geq\int\int A(\Lambda,\lambda,\Lambda^{\prime},\lambda^{\prime})\nu(\Lambda^{\prime},\lambda^{\prime})\nu(\Lambda,\lambda)$$ and the same for $\bm{\beta}$ and $\bm{\gamma}$ with $A$ replaced by $B$ or $C$, where $A,B$ and $C$ are given by (114) $$\displaystyle A$$ $$\displaystyle=\frac{|\Lambda|^{2}\log^{1/2}(\lambda^{2})}{\lambda}\lambda^{\prime}\log^{1/2}(\lambda^{\prime})^{2}+\frac{|\Lambda^{\prime}|^{2}\log^{1/2}(\lambda^{\prime})^{2}}{\lambda^{\prime}}\lambda\log^{1/2}(\lambda^{2})-2\Lambda.\Lambda^{\prime}\log^{1/2}(\lambda^{2})\log^{1/2}(\lambda^{\prime})^{2}$$ (115) $$\displaystyle B$$ $$\displaystyle=\frac{|\Lambda|^{2}\log^{-3/2}(\lambda^{2})}{\lambda}\lambda^{\prime}\log^{1/2}(\lambda^{\prime})^{2}+\frac{|\Lambda^{\prime}|^{2}\log^{-3/2}(\lambda^{\prime})^{2}}{\lambda^{\prime}}\lambda\log^{1/2}(\lambda^{2})-2\Lambda.\Lambda^{\prime}\log^{-1/2}(\lambda^{2})\log^{-1/2}(\lambda^{\prime})^{2}$$ (116) $$\displaystyle C$$ $$\displaystyle=2|\Lambda|^{2}+2|\Lambda^{\prime}|^{2}-2\Lambda.\Lambda^{\prime}\Big{(}\log^{1/2}(\lambda^{2})\log^{-1/2}(\lambda^{\prime 2})+\log^{-1/2}(\lambda^{2})\log^{1/2}(\lambda^{\prime})^{2}\Big{)}.$$ To prove Lemma 5.5, it is enough to show that $A+B+C\geq\frac{c}{2}|\Lambda-\Lambda^{\prime}|^{2}$.
First, we rewrite $A+B+C$ as (117) $$\displaystyle A+B+C$$ $$\displaystyle=|\Lambda|^{2}B_{1}+|\Lambda^{\prime}|^{2}B_{2}-2\Lambda.\Lambda^{\prime}B_{3}$$ (118) $$\displaystyle=|\Lambda-\Lambda^{\prime}|^{2}+|\Lambda|^{2}(B_{1}-1)+|\Lambda^{\prime}|^{2}(B_{2}-1)-2\Lambda.\Lambda^{\prime}(B_{3}-1)$$ where $B_{1},B_{2}$ and $B_{3}$ are given by (119) $$\displaystyle B_{1}$$ $$\displaystyle=\frac{\log^{1/2}(\lambda^{2})}{\lambda}\lambda^{\prime}\log^{1/2}(\lambda^{\prime})^{2}+\frac{\lambda^{\prime}\log^{1/2}(\lambda^{\prime})^{2}}{\lambda\log^{3/2}(\lambda^{2})}+2$$ (120) $$\displaystyle B_{2}$$ $$\displaystyle=\frac{\log^{1/2}(\lambda^{\prime 2})}{\lambda^{\prime}}\lambda\log^{1/2}(\lambda^{2})+\frac{\lambda\log^{1/2}(\lambda^{2})}{\lambda^{\prime}\log^{3/2}(\lambda^{\prime 2})}+2$$ (121) $$\displaystyle B_{3}$$ $$\displaystyle=\log^{1/2}(\lambda^{2})\log^{1/2}(\lambda^{\prime 2})+\frac{1}{\log^{1/2}(\lambda^{2})\log^{1/2}(\lambda^{\prime 2})}+\frac{\log^{1/2}(\lambda^{2})}{\log^{1/2}(\lambda^{\prime 2})}+\frac{\log^{1/2}(\lambda^{\prime 2})}{\log^{1/2}(\lambda^{2})}$$ Actually, we will prove that if $a$ is chosen big enough, then $(B_{1}-1)(B_{2}-1)\geq(B_{3}-1)^{2}$, from which we deduce that $A+B+C\geq|\Lambda-\Lambda^{\prime}|^{2}$ and the lemma would follow.
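For the reader's convenience, we record why this reduction suffices; this is a routine quadratic-form argument, added here as a sketch. On the support of $\nu$ we have $\lambda,\lambda^{\prime}\geq\sqrt{a}>1$, so $\log(\lambda^{2}),\log(\lambda^{\prime 2})>0$; each of the two pairs of terms in $B_{3}$ is then of the form $y+\frac{1}{y}\geq 2$, whence $B_{3}\geq 4$, and similarly $B_{1},B_{2}\geq 2$. Since $B_{3}-1>0$, $$-2\Lambda.\Lambda^{\prime}(B_{3}-1)\geq-2|\Lambda|\,|\Lambda^{\prime}|(B_{3}-1),$$ and therefore, by (118) and the arithmetic-geometric mean inequality applied to the first two terms, $$A+B+C-|\Lambda-\Lambda^{\prime}|^{2}\geq(B_{1}-1)|\Lambda|^{2}+(B_{2}-1)|\Lambda^{\prime}|^{2}-2(B_{3}-1)|\Lambda|\,|\Lambda^{\prime}|\geq 2\left(\sqrt{(B_{1}-1)(B_{2}-1)}-(B_{3}-1)\right)|\Lambda|\,|\Lambda^{\prime}|\geq 0$$ as soon as $(B_{1}-1)(B_{2}-1)\geq(B_{3}-1)^{2}$.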
Indeed, after simple calculations, we get $$\displaystyle(B_{1}-1)(B_{2}-1)-(B_{3}-1)^{2}=$$ $$\displaystyle\quad\quad\log^{1/2}(\lambda^{2})\log^{1/2}(\lambda^{\prime 2})\left[\frac{\lambda}{\lambda^{\prime}}+\frac{\lambda^{\prime}}{\lambda}+2-2\frac{\log^{1/2}(\lambda^{2})}{\log^{1/2}(\lambda^{\prime 2})}-2\frac{\log^{1/2}(\lambda^{\prime 2})}{\log^{1/2}(\lambda^{2})}\right]$$ $$\displaystyle\quad\quad+2\left[\frac{\log^{1/2}(\lambda^{2})}{\log^{1/2}(\lambda^{\prime 2})}+\frac{\log^{1/2}(\lambda^{\prime 2})}{\log^{1/2}(\lambda^{2})}-2\right]$$ $$\displaystyle\quad\quad+\frac{1}{\log^{1/2}(\lambda^{2})\log^{1/2}(\lambda^{\prime 2})}\left[\frac{\lambda\log(\lambda^{2})}{\lambda^{\prime}\log(\lambda^{\prime 2})}+\frac{\lambda^{\prime}\log(\lambda^{\prime 2})}{\lambda\log(\lambda^{2})}+2-2\frac{\log^{1/2}(\lambda^{2})}{\log^{1/2}(\lambda^{\prime 2})}-2\frac{\log^{1/2}(\lambda^{\prime 2})}{\log^{1/2}(\lambda^{2})}\right]$$ We will prove that the three terms appearing inside the brackets are nonnegative. This is obvious for the second one since it is of the form $x+\frac{1}{x}-2$ for some $x>0$. We recall that since $(f^{n})^{2}\geq a$, we get that $\lambda\geq\sqrt{a}$ on the support of $\nu$. For the first bracket, we assume that $\lambda^{\prime}\geq\lambda$ and write $\lambda^{\prime}=\lambda(1+\varepsilon)$. Hence, the term in the first bracket is given by (122) $$1+\varepsilon+\frac{1}{1+\varepsilon}+2-2\sqrt{1+\frac{\log(1+\varepsilon)}{\log\lambda}}-2\frac{1}{\sqrt{1+\frac{\log(1+\varepsilon)}{\log\lambda}}}$$ and one can check easily that if $\lambda\geq\sqrt{a}$ is big enough, then (122) is nonnegative. The same argument can be used for the third bracket. This ends the proof of Lemma 5.5.
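One way to make the check for (122) explicit (added here as a sketch) is the following. Set $x=\sqrt{1+\frac{\log(1+\varepsilon)}{\log\lambda}}\geq 1$, so that (122) reads $$\Big{[}(1+\varepsilon)+\frac{1}{1+\varepsilon}-2\Big{]}-2\Big{[}x+\frac{1}{x}-2\Big{]}=\frac{\varepsilon^{2}}{1+\varepsilon}-\frac{2(x-1)^{2}}{x}.$$ Since $x-1=\frac{x^{2}-1}{x+1}\leq\frac{\log(1+\varepsilon)}{2\log\lambda}$ and $x\geq 1$, we get $\frac{2(x-1)^{2}}{x}\leq\frac{\log^{2}(1+\varepsilon)}{2\log^{2}\lambda}$, while $\frac{\varepsilon^{2}}{1+\varepsilon}\geq\frac{1}{2}\log^{2}(1+\varepsilon)$ for all $\varepsilon\geq 0$ (use $\log(1+\varepsilon)\leq\varepsilon$ when $\varepsilon\leq 1$ and $\log^{2}(1+\varepsilon)\leq\varepsilon$ when $\varepsilon\geq 1$). Hence (122) is bounded below by $\frac{1}{2}\log^{2}(1+\varepsilon)\big{(}1-\frac{1}{\log^{2}\lambda}\big{)}$, which is nonnegative as soon as $\log\lambda\geq 1$; thus, for instance, $a\geq e^{2}$ suffices.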
To bound $A_{1}$, we first observe that (123) $$g^{2}-2g\overline{\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}=-\overline{f^{n}\log^{1/2}(f^{n})^{2}}\ \overline{f^{n}\log^{-1/2}(f^{n})^{2}}$$ and hence, (124) $$\displaystyle A_{1}$$ $$\displaystyle=-\frac{1}{N^{4}}\left[\overline{\nabla u^{n}:{\tau(\tilde{\psi}^{n})}}^{\delta,\kappa}+\nabla u:\tau\left(\psi_{\infty}\Big{(}g^{2}-2g\overline{\frac{\tilde{\psi}^{n}}{\psi_{\infty}}\Theta^{\prime}(\frac{\tilde{\psi}^{n}}{\psi_{\infty}})}\Big{)}\right)\right]$$ (125) $$\displaystyle=-\frac{1}{N^{4}}\left[\overline{\nabla u^{n}:{\tau(\tilde{\psi}^{n}-\psi)}}^{\delta,\kappa}+\nabla u:\tau(\psi-\psi_{\infty}(\overline{f^{n}\log^{1/2}(f^{n})^{2}}\ \overline{f^{n}\log^{-1/2}(f^{n})^{2}}))\right]$$ (126) $$\displaystyle=\frac{1}{N^{4}}\left[\mu-\nabla u:\tau(\psi-\psi_{\infty}(\overline{f^{n}\log^{1/2}(f^{n})^{2}}\ \overline{f^{n}\log^{-1/2}(f^{n})^{2}}))\right]$$ By convexity, it is clear that $\overline{(f^{n}-f)^{2}}=\overline{(f^{n})^{2}}-f^{2}\geq\overline{(f^{n})^{2}}-\overline{f^{n}\log^{1/2}(f^{n})^{2}}\ \overline{f^{n}\log^{-1/2}(f^{n})^{2}}$ and hence, (127) $$\displaystyle|\tau(\psi-\psi_{\infty}(\overline{f^{n}\log^{1/2}(f^{n})^{2}}\ \overline{f^{n}\log^{-1/2}(f^{n})^{2}}))|$$ $$\displaystyle\leq\left(\int_{B}\psi_{\infty}\overline{(f^{n}-f)^{2}}\int_{B}\psi_{\infty}\overline{|\nabla(f^{n}-f)|^{2}}^{\delta,\kappa}\right)^{1/2}$$ Hence, (128) $$\displaystyle-A_{1}$$ $$\displaystyle\leq-\frac{\mu}{N^{4}}+C\frac{|\nabla u|}{N^{4}}\left(\int_{B}\psi_{\infty}\overline{(f^{n}-f)^{2}}\int_{B}\psi_{\infty}\overline{|\nabla(f^{n}-f)|^{2}}^{\delta,\kappa}\right)^{1/2}$$ (129) $$\displaystyle\leq-\frac{\mu}{N^{4}}+C|\nabla u|^{2}\frac{\eta}{N^{4}}+\frac{1}{10{N^{4}}}\int_{B}\psi_{\infty}\overline{|\nabla(f^{n}-f)|^{2}}^{\delta,\kappa}.$$ The term between parentheses in the definition of $A_{3}$ can be written as (130) $$\overline{\frac{\nabla u^{n}}{f^{n}}\Big{(}\log^{1/2}(f^{n})^{2}+\log^{-1/2}(f^{n})^{2}\Big{)}\Big{[}f^{n}\log^{1/2}(f^{n})^{2}-\overline{f^{n}\log^{1/2}(f^{n})^{2}}\Big{]}}$$ If we denote $\nu_{t,x,R}(\Pi,\lambda)$ the Young measure associated to the sequence $(\nabla_{x}u^{n},f^{n})$, then we see easily that $A_{3}$ is given by (131) $$\displaystyle A_{3}$$ $$\displaystyle=-\frac{2ak}{N^{4}}\int_{B}\int\int\left(\frac{\Pi}{\lambda}(\log^{1/2}\lambda^{2}+\log^{-1/2}\lambda^{2})-\frac{\Pi^{\prime}}{\lambda^{\prime}}(\log^{1/2}\lambda^{\prime 2}+\log^{-1/2}\lambda^{\prime 2})\right)$$ (132) $$\displaystyle\quad\quad\quad\quad\quad(\lambda\log^{1/2}\lambda^{2}-\lambda^{\prime}\log^{1/2}\lambda^{\prime 2})\frac{R_{i}R_{j}}{1-|R|^{2}}\psi_{\infty}\,\,d\nu\,d\nu^{\prime}\,dR.$$ The absolute value of the two factors inside the integral can be bounded respectively by (133) $$\displaystyle|{\Pi}-{\Pi^{\prime}}|(\frac{\log^{1/2}\lambda^{2}}{\lambda}+\frac{\log^{1/2}\lambda^{\prime 2}}{\lambda^{\prime}})+(|\Pi|+|\Pi^{\prime}|)(\frac{\log^{1/2}\lambda^{2}}{\lambda}-\frac{\log^{1/2}\lambda^{\prime 2}}{\lambda^{\prime}})\quad\hbox{and}$$ (134) $$\displaystyle|\lambda-\lambda^{\prime}|(\log^{1/2}\lambda^{2}+\log^{1/2}\lambda^{\prime 2}).\quad\hbox{Hence}$$ (135) $$\displaystyle|A_{3}|$$ $$\displaystyle\leq\frac{1}{10{N^{4}}}\int_{B}\int\int|{\Pi}-{\Pi^{\prime}}|^{2}(\frac{\log\lambda^{2}}{\lambda}+\frac{\log\lambda^{\prime 2}}{\lambda^{\prime}})^{2}\frac{1}{1-|R|^{2}}\psi_{\infty}\,d\nu\,d\nu^{\prime}\,dR$$ (136) $$\displaystyle\quad\quad\quad+\frac{C}{N^{4}}\int_{B}\int\int(|{\Pi}|+|{\Pi^{\prime}}|)|\lambda-\lambda^{\prime}|^{2}(\frac{\log\lambda^{2}}{\lambda^{2}}+\frac{\log\lambda^{\prime 2}}{\lambda^{\prime 2}})^{2}\frac{1}{1-|R|^{2}}\psi_{\infty}\,d\nu\,d\nu^{\prime}\,dR$$ (137) $$\displaystyle\quad\quad\quad\quad\quad\quad+\frac{C}{N^{4}}\int_{B}\int\int(1+|{\Pi}|+|{\Pi^{\prime}}|)|\lambda-\lambda^{\prime}|^{2}\frac{1}{1-|R|^{2}}\psi_{\infty}\,d\nu\,d\nu^{\prime}\,dR$$ (138) $$\displaystyle\leq\frac{1}{10{N^{4}}}\mu+\frac{1}{10{N^{4}}}\kappa+\frac{C}{{N^{4}}}|\nabla u|^{2}\eta$$ Finally, to bound $-A_{4}$, we split it into two terms: (139) $$\displaystyle|A_{4}^{1}|$$ $$\displaystyle\leq\frac{2}{N^{4}}\int_{B}\psi_{\infty}|\gamma_{ij}||\nabla g|\ dR$$ (140) $$\displaystyle\leq\frac{1}{10N^{4}}\overline{|\nabla u^{n}-\nabla u|^{2}}^{\delta,\kappa}+\frac{C}{N^{4}}\overline{\left(\int_{B}(g^{n}-g)|\nabla_{R}g|\psi_{\infty}\right)^{2}}$$ (141) $$\displaystyle\leq\frac{1}{10N^{4}}\mu+\frac{C}{N^{4}}\int_{B}\psi_{\infty}|\nabla_{R}g|^{2}\int_{B}\psi_{\infty}\overline{(g^{n}-g)^{2}}$$ To bound $A_{4}^{2}$, we first consider the case $k>1$, where the term can be treated as $A_{4}^{1}$ using (7): $$\displaystyle|A_{4}^{2}|$$ $$\displaystyle\leq\frac{2}{N^{4}}\int_{B}\psi_{\infty}(|\gamma_{ij}|+|\gamma_{ij}^{\prime}|)\frac{g}{1-|R|}\ dR$$ (142) $$\displaystyle\leq\frac{1}{10{N^{4}}}\mu+\frac{C}{{N^{4}}}\int_{B}\psi_{\infty}\overline{(g^{n}-g)^{2}}\ dR\int_{B}\psi_{\infty}\frac{|g|^{2}}{(1-|R|)^{2}}\ dR$$ $$\displaystyle\leq\frac{1}{10{N^{4}}}\mu+\frac{C}{N^{4}}\left(\int_{B}\psi_{\infty}|\nabla_{R}g|^{2}\right)\eta.$$ In the case $k\leq 1$, we have to use (10) instead of (7). Take $0\leq\beta<k$ and $\gamma=\frac{1-\beta}{2}$: (143) $$\left(\int_{B}\psi_{\infty}\frac{|g^{n}-g|\,g}{(1-|R|^{2})}\ dR\right)^{2}\quad\leq\int_{B}\psi_{\infty}\frac{(g^{n}-g)^{2}\log^{-\gamma}(f^{n})^{2}}{(1-|R|^{2})^{1-\beta}}\,dR\quad\int_{B}\psi_{\infty}\frac{g^{2}\log^{\gamma}(f^{n})^{2}}{(1-|R|^{2})^{1+\beta}}\ dR$$ To bound the second term, we use the following Young’s inequality for $a,b\geq 1$: $ab\leq a\log^{\gamma}a+e^{(b^{\frac{1}{\gamma}})}$.
We denote $d=1-|R|^{2}$ and hence $$\displaystyle\int_{B}\psi_{\infty}\frac{g^{2}\log^{\gamma}(f^{n})^{2}}{(1-|R|^{2})^{1+\beta}}\ dR$$ $$\displaystyle\leq\int_{B}\psi_{\infty}\left[\frac{g^{2}}{d^{1+\beta}}\log^{\gamma}\frac{g^{2}}{d^{1+\beta}}+|f^{n}|^{2}\right]dR.$$ On the set $\{g^{2}\geq\frac{1}{d^{\varepsilon}}\}$, where $\varepsilon=\frac{k-\beta}{2}$, we have $\log^{\gamma}\frac{g^{2}}{d^{1+\beta}}\leq C\log^{\gamma}g^{2}$. Besides, we have, using (10), $$\int_{B}\psi_{\infty}\frac{g^{2}\log^{\gamma}{g^{2}}}{d^{1+\beta}}dR\ \leq\left(\int_{B}\psi_{\infty}g^{2}\log{g^{2}}\right)^{1-\beta\over 2}\left(\int_{B}\psi_{\infty}(|\nabla_{R}g|^{2}+g^{2})\right)^{1+\beta\over 2}.$$ On the set $\{g^{2}\leq\frac{1}{d^{\varepsilon}}\}$, we have $$\frac{g^{2}}{d^{1+\beta}}\log^{\gamma}\frac{g^{2}}{d^{1+\beta}}\leq\frac{C}{d^{1+\beta+\varepsilon}}\log^{\gamma}(\frac{1}{d})$$ which is integrable in the ball $B$ with the measure $\psi_{\infty}dR$. To bound the first term on the right-hand side of (143), we first notice that $$\overline{(g^{n}-g)^{2}\log^{-\gamma}(f^{n})^{2}}\leq\overline{(f^{n}-f)^{2}\log^{1-\gamma}(C+(f^{n}-f)^{2})}$$ which can be easily proved using Young measures. Besides, we have, using (10), $$\displaystyle\left|\int_{B}\psi_{\infty}\frac{(f^{n}-f)^{2}\log^{1-\gamma}(C+(f^{n}-f)^{2})}{(1-|R|^{2})^{1-\beta}}\,dR\right|$$ $$\displaystyle\quad\leq\left(\int_{B}\psi_{\infty}(f^{n}-f)^{2}\log(C+(f^{n}-f)^{2})\right)^{1+\beta\over 2}\left(\int_{B}\psi_{\infty}|\nabla_{R}(f^{n}-f)|^{2}\right)^{1-\beta\over 2}$$ $$\displaystyle\quad\leq\frac{C}{\lambda^{2\over 1+\beta}}\left(\int_{B}\psi_{\infty}(f^{n}-f)^{2}\log(C+(f^{n}-f)^{2})\right)+\lambda^{2\over 1-\beta}\left(\int_{B}\psi_{\infty}|\nabla_{R}(f^{n}-f)|^{2}\right)$$ for each $\lambda>0$.
Passing to the limit weakly (more precisely, applying $\overline{F_{n}}^{\delta,\kappa}$) on both sides and optimizing in $\lambda$, we deduce that (144) $$\frac{1}{N^{4}}\overline{\left(\int_{B}\psi_{\infty}\frac{|g^{n}-g|\,g}{(1-|R|^{2})}\ dR\right)^{2}}\leq\frac{C}{N^{4}}\left(\int_{B}\psi_{\infty}g^{2}\log{g^{2}}\right)^{1-\beta\over 2}\left(\int_{B}\psi_{\infty}(|\nabla_{R}g|^{2}+g^{2})\right)^{1+\beta\over 2}\eta^{1+\beta\over 2}\varpi^{1-\beta\over 2}.$$ Putting all these estimates together, we deduce that (145) $$\displaystyle(\partial_{t}+u.\nabla)\frac{\eta}{N^{4}}+\frac{\mu+\varpi}{4N^{4}}\leq C|\nabla u|^{2}\frac{\eta}{N^{4}}+\frac{C}{N^{4}}\left(1+\int_{B}\psi_{\infty}|\nabla_{R}g|^{2}\right)\Big{(}\int_{B}\psi_{\infty}g^{2}\log g^{2}\Big{)}^{1-\beta\over 1+\beta}\eta.$$ We can take $\beta=0$. Next, we observe that $\int_{B}\psi_{\infty}g^{2}\log g^{2}dR\leq CN^{2}$. Indeed, if we introduce $h^{n}=g^{n}\log^{1/2}g^{n}$, we see that $N^{n}_{2}\geq\left(\int_{B}\psi_{\infty}(h^{n})^{2}\right)^{1/2}$, and then it is easy to see, using that $(x,y)\to\frac{x^{2}}{y}$ is convex, that $$\overline{\left(\int_{B}\psi_{\infty}(h^{n})^{2}\right)^{1/2}}\geq\left(\int_{B}\psi_{\infty}h^{2}\right)^{1/2}$$ from which we deduce the claim. Hence (145) becomes (146) $$\displaystyle\frac{d}{dt}\frac{\eta}{N^{4}}(t,X(t,x))+\frac{\mu+\varpi}{4N^{4}}(t,X(t,x))\leq C|\nabla u|^{2}\frac{\eta}{N^{4}}(t,X(t,x))+C\left[1+\int_{B}\psi_{\infty}\frac{|\nabla_{R}g|^{2}}{N}\right]\frac{\eta}{N}(t,X(t,x)).$$ First notice that the right-hand side of (146) is in $L^{1}((0,T)\times K)$ for any bounded measurable set $K$ of $\Omega$ (to prove this, we can observe that $\frac{\eta}{N}$ is bounded and that, using (47), the term between brackets in (146) is in $L^{1}((0,T)\times K)$). Hence (146) is well justified in the sense of distributions. In particular this justifies all the calculations done in this subsection starting from (102).
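The convexity step used above for $h^{n}$ can be justified by a standard duality argument (a sketch; we assume, as the Young-measure setting provides, that $h^{n}$ converges weakly to $h$): by the Cauchy-Schwarz inequality, $$\left(\int_{B}\psi_{\infty}(h^{n})^{2}\right)^{1/2}\geq\frac{\int_{B}\psi_{\infty}h\,h^{n}}{\left(\int_{B}\psi_{\infty}h^{2}\right)^{1/2}},$$ and the right-hand side is linear in $h^{n}$, so passing to the weak limit gives $$\overline{\left(\int_{B}\psi_{\infty}(h^{n})^{2}\right)^{1/2}}\geq\left(\int_{B}\psi_{\infty}h^{2}\right)^{1/2}.$$ This is the weak lower semicontinuity encoded by the convexity of $(x,y)\to\frac{x^{2}}{y}$.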
Now, since the term between brackets in (146) is in $L^{1}((0,T)\times K)$, for almost all $x$, $\int_{0}^{T}\left[1+\int_{B}\psi_{\infty}\frac{|\nabla_{R}g|^{2}}{N}\right](t,X(t,x))$ is finite. Besides, for almost all $x$, ${N}(t,X(t,x))$ (which is constant in $t$) is bounded. Hence, we deduce that for almost all $x$, $\int_{0}^{T}N^{3}\left[1+\int_{B}\psi_{\infty}\frac{|\nabla_{R}g|^{2}}{N}\right]+|\nabla u|^{2}(t,X(t,x))$ is finite. Hence, by Gronwall's lemma, we deduce that for a.e. $x$, we have for all $t<T$, $\frac{\eta(t,x)}{N^{4}}\leq\frac{\eta(0,x)}{N^{4}}e^{C_{T}(x)}$, and since $\eta(0,x)=0$ due to the initial strong convergence, we deduce that $\frac{\eta(t,x)}{N^{4}}=0$, hence $\eta=0$, and we deduce the strong convergence of $g^{n}$ to $g$. This yields that $(u,\psi)$ is a weak solution of (1) with the initial data $(u_{0},\psi_{0})$. 6. Approximate system In the previous section, we proved the weak compactness of a sequence of solutions to the system (1). Of course, one has to construct a sequence of (approximate) weak solutions to which we can apply the strategy of the previous sections. The only thing we have to make sure of is that the calculations done in the previous section can be performed on the approximate system. We consider a sequence of global smooth solutions $(u^{n},\psi^{n})$ to the following regularized system, where $k$ is some integer that depends on $D$.
In particular, one can take $k=1$ for $D=2$ or $3$: (147) $$\left\{\begin{array}[]{l}{\partial_{t}u^{n}}+(u^{n}\cdot\nabla)u^{n}-\nu\Delta u^{n}+\frac{1}{n}(\Delta)^{2k}u^{n}+\nabla p^{n}={{\rm div}}\tau^{n},\quad{{\rm div}}\,u^{n}=0,\\ \\ \partial_{t}\psi^{n}+u^{n}.\nabla\psi^{n}={\rm div}_{R}\Big{[}-\nabla u^{n}\,R\psi^{n}+{\beta}\nabla\psi^{n}+\nabla{\mathcal{U}}\psi^{n}\Big{]}\\ \\ \tau^{n}_{ij}=\int_{B}(R_{i}\otimes\nabla_{j}{\mathcal{U}})\psi^{n}(t,x,R)dR,\quad\quad(\nabla{\mathcal{U}}\psi^{n}+{\beta}\nabla\psi^{n}).n=0\;\hbox{on}\;\partial B(0,R_{0}).\end{array}\right.$$ with a smooth initial condition $(u^{n}_{0},\psi^{n}_{0})$ such that $(u^{n}_{0},\psi^{n}_{0})$ converges strongly to $(u_{0},\psi_{0})$ in $L^{2}(\Omega)\times L^{1}(\Omega\times B)$ and $\psi^{n}_{0}\log\frac{\psi^{n}_{0}}{\rho^{n}_{0}\psi_{\infty}}-\psi^{n}_{0}+\rho^{n}_{0}\psi_{\infty}$ converges strongly to $\psi_{0}\log\frac{\psi_{0}}{\rho_{0}\psi_{\infty}}-\psi_{0}+\rho_{0}\psi_{\infty}$ in $L^{1}(\Omega\times B)$. We also assume that (6) holds uniformly in $n$. In the case where $\Omega$ is a bounded domain of ${\mathbb{R}}^{D}$, we also impose the following boundary condition: $u^{n}=\Delta u^{n}=...=(\Delta)^{2k-1}u^{n}=0$ at the boundary $\partial\Omega$. We do not detail the proof of existence for the system (147). We only mention that we have to combine classical results about strong solutions to the Navier-Stokes system with the study of the linear Fokker-Planck equation (see [51]).
In particular, the following operator was used: (148) $$L\psi=-div(\psi_{\infty}\nabla\frac{\psi}{\psi_{\infty}})$$ on the space ${\mathcal{H}}=L^{2}(\frac{dR}{\psi_{\infty}})$ and with domain (149) $$D(L)=\left\{\psi\in{\mathcal{H}}\ |\ \psi_{\infty}\nabla\frac{\psi}{\psi_{\infty}}\in{\mathcal{H}},\quad div(\psi_{\infty}\nabla\frac{\psi}{\psi_{\infty}})\in{\mathcal{H}},\quad\hbox{and}\quad\psi_{\infty}\nabla\frac{\psi}{\psi_{\infty}}|_{\partial B}=0\right\}.$$ Also, the following two Hilbert spaces ${\mathcal{H}}^{1}$ and ${\mathcal{H}}^{2}$ are used in the construction: (150) $$\displaystyle{\mathcal{H}}^{1}=\left\{\psi\in{\mathcal{H}}\ |\quad\int\psi_{\infty}\left|\nabla\frac{\psi}{\psi_{\infty}}\right|^{2}+\frac{\psi^{2}}{\psi_{\infty}}\ dR<\infty\right\}$$ (151) $$\displaystyle{\mathcal{H}}^{2}=\left\{\psi\in{\mathcal{H}}^{1}\ |\quad\int\left(div(\psi_{\infty}\nabla\frac{\psi}{\psi_{\infty}})\right)^{2}\frac{dR}{\psi_{\infty}}<\infty\right\}.$$ Following the proof of existence given in [51], we can prove Proposition 6.1. Take $u_{0}^{n}\in H^{s}(\Omega)$ and $\psi_{0}^{n}\geq 0$ such that $\psi^{n}_{0}-\rho^{n}_{0}\psi_{\infty}\in H^{s}(\Omega;L^{2}({dR\over\psi_{\infty}}))$ with $\rho^{n}_{0}=\int\psi_{0}^{n}dR\in L^{\infty}(\Omega)$. Then, there exists a unique global solution $(u^{n},\psi^{n})$ to (147) such that $(u^{n},\psi^{n}-\rho^{n}\psi_{\infty})$ is in $C([0,T);H^{s})\times C([0,T);H^{s}({\mathbb{R}}^{N};L^{2}({dR\over\psi_{\infty}})))$ for all $0<T$. Moreover, $u^{n}\in L^{2}([0,T);H^{s+k})$ and $\psi^{n}-\rho^{n}\psi_{\infty}\in L^{2}([0,T);H^{s}({\mathbb{R}}^{N};{\mathcal{H}}^{1}))$. Remark 6.2. The proof is exactly the same as the proof of Theorem 2.1 of [51], with a few differences: • In [51], we only had local existence, whereas here we have global existence since we have more regularity. • Theorem 2.1 of [51] was stated in the whole space.
Of course, in the case of a bounded domain, we have to use energy bounds for Navier-Stokes written in a bounded domain. • In Theorem 2.1 of [51] we assumed that $\int\psi_{0}dR=1$. The result can be easily extended to this more general case. We also point out that there is a small mistake in the statement of Theorem 2.1 of [51]. Indeed, one has to read $\psi_{0}-\psi_{\infty}\in H^{s}(\Omega;L^{2}({dR\over\psi_{\infty}}))$ instead of $\psi_{0}\in H^{s}(\Omega;L^{2}({dR\over\psi_{\infty}}))$ when the problem is in the whole space. It is clear that the solutions constructed in Proposition 6.1 satisfy the free-energy bound (32) and the extra bound (42) (with $\Omega$ replaced by $K$ in the whole space case). Once we have our sequence of regular approximate solutions, we have to check that all the computations performed in the previous section can be done on this sequence $(u^{n},\psi^{n})$. The only point to be checked is that Proposition 5.4 still holds, since the rest of the proof only involves the transport equation. Now, $v^{n}$ and $w^{n}$ solve (152) $$\displaystyle\left\{\begin{array}[]{l}\partial_{t}v^{n}-\Delta v^{n}+\frac{1}{n}\Delta^{2k}v^{n}+\nabla p_{1}^{n}=\nabla.\tau^{n}\\ v^{n}(t=0)=0\end{array}\right.$$ (153) $$\displaystyle\left\{\begin{array}[]{l}\partial_{t}w^{n}-\Delta w^{n}+\frac{1}{n}\Delta^{2k}w^{n}+\nabla p_{2}^{n}=-u^{n}.\nabla u^{n}\\ w^{n}(t=0)=u^{n}(t=0)\end{array}\right.$$ and we define $v^{n,{\bm{\delta}}}$ as the solution of (154) $$\displaystyle\left\{\begin{array}[]{l}\partial_{t}v^{n,{\bm{\delta}}}-\Delta v^{n,{\bm{\delta}}}+\frac{1}{n}\Delta^{2k}v^{n,{\bm{\delta}}}+\nabla p_{1}^{n,{\bm{\delta}}}=\nabla.\tau^{n,{\bm{\delta}}}\\ v^{n,{\bm{\delta}}}(t=0)=0\end{array}\right.$$ Step 1 of the proof of Proposition 5.4 is the same, with the difference that one has to apply parabolic regularity for the perturbed Stokes operator, which yields the same uniform in $n$ estimate.
Hence, we deduce that $\|\nabla v^{n,{\bm{\delta}}}-\nabla v^{n}\|_{L^{p}((0,T)\times\Omega)}$ goes to zero when ${\bm{\delta}}$ goes to zero, uniformly in $n$, for $p<2$. For the second step, we first notice that (100) remains the same since $\frac{1}{n}(\Delta)^{2k}u^{n}$ converges weakly to zero. Moreover, multiplying the first equation of (152) by $v^{n,{\bm{\delta}}}$, we get (155) $$\partial_{t}\frac{|v^{n,{\bm{\delta}}}|^{2}}{2}-\Delta\frac{|v^{n,{\bm{\delta}}}|^{2}}{2}+|\nabla v^{n,{\bm{\delta}}}|^{2}+G^{n,{\bm{\delta}}}+{\rm div}(p_{1}^{n,{\bm{\delta}}}v^{n,{\bm{\delta}}})={\rm div}(v^{n,{\bm{\delta}}}.\tau^{n,{\bm{\delta}}})-\tau^{n,{\bm{\delta}}}:\nabla v^{n,{\bm{\delta}}}.$$ where $G^{n,{\bm{\delta}}}$ is given by (156) $$\begin{array}[]{l}G^{n,{\bm{\delta}}}=\frac{1}{n}[{\rm div}_{i}(\nabla_{i}\Delta^{2k-1}v^{n,{\bm{\delta}}}.v^{n,{\bm{\delta}}}-\Delta^{2k-1}v^{n,{\bm{\delta}}}.\nabla_{i}v^{n,{\bm{\delta}}}+\\ \quad\quad\quad\quad\quad\quad\nabla_{i}\Delta^{2k-2}v^{n,{\bm{\delta}}}.\Delta v^{n,{\bm{\delta}}}-...-\Delta^{k}v^{n,{\bm{\delta}}}.\nabla_{i}\Delta^{k-1}v^{n,{\bm{\delta}}})+\Delta^{k}v^{n,{\bm{\delta}}}.\Delta^{k}v^{n,{\bm{\delta}}}]\end{array}$$ Using the fact that $\frac{1}{n}\int_{0}^{T}\int_{\Omega}|\Delta^{k}v^{n,{\bm{\delta}}}|^{2}$ and $\int_{0}^{T}\int_{\Omega}|\nabla v^{n,{\bm{\delta}}}|^{2}$ are uniformly bounded, we deduce easily that $\overline{G^{n,{\bm{\delta}}}}=\overline{\frac{1}{n}|\Delta^{k}v^{n,{\bm{\delta}}}|^{2}}\geq 0$, and hence, passing to the limit in (155), we deduce that (157) $$\partial_{t}\frac{|v^{{\bm{\delta}}}|^{2}}{2}-\Delta\frac{|v^{{\bm{\delta}}}|^{2}}{2}+|\nabla v^{{\bm{\delta}}}|^{2}+\mu_{\bm{\delta}}+{\rm div}(p_{1}^{{\bm{\delta}}}v^{{\bm{\delta}}})\leq{\rm div}(v^{\bm{\delta}}.\tau^{{\bm{\delta}}})-W^{{\bm{\delta}}{\bm{\delta}}}.$$ and hence Proposition 5.4 is replaced by an inequality $\mu\leq-\int_{B}\beta_{ij}\frac{R_{i}R_{j}}{1-|R|^{2}}dR$, which is the inequality that we need in the rest of
the proof. 7. Conclusion In this paper we gave a proof of existence of weak solutions to the system (1), using the fact that a sequence of regular solutions to the approximate system (147) converges weakly to a weak solution of (1). We would like here to mention a few important open problems (with increasing level of difficulty, at least this is what the author thinks): • The zero diffusion limit in $x$. If we add a diffusion term $\frac{1}{n}\Delta_{x}\psi$ in the Fokker-Planck equation of (1), then one can prove the global existence of weak solutions to the regularized model. A natural question is whether we recover a weak solution of the unregularized system (1) when the diffusion coefficient $\frac{1}{n}$ goes to zero. This is the object of a forthcoming paper [53]. The difficulty comes from the fact that the calculation of section 5 used in a critical way the fact that we had a transport equation in the $x$ variable. • Relaxing the assumption (42). This extra bound was only used to give some extra control on the stress tensor. Can we prove the same existence result without it? • Other models. A natural question is whether we can extend this to the Hooke model (where the system can be reduced to a macroscopic model). We were not able to perform this. The main difficulty is that we do not know whether the extra stress tensor $\tau$ is in $L^{2}$. Nevertheless, we know how to use the strategy of this paper to prove global existence for the FENE-P model [52]. • Regularity in 2D. Many works on polymeric flows are motivated by similar known results for the Navier-Stokes system. In particular, a natural question is whether one can prove global existence of smooth solutions to (1) in 2D. We point out that this is known for the co-rotational model [47, 51]. Of course, this seems to be a very difficult problem since we only have an $L^{2}$ bound on $\tau$, whereas an $L^{\infty}$ bound on $\tau$ was necessary in the previously mentioned works.
In particular, the analogous result is not known for the co-rotational Oldroyd-B model, where one can prove $L^{p}$ bounds on $\tau$ for each $p>1$. • Is system (1) better behaved than Navier-Stokes? One does not expect to prove results on (1) which are not known for Navier-Stokes, since (1) is more complicated than Navier-Stokes. However, one can speculate that, due to the polymers and the extra stress tensor, system (1) may behave better than Navier-Stokes, and that one can prove global existence of smooth solutions to (1) even if such a result is not proved or disproved for the Navier-Stokes system. 8. Acknowledgments The work of N. M. is partially supported by NSF-DMS grant 0403983. The author would like to thank P.-L. Lions and Ping Zhang for many discussions about this model. He would also like to thank the IMA where part of this work was done. References [1] A. Arnold, J. A. Carrillo, and C. Manzini. Refined long-time asymptotics for some polymeric fluid flow models. preprint, 2009. [2] J. M. Ball and F. Murat. Remarks on Chacon’s biting lemma. Proc. Amer. Math. Soc., 107(3):655–663, 1989. [3] J. W. Barrett, C. Schwab, and E. Süli. Existence of global weak solutions for some polymeric flow models. Math. Models Methods Appl. Sci., 15(6):939–983, 2005. [4] J. W. Barrett and E. Süli. Existence of global weak solutions to some regularized kinetic models for dilute polymers. Multiscale Model. Simul., 6(2):506–546 (electronic), 2007. [5] J. W. Barrett and E. Süli. Existence of global weak solutions to dumbbell models for dilute polymers with microscopic cut-off. Math. Models Methods Appl. Sci., 18(6):935–971, 2008. [6] J. W. Barrett and E. Süli. Existence and equilibration of global weak solutions to finitely extensible nonlinear bead-spring chain models for dilute polymers. preprint, 2010. [7] R. B. Bird, R. Armstrong, and O. Hassager. Dynamics of Polymeric Liquids, Vol. 1. Wiley, New York, 1977. [8] R. B. Bird, C. Curtiss, R. Armstrong, and O. Hassager.
Dynamics of Polymeric Liquids, Vol. 2: Kinetic Theory. Wiley, New York, 1987. [9] J.-Y. Chemin and N. Masmoudi. About lifespan of regular solutions of equations related to viscoelastic fluids. SIAM J. Math. Anal., 33(1):84–112 (electronic), 2001. [10] L. Chupin. Fokker-Planck equation in a bounded domain. preprint, 2009. [11] L. Chupin. The FENE model for viscoelastic thin film flows. Methods Appl. Anal., 16(2):217–261, 2009. [12] P. Constantin. Nonlinear Fokker-Planck Navier-Stokes systems. Commun. Math. Sci., 3(4):531–544, 2005. [13] P. Constantin, C. Fefferman, E. S. Titi, and A. Zarnescu. Regularity of coupled two-dimensional nonlinear Fokker-Planck and Navier-Stokes systems. Comm. Math. Phys., 270(3):789–811, 2007. [14] P. Constantin and N. Masmoudi. Global well-posedness for a Smoluchowski equation coupled with Navier-Stokes equations in 2D. Comm. Math. Phys., 278(1):179–191, 2008. [15] P. Degond, M. Lemou, and M. Picasso. Viscoelastic fluid models derived from kinetic equations for polymers. SIAM J. Appl. Math., 62(5):1501–1519 (electronic), 2002. [16] P. Degond and H. Liu. Kinetic models for polymers with inertial effects. Netw. Heterog. Media, 4(4):625–647, 2009. [17] R. J. DiPerna and P.-L. Lions. Ordinary differential equations, transport theory and Sobolev spaces. Invent. Math., 98(3):511–547, 1989. [18] M. Doi and S. F. Edwards. The Theory of Polymer Dynamics. Oxford University Press, Oxford, 1986. [19] Q. Du, C. Liu, and P. Yu. FENE dumbbell model and its several linear and nonlinear closure approximations. Multiscale Model. Simul., 4(3):709–731 (electronic), 2005. [20] W. E, T. Li, and P. Zhang. Well-posedness for the dumbbell model of polymeric fluids. Comm. Math. Phys., 248(2):409–427, 2004. [21] E. Fernández-Cara, F. Guillén, and R. R. Ortega. Some theoretical results for viscoplastic and dilatant fluids with variable density. Nonlinear Anal., 28(6):1079–1100, 1997. [22] E. Fernández-Cara, F. Guillén, and R. R. Ortega.
Some theoretical results concerning non-Newtonian fluids of the Oldroyd kind. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 26(1):1–29, 1998. [23] E. Fernández-Cara, F. Guillén, and R. R. Ortega. The mathematical analysis of viscoelastic fluids of the Oldroyd kind. 2000. [24] X. Gallez, P. Halin, G. Lielens, R. Keunings, and V. Legat. The adaptive Lagrangian particle method for macroscopic and micro-macro computations of time-dependent viscoelastic flows. Comput. Methods Appl. Mech. Engrg., 180(3-4):345–364, 1999. [25] M. Grmela and H. C. Öttinger. Dynamics and thermodynamics of complex fluids. I and II. Development of a general formalism. Phys. Rev. E (3), 56(6):6620–6655, 1997. [26] C. Guillopé and J.-C. Saut. Existence results for the flow of viscoelastic fluids with a differential constitutive law. Nonlinear Anal., 15(9):849–869, 1990. [27] C. Guillopé and J.-C. Saut. Global existence and one-dimensional nonlinear stability of shearing motions of viscoelastic fluids of Oldroyd type. RAIRO Modél. Math. Anal. Numér., 24(3):369–401, 1990. [28] L. Hailiang and S. Jaemin. Global well-posedness for the microscopic FENE model with a sharp boundary condition. preprint, 2010. [29] G. H. Hardy. Notes on some points in the integral calculus, LX. An inequality between integrals. Messenger of Math., 54:150–156, 1925. [30] L. He and P. Zhang. $L^{2}$ decay of solutions to a micro-macro model for polymeric fluids near equilibrium. SIAM J. Math. Anal., 40(5):1905–1922, 2008/09. [31] B. Jourdain, C. Le Bris, T. Lelièvre, and F. Otto. Long-time asymptotics of a multiscale model for polymeric fluid flows. Arch. Ration. Mech. Anal., 181(1):97–148, 2006. [32] B. Jourdain and T. Lelièvre. Mathematical analysis of a stochastic differential equation arising in the micro-macro modelling of polymeric fluids. In Probabilistic methods in fluids, pages 205–223. World Sci. Publ., River Edge, NJ, 2003. [33] B. Jourdain, T. Lelièvre, and C. Le Bris.
Existence of solution for a micro-macro model of polymeric fluid: the FENE model. J. Funct. Anal., 209(1):162–193, 2004. [34] R. Keunings. Simulation of viscoelastic fluid flow. In Fundamentals of Computer Modeling for Polymer Processing, C. L. Tucker III (Ed.). Carl Hanser Verlag, 1989. [35] R. Keunings. On the Peterlin approximation for finitely extensible dumbbells. J. Non-Newtonian Fluid Mech., 86:85–100, 1997. [36] O. Kreml and M. Pokorný. On the local strong solutions for the FENE dumbbell model. preprint, 2010. [37] A. Kufner, L. Maligranda, and L.-E. Persson. The Hardy inequality. Vydavatelský Servis, Plzeň, 2007. About its history and some related results. [38] C. Le Bris and T. Lelièvre. Multiscale modelling of complex fluids: a mathematical initiation. In Multiscale modeling and simulation in science, volume 66 of Lect. Notes Comput. Sci. Eng., pages 49–137. Springer, Berlin, 2009. [39] Z. Lei, C. Liu, and Y. Zhou. Global solutions for incompressible viscoelastic fluids. Arch. Ration. Mech. Anal., 188(3):371–398, 2008. [40] Z. Lei, N. Masmoudi, and Y. Zhou. Remarks on the blowup criteria for Oldroyd models. J. Differential Equations, 248(2):328–341, 2010. [41] Z. Lei and Y. Zhou. Global existence of classical solutions for the two-dimensional Oldroyd model via the incompressible limit. SIAM J. Math. Anal., 37(3):797–814 (electronic), 2005. [42] J. Leray. Etude de diverses équations intégrales nonlinéaires et de quelques problèmes que pose l’hydrodynamique. J. Math. Pures Appl., 12:1–82, 1933. [43] J. Leray. Essai sur les mouvements plans d’un liquide visqueux emplissant l’espace. Acta Math., 63:193–248, 1934. [44] T. Li and P. Zhang. Mathematical analysis of multi-scale models of complex fluids. Commun. Math. Sci., 5(1):1–51, 2007. [45] F.-H. Lin, C. Liu, and P. Zhang. On hydrodynamics of viscoelastic fluids. Comm. Pure Appl. Math., 58(11):1437–1471, 2005. [46] F.-H. Lin, C. Liu, and P. Zhang. On a micro-macro model for polymeric fluids near equilibrium.
Comm. Pure Appl. Math., 60(6):838–866, 2007. [47] F.-H. Lin, P. Zhang, and Z. Zhang. On the global existence of smooth solution to the 2-D FENE dumbbell model. Comm. Math. Phys., 277(2):531–553, 2008. [48] P.-L. Lions and N. Masmoudi. Global solutions for some Oldroyd models of non-Newtonian flows. Chinese Ann. Math. Ser. B, 21(2):131–146, 2000. [49] P.-L. Lions and N. Masmoudi. Global existence of weak solutions to some micro-macro models. C. R. Math. Acad. Sci. Paris, 345(1):15–20, 2007. [50] C. Liu and H. Liu. Boundary conditions for the microscopic FENE models. SIAM J. Appl. Math., 68(5):1304–1315, 2008. [51] N. Masmoudi. Well-posedness for the FENE dumbbell model of polymeric flows. Comm. Pure Appl. Math., 61(12):1685–1714, 2008. [52] N. Masmoudi. Global existence of weak solutions to the macroscopic fene-p model. in preparation, 2010. [53] N. Masmoudi. Zero diffusion limit in the fene model of polymeric flows. in preparation, 2010. [54] N. Masmoudi, P. Zhang, and Z. Zhang. Global well-posedness for 2D polymeric fluid models and growth estimate. Phys. D, 237(10-12):1663–1675, 2008. [55] H. C. Öttinger. Stochastic processes in polymeric fluids. Springer-Verlag, Berlin, 1996. Tools and examples for developing simulation algorithms. [56] F. Otto and A. E. Tzavaras. Continuity of velocity gradients in suspensions of rod-like molecules. Comm. Math. Phys., 277(3):729–758, 2008. [57] R. G. Owens and T. N. Phillips. Computational rheology. Imperial College Press, London, 2002. [58] M. Renardy. An existence theorem for model equations resulting from kinetic theories of polymer solutions. SIAM J. Math. Anal., 22(2):313–327, 1991. [59] M. Renardy. Mathematical analysis of viscoelastic flows, volume 73 of CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2000. [60] M. E. Schonbek. Existence and decay of polymeric flows. SIAM J. Math. Anal., 41(2):564–587, 2009. [61] H. Zhang and P. Zhang. 
Local existence for the FENE-dumbbell model of polymeric fluids. Arch. Ration. Mech. Anal., 181(2):373–400, 2006. [62] L. Zhang, H. Zhang, and P. Zhang. Global existence of weak solutions to the regularized Hookean dumbbell model. Commun. Math. Sci., 6(1):85–124, 2008.
Quantum Decoherence and Higher Order Corrections to the Large Time Exponential Behaviour I.Ya. Aref’eva and I.V. Volovich Steklov Mathematical Institute, Russian Academy of Sciences Gubkin St. 8, GSP-1, 117966, Moscow, Russia [email protected], [email protected] Abstract There exists a well-known approximate expression describing the large time behaviour of matrix elements of the evolution operator in quantum theory: $<U(t)>\simeq\exp(at)$. This expression plays a crucial role in the treatment of quantum decoherence, radiation, decay, scattering theory, the stochastic limit, the derivation of master and kinetic equations, etc. It was obtained in the Weisskopf-Wigner approximation and in the van Hove (stochastic) limit. We derive the exact general formula which includes the higher order corrections to the above approximate expression: $<U(t)>=\exp(At+B+C(t))$. The constants $A$ and $B$ and the oscillating function $C(t)$ are computed in perturbation theory. The method of perturbation of spectra and renormalized wave operators is used. The formula is valid for a general class of Hamiltonians used in statistical physics and quantum field theory. 1 Introduction The study of the large time behaviour of the evolution operator in statistical physics and quantum field theory is the subject of numerous investigations. The basic object of study in quantum field theory is the scattering matrix. The physical idea behind the scattering matrix approach is that in scattering processes there exists a characteristic time scale such that in a time regime larger than this time scale one can neglect the interaction and particles evolve according to the free dynamics [1]. However, due to the infinite number of degrees of freedom in quantum field theory, the asymptotic dynamics is not simply governed by the free Hamiltonian. There are effects of renormalization of the vacuum energy and one-particle states, and the asymptotic states become in fact the states of dressed particles [2].
Therefore one has to deal with renormalized, or dressed, wave operators [3, 4, 5, 24]. In this paper we will use the method of renormalized wave operators to compute matrix elements of the evolution operator for finite time. There are many important problems in quantum field theory where we are interested in large but not infinite times and where the standard $S$-matrix description is not very convenient or even not applicable. These include processes with unstable particles [7, 8] (in fact almost all particles are unstable), atom-photon interactions [9], elementary particles in "semidressed states" with non-equilibrium proper fields [10], electroweak baryogenesis and phase transitions in the early Universe and in high-energy collisions [11], quantum optics [12], quantum decoherence (see for example [12, 13, 14]), etc. In the consideration of such processes we are interested in the time regime smaller than the "infinite" time at which the $S$-matrix description becomes applicable. The consideration of such processes belongs to non-equilibrium quantum field theory; see [15] for more discussion. In statistical physics there is not just one but several relevant time scales, and as a result there is no universal method here comparable with the $S$-matrix approach in quantum field theory. One can say that the role of the $S$-matrix approach in non-equilibrium statistical physics is played by various master and kinetic equations. Various methods for treating the time evolution of classical and quantum systems have been developed by Bogoliubov [16], Weisskopf and Wigner (see [8]), van Hove [18], Prigogine [19] and many others; for a review see for example [20, 21]. A general method in non-equilibrium statistical physics and quantum field theory is the method of the stochastic limit; see [22, 23]. The idea of this method is the systematic application of the $\lambda^{2}t$-limit and quantum stochastic differential equations.
One considers the evolution operator $U(t)$ of a quantum system for small coupling constant $\lambda$ and large time $t$. The limit $$\lambda\to 0,\qquad t\rightarrow\infty,\qquad\lambda^{2}t=\mbox{fixed}=\tau$$ (1.1) has been considered by Bogoliubov [16], Friedrichs [17] and van Hove [18]. In the quantum theory of open systems, the limit (1.1) is known as the van Hove or the $\lambda^{2}t$ limit. In this paper we study corrections to the (van Hove) stochastic limit. For this purpose a general formula for the matrix elements of the evolution operator is obtained. This formula is used to investigate the large time and small coupling constant asymptotic behaviour. We consider a very general class of Hamiltonians used in solid state physics and quantum field theory, $$H=H_{0}+\lambda V,$$ (1.2) where $H_{0}$ is a free Hamiltonian, $V$ describes an interaction and $\lambda$ is the coupling constant. It is well known that the large time behaviour of the vacuum expectation value of the evolution operator $$U(t)=e^{itH_{0}}e^{-itH},$$ (1.3) is given by $$\langle U(t)\rangle\simeq e^{at},$$ (1.4) where $a$ is a constant. This expression can be obtained in the Weisskopf-Wigner approximation [8] or in the van Hove (stochastic) limit. In this paper we obtain the following exact general formula, valid for any time $t$: $$\langle U(t)\rangle=e^{iAt+B+C(t)}.$$ (1.5) Here $A$ and $B$ are constants, for which a representation in perturbation theory will be given, and $C(t)$ is a function which under rather general assumptions can be represented for large time $t$ as $$C(t)={f(t)\over t^{\alpha}}.$$ (1.6) Here $f(t)$ is a bounded oscillating function and the exponent $\alpha$ depends on the model and on the dimension of space ($\alpha=3/2$ for the physical 3-dimensional space and for a general class of models). We derive the main formula (1.5) by using the theory of perturbation of spectra and renormalized wave operators.
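The content of the limit (1.1) can be illustrated numerically. In second-order perturbation theory the relevant quantity is a double time integral of an autocorrelation kernel $F$ (cf. Section 3), and the claim is that $\lambda^{2}\int_{0}^{\tau/\lambda^{2}}dt_{1}\int_{0}^{t_{1}}dt_{2}\,F(t_{1}-t_{2})\to\tau\int_{0}^{\infty}F(\sigma)\,d\sigma$ as $\lambda\to 0$ with $\tau=\lambda^{2}t$ fixed. Below is a minimal sketch of this convergence, assuming a hypothetical Gaussian kernel $F(\sigma)=e^{-\sigma^{2}}$ (not taken from any specific model in the paper):

```python
import numpy as np

def rescaled_double_integral(lam, tau, u_max=10.0, n=200_000):
    """lambda^2 * int_0^T dt1 int_0^{t1} dt2 F(t1 - t2), with T = tau / lambda^2.

    For a kernel depending only on sigma = t1 - t2 this reduces to the
    one-dimensional integral lambda^2 * int_0^T (T - u) F(u) du; since F
    decays fast, u is truncated at min(u_max, T).
    """
    T = tau / lam**2
    u = np.linspace(0.0, min(u_max, T), n)
    F = np.exp(-u**2)                      # hypothetical toy kernel F(sigma)
    du = u[1] - u[0]
    return lam**2 * np.sum((T - u) * F) * du

tau = 1.0
limit = tau * np.sqrt(np.pi) / 2.0         # tau * int_0^inf F(sigma) d sigma
for lam in (0.5, 0.2, 0.05):
    val = rescaled_double_integral(lam, tau)
    print(f"lambda = {lam}: value = {val:.6f}, deviation = {abs(val - limit):.6f}")
```

For this kernel the deviation from the limiting value is $\lambda^{2}\int_{0}^{\infty}\sigma F(\sigma)\,d\sigma=\lambda^{2}/2$, i.e. it decreases as $\lambda^{2}$, in line with the structure of the corrections to the stochastic limit computed in Section 3.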
One of the remarkable features of quantum mechanics which most distinguishes it from classical mechanics is the coherent superposition of physical states. An important point is the physical distinction between a coherent superposition of states and a classical mixture; see for example [12, 13]. In particular, the maintenance of quantum coherence is a crucial requirement for the ability of quantum computers to be more efficient in certain problems than classical computers (see for example [14] and references therein). The coherence properties of a quantum state are described by the off-diagonal elements of the density operator. In fact the dominant contribution to any matrix element of the evolution operator at large times comes from the vacuum, and if $\Re a<0$ in (1.4) then one has exponential decay. The constant $a$ is a function of the coupling constant and other parameters of the model. To suppress decoherence we would like to have the regime with $\Re a=0$. In such a case we have to investigate corrections to the approximate expression (1.4). The solution of this problem is given in this paper and is presented in (1.5). The expectation value in (1.5) is taken over the vacuum. For the case of one-particle states we obtain $$\langle p|U(t)|p^{\prime}\rangle=e^{iA(p)t+B(p)+C(t,p)}\delta(p-p^{\prime}).$$ (1.7) The formulae (1.5) and (1.7) have a very general character. We prove (1.5) in Section 3 for a very general class of Hamiltonians.
From this formula one gets the stochastic limit of the evolution operator $$\langle U(\tau/\lambda^{2})\rangle=e^{iA_{2}\tau}(1+o(\lambda)),$$ (1.8) as well as corrections to the stochastic limit $$\langle U(\tau/\lambda^{2})\rangle=e^{iA_{2}\tau+\lambda^{2}(B_{2}+i\tau A_{4})+o(\lambda^{2})}.$$ (1.9) In Section 4 we calculate corrections to the stochastic limit for the one-particle matrix elements of the evolution operator for the case of translation invariant Hamiltonians: $$\langle p|U(\tau/\lambda^{2})|p^{\prime}\rangle=e^{iA_{2}(p)\tau}(1+\lambda^{2}(B_{2}(p)+i\tau A_{4}(p))+o(\lambda^{2}))\delta(p-p^{\prime})$$ (1.10) The class of Hamiltonians considered includes the Bose and Fermi gases, phonon self-interaction and electron-phonon interaction, quantum electrodynamics in external fields, etc. We give two independent derivations of formulas (1.5) and (1.10). The first method consists in the direct examination of perturbation theory. The second one uses powerful results of spectral theory and renormalized wave operators [24]. Although the second method is simpler, we present both of them, since the first one can also be used in the case of decay, when the results of standard scattering theory are not applicable. 2 Notations and auxiliary results 2.1 Hamiltonians We consider Hamiltonians of the form (1.2), where $H_{0}$ is a free Hamiltonian $$H_{0}=\sum_{i}\int\omega_{i}(k)a^{*}_{i}(k)a_{i}(k)d^{d}k$$ (2.1) and $V$ is a sum of Wick monomials. Creation and annihilation operators $a^{*}_{i}(k)$, $a_{i}(k)$ describe particles or quasiparticles and satisfy the commutation or anticommutation relations $$[a_{i}(k),a^{*}_{j}(k^{\prime})]_{\pm}=\delta_{ij}\delta(k-k^{\prime})$$ (2.2) Here $k,k^{\prime}\in R^{d}$ and $i,j=1,\dots,N$ label the finite number of different types of (quasi)particles.
Examples of the one-particle energy $\omega_{i}(k)$ include the relativistic ($\omega(k)=(k^{2}+m^{2})^{1/2}$) and non-relativistic ($\omega(k)=k^{2}/2-\omega_{0}$) laws, the Bogoliubov spectrum ($\omega(k)=(bk^{4}+k^{2}v(k))^{1/2}$), the Fermi quasiparticle spectrum ($\omega(k)=|k^{2}/2m-\mu|$), etc. We consider two different types of Wick polynomials. The first type describes an interaction in the absence of translation invariance, $$V=\sum_{I,J}\int v(p_{1},i_{1}\dots p_{I},i_{I}|q_{1},j_{1}\dots q_{J},j_{J})\prod^{I}_{l=1}a^{*}_{i_{l}}(p_{l})dp_{l}\prod^{J}_{r=1}a_{j_{r}}(q_{r})dq_{r}$$ (2.3) where $v(p_{1}\dots p_{I}|q_{1}\dots q_{J})$ are some test functions. The second type is described by the translation invariant Hamiltonian $$V=\sum_{I,J}V_{I,J}=\sum_{I,J}\int\hat{v}(p_{1},i_{1},\dots p_{I},i_{I}|q_{1},j_{1}\dots q_{J},j_{J})$$ (2.4) $$\delta\left(\sum^{I}_{l}p_{l}-\sum^{J}_{r}q_{r}\right)\prod^{I}_{l=1}a^{*}_{i_{l}}(p_{l})dp_{l}\prod^{J}_{r=1}a_{j_{r}}(q_{r})dq_{r}$$ Clearly the delta function causes trouble, and there are singular terms in (2.4). Namely, $V_{I,0}\phi_{n}$ does not belong to the Fock space unless $\phi_{n}=0$. This singularity is called the volume singularity. To give a meaning to the Hamiltonian with the interaction (2.4) one has to introduce a volume cut-off, then perform the vacuum renormalization and vacuum dressing, and only after that remove the cut-off. This procedure defines the Hamiltonian in a new space (see [5, 24] for details). To avoid this difficulty, in this paper we will assume that for the translation invariant interaction there are no pure creation and annihilation terms.
2.2 Evolution operator We will investigate the evolution operator $$U(t)=e^{itH_{0}}e^{-it(H_{0}+\lambda V)}.$$ (2.5) In perturbation theory the evolution operator (2.5) has the representation $$U(t)=1-i\lambda\int^{t}_{0}V(t_{1})dt_{1}+(-i\lambda)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}V(t_{1})V(t_{2})+\dots$$ (2.6) where $$V(t)=e^{itH_{0}}Ve^{-itH_{0}}$$ We will also use the evolution operator with the adiabatic cut-off, $$U_{\epsilon}(t)=1-i\lambda\int^{t}_{0}V_{\epsilon}(t^{\prime})U_{\epsilon}(t^{\prime})dt^{\prime}$$ (2.7) $$V_{\epsilon}(t)=e^{-\epsilon|t|}e^{itH_{0}}Ve^{-itH_{0}}$$ In this case one also has the perturbation series $$U_{\epsilon}(t)=1-i\lambda\int^{t}_{0}V_{\epsilon}(t_{1})dt_{1}+(-i\lambda)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}V_{\epsilon}(t_{1})V_{\epsilon}(t_{2})+\dots.$$ (2.8) We will use for $V_{I,J}$ the diagram representation. The corresponding diagram has one vertex, $I$ lines going from the vertex to the left and $J$ lines going to the right. The first $I$ lines represent creation operators and the last $J$ lines represent annihilation operators. In what follows we will use the Wick theorem: $$V_{I,J}W_{N,M}=:V_{I,J}W_{N,M}:+\sum_{s}^{\min\{J,N\}}V_{I,J}\underbrace{-\circ-}_{s}W_{N,M}.$$ (2.9) The kernel of the Wick monomial $:V_{I,J}W_{N,M}:$ is $$v_{I,J}\otimes w_{N,M}$$ (2.10) and $V_{I,J}\underbrace{-\circ-}_{s}W_{N,M}$ is the Wick monomial $$V_{I,J}\underbrace{-\circ-}_{s}W_{N,M}=\int\prod_{r=1}^{I+N-s}(dp_{r}a^{*}(p_{r}))\cdot\prod_{l=1}^{J+M-s}(dq_{l}a(q_{l}))C_{J}^{s}C_{N}^{s}s!\int\prod_{r=1}^{s}dk_{r}(v\circ_{s}w)(k_{1},...k_{s},p_{1},...;q_{1}...)$$ (2.11) with the following kernel $$(v\circ_{s}w)(k_{1},...k_{s},p_{1},...;q_{1}...)=$$ (2.12) $$v_{I,J}(p_{1},...,p_{I};k_{1},...k_{s},q_{1},...q_{J-s})w_{N,M}(k_{1},...k_{s};p_{I+1},...,p_{I+N-s},q_{J-s+1},...q_{J-s+M}).$$ Below we will drop the symbol $s$ in $\circ_{s}$ and, to specify the concrete form of the contractions, will refer to the corresponding diagrams.
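As a sanity check of the representation (2.6), the series truncated at second order can be compared with the exact evolution operator for a finite-dimensional toy model. The sketch below uses hypothetical $2\times 2$ matrices for $H_{0}$ and $V$ (not from the paper); the truncation error is $O(\lambda^{3})$ in the coupling, up to the time-grid discretization error:

```python
import numpy as np
from scipy.linalg import expm

# hypothetical two-level toy model: H_0 diagonal, V off-diagonal
H0 = np.diag([0.0, 1.0])
V = np.array([[0.0, 1.0], [1.0, 0.0]])
lam, t, n = 0.05, 1.0, 400

ts = np.linspace(0.0, t, n)
dt = ts[1] - ts[0]
Vt = [expm(1j * s * H0) @ V @ expm(-1j * s * H0) for s in ts]  # V(t) = e^{itH0} V e^{-itH0}

# first- and second-order terms of the Dyson series (2.6) on a time grid
first = np.zeros((2, 2), dtype=complex)
second = np.zeros((2, 2), dtype=complex)
inner = np.zeros((2, 2), dtype=complex)      # running int_0^{t1} V(t2) dt2
for i in range(n):
    first += Vt[i] * dt
    second += Vt[i] @ inner * dt
    inner += Vt[i] * dt

U1 = np.eye(2) - 1j * lam * first
U2 = U1 + (-1j * lam) ** 2 * second
U_exact = expm(1j * t * H0) @ expm(-1j * t * (H0 + lam * V))

err1 = np.abs(U1 - U_exact).max()
err2 = np.abs(U2 - U_exact).max()
print(f"first-order error  {err1:.2e}")
print(f"second-order error {err2:.2e}")
```

Each added order of the series reduces the error by a factor of roughly $\lambda$, which is the sense in which (2.6) is an expansion in the coupling.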
The equality (2.9) for the 4-point interaction has the diagram representation shown in Fig. 1. We will also use the following notions. A line of a graph is called internal if it connects two vertices of the graph. A graph is connected if all its vertices are joined by a set of internal lines; otherwise it is called disconnected. A connected graph is called one-particle reducible (1PR) if after the removal of a line it becomes disconnected. A connected graph is called one-particle irreducible (1PI) if after the removal of any line it is still connected. The following "linked cluster theorem" [5] will be used: $$U(t)=:e^{U_{c}(t)}\ :$$ (2.13) where $$U_{c}(t)=\sum_{n=1}^{\infty}(-i\lambda)^{n}\int^{t}_{0}dt_{1}...\int_{0}^{t_{n-1}}dt_{n}\left(V(t_{1})...V(t_{n})\right)_{c}$$ Here the index $c$ in $U_{c}$ indicates that one takes only the connected diagrams. A similar relation holds for the evolution operator with the adiabatic cut-off. Below, for simplicity of notation, we consider interactions with only one type of particle, but the main results are valid for an arbitrary number of particle types. 3 Evolution operator for non-translation invariant Hamiltonians 3.1 Second order For the vacuum matrix element of the evolution operator we obtain from (2.13) the representation $$\langle 0|U(t)|0\rangle=e^{{\cal E}(t)}$$ (3.1) where $${\cal E}(t)=\langle 0|(-i\lambda\int^{t}_{0}dt_{1}V(t_{1})+(-i\lambda)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}V(t_{1})V(t_{2})+\dots)_{c}|0\rangle.$$ (3.2) Here the symbol $(...)_{c}$ means that we keep only connected diagrams. The representation (3.1), (3.2) permits us to calculate the leading terms of the asymptotic behaviour of the matrix elements of the evolution operator for large time $t$, as well as the corrections to the leading terms.
In fact we will show that ${\cal E}(t)$ has the following form: $${\cal E}(t)=At+B+C(t)$$ (3.3) where one has the perturbative expansions $$A=\lambda^{2}A_{2}+\lambda^{3}A_{3}+...,~{}~{}~{}B=\lambda^{2}B_{2}+\lambda^{3}B_{3}+...,~{}~{}~{}C(t)=\lambda^{2}C_{2}(t)+\lambda^{3}C_{3}(t)+...$$ (3.4) and $C_{n}(t)$ vanishes for large $t$. Let us find these terms explicitly in the second order of perturbation theory for the Hamiltonian $$H=H_{0}+\lambda V,$$ (3.5) where $$H_{0}=\int\omega(p)a^{*}(p)a(p)dp$$ (3.6) and the interaction has the form $$V=\int(v(p_{1},...,p_{n})a^{*}(p_{1})...a^{*}(p_{n})+c.c.)\,dp_{1}...dp_{n}$$ (3.7) Here $\omega(p)$ is a positive smooth function, for example $\omega(p)=\sqrt{p^{2}+m^{2}},m>0$, and $v(p_{1},...,p_{n})$ is a test function. For this interaction the first term in (3.2) is identically zero. The second term in (3.2) equals $${\cal E}^{(2)}(t)=(-i\lambda)^{2}\int dp_{1}...dp_{n}|v(p_{1},...,p_{n})|^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}e^{it_{1}E_{1}+it_{2}E_{2}}$$ (3.8) where $$E_{2}=-E_{1}=E(p_{1},...,p_{n})=\sum_{i=1}^{n}\omega(p_{i})$$ (3.9) By using the equality $$\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}e^{-it_{1}E+it_{2}E}=-\frac{i}{E}t+\frac{1}{E^{2}}-\frac{1}{E^{2}}e^{-itE}$$ (3.10) we get $${\cal E}^{(2)}(t)=(-i\lambda)^{2}\int dp_{1}...dp_{n}|v(p_{1},...,p_{n})|^{2}(-\frac{i}{E}t+\frac{1}{E^{2}}-\frac{1}{E^{2}}e^{-itE})$$ (3.11) Therefore we obtain an expression of the form (3.3), $${\cal E}(t)=\lambda^{2}A_{2}t+\lambda^{2}B_{2}+\lambda^{2}C_{2}(t)+...$$ (3.12) where $$A_{2}=i\int\frac{|v(p_{1},...,p_{n})|^{2}}{E(p_{1},...,p_{n})}dp_{1}...dp_{n},$$ (3.13) $$B_{2}=-\int\frac{|v(p_{1},...,p_{n})|^{2}}{E(p_{1},...,p_{n})^{2}}dp_{1}...dp_{n},$$ (3.14) $$C_{2}(t)=\int\frac{|v(p_{1},...,p_{n})|^{2}}{E(p_{1},...,p_{n})^{2}}e^{-itE(p_{1},...,p_{n})}dp_{1}...dp_{n}$$ (3.15) We have obtained the following Theorem 1.
The vacuum expectation value of the evolution operator for the Hamiltonian (3.5) in the second order of perturbation theory has the form $$<U(t)>=e^{\lambda^{2}A_{2}t+\lambda^{2}B_{2}+\lambda^{2}C_{2}(t)}$$ (3.16) where $A_{2},B_{2}$ and $C_{2}(t)$ are given by (3.13), (3.14) and (3.15). Remark. By using the stationary phase method one can prove that the function $C_{2}(t)$ vanishes as $t\to\infty$ (see below). 3.2 Decay We have proved Theorem 1 under the assumption $\omega(p)>0$. However, the formula (3.16) obtained above is valid in the more general case when one has decay. In this case formula (3.16) is still true, but in the expressions (3.13)-(3.15) one has to substitute $E\to E-i0$. Let us consider the important case when $$\omega(p)={p^{2}\over 2}-\omega_{0},~{}~{}\omega_{0}>0$$ (3.17) Instead of (3.10) we will now use the identity $$\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}e^{i(t_{2}-t_{1})E}=t\int_{0}^{t}(1-{\sigma\over t})e^{-i\sigma E}d\sigma.$$ (3.18) We have $${\cal E}^{(2)}(t)=(-i\lambda)^{2}\int dp_{1}...dp_{n}|v(p_{1},...,p_{n})|^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}e^{i(t_{2}-t_{1})E(p_{1},...,p_{n})}$$ (3.19) $$=-\lambda^{2}t\int_{0}^{t}d\sigma(1-{\sigma\over t})\int dp_{1}...dp_{n}|v(p_{1},...,p_{n})|^{2}e^{-i\sigma E(p_{1},...,p_{n})}$$ $$=\lambda^{2}tA_{2}(t)+\lambda^{2}B_{2}(t)$$ where $$A_{2}(t)=-\int_{0}^{t}d\sigma F(\sigma),~{}~{}B_{2}(t)=\int_{0}^{t}d\sigma\sigma F(\sigma)$$ (3.20) and $$F(\sigma)=\int dp_{1}...dp_{n}|v(p_{1},...,p_{n})|^{2}e^{-i\sigma E(p_{1},...,p_{n})}.$$ (3.21) By using the stationary phase method we obtain the following asymptotic behaviour of the function $F(\sigma)$ as $\sigma\to\infty$: $$F(\sigma)=({2\pi i\over\sigma})^{dn/2}e^{in\sigma\omega_{0}}|v(0)|^{2}[1+o({1\over\sigma})].$$ Therefore for $dn\geq 3$ the limits $$\lim_{t\to\infty}A_{2}(t)=A_{2}=-\int_{0}^{\infty}d\sigma F(\sigma)$$ (3.22) $$\lim_{t\to\infty}B_{2}(t)=B_{2}=\int_{0}^{\infty}d\sigma\sigma F(\sigma)$$ exist, because there exists
the limit $$\lim_{t\to\infty}\int_{1}^{t}e^{i\sigma\omega_{0}}{d\sigma\over\sigma^{1/2}}.$$ Moreover, one has $$A_{2}(t)=-\int_{0}^{\infty}d\sigma F(\sigma)+o({1\over t^{2}}),~{}~{}B_{2}(t)=\int_{0}^{\infty}d\sigma\sigma F(\sigma)+o({1\over t}).$$ If $dn\geq 5$ one also gets $$A_{2}=i\int dp_{1}...dp_{n}{|v(p_{1},...,p_{n})|^{2}\over E(p_{1},...,p_{n})-i0}$$ (3.23) $$B_{2}=-\int dp_{1}...dp_{n}{|v(p_{1},...,p_{n})|^{2}\over(E(p_{1},...,p_{n})-i0)^{2}}.$$ (3.24) Indeed, one has $$A_{2}=-\int_{0}^{\infty}d\sigma F(\sigma)=-\lim_{\epsilon\to 0}\int_{0}^{\infty}d\sigma F(\sigma)e^{-\sigma\epsilon}.$$ This is true due to the Lebesgue theorem, since $|F(\sigma)e^{-\sigma\epsilon}|\leq|F(\sigma)|$ and $F(\sigma)\in L_{1}(R_{+})$ ($L_{1}$ is the space of absolutely integrable functions) if $nd\geq 3$. Substituting the representation (3.21) into the above formula and changing the order of integration (we can do this due to the Fubini theorem, since for positive $\epsilon$ the function $|v(p_{1},...,p_{n})|^{2}e^{-i\sigma(E(p_{1},...,p_{n})-i\epsilon)}$ belongs to the space $L_{1}(R_{+}\times R^{nd})$ of absolutely integrable functions), we can perform the integration over $\sigma$ explicitly: $$A_{2}=\lim_{\epsilon\to 0}(i)\int dp_{1}...dp_{n}{|v(p_{1},...,p_{n})|^{2}\over E(p_{1},...,p_{n})-i\epsilon}=i\int dp_{1}...dp_{n}{|v(p_{1},...,p_{n})|^{2}\over E(p_{1},...,p_{n})-i0}.$$ The same calculation is valid for $B_{2}$ under the stronger assumption $dn\geq 5$: $$B_{2}=\int_{0}^{\infty}d\sigma\sigma F(\sigma)=\lim_{\epsilon\to 0}\int_{0}^{\infty}d\sigma\sigma\int dp_{1}...dp_{n}|v(p_{1},...,p_{n})|^{2}e^{-i\sigma(E(p_{1},...,p_{n})-i\epsilon)}$$ $$=-\lim_{\epsilon\to 0}\int dp_{1}...dp_{n}{|v(p_{1},...,p_{n})|^{2}\over(E(p_{1},...,p_{n})-i\epsilon)^{2}}=-\int dp_{1}...dp_{n}{|v(p_{1},...,p_{n})|^{2}\over(E(p_{1},...,p_{n})-i0)^{2}}.$$ We have proved the following theorem. Theorem 2.
The asymptotic behaviour as $t\to\infty$ of the vacuum expectation value of the evolution operator for the Hamiltonian (3.5) with the dispersion law (3.17) in the second order of perturbation theory is $$<U(t)>=e^{\lambda^{2}A_{2}t+\lambda^{2}B_{2}+\lambda^{2}o(1/t)}$$ (3.25) where $A_{2}$ and $B_{2}$ are given by (3.22) (or by (3.23) and (3.24)). After the rescaling $t\to t/\lambda^{2}$ one gets the $\lambda^{2}$ corrections to the stochastic limit: $$<U(t/\lambda^{2})>=e^{A_{2}t+\lambda^{2}B_{2}+\lambda^{2}o(\lambda^{2}/t)}.$$ (3.26) 3.3 Example We discuss here the evolution operator for a simple, explicitly solvable model described by the Hamiltonian $$H=\int\omega(k)a^{*}(k)a(k)d^{d}k+\lambda\int(a(k)\overline{v}(k)+a^{*}(k)v(k))d^{d}k.$$ (3.27) We will see that the vacuum expectation value of the evolution operator has the form obtained in Theorems 1 and 2. Under the assumptions $$A=\lambda^{2}A_{2}=i\lambda^{2}\int{|v(k)|^{2}\over\omega(k)}\,d^{d}k<\infty,~{}\quad B=\lambda^{2}B_{2}=-\lambda^{2}\int{|v(k)|^{2}\over\omega^{2}(k)}\,d^{d}k<\infty\ ,$$ (3.28) one has the following Proposition 1. The vacuum expectation value of the evolution operator $U(t)=e^{itH_{0}}e^{-itH}$ is $$\langle U(t)\rangle=\exp\left[At+B+\lambda^{2}\int dk{|v(k)|^{2}\over\omega^{2}(k)}\,e^{-i\omega(k)t}\right]$$ (3.29) Proof. It follows from the explicit solution $$H=W^{*}(H_{0}+E_{0})W,~{}~{}E_{0}=-\lambda^{2}\int{|v(k)|^{2}\over\omega(k)}dk.$$ (3.30) $$W=\exp\lambda\int{(a^{*}(k)v(k)-a(k)\overline{v}(k))\over\omega(k)}dk=\exp\left(\lambda\int{a^{*}(k)v(k)\over\omega(k)}dk\right)\exp\left(-\lambda\int{a(k)\overline{v}(k)\over\omega(k)}dk\right)e^{B/2}$$ From Proposition 1 we obtain Proposition 2.
The asymptotic behaviour of the expectation value (3.29) for $t\to\infty$ has the form $$\langle U(t)\rangle=\exp\left[At+B+\lambda^{2}\left({1\over t}\right)^{d\over 2}\,(2i\pi)^{d\over 2}\frac{|v(k_{0})|^{2}}{\omega^{2}(k_{0})}\,e^{-i\omega(k_{0})t}+...\right].$$ (3.31) where $k_{0}$ is a critical point, $\nabla\omega(k_{0})=0$, and we assume there is only one nondegenerate critical point. Proof. It follows immediately from (3.29) by using the stationary phase method. Remark 1. After the rescaling $t\to t/\lambda^{2}$ we get from (3.31) the corrections to the stochastic limit: $$\langle U(t/\lambda^{2})\rangle=\exp\left[A_{2}t+B+\lambda^{2}\left({\lambda^{2}\over t}\right)^{d\over 2}(2i\pi)^{d\over 2}{|v(k_{0})|^{2}\over\omega^{2}(k_{0})}\cdot e^{-i\omega(k_{0})t/\lambda^{2}}+\dots\right].$$ (3.32) Remark 2. One can take for example $$v(k)=\omega(k)f(k)\ ,$$ $$\omega(k)={k^{2}\over 2}\,-\omega_{0}\ ,\quad\omega_{0}>0$$ where $f(k)$ is a test function. Then the assumptions (3.28) are satisfied, $$B=-\lambda^{2}\int|f(k)|^{2}d^{d}k\ ,\quad A=i\lambda^{2}\int\left({k^{2}\over 2}\,-\omega_{0}\right)|f(k)|^{2}d^{d}k\ ,$$ the critical point is $k_{0}=0$, and (3.31) takes the form $$\langle U(t)\rangle=\exp\left[At+B+\lambda^{2}\left({1\over t}\right)^{d\over 2}(2i\pi)^{d\over 2}|f(0)|^{2}e^{i\omega_{0}t}+\dots\right]$$ Remark 3. If $\omega(k)=k^{2}-\omega_{0}$ and $v(k)$ is an arbitrary test function, then one has decay. We cannot use the diagonalization of the Hamiltonian (3.27) in this case, but formula (3.1) still holds.
We have $$<U(t)>=\exp[-\lambda^{2}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\int d^{d}k|v(k)|^{2}e^{i(t_{2}-t_{1})\omega(k)}]=\exp[\lambda^{2}tA_{2}(t)+\lambda^{2}B_{2}(t)]$$ where $$A_{2}(t)=-\int_{0}^{t}d\sigma F(\sigma),~{}~{}B_{2}(t)=\int_{0}^{t}d\sigma\sigma F(\sigma)$$ and $$F(\sigma)=\int d^{d}k|v(k)|^{2}e^{-i\sigma\omega(k)}$$ By using this representation and the expansion $$F(\sigma)=({2i\pi\over\sigma})^{d\over 2}|v(0)|^{2}e^{-i\sigma\omega(0)}(1+o({1\over\sigma}))$$ we compute corrections to the stochastic limit. Notice also that the stochastic limit $$\lim_{\lambda\to 0}\lambda^{2}\int_{0}^{t/\lambda^{2}}dt_{1}\int_{0}^{t_{1}}dt_{2}F(t_{1}-t_{2})=\lim_{\lambda\to 0}\int_{0}^{t}d\tau\int^{0}_{-\tau/\lambda^{2}}d\sigma F(-\sigma)=t\int_{0}^{\infty}d\sigma F(\sigma)$$ exists in any dimension $d=1,2,...$. 3.4 Higher orders To formulate the theorem about the asymptotic behaviour of the vacuum expectation value of the evolution operator it is convenient to use the Friedrichs $\Gamma$-operation $$\Gamma(V)=\lim_{\epsilon\to+0}(-i)\int_{0}^{\infty}e^{-\epsilon t}e^{itH_{0}}Ve^{-itH_{0}}dt$$ (3.33) acting on the monomials as $$\Gamma(V_{I,J})=\int(\Gamma v)(p_{1},i_{1}\dots p_{I},i_{I}|q_{1},j_{1}\dots q_{J},j_{J})\prod^{I}_{l=1}a^{*}_{i_{l}}(p_{l})dp_{l}\prod^{J}_{r=1}a_{j_{r}}(q_{r})dq_{r}$$ (3.34) where $$(\Gamma v)(p_{1},i_{1}\dots p_{I},i_{I}|q_{1},j_{1}\dots q_{J},j_{J})=\frac{v(p_{1},i_{1}\dots p_{I},i_{I}|q_{1},j_{1}\dots q_{J},j_{J})}{\sum_{l=1}^{I}\omega_{l}(p_{l})-\sum_{r=1}^{J}\omega_{r}(q_{r})+i0}.$$ It has the property $$[H_{0},\Gamma(V)]=V.$$ (3.35) The following theorem gives a very effective exact representation for the vacuum expectation value of the evolution operator. Theorem 3.
The vacuum expectation value of the evolution operator for the Hamiltonian (2.3) has the following representation: $$<U(t)>=e^{At+B+C(t)}$$ (3.36) where $$A=i\sum_{n=2}^{\infty}\lambda^{n}\langle 0|(\underbrace{V\Gamma(V...\Gamma(V)...)}_{n})_{c}|0\rangle,~{}~{}~{}$$ (3.37) $$B=\sum_{n=2}^{\infty}\lambda^{n}\sum_{l=2}^{\infty}(-1)^{l-1}\sum_{k_{1}+...+k_{l}=n}\langle 0|(W_{k_{1}}...W_{k_{l}})_{c}|0\rangle,$$ (3.38) and $$W_{n}=\underbrace{\Gamma(V...\Gamma(V)...)}_{n}$$ (3.39) The function $C(t)$ is given by a sum of terms of the form $$\int dp...e^{it\sum_{k=1}^{k_{1}}E_{k}}\underbrace{\Gamma(v_{i1}\circ...\Gamma(v_{i2}\circ)...)}_{k_{1}}\underbrace{\Gamma(v_{i3}\circ...\Gamma(v_{i4}\circ)...)}_{k_{2}}...\underbrace{\Gamma(v_{i5}\circ...\Gamma(v_{i6})...)}_{k_{l+1}}(p,...).$$ (3.40) To prove this theorem, let us consider one of the terms representing the $n$-th order of perturbation theory with adiabatic parameter $\epsilon>0$: $$\int dp...{\cal E}^{(n)}_{0}(p,...)=$$ (3.41) $$(-i\lambda)^{n}\int dp...\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}...\int^{t_{n-1}}_{0}dt_{n}e^{\sum_{k}(iE_{k}t_{k}-\epsilon t_{k})}(v_{i_{1}}\circ...\circ v_{i_{n}})(p...),$$ Integrating over $t_{n}$ one gets two terms. One contains $e^{iE_{n}t_{n-1}}$ and the other contains $1$. Together with the factor coming from $V(t_{n-1})$, the exponent $e^{iE_{n}t_{n-1}}$ yields $e^{i{\cal E}_{n-1}t_{n-1}}$, where ${\cal E}_{n-1}=E_{n}+E_{n-1}$ is the total energy of the lines crossed by the $(n-1)$-cut (see Fig. 2).
Therefore, the contribution coming from the upper bounds of integration over $t_{n},...,t_{2}$ is equal to $e^{i{\cal E}_{2}t_{1}}$, where ${\cal E}_{2}=\sum_{k=2}^{n}E_{k}.$ It is evident that ${\cal E}_{2}=-E_{1}$, therefore the integrand does not depend on $t_{1}$, and integration of these terms over $t_{1}$ produces $$t\int dp...(\underbrace{v\circ\Gamma(v\circ...\Gamma(v\circ)...)}_{n})(p...).$$ (3.42) All these terms correspond to $$t\langle 0|(\underbrace{V\Gamma(V...\Gamma(V)...)}_{n})_{c}|0\rangle.$$ (3.43) In other words, we have proved that the contributions from the upper bounds produce the terms linear in $t$. To prove that the term (3.43) gives the leading term in the asymptotic expansion we have to prove that the remaining terms are of order 1 as $t\to\infty$; in other words, we have to prove that the terms which contain at least one contribution from a lower bound are of order 1. To make the proof clearer, we present in Appendix A an explicit calculation of ${\cal E}_{0}$ up to the fourth order of perturbation theory. Let us first consider the contribution from only one lower bound. For example, the integration over $t_{1}$ of the term produced by the lower bound of integration over $t_{k+1}$ and all other upper bounds gives: $$\int dp...\int_{0}^{t}dt_{1}e^{i\sum_{l=1}^{k}E_{l}t_{1}}(\underbrace{v\circ\Gamma(v\circ...\Gamma(v\circ)...)}_{k}\underbrace{\Gamma(v\circ...\Gamma(v)...)}_{n-k})(p...)=$$ $$i\int dp...[e^{it\sum_{l=1}^{k}E_{l}}-1](\underbrace{\Gamma(v\circ\Gamma(v\circ...\Gamma(v\circ)...))}_{k}\underbrace{\Gamma(v\circ...\Gamma(v)...)}_{n-k})(p...).$$ (3.44) We see that (3.44) gives a contribution to the terms in (3.2) that do not depend on $t$.
All these contributions from the lower bound of integration over $t_{k}$, the integration over $t_{1}$ and all other upper bounds sum up to the following expression: $$-\langle 0|(\underbrace{\Gamma(V\Gamma(V...\Gamma(V)...))}_{k}\underbrace{\Gamma(V...\Gamma(V)...)}_{n-k})_{c}|0\rangle.$$ (3.45) Let us note the obvious difference between $$\underbrace{\Gamma(V...\Gamma(V)...)}_{k}\underbrace{\Gamma(V...\Gamma(V)...)}_{l}~{}~{}\mbox{and }~{}~{}~{}\underbrace{\Gamma(V...\Gamma(V)...)}_{k+l}.$$ (3.46) This expression can be represented by graphs, see Fig. 3. On the graph the directions of application of the $\Gamma$ operations are indicated by brackets. The term in (3.44) with $e^{it\sum_{l=1}^{k}E_{l}}$ produces a decreasing contribution as $t\to\infty$, since we have $\sum_{l=1}^{k}E_{l}>0$. For the $n$-th order of perturbation theory the terms corresponding to contributions of $l$ lower bounds have the following structure: $$\int dp...[e^{it\sum_{k=1}^{k_{1}}E_{k}}-1]\underbrace{\Gamma(v\circ...\Gamma(v\circ)...)}_{k_{1}}\underbrace{\Gamma(v\circ...\Gamma(v\circ)...)}_{k_{2}}...\underbrace{\Gamma(v\circ...\Gamma(v)...)}_{k_{l+1}}(p,...)$$ (3.47) $\sum_{i=1}^{l+1}k_{i}=n.$ All these terms either contain oscillating factors or give constant factors.
The constant factors can be summed up to the expressions $$(-1)^{l}\langle 0|(\underbrace{\Gamma(V...\Gamma(V)...)}_{k_{1}}\underbrace{\Gamma(V...\Gamma(V)...)}_{k_{2}}...\underbrace{\Gamma(V...\Gamma(V)...)}_{k_{l+1}})_{c}|0\rangle.$$ (3.48) Therefore we have shown that $${\cal E}_{0}=tA+B+C(t),$$ (3.49) where $A$ is given in perturbation theory by (3.37) and $B$ is given by $$B=\sum_{n}\lambda^{n}\sum_{l}\sum_{k_{1}+...+k_{l}=n}(-1)^{l-1}\langle 0|(\underbrace{\Gamma(V...\Gamma(V)...)}_{k_{1}}\underbrace{\Gamma(V...\Gamma(V)...)}_{k_{2}}...\underbrace{\Gamma(V...\Gamma(V)...)}_{k_{l}})_{c}|0\rangle.$$ (3.50) The function $C(t)$ is the sum of terms of the form $$\int dp...e^{it\sum_{k=1}^{k_{1}}E_{k}}\underbrace{\Gamma(v\circ...\Gamma(v\circ)...)}_{k_{1}}\underbrace{\Gamma(v\circ...\Gamma(v\circ)...)}_{k_{2}}...\underbrace{\Gamma(v\circ...\Gamma(v)...)}_{k_{l+1}}(p,...).$$ (3.51) These terms vanish as $t\to\infty$ because $\sum_{k=1}^{k_{1}}E_{k}\neq 0$. 3.5 Wave operators and the main formula In this section we show how spectral theory and renormalized wave operators can be used for the derivation of the main formula. In particular, explicit expressions for the parameters $A,B$ and $C(t)$ will be obtained. We consider the Hamiltonian (2.3) for one type of particles with $\omega(p)=\sqrt{p^{2}+m^{2}},m>0$, in the space $R^{d},d>2$. We will work with the formal perturbation series for the evolution operator. In fact, if the interaction (2.3) includes only fermionic operators or is linear in bosonic operators, then one can prove that the series converges absolutely. The following operator plays a crucial role in scattering theory: $$T=:\exp\sum_{n=1}^{\infty}(-\lambda)^{n}\Gamma(V...\Gamma(V)...)_{L}:$$ (3.52) Here $()_{L}$ means that only connected non-vacuum diagrams are included.
In fact, the operator $T$ is equal to the non-vacuum part of the conjugate wave operator: $$T=\lim_{\epsilon\to 0}\lim_{t\to\infty}U^{*}_{\epsilon}(t)/\langle U^{*}_{\epsilon}(t)\rangle$$ One has the following relations [5]: $$HT=T(H_{0}+E_{0}),$$ (3.53) $$E_{0}=\sum_{n=1}^{\infty}(-\lambda)^{n+1}\langle V\Gamma(V...\Gamma(V)...)_{c}\rangle,$$ $$T^{*}T=TT^{*}=Z^{-1},$$ $$Z^{-1}=||T\Phi_{0}||^{2}$$ We will use these relations to derive the main formula for matrix elements of the evolution operator and, in particular, to compute corrections to the stochastic limit. From (3.53) it follows that $$H=T(H_{0}+E_{0})T^{*}e^{B}$$ where $$B=\ln Z$$ (3.54) Therefore one has $$U(t)=e^{itH_{0}}e^{-itH}=e^{itH_{0}}Te^{-it(H_{0}+E_{0})}T^{*}e^{B}=e^{At+B}e^{itH_{0}}Te^{-itH_{0}}T^{*}$$ (3.55) where $$A=-iE_{0}$$ (3.56) By taking the expectation value of the equality (3.55) we obtain $$<\psi,U(t)\psi>=e^{At+B+C(t)}$$ (3.57) where $$e^{C(t)}=<\psi,e^{itH_{0}}Te^{-itH_{0}}T^{*}\psi>$$ If $\psi$ is the vacuum vector, then one can prove that $C(t)\to 0$ as $t\to\infty$, and we obtain the main formula (3.36). If $\psi$ is a non-vacuum vector, then the asymptotic behaviour of $C(t)$ is more complicated. We have proved the following theorem. Theorem 4. If the Hamiltonian satisfies the assumptions indicated above, then there exists the following representation for the vacuum expectation value of the evolution operator $$<\Phi_{0},U(t)\Phi_{0}>=e^{At+B+C(t)}$$ where the constants $A$ and $B$ are given by (3.37) and (3.50), and $C(t)$ is defined as $$e^{C(t)}=<\Phi_{0},T(t)T^{*}\Phi_{0}>,~{}~{}T(t)=e^{itH_{0}}Te^{-itH_{0}}$$ (3.58) here the weak limit of $T(t)$ as $t\to\infty$ is equal to 1 and $\lim_{t\to\infty}C(t)=0$. This theorem is closely related to Theorem 3 and shows the physical meaning of the constants $A,B$ and the function $C(t)$.
4 One particle matrix element of the evolution operator for translation invariant Hamiltonians In this section we study the asymptotic behaviour of one particle matrix elements of the evolution operator $$<p|U(t,\lambda)|p^{\prime}>=\delta(p-p^{\prime})U_{1,1}(t,p,\lambda),$$ (4.1) for the translation invariant Hamiltonian (2.4) without vacuum polarization, i.e. when $V_{I0}=V_{0J}=0$, and we also assume $V_{1,1}=0$. We will prove the following Theorem 5. For fixed $\tau=t\lambda^{2}$ and small $\lambda^{2}$ one has the following representation $$U_{1,1}(t,p,\lambda)=\exp\{i\tau A_{2}(p)\}(1+i\lambda^{2}\tau A_{4}(p)+\lambda^{2}B_{2}(p)+o(\lambda^{2}))$$ (4.2) where $$A_{2}(p)=(V\Gamma(V))_{1,1}(p)$$ (4.3) $$B_{2}(p)=(V\Gamma^{2}(V))_{1,1}(p)$$ (4.4) $$A_{4}(p)=A_{4}^{1PI}(p)+A_{4}^{1PR}(p),$$ $$A_{4}^{1PI}(p)=(V\Gamma(V\Gamma(V\Gamma(V))))^{1PI}_{1,1}(p)$$ (4.5) $$A_{4}^{1PR}(p)=(V\Gamma(V)(V\Gamma^{2}(V)))^{1PR}_{1,1}(p)$$ (4.6) Here the subscript ${}_{1,1}$ means that we keep only the connected diagrams in the corresponding expression with one creation and one annihilation operator, and we take only the coefficient in front of $\delta(p-p^{\prime})$, $$<p|{\cal O}|p^{\prime}>={\cal O}_{1,1}(p)\delta(p-p^{\prime})$$ (4.7) Remark 1. We will give the proof of Theorem 5 for the Hamiltonian of the form $$H=\int\omega(p)a^{+}(p)a(p)dp+\lambda\int(v(p,q)a^{+}(q)a^{+}(p-q)a(p)+c.c.)dpdq.$$ (4.8) Generalization to Hamiltonians of the general form (2.4) is straightforward, and suitable comments will be given below. Remark 2. In fact the following representation holds $$U_{1,1}(t,p,\lambda)=\exp\{itA(p,\lambda)+B(p,\lambda)\}\left(1+C(t,p,\lambda)\right)$$ (4.9) where $A(p,\lambda)$, $B(p,\lambda)$ and $C(t,p,\lambda)$ are formal series in $\lambda$ $$A(p,\lambda)=\sum_{n=1}^{\infty}\lambda^{n}A_{n}(p),~{}~{}B(p,\lambda)=\sum_{n=1}^{\infty}\lambda^{n}B_{n}(p),~{}~{}C(t,p,\lambda)=\sum_{n=1}^{\infty}\lambda^{n}C_{n}(t,p)$$ (4.10) and the functions $C_{n}(t,p)$ vanish as $t\to\infty$.
In the case $<p|V|p^{\prime}>=0$ one has $A_{1}=0$, $B_{1}=0$, and the low order terms in these series are given by (4.3), (4.4) and $$C_{2}(t,p)=(e^{itH_{0}}Ve^{-itH_{0}}\Gamma^{2}(V))_{1,1}(p).$$ (4.11) We will give the proof of (4.2) in the next section using the methods of scattering theory. This proof assumes that there is no decay. Remark 3. Representation (4.2) means a special dependence of $U_{1,1}(t,\lambda,p)$ on $t$. Namely, it says that the $(2n)$-th order in $\lambda$ of $U_{1,1}(t,\lambda,p)$, $$U_{1,1}(t,\lambda,p)=\sum_{n}\lambda^{2n}U^{(2n)}_{1,1}(t,p),$$ can be represented as $$U^{(2n)}_{1,1}(t,p)=\sum_{k=0}^{n}t^{k}~{}U^{(2n,k)}_{1,1}(p)~{}+~{}r_{n}(t,p)$$ (4.12) with $r_{n}(t)\to 0$ as $t\to\infty$. Remark 4. Performing the stochastic limit we get $$U_{1,1}(\tau/\lambda^{2},p)\to\exp i\tau A_{2}(p)~{}~{}\mbox{as }~{}~{}\lambda\to 0,$$ (4.13) i.e. the stochastic limit of the one particle matrix elements of the evolution operator, $U(\tau/\lambda^{2})$, for a translation invariant Hamiltonian without vacuum polarization is given by the simple second order diagram (Fig.9). This result holds for an arbitrary interaction (2.4), and the answer can be represented in the form $$U_{1,1}(\tau/\lambda^{2},p)\to\exp\{i\tau(V\Gamma(V))^{(c)}_{1,1}(p)\}~{}~{}\mbox{as }~{}~{}\lambda\to 0.$$ (4.14) Here the symbol $(...)^{(c)}_{1,1}$ means that we keep only the connected diagrams in $V\Gamma(V)$ with one creation and one annihilation operator corresponding to the same type of particles. Remark 5. There are two sources of corrections to the stochastic limit. One is the terms in the second order approximation $U^{(2)}_{1,1}(t)$ which do not depend on $t$; they correspond to the diagram in Fig.9. There are also corrections to the leading terms of the stochastic limit which come from 4-th order terms with a linear dependence on $t$. These corrections come from the 4-th order 1PI diagrams such as those shown in Fig.5.
(see the Appendix for details, where we show that the contributions of these diagrams include terms proportional to $t$ and after rescaling produce a factor $\tau\lambda^{2}$): $$U_{1,1}(\tau/\lambda^{2},p)\to\exp\{i\tau A_{2}(p)\}[1+\lambda^{2}(i\tau A_{4}(p)+B_{2}(p))]~{}~{}\mbox{as }~{}~{}\lambda\to 0.$$ (4.15) Remark 6. Let us note that $A_{2}(p)$ in (4.3) is positive in the case when we assume that there is no decay in the model. For the Hamiltonian (4.8) the explicit expression for $A_{2}(p)$ is $$A_{2}(p)=\int dq(\overline{v}\circ\Gamma v)(p,q),$$ (4.16) where $$(\overline{v}\circ\Gamma v)(p,q)=\frac{|v(p,q)|^{2}}{\omega(p-q)+\omega(q)-\omega(p)}$$ (4.17) We see that for the typical example $\omega(q)=\sqrt{q^{2}+m^{2}}$ the denominator in (4.17) does not vanish. In the case of different particles, say, 3 different particles (the Lee model) $$H=\int\omega_{a}(p)a^{+}(p)a(p)dp+\int\omega_{b}(p)b^{+}(p)b(p)dp+\int\omega_{c}(p)c^{+}(p)c(p)dp~{}+$$ (4.18) $$\lambda\int dpdq\,v(p,q)(a^{+}(q)b^{+}(p-q)c(p)+c.c.)$$ we have the $\epsilon$-prescription for the denominator $$A_{2}(p)=\int dq~{}\frac{|v(p,q)|^{2}}{\omega_{a}(p-q)+\omega_{b}(q)-\omega_{c}(p)+i\epsilon}$$ (4.19) and if there is a decay, then $A_{2}(p)$ acquires an imaginary part. Proof. Let us now prove Theorem 5. Let us first make the calculations up to second order (an explicit calculation of $U_{1,1}(t,p,\lambda)$ up to 4-th order is presented in Appendix B). There is one second-order diagram (Fig.4), and it gives: $${\cal U}_{2}(t,p,q)=(-i\lambda)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}|v(p,q)|^{2}e^{iE_{2}(-t_{1}+t_{2})}=$$ $$it\lambda^{2}(\overline{v}\circ\Gamma v)(p,q)-\lambda^{2}(\overline{v}\circ\Gamma^{2}v)(p,q)(1-e^{iE_{1}t})$$ (4.20) where $E_{1}=-E_{2},E_{2}=\omega(q)+\omega(p-q)-\omega(p)$.
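The no-decay condition used above, namely that the denominator in (4.17) stays strictly positive for the massive dispersion law, follows from the two-particle threshold bound $\omega(p-q)+\omega(q)\geq\sqrt{|p|^{2}+4m^{2}}>\omega(p)$. A quick numerical spot-check of this bound (a sketch outside the paper's formalism; the mass value and sampling ranges are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1.0  # particle mass, m > 0 (arbitrary test value)

def omega(k):
    """Relativistic dispersion omega(k) = sqrt(|k|^2 + m^2)."""
    return np.sqrt(np.sum(k * k, axis=-1) + m * m)

# Sample random momenta p, q in R^3 and check that the energy denominator
# E2 = omega(p - q) + omega(q) - omega(p) from (4.17), (4.20) is positive.
p = rng.normal(scale=5.0, size=(10_000, 3))
q = rng.normal(scale=5.0, size=(10_000, 3))
E2 = omega(p - q) + omega(q) - omega(p)
assert np.all(E2 > 0)
# The two-particle threshold (the minimum over q, attained at q = p/2)
# bounds E2 from below:
assert np.all(E2 >= np.sqrt(np.sum(p * p, axis=-1) + 4 * m * m) - omega(p) - 1e-9)
```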
Here and below, for simplicity of writing, only the kernel ${\cal U}_{2n}(t,p,q_{1},...)$ of $U_{1,1}(t,p)$ is written, $$U^{(2n)}_{1,1}(t,p)=\int dq_{1}...dq_{r}{\cal U}_{2n}(t,p,q_{1},...q_{r}).$$ (4.21) In the case of a general Hamiltonian, instead of (4.20) one has $$U^{(2)}_{1,1}(t,p)=it\lambda^{2}(V\Gamma(V))_{1,1}(p)-\lambda^{2}(V\Gamma^{2}(V))_{1,1}(p)+\lambda^{2}(e^{iH_{0}t}Ve^{-iH_{0}t}\Gamma^{2}(V))_{1,1}(p).$$ (4.22) We will give the proof by induction. We have already proved the claim of the theorem at the second order of perturbation theory. Let us suppose that the 1PR diagrams including $n$ irreducible second order mass insertions (see Fig.7) have the following behaviour $$U^{1P(n)R}_{1,1~{}(2n)}(t)(p)=\frac{(it)^{n}}{n!}A^{n}_{2}(p)+\frac{n(it)^{n-1}}{(n-1)!}B_{2}(p)A^{n-1}_{2}(p)+....$$ (4.23) Here the lower index denotes the order in $\lambda$, the index $n$ in the superscript denotes the number of irreducible insertions, and the dots denote terms of lower order in $t$. We also assume that the 1PR diagrams which include $(n-2)$ irreducible second order mass insertions and one irreducible 4-th order mass insertion (see Fig.8) have the following behaviour $$U^{1P(n-1)R}_{2n}(t)(p)=\frac{(it)^{n-1}}{(n-2)!}A^{1PI}_{4}(p)A^{n-2}_{2}(p)+....$$ (4.24) The 1PR diagrams with $(n+1)$ irreducible second order mass insertions (see Fig.7) have the following representation $$U^{1P(n+1)R}_{2(n+1)}(t,p)=(-i)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}\int dq|v(p,q)|^{2}e^{iE_{2}(-t_{1}+t_{2})}U^{1P(n)R}_{2n}(t_{2},p),$$ (4.25) Using the assumption (4.23) and $$\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}e^{iE(-t_{1}+t_{2})}t^{n}_{2}=\frac{t^{n+1}}{(n+1)(iE)}-\frac{t^{n}}{(iE)^{2}}+n\frac{t^{n-1}}{(iE)^{3}}+...$$ (4.26) we see that the RHS of (4.25) has the following behaviour $$(-i)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}\int dq|v(p,q)|^{2}e^{iE_{2}(-t_{1}+t_{2})}~{}[~{}\frac{(it_{2})^{n}}{n!}A^{n}_{2}(p)+\frac{n(it_{2})^{n-1}}{(n-1)!}B_{2}(p)A^{n-1}_{2}(p)+...]=$$
$$(i)^{2}\int dq\{\frac{(it)^{n+1}}{i(n+1)!}\frac{|v(p,q)|^{2}}{iE_{2}}A^{n}_{2}(p)-\frac{(it)^{n}}{n!}\frac{|v(p,q)|^{2}}{(iE_{2})^{2}}A^{n}_{2}(p)+n\frac{(it)^{n}}{in!}\frac{|v(p,q)|^{2}}{iE_{2}}B_{2}(p)A^{n-1}_{2}(p)+....\},$$ i.e. the contribution of the 1PR diagrams with $(n+1)$ irreducible second order parts (see Fig.7) has the following asymptotics $$U^{1P(n+1)R}_{2(n+1)}(t,p)=\frac{(it)^{n+1}}{(n+1)!}A^{n+1}_{2}(p)+(n+1)\frac{(it)^{n}}{n!}B_{2}(p)A^{n}_{2}(p)+...$$ (4.27) Here $$A_{2}(p)=\int dq\frac{|v(p,q)|^{2}}{E_{2}},~{}~{}~{}B_{2}(p)=-\int dq\frac{|v(p,q)|^{2}}{E_{2}^{2}},$$ (4.28) or, in the more general case, $$A_{2}(p)=(V\Gamma(V))_{1,1}^{1PI}(p),~{}~{}~{}B_{2}(p)=-(V\Gamma^{2}(V))_{1,1}^{1PI}(p)$$ (4.29) Therefore, (4.27) is in accordance with (4.23), i.e. (4.23) is proved by induction. The 1PR diagrams including $(n-1)$ irreducible second order mass insertions and one irreducible 4-th order mass insertion (see Fig.8) have the following representation $$U^{1P(n)R}_{2(n+1)}(t,p)=(-i)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}\int dq|v(p,q)|^{2}e^{iE_{2}(-t_{1}+t_{2})}U^{1P(n-1)R}_{2n}(t_{2},p)+$$ (4.30) $$(-i)^{4}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}\int^{t_{2}}_{0}dt_{3}\int^{t_{3}}_{0}dt_{4}\int dqdq_{1}(\overline{v}\circ\overline{v}\circ v\circ v)_{1,1}^{(1PI)}(p,q,q_{1})~{}e^{i\sum_{i=1}^{4}E_{i}t_{i}}~{}U^{1P(n-1)R}_{2(n-1)}(t_{4},p)$$ Using $$\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}\int^{t_{2}}_{0}dt_{3}\int^{t_{3}}_{0}dt_{4}~{}t^{n}_{4}~{}e^{i\sum_{i=1}^{4}E_{i}t_{i}}=\frac{1}{i^{3}}\frac{t^{n+1}}{n+1}\frac{1}{(E_{4})(E_{4}+E_{3})(E_{4}+E_{3}+E_{2})}+...,$$ (4.31) (4.26) and the assumptions (4.23) and (4.24), we see that the RHS of (4.30) has the following behaviour $$(i)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}\int dq|v(p,q)|^{2}e^{iE_{2}(-t_{1}+t_{2})}[\frac{(it_{2})^{n-1}}{(n-2)!}A^{1PI}_{4}(p)A^{n-2}_{2}(p)+....]+$$ $$(i)^{4}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}\int^{t_{2}}_{0}dt_{3}\int^{t_{3}}_{0}dt_{4}dqdq_{1}(\overline{v}\circ\overline{v}\circ v\circ v)_{1,1}^{1PI}(p,q,q_{1})e^{i\sum_{i=1}^{4}E_{i}t_{i}}~{}[~{}\frac{(it_{4})^{n-1}}{(n-1)!}A^{n-1}_{2}(p)+...]=$$ $$=\frac{(it)^{n}}{(n-2)!n}\int dq(\overline{v}\circ\Gamma(v))_{1,1}^{1PI}(p,q)A^{1PI}_{4}A_{2}^{n-2}+i\frac{(it)^{n}}{n!}\int dqdq_{1}(\overline{v}\circ\Gamma(\overline{v}\circ\Gamma(v\Gamma(v))))_{1,1}^{1PI}(p,q,q_{1})A_{2}^{n-1},$$ Here $$A^{1PI}_{4}(p)=\int dqdq_{1}(\overline{v}\circ\Gamma(\overline{v}\circ\Gamma(v\Gamma(v))))_{1,1}^{1PI}(p,q,q_{1})$$ (4.32) or, more generally, $$A^{1PI}_{4}(p)=(V\Gamma(V\Gamma(V\Gamma(V))))_{1,1}^{1PI}(p)$$ (4.33) Therefore, $$U^{1P(n)R}_{2(n+1)}(t,p)=\frac{(it)^{n}}{(n-1)!}A^{1PI}_{4}(p)A_{2}^{n-1}(p)+...,$$ which is in accordance with the assumption (4.24). It is evident that the behaviour (4.23) and (4.24) implies (4.2), with $A_{2}$ and $B_{2}$ as in the formulation of Theorem 5. 4.1 Wave operators and the main formula In this section we show how spectral theory and renormalized wave operators can be used for the derivation of the main formula. In particular, explicit recursive relations for the parameters $A_{n},B_{n}$ and $C_{n}(t)$ will be obtained. Note that for this derivation we have to assume that there is no decay. The intertwining operator $T$ is defined as a solution of the following equation $$HT=T(H_{0}+M),$$ (4.34) Here $M$ has the form $$M=\int m(p)a^{*}(p)a(p)dp$$ (4.35) The operator $T$ plays a crucial role in scattering theory. Its singular part defines the renormalized wave operators. The renormalized wave operators also give a solution of the intertwining condition. Taking $T$ in the following form $$T=:\exp W:,~{}~{}~{}W=\Gamma(Q)$$ (4.36) one gets [24] equations defining $Q$ and $M$.
$$Q+V\,/\,T-(V\,/\,T)_{1,1}-W-\circ-M=0$$ (4.37) $$M=(V\,/\,T)_{1,1}$$ (4.38) Here the symbol $/$ means that for connected $A$, in the operator $A\,/\,B$ all connected parts of $B$ are paired with $A$. If $B$ is connected, then $A\,/\,B=A-\circ-B=(AB)_{c}$. For a special form of the interaction, when $V_{0I}=V_{I0}=0$, we can write $M=(V\,/\,T)_{1,1}=(V\Gamma(Q))_{1,1}$. Expanding $M$ and $Q$ in power series in $\lambda$, $$M=\sum_{n=1}\lambda^{2n}M_{2n},~{}~{}Q=\sum_{n=1}\lambda^{n}Q_{n},$$ (4.39) we get recursive relations defining $M_{2n}$ and $Q_{n}$. Let us compute explicitly the first terms solving these equations.
We obtain $$Q_{1}=-V$$ (4.40) $$M_{2}=-(V\Gamma V)_{1,1}$$ (4.41) $$Q_{2}=(V\Gamma V)_{c}-(V\Gamma V)_{1,1}$$ (4.42) $$Q_{3}=-(V\Gamma_{r}(V\Gamma(V)))_{c}-\frac{1}{2}V\,/\,:\Gamma(V)^{2}:+\Gamma V-\circ-M_{2}$$ (4.43) $$M_{4}=-(V\Gamma(V\Gamma_{r}(V\Gamma(V))))_{1,1}+(V\Gamma^{2}(V)M_{2})_{1,1}$$ (4.44) or $$M_{4}=-(V\Gamma(V\Gamma_{r}(V\Gamma(V))))_{1,1}-(V\Gamma^{2}(V)(V\Gamma(V))_{1,1})_{1,1}$$ (4.45) Here we use the notation $$\Gamma_{r}(Q)=\Gamma(Q-Q_{1,1})$$ (4.46) One can construct in perturbation theory an operator $Z$ such that $$TT^{*}Z=1$$ (4.47) In the second order in $\lambda$, $$<p|Z|p^{\prime}>=(1+\lambda^{2}B_{2}(p))\delta(p-p^{\prime})$$ (4.48) Therefore one has $$U(t)=e^{itH_{0}}e^{-itH}=e^{-itM}e^{it(H_{0}+M)}Te^{-it(H_{0}+M)}T^{*}Z$$ (4.49) By taking the one particle matrix element of the equality (4.49), $$<p|U(t)|p^{\prime}>=e^{-iM(p,\lambda)t}Z(p,\lambda)(1+C(p,t,\lambda))\delta(p-p^{\prime})$$ (4.50) where $$(1+C(t,p,\lambda))\delta(p-p^{\prime})=<p|e^{it(H_{0}+M)}Te^{-it(H_{0}+M)}T^{*}|p^{\prime}>$$ (4.51) and $M$ and $T$ should be computed from the recursive relations. We have proved the following Theorem 6. For translation invariant Hamiltonians without vacuum polarization, the one particle matrix elements of the evolution operator are given by formula (4.50), where the functions $M(p,\lambda)$ and $Z(p,\lambda)$ are solutions of equations (4.36)-(4.38) and (4.47). The function $C(t,p,\lambda)$ is defined by (4.51), and it vanishes as $t\to\infty$. 5 Conclusion We have obtained in this paper explicit representations for the vacuum and one-particle matrix elements of the evolution operator. By using these representations we have computed corrections to the known results for the large time exponential behaviour of these matrix elements.
This opens the way for further investigations of the large time behaviour in quantum theory. In particular, the problems of quantum decoherence and decay require further study using these methods. Acknowledgments. This work is supported in part by INTAS grant 96-0698; I.Ya.A. is supported also by RFFI-99-01-00166, and I.V.V. by RFFI-99-01-00105 and by the grant for the leading scientific schools. APPENDIX Appendix A 4-th order for a Hamiltonian with vacuum polarization As an illustration of the form (3.47) of the vacuum expectation value of the evolution operator, we present in Fig.9 all contributions to the 4-th order of perturbation theory for the vacuum expectation value of the evolution operator. For simplicity we consider the interaction in the form $$V=\int v(p_{1},p_{2},p_{3})(a^{*}(p_{1})+a(p_{1}))(a^{*}(p_{2})+a(p_{2}))(a^{*}(p_{3})+a(p_{3}))dp_{1}dp_{2}dp_{3}$$ (A.1) In this example we see once again that only one of the terms represented in Fig.9 depends linearly on $t$. This term is represented by the bracket-diagram in which all right brackets lie to the right of the last vertex.
$~{}~{}~{}~{}~{}~{}$Diagrams and the corresponding order of application of the $\Gamma$-operators, exponential factors and denominators: $T<0|V\Gamma(V\Gamma(V\Gamma(V)))|0>=$ $~{}$ $T\int dp...\frac{(\overline{v}\circ\overline{v}\circ v\circ v)(p,..)}{(E_{1}+E_{2}+E_{3}+E_{4})(E_{2}+E_{3}+E_{4})(E_{3}+E_{4})E_{4}}$ $-\int dp...[e^{i(E_{1}+E_{2}+E_{3})T}-1]\cdot\Gamma(v\circ\Gamma(v\circ)\Gamma(v\circ))\Gamma(v)(p,...)=$ $~{}$ $-\int dp...[e^{i(E_{1}+E_{2}+E_{3})T}-1]\frac{(\overline{v}\circ\overline{v}\circ v\circ v)(p,..)}{(E_{1}+E_{2}+E_{3}+io)(E_{2}+E_{3}+io)(E_{3}+io)E_{4}}$ $-\int dp...[e^{i(E_{1}+E_{2})T}-1]\cdot\Gamma(\overline{v}\circ\Gamma(\overline{v}\circ))\Gamma(v\circ\Gamma(v))(p,...)$ $~{}$ $-\int dp...[e^{i(E_{1}+E_{2})T}-1]\cdot\frac{(\overline{v}\circ\overline{v}\circ v\circ v)(p,..)}{(E_{1}+E_{2}+io)(E_{2}+io)(E_{3}+E_{4}+io)E_{4}}$ $-\int dp...[e^{i(E_{1})T}-1]\Gamma(\overline{v}\circ)\Gamma(\overline{v}\circ\Gamma(\overline{v}\circ\Gamma(v)))(p,..)=$ $~{}$ $-\int dp...[e^{i(E_{1})T}-1]\cdot\frac{(\overline{v}\circ\overline{v}\circ v\circ v)(p,..)}{(E_{1}+io)(E_{2}+E_{3}+E_{4}+io)(E_{3}+E_{4}+io)E_{4}}$ $-\int dp...[e^{i(E_{1}+E_{2})T}-1]\cdot\Gamma(\overline{v}\circ\Gamma(\overline{v}\circ))\Gamma(v\circ)\Gamma(v)(p,..)=$ $~{}$ $-\int dp...[e^{i(E_{1}+E_{2})T}-1]\cdot\frac{(\overline{v}\circ\overline{v}\circ v\circ v)(p,..)}{(E_{1}+E_{2}+io)(E_{2}+io)(E_{3}+io)E_{4}}$ $\int dp...[e^{iE_{1}T}-1]\cdot\Gamma(\overline{v}\circ)\Gamma(\overline{v}\circ\Gamma(v\circ))\Gamma(v)(p,..)=$ $~{}$ $\int dp...[e^{iE_{1}T}-1]\cdot\frac{(\overline{v}\circ\overline{v}\circ v\circ v)(p,..)}{E_{1}(E_{2}+E_{3}+io)(E_{3}+io)E_{4}}$ $\int dp...[e^{iE_{1}T}-1]\cdot\Gamma(\overline{v}\circ)\Gamma(\overline{v}\circ)\Gamma(v\circ\Gamma(v))(p,..)$ $~{}$ $\int dp...[e^{iE_{1}T}-1]\cdot\frac{(\overline{v}\circ\overline{v}\circ v\circ v)(p,..)}{E_{1}(E_{2}+io)(E_{3}+E_{4}+io)E_{4}}$ $-\int
dp...[e^{iE_{1}T}-1]\cdot\Gamma(\overline{v}\circ)\Gamma(\overline{v}\circ)\Gamma(v\circ)\Gamma(v)(p,..)$ $~{}$ $-\int dp...[e^{iE_{1}T}-1]\cdot\frac{(\overline{v}\circ\overline{v}\circ v\circ v)(p,..)}{E_{1}(E_{2}+io)(E_{3}+io)E_{4}}$ Appendix B 4-th order of the one particle expectation value of the evolution operator for a translation invariant interaction Here we prove the representation (4.2) - (4.6) for one particle matrix elements of the evolution operator for the translation invariant Hamiltonian (2.4) without vacuum polarization in the 4-th order of perturbation theory. We obtain $$A_{4}(p)=A_{4}^{1PI}(p)+A_{4}^{1PR}(p),$$ $$A_{4}^{1PI}(p)=(V\Gamma(V\Gamma(V\Gamma(V))))^{1PI}_{1,1}(p),$$ (B.1) $$A_{4}^{1PR}(p)=(V\Gamma(V)(V\Gamma^{2}(V)))^{1PR}_{1,1}(p)$$ (B.2) There are two one-particle irreducible (1PI) diagrams (Fig. 5) and one one-particle reducible (1PR) diagram (Fig. 6). The contribution of the 4-th order 1PR-diagram is $${\cal U}^{1PR}_{4}(t,p,q,k)=(-i\lambda)^{4}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}\int^{t_{2}}_{0}dt_{3}\int^{t_{3}}_{0}dt_{4}|v(p,q)|^{2}\cdot|v(p,k)|^{2}e^{iE_{1}(t_{1}-t_{2})}e^{iE_{4}(t_{4}-t_{3})}=$$ $$(-i\lambda)^{2}\int^{t}_{0}dt_{1}\int^{t_{1}}_{0}dt_{2}|v(p,k)|^{2}e^{iE_{1}(t_{1}-t_{2})}{\cal U}_{2}(t_{2},p,q)$$ Substituting here (4.20) we have $${\cal U}^{1PR}_{4}(t,p,q,k)={(it)^{2}\over 2}\,{\cal A}_{2}(p,q){\cal A}_{2}(p,k)$$ (B.3) $$-it\,[B_{2}(p,q){\cal A}_{2}(p,k)+{\cal A}_{4}^{(1PR)}(p,q,k)]+{\cal B}^{1PR}_{4}(p,q,k)+{\cal C}^{1PR}_{4}(t,p,q,k)$$ where $${\cal A}_{2}(p,q)=(\overline{v}\circ\Gamma(v))(p,q)$$ (B.4) $${\cal A}_{4}^{(1PR)}(p,q,k)=(\overline{v}\circ\Gamma(v))(p,q)(\overline{v}\circ\Gamma^{2}(v))(p,k)$$ (B.5) $${\cal B}_{4}^{(1PR)}(p,q,k)=[(\overline{v}\circ\Gamma^{3}(v))(p,k)(\overline{v}\circ\Gamma(v))(p,q)+(\overline{v}\circ\Gamma^{2}(v))(p,k)(\overline{v}\circ\Gamma^{2}(v))(p,q)]+$$ $$(\overline{v}\circ\Gamma(\Gamma^{2}(v)\circ\overline{v})\circ\Gamma(v))(p,q,k)+(\overline{v}\circ\Gamma(\Gamma(v)\circ\overline{v})\circ\Gamma^{2}(v))(p,q,k)$$ (B.6) $$C_{4}^{(1PR)}(t,p,q,k)=-e^{iE_{1}t}~{}[(\overline{v}\circ\Gamma^{3}(v))(p,k)(\overline{v}\circ\Gamma(v))(p,q)+(\overline{v}\circ\Gamma^{2}(v))(p,k)(\overline{v}\circ\Gamma^{2}(v))(p,q)]+$$ $$e^{iE_{1}t}(\overline{v}\circ\Gamma(\Gamma^{2}(v)\circ\overline{v})\circ\Gamma(v))(p,q,k)+e^{iE_{3}t}(\overline{v}\circ\Gamma(\Gamma(v)\circ\overline{v})\circ\Gamma^{2}(v))(p,q,k),$$ (B.7) here $E_{1}=\omega(p)-\omega(q)-\omega(p-q)$ and $E_{3}=\omega(p)-\omega(k)-\omega(p-k)$. (B.3) can be rewritten in a more compact form $$U^{1PR}_{1,1~{}(4)}(t,p)={(it)^{2}\over 2}\,A^{2}_{2}(p)-it\,[B_{2}(p)A_{2}(p)+A_{4}^{(1PR)}(p)]+B^{1PR}_{4}(p)+C^{1PR}_{4}(t,p)$$ (B.8) where $$A_{2}(p)=(V\Gamma(V))_{1,1}(p),~{}~{}~{}A_{4}^{(1PR)}(p)=(V\Gamma(V))_{1,1}(p)(V\Gamma^{2}(V))_{1,1}(p)$$ (B.9) $$B_{4}^{(1PR)}(p)=(V\Gamma^{3}(V))_{1,1}(p)(V\Gamma(V))_{1,1}(p)+(V\Gamma^{2}(V))_{1,1}(p)(V\Gamma^{2}(V))_{1,1}(p)+$$ (B.10) $$(V\Gamma(\Gamma^{2}(V)V)\Gamma(V))_{1,1}^{1PR}(p)+(V\Gamma(\Gamma(V)V)\Gamma^{2}(V))_{1,1}^{1PR}(p)$$ $$C_{4}^{(1PR)}(t,p)=-(e^{iH_{0}t}V\Gamma^{3}(V))_{1,1}(p)(V\Gamma(V))_{1,1}(p)-(e^{iH_{0}t}V\Gamma^{2}(V))_{1,1}(p)(V\Gamma^{2}(V))_{1,1}(p)+$$ $$(e^{iH_{0}t}V\Gamma(\Gamma^{2}(V)V)\Gamma(V))_{1,1}^{1PR}(p)+(V\Gamma(\Gamma(V)V)\Gamma^{2}(V)e^{-iH_{0}t})_{1,1}^{1PR}(p)$$ The 1PI diagrams (see Fig.5) also contribute to the 4-th order of the one-particle matrix element of the evolution operator. The leading terms corresponding to these diagrams are of order $t$. We have $$U^{1PI}_{4}(t,p)=it(V\Gamma(V\Gamma(V\Gamma(V))))_{1,1}^{1PI}(p)$$ (B.11) The above considerations prove that formula (4.2) holds up to order $t^{2}$. References [1] N.N. Bogoliubov and D.V. Shirkov, Introduction to the theory of quantum fields, Nauka, 1973 [2] S.S. Schweber, An introduction to relativistic quantum field theory, Row, Peterson and Co, N.Y. 1961. [3] K.O.
Friedrichs, Perturbation of Spectra in Hilbert Space, AMS, Providence, 1965 [4] L.D. Faddeev, Dokladi AN USSR, 152 (1963) 573 [5] K. Hepp, Theorie de la renormalisation, Springer, 1969. [6] I.Ya. Aref’eva, Teor. Mat. Fis. 14 (1973) 3 [7] J. Schwinger, Field theory of unstable particles, Ann. Phys. 9 (1960) 169-193 [8] M.L. Goldberger and K.M. Watson, Collision theory, John Wiley & Sons, Inc., New York-London-Sydney, 1964 [9] C. Cohen-Tannoudji, J. Dupont-Roc and G. Grynberg, Atom-Photon Interactions: Basic Processes and Applications, John Wiley & Sons, Inc., 1992 [10] E.L. Feinberg, A particle with non-equilibrium proper field, in: Problems of theoretical physics, Memorial volume to Igor E. Tamm, Nauka, Moscow, 1972 [11] V.A. Rubakov and M.E. Shaposhnikov, Electroweak baryon number non-conservation in the early Universe and in high-energy collisions, Uspehi Fisicheskih Nauk, 166 (1996) 493-538 [12] D.F. Walls and G.J. Milburn, Quantum Optics, Springer-Verlag, 1994. [13] W.H. Zurek, Phys. Rev. D 26 (1981) 1516 [14] I.V. Volovich, Models of Quantum Computers and Decoherence Problem, quant-ph/9902055; Proc. of the International Conference on Quantum Information, Meijo Univ., Nagoya, 4-8 Nov. 1997. [15] L. Accardi, I.Ya. Aref’eva and I.V. Volovich, Non-Equilibrium Quantum Field Theory and Entangled Commutation Relations, hep-th/9905035. [16] N.N. Bogoliubov, Problems of dynamical theory in statistical physics, Gostehizdat, 1946 [17] K.O. Friedrichs, On the perturbation of continuous spectra, Comm. Pure Appl. Math. 1 (1948) 361-406. [18] L. van Hove, Quantum mechanical perturbations giving rise to a transport equation, Physica, 21 (1955) 517-540 [19] I. Prigogine, Non-equilibrium statistical mechanics, Pergamon, 1963 [20] D.N. Zubarev, Non-equilibrium statistical thermodynamics, Nauka, Moscow, 1971 [21] E.M. Lifshitz and L.P. Pitaevskii, Physical kinetics, Nauka, Moscow, 1979 [22] L. Accardi, S.V. Kozyrev, I.V.
Volovich, Dynamics of dissipative two–state systems in the stochastic approximation, Phys. Rev. A 57 N. 3 (1997); quant-ph/9706021 [23] L. Accardi, Y.G. Lu and I.V.Volovich, Quantum theory and its stochastic limit, Springer, (in press) [24] I.Ya. Aref’eva, Teor. Mat. Fis. 15 (1973) 207.
Some qualitative studies of the focusing inhomogeneous Gross-Pitaevskii equation Alex H. Ardila ICEx, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627, Caixa Postal 702, 30123-970, Belo Horizonte-MG, Brazil [email protected]  and  Van Duong Dinh Institut de Mathématiques de Toulouse UMR5219, Université Toulouse CNRS, 31062 Toulouse Cedex 9, France and Department of Mathematics, HCMC University of Pedagogy, 280 An Duong Vuong, Ho Chi Minh, Vietnam [email protected] Abstract. We study the Cauchy problem for an inhomogeneous Gross-Pitaevskii equation. We first derive a sharp threshold for global existence and blow-up of solutions. Then we construct and classify finite time blow-up solutions at the minimal mass threshold. Additionally, using variational techniques, we study the existence and the orbital stability and instability of standing waves. Key words and phrases: Inhomogeneous NLS, ground states, stability, instability, blow up 2010 Mathematics Subject Classification: 35Q55; 35Q40 1. Introduction In this paper, we give some results concerning the Cauchy problem and the dynamics for a nonlinear inhomogeneous Gross-Pitaevskii equation of the following form: (1.1) $$\begin{cases}i\partial_{t}u+\Delta u-\gamma^{2}|x|^{2}u+|x|^{-b}|u|^{p-1}u=0,\\ u(x,0)=u_{0},\end{cases}$$ where $\gamma>0$, $u=u(x,t)$ is a complex-valued function of $(x,t)\in\mathbb{R}^{N}\times\mathbb{R}$, $N\geq 1$, $0<b<\min\left\{2,N\right\}$ and $1<p<2^{\circ}$. Here, $2^{\circ}$ is defined by ${2}^{\circ}=1+\frac{4-2b}{N-2}$ if $N\geq 3$, and $2^{\circ}=\infty$ if $N=1$, $2$. The Schrödinger equation (1.1) arises as a model in various physical contexts describing nonlinear waves, such as the propagation of a laser beam in an optical fiber. In particular, it models Bose-Einstein condensates with attractive interparticle interactions confined by a magnetic trap.
The term $\gamma^{2}|x|^{2}$ is the isotropic harmonic potential, modelling a magnetic field whose role is to confine the movement of the particles. The inhomogeneous nonlinearity $|x|^{-b}|u|^{p-1}u$ describes the attractive interaction between particles. When $b>0$, it can be thought of as modeling inhomogeneities in the medium in which the wave propagates; we refer the reader to [1, 2] for more information on the related physical background. In recent years, this type of equation has attracted the attention of numerous researchers due to its significance in theory and applications, see [7, 8, 19, 20, 31, 25, 11, 10, 12]. In the absence of the harmonic potential, i.e., for (1.1) with $\gamma=0$, we refer the reader to [13, 19, 20, 21, 31, 11, 10, 12] for more information. In the classical case $b=0$, many authors have studied the problem of stability of standing waves, see [5, 16, 17, 15, 29, 30]. On the other hand, if $\gamma>0$ and $b<0$, problem (1.1) was treated in [7, 8, 23, 9, 24]. If $\gamma>0$ and $b>0$, to the best of our knowledge, there are no results concerning the Cauchy problem and the dynamics for (1.1). By [20, Appendix K] and [6, Theorem 9.2.6] we obtain local well-posedness in time for the Cauchy problem (1.1) in the space $$\Sigma(\mathbb{R}^{N}):=\left\{u\in H^{1}(\mathbb{R}^{N}):|x|u\in L^{2}(\mathbb{R}^{N})\right\},$$ equipped with the norm $$\|u\|^{2}_{\Sigma}=\int_{\mathbb{R}^{N}}\left(\left|\nabla u\right|^{2}+|x|^{2}|u|^{2}\right)dx.$$ More precisely, we have the following proposition. Proposition 1.1. For every $u_{0}\in\Sigma(\mathbb{R}^{N})$ there exists a unique maximal solution of the Cauchy problem (1.1), defined on $[0,T)$ with $T\in(0,\infty]$, such that $u(0)=u_{0}$ and $u\in C([0,T),\Sigma(\mathbb{R}^{N}))$. If $T=\infty$, $u$ is called a global solution. If $T<\infty$, $u$ is said to blow up in finite time, and $\lim_{t\rightarrow T}\|\nabla u(t)\|^{2}_{L^{2}}=\infty$.
Moreover, we have the conservation of energy and charge: for every $t\in[0,T)$, $$E(u(t))=E(u_{0})\quad\mbox{and}\quad\|u(t)\|^{2}_{L^{2}}=\|u_{0}\|^{2}_{L^{2}},$$ where (1.2) $$E(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{\gamma^{2}}{2}\int_{\mathbb{R}^{N}}|x|^{2}|u|^{2}dx-\frac{1}{p+1}\int_{\mathbb{R}^{N}}|x|^{-b}|u|^{p+1}dx.$$ We remark that if $1<p<1+\frac{4-2b}{N}$, then we have global existence for the Cauchy problem (1.1) in $\Sigma(\mathbb{R}^{N})$. Indeed, let $u$ be a solution of (1.1) as in Proposition 1.1. From the Gagliardo-Nirenberg inequality (see [20, 13]) (1.3) $$\int_{\mathbb{R}^{N}}|x|^{-b}|u|^{p+1}dx\leq C\|\nabla u\|^{\frac{N(p-1)}{2}+b}_{L^{2}}\|u\|^{p+1-\frac{N(p-1)}{2}-b}_{L^{2}},$$ we have that $$E(u(t))\geq\|\nabla u(t)\|^{2}_{L^{2}}\left(\frac{1}{2}-C\|\nabla u(t)\|^{\frac{N(p-1)}{2}+b-2}_{L^{2}}\|u(t)\|^{p+1-\frac{N(p-1)}{2}-b}_{L^{2}}\right).$$ Since $1<p<1+\frac{4-2b}{N}$, in view of the conservation of energy and charge, we see that $\|\nabla u(t)\|^{2}_{L^{2}}$ is bounded; that is, (1.1) is globally well-posed. On the other hand, assume that $p\geq 1+\frac{4-2b}{N}$ and let $u_{0}\in\Sigma(\mathbb{R}^{N})$. From Lemma 2.2 below we see that if $E(u_{0})<0$, then the solution $u$ of the Cauchy problem (1.1) corresponding to $u_{0}$ blows up in finite time. In the case $p=1+\frac{4-2b}{N}$, we are motivated to investigate sharp sufficient conditions for global existence of solutions of the Cauchy problem (1.1). Let $Q$ denote the unique (up to symmetries) positive radial solution of the following elliptic equation (see [11, 19]) (1.4) $$-\Delta Q+Q-|x|^{-b}|Q|^{\frac{4-2b}{N}}Q=0.$$ From [19], we have that $\frac{N}{2+N-b}\|Q\|^{\frac{4-2b}{N}}_{L^{2}}$ is the minimum of the Weinstein functional $$J(u)=\frac{\|\nabla u\|^{2}_{L^{2}}\|u\|^{\frac{4-2b}{N}}_{L^{2}}}{\int_{\mathbb{R}^{N}}|x|^{-b}|u|^{\frac{4-2b}{N}+2}dx}.$$ Following the argument of Zhang [28], we have Theorem 1.2. Let $p=1+\frac{4-2b}{N}$.
Assume that $u_{0}\in\Sigma(\mathbb{R}^{N})$. (i) If $u_{0}$ satisfies $\|u_{0}\|_{L^{2}}<\|Q\|_{L^{2}}$, then the corresponding solution $u(x,t)$ of the Cauchy problem (1.1) given in Proposition 1.1 exists globally in time. (ii) For any $\lambda>0$ and any complex number $c$ satisfying $|c|\geq 1$, if we take initial data $u_{0}=c\lambda^{\frac{N}{2}}Q(\lambda x)$, then $\|u_{0}\|_{L^{2}}\geq\|Q\|_{L^{2}}$ and the corresponding solution $u(x,t)$ of the Cauchy problem (1.1) blows up in finite time. Notice that if $1<p<1+\frac{4-2b}{N}$, then we have global well-posedness of the Cauchy problem (1.1). On the other hand, from Theorem 1.2, when $p=1+\frac{4-2b}{N}$, all solutions with mass strictly below $\|Q\|_{L^{2}}$ are global, while for every mass greater than or equal to $\|Q\|_{L^{2}}$ there exist solutions of (1.1) that collapse in finite time. So, in the case $p=1+\frac{4-2b}{N}$, we call $\|Q\|_{L^{2}}$ the critical mass for (1.1). Let us consider the function (1.5) $$S_{\beta,\theta_{0}}(x,t)=e^{i\theta_{0}}e^{i\frac{\beta^{2}}{t}}e^{-i\frac{|x|^{2}}{4t}}\left(\frac{\beta}{t}\right)^{\frac{N}{2}}Q\left(\frac{\beta x}{t}\right),$$ where $t>0$, $\beta,\theta_{0}\in\mathbb{R}$ and $Q$ is defined by (1.4). In the next result, inspired by the work of R. Carles [4], we classify finite time blow-up solutions at the minimal mass threshold. Theorem 1.3. Let $p=1+\frac{4-2b}{N}$ and $\gamma>0$. Assume that $u$ is a critical mass solution of (1.1) which blows up in finite time $0<T<\frac{\pi}{4\gamma}$, that is, $\|u_{0}\|_{L^{2}}=\|Q\|_{L^{2}}$ and $\lim_{t\rightarrow T}\|\nabla u(t)\|_{L^{2}}=+\infty$. 
Then there exist $\theta_{0}\in\mathbb{R}$ and $\lambda_{0}>0$ such that $$u(t)=\left(\frac{1}{\cos 2\gamma t}\right)^{\frac{N}{2}}e^{-i\frac{\gamma}{2}|x|^{2}\tan 2\gamma t}S_{\lambda_{0},\theta_{0}}\left(\frac{x}{\cos 2\gamma t},\frac{\sin 2\gamma(T-t)}{2\gamma\cos 2\gamma T\,\cos 2\gamma t}\right)$$ for every $t\in[0,T)$, where $S_{\lambda_{0},\theta_{0}}$ is defined in (1.5). In particular, with the change of variable $\beta_{0}=\lambda_{0}\cos 2\gamma T$, we see that the initial data is of the form $$u_{0}(x)=e^{i\theta_{0}}e^{i\frac{4\gamma\beta_{0}^{2}}{\sin 4\gamma T}}e^{-i\frac{\gamma}{2}|x|^{2}\cot 2\gamma T}\left(\frac{2\gamma\beta_{0}}{\sin 2\gamma T}\right)^{\frac{N}{2}}Q\left(\frac{2\gamma\beta_{0}}{\sin 2\gamma T}\,x\right).$$ Remark 1.4. (i) Obviously we can prove a result similar to Theorem 1.3 in the case where $u$ is a critical mass solution of (1.1) which blows up in the past, i.e., for $-{\pi}/{4\gamma}<T<0$. (ii) Let $u$ satisfy the hypotheses of Theorem 1.3. Since $Q$ is spherically symmetric, it is not difficult to show that the function $u$ satisfies the relation $u(x,t+\frac{n\pi}{2\gamma})=u(x,t)$ for every $n\in\mathbb{N}$ and $0\leq t<T$. This implies, by using a time translation and (i), that if $u(t)$ does not collapse in finite time $0<T<{\pi}/{2\gamma}$, then it will never collapse in the future. By a standing wave, we mean a solution of (1.1) of the form $u(x,t)=e^{i\omega t}\varphi(x)$ with $\omega\in\mathbb{R}$ and $\varphi$ satisfying the nonlinear elliptic problem (1.6) $$\begin{cases}-\Delta\varphi+\omega\varphi+\gamma^{2}|x|^{2}\varphi-|x|^{-b}|\varphi|^{p-1}\varphi=0,\\ \varphi\in\Sigma(\mathbb{R}^{N})\setminus\left\{0\right\}.\end{cases}$$ We recall that $\lambda_{1}=\gamma N$ is the simple first eigenvalue of the $N$-dimensional harmonic oscillator $-\Delta+\gamma^{2}|x|^{2}$. 
More precisely, (1.7) $$\gamma N=\inf\left\{\|\nabla u\|_{L^{2}}^{2}+\gamma^{2}\|xu\|_{L^{2}}^{2}:u\in\Sigma(\mathbb{R}^{N}),\|u\|_{L^{2}}^{2}=1\right\}.$$ The eigenfunction corresponding to $\lambda_{1}$ is (1.8) $$\Phi(x):=\pi^{-\frac{N}{2}}e^{-\gamma\frac{|x|^{2}}{2}}$$ and we have the inequality (1.9) $$\gamma N\|u\|^{2}_{L^{2}}\leq\|\nabla u\|_{L^{2}}^{2}+\gamma^{2}\|xu\|_{L^{2}}^{2}.$$ Notice that if $\omega\leq-\gamma N$, then the problem (1.6) does not admit positive solutions. Indeed, suppose that $\varphi$ is a positive solution of (1.6). Multiplying (1.6) by the function $\Phi$ defined above and integrating, we infer $$(\omega+\gamma N)\int_{\mathbb{R}^{N}}\varphi(x)\Phi(x)\,dx=\int_{\mathbb{R}^{N}}|x|^{-b}\varphi^{p}(x)\Phi(x)\,dx>0.$$ Thus $\omega>-\gamma N$. On the other hand, since the embedding $\Sigma(\mathbb{R}^{N})\hookrightarrow L^{r+1}(\mathbb{R}^{N})$ is compact, where $1\leq r<1+4/(N-2)$ ($N\geq 3$) and $1\leq r<\infty$ ($N=1,2$), there exists at least one solution $\varphi\in C(\mathbb{R}^{N})\cap C^{2}(\mathbb{R}^{N}\setminus\left\{0\right\})$ of (1.6) that is spherically symmetric and positive. Indeed, let $\omega>-\gamma N$. We denote $$\|u\|^{2}_{H_{\omega}}:=\|\nabla u\|^{2}_{L^{2}}+\gamma^{2}\|xu\|^{2}_{L^{2}}+\omega\|u\|^{2}_{L^{2}},\qquad P(u):=\int_{\mathbb{R}^{N}}|x|^{-b}|u|^{p+1}dx.$$ By (1.9), we have for every $\omega>-\gamma N$ that $\|u\|^{2}_{H_{\omega}}\sim\|u\|^{2}_{\Sigma}$. We define the functionals $$S_{\omega}(u):=E(u)+\frac{\omega}{2}M(u)=\frac{1}{2}\|u\|^{2}_{H_{\omega}}-\frac{1}{p+1}P(u),$$ $$K_{\omega}(u):=\|u\|^{2}_{H_{\omega}}-P(u),$$ $$I(u):=\|\nabla u\|^{2}_{L^{2}}-\gamma^{2}\|xu\|^{2}_{L^{2}}-\frac{N(p-1)+2b}{2(p+1)}P(u).$$ Note that the elliptic equation (1.6) can be written as $S^{\prime}_{\omega}(\varphi)=0$. 
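As a quick consistency check of (1.7) and (1.8), valid independently of the normalization of $\Phi$, differentiating the Gaussian gives

```latex
\nabla\Phi(x)=-\gamma x\,\Phi(x),\qquad
\Delta\Phi(x)=\bigl(\gamma^{2}|x|^{2}-\gamma N\bigr)\Phi(x),
\qquad\text{hence}\qquad
\bigl(-\Delta+\gamma^{2}|x|^{2}\bigr)\Phi=\gamma N\,\Phi,
```

so $\Phi$ indeed realizes the infimum $\gamma N$ in (1.7), and equality holds in (1.9) exactly for the multiples of $\Phi$.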
We now consider the minimizing problem (1.10) $$d_{\omega}:=\inf\{S_{\omega}(u)\ :\ u\in\Sigma\backslash\{0\},\ K_{\omega}(u)=0\}$$ and define the set of minimizers of (1.10) by (1.11) $$\mathcal{M}_{\omega}=\left\{u\in\Sigma\backslash\{0\}:S_{\omega}(u)=d_{\omega},\ K_{\omega}(u)=0\right\}.$$ We have the following result. Theorem 1.5. Let $\gamma>0$, $N\geq 1$, $0<b<\min\{2,N\}$, $1<p<2^{\circ}$ and $\omega>-\gamma N$. Then $d_{\omega}>0$ and $d_{\omega}$ is attained by a function which is a solution to the elliptic equation (1.6). Moreover, every minimizer is of the form $e^{i\theta}\varphi(x)$, where $\varphi$ is a real-valued, positive and spherically symmetric function. Notice that $\varphi$, being radially symmetric, satisfies the ordinary differential equation $$\varphi^{\prime\prime}+\frac{N-1}{r}\varphi^{\prime}-(\omega+\gamma^{2}r^{2})\varphi+r^{-b}\varphi^{p}=0\quad\text{in $(0,+\infty)$.}$$ Using the general results of Shioji and Watanabe [26], we have that for any $\omega>-\gamma N$, $0<b<1$, $N\geq 3$ and $1<p<{2}^{\circ}$ such a solution $\varphi$ is unique, i.e., $\mathcal{M}_{\omega}=\left\{e^{i\theta_{0}}\varphi;\theta_{0}\in\mathbb{R}\right\}$; see the Appendix for more details. We consider the following cross-constrained minimization problem $$d_{n}:=\inf\{S_{\omega}(u)\ :\ u\in\mathcal{N}\},$$ where the constraint set $\mathcal{N}$ is given by $$\mathcal{N}:=\{u\in\Sigma\backslash\{0\}:\ K_{\omega}(u)<0,\ I(u)=0\},$$ and we define $$d:=\min\{d_{\omega},d_{n}\},$$ where $d_{\omega}$ is given by (1.10). From Lemma 5.4 we obtain that $d>0$. 
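The constraint in (1.10) is a Nehari constraint: from the definitions of $S_{\omega}$ and $P$, the derivative of the action along the direction $u$ itself is

```latex
\langle S_{\omega}^{\prime}(u),u\rangle
=\frac{d}{d\varepsilon}\,S_{\omega}\bigl((1+\varepsilon)u\bigr)\Big|_{\varepsilon=0}
=\|u\|^{2}_{H_{\omega}}-P(u)
=K_{\omega}(u),
```

so every solution of (1.6) satisfies $K_{\omega}(\varphi)=0$, and (1.10) is the minimization of the action over the Nehari manifold; this identity is also what is used in (4.3) below.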
Now we define the sets $$K_{-}:=\{u\in\Sigma\backslash\{0\}\ :\ S_{\omega}(u)<d,\ K_{\omega}(u)<0,\ I(u)<0\},$$ $$K_{+}:=\{u\in\Sigma\backslash\{0\}\ :\ S_{\omega}(u)<d,\ K_{\omega}(u)<0,\ I(u)>0\},$$ $$R_{-}:=\{u\in\Sigma\backslash\{0\}\ :\ S_{\omega}(u)<d,\ K_{\omega}(u)<0\},$$ $$R_{+}:=\{u\in\Sigma\backslash\{0\}\ :\ S_{\omega}(u)<d,\ K_{\omega}(u)>0\}.$$ Remark 1.6. By these definitions, we see that $$\{u\in\Sigma\backslash\{0\}\ :\ S_{\omega}(u)<d\}=R_{+}\cup K_{+}\cup K_{-}.$$ We are now able to state the sharp threshold for global existence and blow-up of solutions to (1.1). Theorem 1.7. Let $\gamma>0$, $N\geq 1$, $0<b<\min\{2,N\}$, $1+\frac{4-2b}{N}\leq p<2^{\circ}$ and $\omega>-\gamma N$. (i) If $u_{0}\in K_{-}$, then the corresponding solution to (1.1) blows up in finite time. (ii) If $u_{0}\in R_{+}\cup K_{+}$, then the corresponding solution to (1.1) exists globally in time. From Theorem 1.7 and Remark 1.6 we infer that if $S_{\omega}(u_{0})<d$, then the solution of the Cauchy problem (1.1) exists globally if and only if $u_{0}\in R_{+}\cup K_{+}$. From a physical point of view, the most important solutions of the stationary problem (1.6) are the so-called ground state solutions, that is, the minimizers of the energy functional $E$ subject to a prescribed mass constraint $q>0$: (1.12) $$I_{q}=\inf\left\{E(u):\ u\in\Sigma(\mathbb{R}^{N}),\ \|u\|^{2}_{L^{2}}=q\right\}.$$ Finally, we introduce the set of ground states of (1.6) by $$\mathcal{G}_{q}:=\left\{\varphi\in\Sigma(\mathbb{R}^{N})\ \text{such that}\ I_{q}=E(\varphi),\ \|\varphi\|^{2}_{L^{2}}=q\right\}.$$ Notice that if $\varphi\in\mathcal{G}_{q}$, then there exists a Lagrange multiplier $\omega\in\mathbb{R}$ such that (1.6) is satisfied. Thus, $u(x,t)=e^{i\omega t}\varphi(x)$ is a solution of the Cauchy problem (1.1) with initial condition $u_{0}=\varphi$. 
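A one-line check that this ansatz produces standing waves (writing (1.1), which is not displayed in this excerpt, in the form $i\partial_{t}u+\Delta u-\gamma^{2}|x|^{2}u+|x|^{-b}|u|^{p-1}u=0$, as in Lemma 3.1 below): substituting $u(x,t)=e^{i\omega t}\varphi(x)$ gives

```latex
i\,\partial_{t}u+\Delta u-\gamma^{2}|x|^{2}u+|x|^{-b}|u|^{p-1}u
=e^{i\omega t}\Bigl(-\omega\varphi+\Delta\varphi-\gamma^{2}|x|^{2}\varphi
+|x|^{-b}|\varphi|^{p-1}\varphi\Bigr)=0,
```

which is exactly (1.6) up to an overall sign; in particular, the Lagrange multiplier $\omega$ of the constrained problem (1.12) is the frequency of the standing wave.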
We present a result about the existence of ground states. Theorem 1.8. Let $\gamma>0$, $N\geq 1$, $0<b<\min\left\{2,N\right\}$ and $1<p<1+\frac{4-2b}{N}$. (i) Any minimizing sequence of $I_{q}$ is relatively compact in $\Sigma(\mathbb{R}^{N})$. In particular, the set of ground states $\mathcal{G}_{q}$ is not empty. (ii) If $u\in\mathcal{G}_{q}$, then there exists a real-valued, positive and spherically symmetric function $\varphi\in\Sigma(\mathbb{R}^{N})$ such that $u(x)=e^{i\theta_{0}}\varphi(x)$ with $\theta_{0}\in\mathbb{R}$. For the critical case $p=1+\frac{4-2b}{N}$, under an appropriate assumption on $q$, we have similar results. Theorem 1.9. Let $\gamma>0$, $N\geq 1$, $0<b<\min\left\{2,N\right\}$ and $p=1+\frac{4-2b}{N}$. Assume that $q<\|Q\|^{2}_{L^{2}}$. Then the set $\mathcal{G}_{q}$ is not empty. Moreover, every minimizer is of the form $e^{i\theta_{0}}\varphi(x)$, where $\varphi$ is a positive and spherically symmetric function and $\theta_{0}\in\mathbb{R}$. Notice that if $p>1+\frac{4-2b}{N}$, then $I_{q}=-\infty$. Indeed, for $v\in\Sigma(\mathbb{R}^{N})$ with $\|v\|^{2}_{L^{2}}=q$ we define $v_{\mu}(x):=\mu^{\frac{N}{2}}v(\mu x)$. It is clear that $\|v_{\mu}\|^{2}_{L^{2}}=\|v\|^{2}_{L^{2}}$ and $$E(v_{\mu})=\frac{\mu^{2}}{2}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx+\mu^{-2}\frac{\gamma^{2}}{2}\int_{\mathbb{R}^{N}}|x|^{2}|v|^{2}dx-\frac{\mu^{\frac{N}{2}(p-1)+b}}{p+1}\int_{\mathbb{R}^{N}}|x|^{-b}|v|^{p+1}dx.$$ Thus, since $p>1+\frac{4-2b}{N}$, the exponent $\frac{N}{2}(p-1)+b$ exceeds $2$, and it follows that $E(v_{\mu})\rightarrow-\infty$ as $\mu$ goes to $+\infty$. To show the existence of ground states in the supercritical case $1+\frac{4-2b}{N}<p<{2}^{\circ}$, we consider a local minimization problem. 
Following [3], we introduce the sets $$S_{q}:=\left\{u\in\Sigma(\mathbb{R}^{N}):\|u\|^{2}_{L^{2}}=q\right\},\qquad B_{r}:=\left\{u\in\Sigma(\mathbb{R}^{N}):\|u\|^{2}_{H}\leq r\right\},$$ where $\|\cdot\|_{H}$ denotes the norm (1.13) $$\|u\|^{2}_{H}:=\|\nabla u\|^{2}_{L^{2}}+\gamma^{2}\|xu\|^{2}_{L^{2}}.$$ For fixed $q>0$ and $r>0$, we set the local variational problem (1.14) $$I^{r}_{q}=\inf\left\{E(u):\ u\in S_{q}\cap B_{r}\right\}.$$ Notice that if $S_{q}\cap B_{r}\neq\emptyset$, then by (1.3) we infer that $I^{r}_{q}>-\infty$. We denote the set of minimizers of (1.14) by $$\mathcal{G}^{r}_{q}:=\left\{\varphi\in S_{q}\cap B_{r}\ \text{such that}\ I^{r}_{q}=E(\varphi)\right\}.$$ Theorem 1.10. Let $\gamma>0$, $N\geq 1$, $0<b<\min\left\{2,N\right\}$ and $1+\frac{4-2b}{N}<p<{2}^{\circ}$. For any $r>0$ there exists $q_{0}>0$ such that for every $q<q_{0}$: (i) Any minimizing sequence for the problem $I^{r}_{q}$ is precompact in $\Sigma(\mathbb{R}^{N})$. (ii) For every $\varphi\in\mathcal{G}^{r}_{q}$ there exists a Lagrange multiplier $\omega\in\mathbb{R}$ such that (1.6) is satisfied, with the estimates $$-\gamma N<\omega\leq-\gamma N(1-Cq^{\frac{p-1}{2}}).$$ In particular, $\omega\rightarrow-\gamma N$ as $q\rightarrow 0$. (iii) If $u\in\mathcal{G}^{r}_{q}$, then $u(x)=e^{i\theta_{0}}\varphi(x)$, where $\varphi$ is a positive and radially symmetric function and $\theta_{0}\in\mathbb{R}$. We now discuss the orbital stability of standing waves. 
For $\mathcal{M}\subset\Sigma(\mathbb{R}^{N})$, we say that the set $\mathcal{M}$ is $\Sigma(\mathbb{R}^{N})$-stable under the flow generated by (1.1) if for all ${\varepsilon}>0$ there exists $\delta>0$ with the following property: if $u_{0}\in\Sigma(\mathbb{R}^{N})$ and $$\inf_{\varphi\in\mathcal{M}}\|u_{0}-\varphi\|_{\Sigma(\mathbb{R}^{N})}<\delta,$$ then the solution $u(t)$ of the Cauchy problem exists for all $t\in\mathbb{R}$ and $$\sup_{t\in\mathbb{R}}\inf_{\varphi\in\mathcal{M}}\|u(t)-\varphi\|_{\Sigma(\mathbb{R}^{N})}<{\varepsilon}.$$ Moreover, we say that the standing wave $e^{i\omega t}\varphi$ is strongly unstable if for each ${\varepsilon}>0$ there exists $u_{0}\in\Sigma(\mathbb{R}^{N})$ such that $\|u_{0}-\varphi\|_{\Sigma(\mathbb{R}^{N})}<{\varepsilon}$ and the solution $u(t)$ of (1.1) with $u(0)=u_{0}$ blows up in finite time. We have the following stability results for the standing waves of equation (1.1). Theorem 1.11. Let $\gamma>0$, $N\geq 1$ and $0<b<\min\left\{2,N\right\}$. (i) If $1<p<1+\frac{4-2b}{N}$, then $\mathcal{G}_{q}$ is $\Sigma(\mathbb{R}^{N})$-stable with respect to (1.1). (ii) If $p=1+\frac{4-2b}{N}$ and $q<\|Q\|^{2}_{L^{2}}$, then $\mathcal{G}_{q}$ is $\Sigma(\mathbb{R}^{N})$-stable with respect to (1.1). (iii) If $1+\frac{4-2b}{N}<p<{2}^{\circ}$, then for any fixed $r>0$ and $q<q_{0}$ given in Theorem 1.10, the set $\mathcal{G}^{r}_{q}$ is $\Sigma(\mathbb{R}^{N})$-stable with respect to (1.1). For the instability of standing wave solutions of (1.1), we have the following result. Theorem 1.12. Let $N\geq 1$, $0<b<\min\{2,N\}$, $\omega>-\gamma N$ and $1+\frac{4-2b}{N}\leq p<2^{\circ}$. Let $\varphi\in\mathcal{M}_{\omega}$. (i) If $d_{n}\geq d_{\omega}$, then the standing wave $e^{i\omega t}\varphi$ is strongly unstable in $\Sigma(\mathbb{R}^{N})$. 
(ii) If $d_{n}<d_{\omega}$, then there exist $\delta>0$ and initial data $u_{0}$ with $\|u_{0}-\varphi\|_{\Sigma}>\delta$ such that the corresponding solution blows up in finite time. This paper is organized as follows. In Section 2, the sharp condition for global existence is established (Theorem 1.2). In Section 3, we construct and classify finite time blow-up solutions at the minimal mass threshold. In Section 4 we prove the existence of a minimizer for $d_{\omega}$. Section 5 is devoted to the proof of Theorem 1.7. Section 6 contains the proofs of Theorems 1.8 and 1.9. In Section 7, we establish the proof of Theorem 1.10. Finally, Theorems 1.11 and 1.12 are proved in Section 8. In the Appendix (Section 9), we prove a uniqueness result for (1.6). Notation. The space $L^{2}(\mathbb{R}^{N},\mathbb{C})$ will be denoted by $L^{2}$ and its norm by $\|\cdot\|_{L^{2}}$. This space is equipped with the real scalar product $$\left(u,v\right)_{L^{2}}=\mathrm{Re}\int_{\mathbb{R}^{N}}u\,\overline{v}\,dx,\quad u,v\in L^{2}(\mathbb{R}^{N},\mathbb{C}).$$ The space $L^{p}(\mathbb{R}^{N})$, denoted by $L^{p}$ for short, is equipped with the norm $\|\cdot\|_{L^{p}}$. Throughout this paper, the letter $C$ denotes a positive constant which may vary from line to line. 2. The mass-critical case: sharp existence The aim of this section is to prove Theorem 1.2. First we observe Remark 2.1. (i) Let $u\in\Sigma(\mathbb{R}^{N})$. Then the following estimate holds: (2.1) $$\int_{\mathbb{R}^{N}}|u|^{2}dx\leq\frac{2}{N}\left(\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^{N}}|x|^{2}|u|^{2}dx\right)^{\frac{1}{2}}.$$ Notice that $2/N$ is the best constant for the inequality (2.1). 
(ii) If $Q\in H^{1}(\mathbb{R}^{N})$ satisfies (1.4), then the following identity holds: (2.2) $$\left(\frac{N+2-b}{N}\right)\int_{\mathbb{R}^{N}}|\nabla Q|^{2}dx=\int_{\mathbb{R}^{N}}|x|^{-b}|Q|^{2+\frac{4-2b}{N}}dx.$$ As in [29], which deals with the classical case $b=0$, we use the virial identity in the proof of Theorem 1.2. By (2.1), to show that the $H^{1}(\mathbb{R}^{N})$-norm blows up, it suffices to show that the variance $f(t)$, defined by $$f(t):=\int_{\mathbb{R}^{N}}|x|^{2}|u(x,t)|^{2}dx,$$ vanishes as $t\rightarrow\tau$ for some $\tau<\infty$. Lemma 2.2. Let $u$ be a solution of (1.1) on an interval $I=[0,T)$. Then the variance $f$ is of class $C^{2}$ on $I$ and satisfies the following identities: $$f^{\prime}(t)=4\,\mathrm{Im}\int_{\mathbb{R}^{N}}\overline{u}(x,t)(\nabla u(x,t)\cdot x)\,dx,$$ $$f^{\prime\prime}(t)=16E(u(t))+\frac{4}{p+1}\left(N-Np-2b+4\right)\int_{\mathbb{R}^{N}}|x|^{-b}|u(x,t)|^{p+1}dx-16\gamma^{2}f(t).$$ This result can be proved along the same lines as in [10, 13], and its proof is hence omitted. Notice that if $p=1+\frac{4-2b}{N}$ in the previous lemma, then the coefficient $N-Np-2b+4$ vanishes and $f^{\prime\prime}(t)=16E(u_{0})-16\gamma^{2}f(t)$. Throughout the rest of this section we assume that $p=1+\frac{4-2b}{N}$. Lemma 2.3. Let $u_{0}\neq 0$ be such that $f(0)\geq 2\gamma^{-2}E(u_{0})$. Then the solution $u$ of (1.1) corresponding to $u_{0}$ blows up in finite time. Proof. Since $f^{\prime\prime}(t)=16E(u_{0})-16\gamma^{2}f(t)$, solving this linear equation gives (2.3) $$f(t)=r\sin(4\gamma t+\theta)+\gamma^{-2}E(u_{0}),$$ where $r\geq 0$ and $\theta\in[0,2\pi)$ are constants determined by $f(0)$ and $f^{\prime}(0)$. We also have (2.4) $$r^{2}=[f(0)-\gamma^{-2}E(u_{0})]^{2}+\frac{\gamma^{-2}}{16}[f^{\prime}(0)]^{2}.$$ Since $f(0)\geq 2\gamma^{-2}E(u_{0})$, it follows from (2.4) that $r\geq\gamma^{-2}E(u_{0})$. 
Thus, from (2.3) and (2.4), we see that there exists $\tau<\infty$ such that $$\lim_{t\rightarrow\tau}f(t)=0.$$ Inequality (2.1) then implies that $\lim_{t\rightarrow\tau}\|\nabla u(t)\|^{2}_{L^{2}}=+\infty$. This shows that $u(x,t)$ blows up in finite time, which completes the proof of the lemma. ∎ Now we give the proof of Theorem 1.2. Proof of Theorem 1.2. First, as noted in the introduction, we have that for every $u\in H^{1}(\mathbb{R}^{N})$, (2.5) $$\int_{\mathbb{R}^{N}}|x|^{-b}|u|^{\frac{4-2b}{N}+2}dx\leq{\|Q\|^{-\left(\frac{4-2b}{N}\right)}_{L^{2}}}\left(\frac{2+N-b}{N}\right)\|\nabla u\|^{2}_{L^{2}}\|u\|^{\frac{4-2b}{N}}_{L^{2}}.$$ Notice that ${\|Q\|^{-\left(\frac{4-2b}{N}\right)}_{L^{2}}}\left(\frac{2+N-b}{N}\right)$ is the best constant in the above inequality. Consider a local solution $u\in C([0,T),\Sigma(\mathbb{R}^{N}))$ of the Cauchy problem (1.1), as given by Proposition 1.1, where $[0,T)$ is the maximal existence interval. In view of (2.5) and the conservation of charge and energy, it is clear that $$\frac{1}{2}\|\nabla u(t)\|^{2}_{L^{2}}\left(1-\left(\frac{\|u_{0}\|_{L^{2}}}{\|Q\|_{L^{2}}}\right)^{\frac{4-2b}{N}}\right)+\frac{\gamma^{2}}{2}\int_{\mathbb{R}^{N}}|x|^{2}|u(x,t)|^{2}dx\leq E(u_{0}).$$ Now, since $\|u_{0}\|_{L^{2}}<\|Q\|_{L^{2}}$, it follows that $\|\nabla u(t)\|^{2}_{L^{2}}$ is bounded for all $t\in[0,T)$. Proposition 1.1 then yields that $u(x,t)$ exists globally in $t\in[0,+\infty)$, which completes the proof of item (i). On the other hand, for $\lambda>0$ and $c\in\mathbb{C}$ with $|c|\geq 1$, we take the initial data $u_{0}(x)=c\lambda^{\frac{N}{2}}Q(\lambda x)$. Clearly $\|u_{0}\|_{L^{2}}=|c|\|Q\|_{L^{2}}\geq\|Q\|_{L^{2}}$. 
Now, combining (1.2) and (2.2), it follows from a straightforward calculation that $$2E(u_{0})=\|\nabla Q\|^{2}_{L^{2}}|c|^{2}\lambda^{2}\left(1-|c|^{\frac{4-2b}{N}}\right)+\gamma^{2}f(0)\leq\gamma^{2}f(0).$$ Therefore $f(0)\geq 2\gamma^{-2}E(u_{0})$, and by Lemma 2.3 we have that $u(x,t)$ blows up in finite time; this finishes the proof of the theorem. ∎ 3. Classification of minimal mass blow-up solutions In this section, we give the proof of Theorem 1.3. For any function $u:\mathbb{R}^{N}\times I\rightarrow\mathbb{C}$, we define (3.1) $$u_{L}(x,t)=\left(\frac{1}{\cos 2\gamma t}\right)^{\frac{N}{2}}e^{-i\frac{\gamma}{2}|x|^{2}\tan 2\gamma t}u\left(\frac{x}{\cos 2\gamma t},\frac{1}{2\gamma}\tan 2\gamma t\right).$$ Notice that $u_{L}$ is defined on the time interval $\tan^{-1}(I):=\left\{\tan^{-1}(t),t\in I\right\}$ and $u_{L}(x,0)=u(x,0)$; for more details we refer to [27, 4]. We first prove a key lemma used to obtain Theorem 1.3. Lemma 3.1. Let $\gamma>0$. (i) Assume that $u$ is a solution of the free (i.e., zero-potential) inhomogeneous nonlinear Schrödinger equation (3.2) $$i\partial_{t}u+\Delta u+|x|^{-b}|u|^{p-1}u=0$$ on an interval $I$. Then the function $u_{L}$ defined in (3.1) solves the inhomogeneous nonlinear Schrödinger equation with attractive harmonic potential $$i\partial_{t}u_{L}+\Delta u_{L}-\gamma^{2}|x|^{2}u_{L}+|\cos 2\gamma t|^{\frac{N}{2}(p-1)-2+b}|x|^{-b}|u_{L}|^{p-1}u_{L}=0,$$ with $\|u_{L}\|_{L^{2}}=\|u\|_{L^{2}}$. In particular, if $p=1+\frac{4-2b}{N}$, then the exponent $\frac{N}{2}(p-1)-2+b$ vanishes and $u_{L}$ is a solution of (1.1) on $\tan^{-1}(I)$. 
(ii) Conversely, assume that $u\in\Sigma(\mathbb{R}^{N})$ is a solution of (1.1) with $p=1+\frac{4-2b}{N}$. Then the function $u_{L^{-1}}$, defined by (3.3) $$u_{L^{-1}}(x,t)=\frac{1}{(1+4\gamma^{2}t^{2})^{\frac{N}{4}}}e^{i\frac{\gamma^{2}t}{1+4\gamma^{2}t^{2}}|x|^{2}}u\left(\frac{x}{\sqrt{1+4\gamma^{2}t^{2}}},\frac{1}{2\gamma}\tan^{-1}2\gamma t\right),$$ solves (3.2) with $p=1+\frac{4-2b}{N}$. Proof. For simplicity, we assume that $\gamma=\frac{1}{2}$. We can easily check that $$\partial_{t}u_{L}(x,t)=\frac{e^{-i\frac{1}{4}|x|^{2}\tan t}}{\cos^{\frac{N}{2}+2}t}\left[\partial_{t}u-i\frac{|x|^{2}}{4}u+\sin t\,\nabla u\cdot x+\frac{N}{2}\sin t\,\cos t\,u\right]\left(\frac{x}{\cos t},\tan t\right)$$ and $$\Delta u_{L}(x,t)=\frac{e^{-i\frac{1}{4}|x|^{2}\tan t}}{\cos^{\frac{N}{2}+2}t}\left[-\frac{|x|^{2}}{4}\sin^{2}t\,u-i\sin t\,\nabla u\cdot x-\frac{i}{2}N\sin t\,\cos t\,u+\Delta u\right]\left(\frac{x}{\cos t},\tan t\right).$$ Thus, we see that $$i\,\partial_{t}u_{L}(x,t)+\Delta u_{L}(x,t)=\frac{e^{-i\frac{1}{4}|x|^{2}\tan t}}{\cos^{\frac{N}{2}+2}t}\left[\frac{|x|^{2}}{4}\cos^{2}t\,u+i\,\partial_{t}u+\Delta u\right]\left(\frac{x}{\cos t},\tan t\right)$$ $$=\frac{e^{-i\frac{1}{4}|x|^{2}\tan t}}{\cos^{\frac{N}{2}+2}t}\left[\frac{|x|^{2}}{4}\cos^{2}t\,u-|x|^{-b}|u|^{p-1}u\right]\left(\frac{x}{\cos t},\tan t\right)$$ $$=\frac{|x|^{2}}{4}u_{L}(x,t)-|\cos t|^{\frac{N}{2}(p-1)-2+b}|x|^{-b}|u_{L}(x,t)|^{p-1}u_{L}(x,t).$$ This proves the first statement of the lemma. Similarly, the second statement of the lemma follows from a straightforward calculation. 
This completes the proof of the lemma. ∎ It is important to note that the transforms (3.1) and (3.3) do not alter the initial data $u_{0}$; notice also that $\|u_{L}\|_{L^{2}}=\|u_{L^{-1}}\|_{L^{2}}=\|u\|_{L^{2}}$. Theorem 1.3 follows from the previous lemma and from the following result of Combet and Genoud [10, Theorem 1]. Proposition 3.2. Let $u_{0}\in H^{1}(\mathbb{R}^{N})$ with $\|u_{0}\|_{L^{2}}=\|Q\|_{L^{2}}$, where $Q$ is defined by (1.4). Assume that the solution $u$ of (3.2) blows up in finite time $T>0$. Then there exist $\theta_{0}\in\mathbb{R}$ and $\lambda>0$ such that $$u(x,t)=S_{\lambda,\theta_{0}}(x,T-t)\quad\text{for every $t\in[0,T)$},$$ where $S_{\lambda,\theta_{0}}$ is defined by (1.5). Now we give the proof of Theorem 1.3. Proof of Theorem 1.3. Let $p=1+\frac{4-2b}{N}$. Assume that $u$ is a solution of the Cauchy problem (1.1) such that $\|u_{0}\|_{L^{2}}=\|Q\|_{L^{2}}$ and $\lim_{t\rightarrow T}\|\nabla u(t)\|^{2}_{L^{2}}=+\infty$ with $T<\frac{\pi}{4\gamma}$. We set $v(x,t):=u_{L^{-1}}(x,t)$. From Lemma 3.1 (ii), we have that $v(x,t)$ is a solution of (3.2) with $v(x,0)=u_{0}$, which blows up in finite time $T^{\ast}:=\frac{1}{2\gamma}\tan(2\gamma T)$. 
By Proposition 3.2 we know that there exist $\theta_{0}\in\mathbb{R}$ and $\lambda_{0}>0$ such that $$v(t)=S_{\lambda_{0},\theta_{0}}(x,T^{\ast}-t)\quad\text{for every $t\in[0,T^{\ast})$},$$ whence, again by Lemma 3.1 and by the uniqueness result of Proposition 1.1, it follows that $$u(t)=v_{L}(t)=\left(\frac{1}{\cos 2\gamma t}\right)^{\frac{N}{2}}e^{-i\frac{\gamma}{2}|x|^{2}\tan 2\gamma t}S_{\lambda_{0},\theta_{0}}\left(\frac{x}{\cos 2\gamma t},T^{\ast}-\frac{1}{2\gamma}\tan 2\gamma t\right).$$ Finally, since $$T^{\ast}-\frac{1}{2\gamma}\tan(2\gamma t)=\frac{\sin 2\gamma(T-t)}{2\gamma\cos 2\gamma T\,\cos 2\gamma t},$$ we see that $$u(t)=\left(\frac{1}{\cos 2\gamma t}\right)^{\frac{N}{2}}e^{-i\frac{\gamma}{2}|x|^{2}\tan 2\gamma t}S_{\lambda_{0},\theta_{0}}\left(\frac{x}{\cos 2\gamma t},\frac{\sin 2\gamma(T-t)}{2\gamma\cos 2\gamma T\,\cos 2\gamma t}\right),$$ which completes the proof. ∎ 4. Existence of minimizers The aim of this section is to prove Theorem 1.5. Proof of Theorem 1.5. Let $u\in\Sigma\backslash\{0\}$ be such that $K_{\omega}(u)=0$. We have $\|u\|^{2}_{H_{\omega}}=P(u)$. Using the Gagliardo-Nirenberg inequality $$P(u)\lesssim\|\nabla u\|_{L^{2}}^{\frac{N(p-1)}{2}+b}\|u\|_{L^{2}}^{p+1-\frac{N(p-1)}{2}-b}$$ together with Young's inequality, we have $$P(u)\leq C_{1}(\|\nabla u\|^{2}_{L^{2}}+\|u\|^{2}_{L^{2}})^{\frac{p+1}{2}}\leq C_{2}\|u\|_{H_{\omega}}^{p+1}=C_{2}P(u)^{\frac{p+1}{2}}.$$ This implies that $$P(u)\geq\left(\frac{1}{C_{2}}\right)^{\frac{2}{p-1}}>0.$$ On the other hand, $$S_{\omega}(u)=\frac{1}{2}\|u\|^{2}_{H_{\omega}}-\frac{1}{p+1}P(u)=\frac{p-1}{2(p+1)}P(u)\geq\frac{p-1}{2(p+1)}\left(\frac{1}{C_{2}}\right)^{\frac{2}{p-1}}>0.$$ Taking the infimum, we obtain $d_{\omega}>0$. Let $(u_{n})_{n\geq 1}$ be a minimizing sequence of $d_{\omega}$. 
Since $K_{\omega}(u_{n})=0$, we have $\|u_{n}\|^{2}_{H_{\omega}}=P(u_{n})$ for all $n\geq 1$. Thus, $$S_{\omega}(u_{n})=\frac{p-1}{2(p+1)}\|u_{n}\|^{2}_{H_{\omega}}\rightarrow d_{\omega}\quad\text{as }n\rightarrow\infty.$$ We infer that there exists a constant $C>0$ such that $\|u_{n}\|^{2}_{H_{\omega}}\leq\frac{2(p+1)}{p-1}d_{\omega}+C$ for all $n\geq 1$. For $\gamma>0$ and $\omega>-\gamma N$ fixed, $\|u\|^{2}_{H_{\omega}}\sim\|u\|^{2}_{\Sigma}$, which implies that the sequence $(u_{n})_{n\geq 1}$ is bounded in $\Sigma$. Hence there exists $u_{0}\in\Sigma$ such that, up to a subsequence, $u_{n}\rightharpoonup u_{0}$ weakly in $\Sigma$. Since the embedding $\Sigma\hookrightarrow L^{r+1}(\mathbb{R}^{N})$ is compact (see [29, Lemma 3.1]) for $1\leq r<1+\frac{4}{N-2}$ if $N\geq 3$ and $1\leq r<\infty$ if $N=1,2$, it follows that $u_{n}\rightarrow u_{0}$ strongly in $L^{r+1}$ with $r$ as above. We now show that $u_{0}$ is a minimizer of $d_{\omega}$. Since $u_{n}\rightharpoonup u_{0}$ weakly in $\Sigma$, we have $$\|u_{0}\|^{2}_{H_{\omega}}\leq\liminf_{n\rightarrow\infty}\|u_{n}\|^{2}_{H_{\omega}}.$$ We now claim that for $N\geq 1$, $0<b<\min\{2,N\}$ and $1<p<2^{\circ}$, (4.1) $$\int_{\mathbb{R}^{N}}|x|^{-b}|u_{n}|^{p+1}dx\rightarrow\int_{\mathbb{R}^{N}}|x|^{-b}|u_{0}|^{p+1}dx\quad\text{as $n\rightarrow\infty$.}$$ We have $$\left|\int_{\mathbb{R}^{N}}|x|^{-b}|u_{n}|^{p+1}dx-\int_{\mathbb{R}^{N}}|x|^{-b}|u_{0}|^{p+1}dx\right|\leq\||x|^{-b}(|u_{n}|^{p+1}-|u_{0}|^{p+1})\|_{L^{1}}\leq\||x|^{-b}(|u_{n}|^{p+1}-|u_{0}|^{p+1})\|_{L^{1}(B)}+\||x|^{-b}(|u_{n}|^{p+1}-|u_{0}|^{p+1})\|_{L^{1}(B^{c})},$$ where $B$ is the unit ball in $\mathbb{R}^{N}$ and $B^{c}=\mathbb{R}^{N}\backslash B$. On $B$, we bound $$\||x|^{-b}(|u_{n}|^{p+1}-|u_{0}|^{p+1})\|_{L^{1}(B)}\lesssim\||x|^{-b}\|_{L^{\delta}(B)}\||u_{n}|^{p+1}-|u_{0}|^{p+1}\|_{L^{\mu}},$$ provided $\delta,\mu\geq 1$ and $1=\frac{1}{\delta}+\frac{1}{\mu}$. 
The term $\||x|^{-b}\|_{L^{\delta}(B)}$ is finite provided $\frac{N}{\delta}>b$. Thus $\frac{1}{\delta}>\frac{b}{N}$, and $\frac{1}{\mu}=1-\frac{1}{\delta}<\frac{N-b}{N}$. We next bound $$\||u_{n}|^{p+1}-|u_{0}|^{p+1}\|_{L^{\mu}}\lesssim(\|u_{n}\|^{p}_{L^{\tau}}+\|u_{0}\|^{p}_{L^{\tau}})\|u_{n}-u_{0}\|_{L^{\sigma}},$$ provided (4.2) $$\frac{p}{\tau}+\frac{1}{\sigma}=\frac{1}{\mu}<\frac{N-b}{N}.$$ Using the embedding $\Sigma\hookrightarrow L^{r+1}(\mathbb{R}^{N})$ for $1\leq r<1+\frac{4}{N-2}$ if $N\geq 3$ and $1\leq r<\infty$ if $N=1,2$, we are able to choose $\tau\in\left[2,\frac{2N}{N-2}\right)$ if $N\geq 3$ and $\tau\in[2,\infty)$ if $N=1,2$ so that $\|u_{n}\|_{L^{\tau}}\lesssim\|u_{n}\|_{\Sigma}$ (and similarly for $u_{0}$). In the case $N\geq 3$, since $u_{n}\rightarrow u_{0}$ strongly in $L^{\sigma}$ for $2\leq\sigma<\frac{2N}{N-2}$, taking $\tau$ and $\sigma$ close to $\frac{2N}{N-2}$, condition (4.2) reduces to $$\frac{p(N-2)}{2N}+\frac{N-2}{2N}<\frac{N-b}{N}.$$ This condition is satisfied since $p<1+\frac{4-2b}{N-2}$. In the case $N=1,2$, since $u_{n}\rightarrow u_{0}$ in $L^{r+1}$ with $1\leq r<\infty$, we are able to choose $\tau$ and $\sigma$ large enough so that (4.2) holds. As a consequence, $$\||x|^{-b}(|u_{n}|^{p+1}-|u_{0}|^{p+1})\|_{L^{1}(B)}\rightarrow 0\quad\text{as }n\rightarrow\infty.$$ On $B^{c}$, we bound $$\||x|^{-b}(|u_{n}|^{p+1}-|u_{0}|^{p+1})\|_{L^{1}(B^{c})}\leq\||u_{n}|^{p+1}-|u_{0}|^{p+1}\|_{L^{1}}\lesssim\|u_{n}-u_{0}\|_{L^{p+1}}(\|u_{n}\|^{p}_{L^{p+1}}+\|u_{0}\|^{p}_{L^{p+1}})\lesssim\|u_{n}-u_{0}\|_{L^{p+1}}(\|u_{n}\|^{p}_{\Sigma}+\|u_{0}\|^{p}_{\Sigma})\rightarrow 0.$$ Combining the two bounds, we prove the claim. 
Thus $$K_{\omega}(u_{0})\leq\liminf_{n\rightarrow\infty}K_{\omega}(u_{n})=0.$$ We also have $$\lim_{n\rightarrow\infty}S_{\omega}(u_{n})=\lim_{n\rightarrow\infty}\frac{p-1}{2(p+1)}P(u_{n})=\frac{p-1}{2(p+1)}P(u_{0})=d_{\omega}.$$ Suppose that $K_{\omega}(u_{0})<0$, that is, $\|u_{0}\|^{2}_{H_{\omega}}<P(u_{0})$. We have $$K_{\omega}(\lambda u_{0})=\lambda^{2}\|u_{0}\|^{2}_{H_{\omega}}-\lambda^{p+1}P(u_{0}).$$ This implies that $K_{\omega}(\lambda_{0}u_{0})=0$, where $$\lambda_{0}=\left(\frac{\|u_{0}\|^{2}_{H_{\omega}}}{P(u_{0})}\right)^{\frac{1}{p-1}}\in(0,1).$$ By the definition of $d_{\omega}$, $$d_{\omega}\leq S_{\omega}(\lambda_{0}u_{0})=\frac{p-1}{2(p+1)}P(\lambda_{0}u_{0})=\frac{p-1}{2(p+1)}\lambda_{0}^{p+1}P(u_{0})<\frac{p-1}{2(p+1)}P(u_{0})=d_{\omega}.$$ This is a contradiction. Therefore, $K_{\omega}(u_{0})=0$. This, combined with the fact that $S_{\omega}(u_{0})=\frac{p-1}{2(p+1)}P(u_{0})=d_{\omega}$, implies that $u_{0}$ is a minimizer of $d_{\omega}$. It remains to show that $u_{0}$ solves the elliptic equation (1.6). Since $u_{0}$ is a minimizer of $d_{\omega}$, there exists a Lagrange multiplier $\mu\in\mathbb{R}$ such that $S^{\prime}_{\omega}(u_{0})=\mu K^{\prime}_{\omega}(u_{0})$. We have (4.3) $$0=K_{\omega}(u_{0})=\langle S^{\prime}_{\omega}(u_{0}),u_{0}\rangle=\mu\langle K^{\prime}_{\omega}(u_{0}),u_{0}\rangle.$$ On the other hand, $$K^{\prime}_{\omega}(u_{0})=2(-\Delta)u_{0}+2\gamma^{2}|x|^{2}u_{0}+2\omega u_{0}-(p+1)|x|^{-b}|u_{0}|^{p-1}u_{0}.$$ Thus, $$\langle K^{\prime}_{\omega}(u_{0}),u_{0}\rangle=2\|u_{0}\|^{2}_{H_{\omega}}-(p+1)P(u_{0})=-(p-1)P(u_{0})<0.$$ This together with (4.3) implies that $\mu=0$. Hence $S^{\prime}_{\omega}(u_{0})=0$; that is, $u_{0}$ is a solution of (1.6). This proves the first part of the statement. Now let $u$ be a complex-valued minimizer for $d_{\omega}$. We claim that there exists $\theta\in\mathbb{R}$ such that $u(x)=e^{i\theta}\varphi(x)$, where $\varphi$ is a positive real-valued minimizer. 
Indeed, since $\|\nabla(|u|)\|^{2}_{L^{2}}\leq\|\nabla u\|^{2}_{L^{2}}$, it is clear that $S_{\omega}(|u|)\leq S_{\omega}(u)$ and $K_{\omega}(|u|)\leq K_{\omega}(u)=0$. In particular, $|u|\in\mathcal{M}_{\omega}$ and (4.4) $$\|\nabla|u|\|^{2}_{L^{2}}=\|\nabla u\|^{2}_{L^{2}}.$$ From the Euler-Lagrange equation (1.6) and an elliptic regularity/bootstrap argument we see that $u\in C^{1}(\mathbb{R}^{N},\mathbb{C})$ (see [20, Sections 2.1 and 2.2] and [11]). Moreover, the positivity of $|u|$ follows from the maximum principle, and thus $u\in C^{1}(\mathbb{R}^{N},\mathbb{C}\setminus\left\{0\right\})$. We set $w(x):=\frac{u(x)}{|u(x)|}$. Since $|w|^{2}=1$, it follows that $\text{Re}(\overline{w}\,\nabla w)=0$ and $$\nabla u=(\nabla|u|)w+|u|\nabla w=w(\nabla|u|+|u|\overline{w}\nabla w).$$ Therefore, we see that $|\nabla u|^{2}=|\nabla|u||^{2}+|u|^{2}|\nabla w|^{2}$. From (4.4) we get $$\int_{\mathbb{R}^{N}}|u|^{2}|\nabla w|^{2}dx=0,$$ and thus $|\nabla w|=0$. Hence $w$ is constant with $|w|=1$, and we infer that there exists $\theta\in\mathbb{R}$ such that $u=e^{i\theta}\varphi(x)$, where $\varphi(x):=|u(x)|$. This proves the claim. We now prove that $\varphi$ is necessarily radial and radially decreasing. Indeed, denoting by $\varphi^{\ast}$ the Schwarz rearrangement of $\varphi$, it is well known (see [22]) that (4.5) $$\int_{\mathbb{R}^{N}}|x|^{2}|\varphi^{\ast}(x)|^{2}\,dx<\int_{\mathbb{R}^{N}}|x|^{2}\varphi^{2}(x)\,dx\quad\text{unless $\varphi=\varphi^{\ast}$,}$$ (4.6) $$\int_{\mathbb{R}^{N}}|x|^{-b}|\varphi^{\ast}(x)|^{p+1}\,dx>\int_{\mathbb{R}^{N}}|x|^{-b}\varphi^{p+1}(x)\,dx\quad\text{unless $\varphi=\varphi^{\ast}$.}$$ Thus, from $\|\nabla\varphi^{\ast}\|^{2}_{L^{2}}\leq\|\nabla\varphi\|^{2}_{L^{2}}$, we infer that if $\varphi$ is not radial, then $S_{\omega}(\varphi^{\ast})<S_{\omega}(\varphi)=d_{\omega}$ and $K_{\omega}(\varphi^{\ast})<K_{\omega}(\varphi)=0$, a contradiction. 
This proves that $\varphi$ is radial and radially decreasing. ∎ 5. Sharp thresholds for blowup and global existence in the mass-critical and mass-supercritical cases This section is devoted to the proof of Theorem 1.7. We have divided the proof into a sequence of lemmas. Lemma 5.1. Let $\gamma>0,N\geq 1,0<b<\min\{2,N\},1<p<2^{\circ}$ and $\omega>-\gamma N$. There exists $u\in\Sigma\backslash\{0\}$ such that $K_{\omega}(u)=I(u)=0$. Proof. By Proposition 1.5, there exists a non-trivial solution $u$ to the elliptic equation (1.6). Multiplying both sides of (1.6) by $\overline{u}$ and integrating over $\mathbb{R}^{N}$, we have (5.1) $$\displaystyle\|\nabla u\|^{2}_{L^{2}}+\omega\|u\|^{2}_{L^{2}}+\gamma^{2}\|xu\|^{2}_{L^{2}}-P(u)=0.$$ On the other hand, multiplying both sides of (1.6) by $x\cdot\nabla\overline{u}$, integrating over $\mathbb{R}^{N}$ and taking the real part, we have (5.2) $$\displaystyle\frac{2-N}{2}\|\nabla u\|^{2}_{L^{2}}-\frac{N\omega}{2}\|u\|^{2}_{L^{2}}-\frac{N+2}{2}\gamma^{2}\|xu\|^{2}_{L^{2}}+\frac{N-b}{p+1}P(u)=0.$$ By (5.1), it is obvious that $K_{\omega}(u)=0$. Multiplying both sides of (5.1) by $\frac{N}{2}$ and adding the result to (5.2), we get $$\|\nabla u\|^{2}_{L^{2}}-\gamma^{2}\|xu\|^{2}_{L^{2}}-\frac{N(p-1)+2b}{2(p+1)}P(u)=0,$$ which implies that $I(u)=0$. ∎ Lemma 5.2. Let $\gamma>0,N\geq 1,0<b<\min\{2,N\},1+\frac{4-2b}{N}\leq p<2^{\circ}$ and $\omega>-\gamma N$. Then the set $\mathcal{N}$ is not empty. Proof. By Lemma 5.1, there exists $u\in\Sigma\backslash\{0\}$ such that $K_{\omega}(u)=I(u)=0$. Set $u^{\lambda}(x)=\lambda u(x)$.
We have $$\displaystyle K_{\omega}(u^{\lambda})=\lambda^{2}\|u\|^{2}_{H_{\omega}}-\lambda^{p+1}P(u),$$ $$\displaystyle I(u^{\lambda})=\lambda^{2}(\|\nabla u\|^{2}_{L^{2}}-\gamma^{2}\|xu\|^{2}_{L^{2}})-\frac{N(p-1)+2b}{2(p+1)}\lambda^{p+1}P(u).$$ Since $K_{\omega}(u)=I(u)=0$, the equations $K_{\omega}(u^{\lambda})=0$ and $I(u^{\lambda})=0$ admit the unique non-zero solution $\lambda=1$. Therefore, $K_{\omega}(u^{\lambda})<0,I(u^{\lambda})<0$ for all $\lambda>1$. Consider $$A(\lambda):=\|\nabla u^{\lambda}\|^{2}_{L^{2}}-\frac{N(p-1)+2b}{2(p+1)}P(u^{\lambda}).$$ Since $I(u)=0$, we have $A(1)>0$. By continuity, there exists $\lambda_{0}>1$ such that $A(\lambda_{0})>0$. We denote $v(x)=u^{\lambda_{0}}(x)$. Set $$v_{\mu}(x):=\mu^{\frac{2-b}{p-1}}v(\mu x),\quad\mu>0.$$ A calculation shows that $$\displaystyle K_{\omega}(v_{\mu})=\mu^{a}(\|\nabla v\|^{2}_{L^{2}}-P(v))+\mu^{a-4}\gamma^{2}\|xv\|^{2}_{L^{2}}+\mu^{a-2}\omega\|v\|^{2}_{L^{2}},$$ $$\displaystyle I(v_{\mu})=\mu^{a}\left(\|\nabla v\|^{2}_{L^{2}}-\frac{N(p-1)+2b}{2(p+1)}P(v)\right)-\mu^{a-4}\gamma^{2}\|xv\|^{2}_{L^{2}},$$ where $$a=\frac{4-2b-(N-2)(p-1)}{p-1}>0.$$ Since $I(v)<0$ and $\lim_{\mu\rightarrow+\infty}I(v_{\mu})=+\infty$, there exists $\mu_{0}>1$ such that $I(v_{\mu_{0}})=0$. On the other hand, $K_{\omega}(v)<0$ implies that $\|\nabla v\|^{2}_{L^{2}}-P(v)<0$. Moreover, since $a>0,a-2\leq 0$ and $a-4<0$, we see that $K_{\omega}(v_{\mu})<0$ for all $\mu>1$. We obtain $v_{\mu_{0}}\in\Sigma\backslash\{0\}$, $K_{\omega}(v_{\mu_{0}})<0$ and $I(v_{\mu_{0}})=0$, that is, $v_{\mu_{0}}\in\mathcal{N}$. ∎ Lemma 5.3. Let $\gamma>0,N\geq 1,0<b<\min\{2,N\},1+\frac{4-2b}{N}\leq p<2^{\circ}$ and $\omega>-\gamma N$. Then $d_{n}>0$. Proof. Let $u\in\Sigma\backslash\{0\}$ be such that $K_{\omega}(u)<0$ and $I(u)=0$.
Since $I(u)=0$, we have $$\|\nabla u\|^{2}_{L^{2}}-\gamma^{2}\|xu\|^{2}_{L^{2}}=\frac{N(p-1)+2b}{2(p+1)}P(u).$$ Thus, (5.3) $$\displaystyle S_{\omega}(u)=\left(\frac{1}{2}-\frac{2}{N(p-1)+2b}\right)\|\nabla u\|^{2}_{L^{2}}+\left(\frac{1}{2}+\frac{2}{N(p-1)+2b}\right)\gamma^{2}\|xu\|^{2}_{L^{2}}+\frac{\omega}{2}\|u\|^{2}_{L^{2}}.$$ We now consider two cases: the $L^{2}$-supercritical case and the $L^{2}$-critical case. Case 1: the $L^{2}$-supercritical case $1+\frac{4-2b}{N}<p<2^{\circ}$. By the Gagliardo-Nirenberg inequality, we have $$\displaystyle P(u)\lesssim\|\nabla u\|^{\frac{N(p-1)}{2}+b}_{L^{2}}\|u\|^{p+1-\frac{N(p-1)}{2}-b}_{L^{2}}\leq C_{1}(\|\nabla u\|^{2}_{L^{2}}+\|u\|^{2}_{L^{2}})^{\frac{p+1}{2}}\leq C_{2}\|u\|^{p+1}_{H_{\omega}}.$$ Since $K_{\omega}(u)<0$, it follows that $\|u\|^{2}_{H_{\omega}}<P(u)$. Thus, $\|u\|^{2}_{H_{\omega}}<P(u)\leq C_{2}\|u\|^{p+1}_{H_{\omega}}$. We get $$\|u\|^{2}_{H_{\omega}}>\left(\frac{1}{C_{2}}\right)^{\frac{2}{p-1}}>0.$$ On the other hand, by (5.3) and the fact that $\frac{1}{2}-\frac{2}{N(p-1)+2b}>0$ in this case, we have $$S_{\omega}(u)\geq C_{3}\|u\|^{2}_{H_{\omega}}>C_{3}\left(\frac{1}{C_{2}}\right)^{\frac{2}{p-1}}>0.$$ Taking the infimum, we obtain $d_{n}>0$. Case 2: the $L^{2}$-critical case $p=1+\frac{4-2b}{N}$. Assume $d_{n}=0$; then there exists a sequence $(u_{n})_{n\geq 1}\subset\Sigma\backslash\{0\}$ with $K_{\omega}(u_{n})<0$ and $I(u_{n})=0$ for all $n\geq 1$ such that $S_{\omega}(u_{n})\rightarrow 0$ as $n\rightarrow\infty$.
It follows from (5.3) that (5.4) $$\displaystyle\|u_{n}\|^{2}_{L^{2}}\rightarrow 0,\quad\|xu_{n}\|^{2}_{L^{2}}\rightarrow 0\quad\text{as }n\rightarrow\infty.$$ Since $K_{\omega}(u_{n})<0$, the sharp Gagliardo-Nirenberg inequality implies that (5.5) $$\displaystyle\|u_{n}\|^{2}_{H_{\omega}}<P(u_{n})\leq C\|\nabla u_{n}\|^{2}_{L^{2}}\|u_{n}\|^{\frac{4-2b}{N}}_{L^{2}}.$$ For the constant $C$ in (5.5), we have from (5.4) that for $n$ sufficiently large, $$\|\nabla u_{n}\|^{2}_{L^{2}}>C\|\nabla u_{n}\|^{2}_{L^{2}}\|u_{n}\|^{\frac{4-2b}{N}}_{L^{2}}.$$ It follows that (5.6) $$\displaystyle\|u_{n}\|^{2}_{H_{\omega}}>C\|\nabla u_{n}\|^{2}_{L^{2}}\|u_{n}\|^{\frac{4-2b}{N}}_{L^{2}}.$$ The inequalities (5.5) and (5.6) contradict each other. Therefore, $d_{n}>0$. ∎ Lemma 5.4. Let $\gamma>0,N\geq 1,0<b<\min\{2,N\},1+\frac{4-2b}{N}\leq p<2^{\circ}$ and $\omega>-\gamma N$. Then $d>0$. Proof. This follows from Theorem 1.5 and Lemma 5.3. ∎ Lemma 5.5. Let $\gamma>0,N\geq 1,0<b<\min\{2,N\},1+\frac{4-2b}{N}\leq p<2^{\circ}$ and $\omega>-\gamma N$. Then the sets $K_{\pm},R_{\pm}$ are invariant under the flow of (1.1). Proof. We only give the proof for $K_{-}$; the ones for $K_{+},R_{\pm}$ are similar. Let $u_{0}\in K_{-}$, i.e. $S_{\omega}(u_{0})<d$, $K_{\omega}(u_{0})<0,I(u_{0})<0$. By conservation of mass and energy, (5.7) $$\displaystyle S_{\omega}(u(t))=S_{\omega}(u_{0})<d,\quad\forall t\in[0,T).$$ We now prove that $K_{\omega}(u(t))<0$ for all $t\in[0,T)$. Suppose there exists $t_{0}>0$ such that $K_{\omega}(u(t_{0}))\geq 0$. By the continuity of $t\mapsto K_{\omega}(u(t))$, there exists $t_{1}\in(0,t_{0}]$ such that $K_{\omega}(u(t_{1}))=0$. By the definition of $d_{\omega}$, $S_{\omega}(u(t_{1}))\geq d_{\omega}\geq d$, which contradicts (5.7). We finally prove that $I(u(t))<0$ for all $t\in[0,T)$. Suppose not; then there exists $t_{2}\in[0,T)$ such that $I(u(t_{2}))\geq 0$. By the continuity of $t\mapsto I(u(t))$, there exists $t_{3}\in(0,t_{2}]$ such that $I(u(t_{3}))=0$.
We have $K_{\omega}(u(t_{3}))<0,I(u(t_{3}))=0$; by the definition of $d_{n}$, we have $S_{\omega}(u(t_{3}))\geq d_{n}\geq d$, which contradicts (5.7). ∎ Proof of Theorem 1.7. By the virial identity, $$\frac{d^{2}}{dt^{2}}\|xu(t)\|^{2}_{L^{2}}=8I(u(t)),\quad\forall t\in[0,T).$$ By the convexity argument, it suffices to show that there exists $\delta>0$ such that $I(u(t))<-\delta$ for all $t\in[0,T)$. Since $K_{-}$ is invariant under the flow of (1.1), we have $K_{\omega}(u(t))<0$ and $I(u(t))<0$ for all $t\in[0,T)$. Fix $t\in[0,T)$ and denote $u=u(t)$. For $\mu>0$, we set $u_{\mu}(x)=\mu^{\frac{N-b}{p+1}}u(\mu x)$. We have $$\displaystyle K_{\omega}(u_{\mu})=\mu^{\frac{2(N-b)-(N-2)(p+1)}{p+1}}\|\nabla u\|^{2}_{L^{2}}+\mu^{\frac{2(N-b)-(N+2)(p+1)}{p+1}}\gamma^{2}\|xu\|^{2}_{L^{2}}+\mu^{\frac{2(N-b)-N(p+1)}{p+1}}\omega\|u\|^{2}_{L^{2}}-P(u),$$ and $$\displaystyle I(u_{\mu})=\mu^{\frac{2(N-b)-(N-2)(p+1)}{p+1}}\|\nabla u\|^{2}_{L^{2}}-\mu^{\frac{2(N-b)-(N+2)(p+1)}{p+1}}\gamma^{2}\|xu\|^{2}_{L^{2}}-\frac{N(p-1)+2b}{2(p+1)}P(u).$$ Since $1+\frac{4-2b}{N}\leq p<2^{\circ}$, the first exponent of $\mu$ in $I(u_{\mu})$ is positive and the second is negative. Since $I(u)<0$, it follows that there exists $\mu_{0}>1$ such that $I(u_{\mu_{0}})=0$, while $I(u_{\mu})<0$ for $\mu\in[1,\mu_{0})$. For $\mu\in[1,\mu_{0}]$, since $K_{\omega}(u)<0$, there are two possibilities for $K_{\omega}(u_{\mu})$: a) $K_{\omega}(u_{\mu})<0$ for all $\mu\in[1,\mu_{0}]$; b) there exists $1<\mu_{1}\leq\mu_{0}$ such that $K_{\omega}(u_{\mu_{1}})=0$. For the case a), we have $I(u_{\mu_{0}})=0$ and $K_{\omega}(u_{\mu_{0}})<0$. By the definition of $d_{n}$, we have $S_{\omega}(u_{\mu_{0}})\geq d_{n}\geq d$.
Moreover, we have $$\displaystyle S_{\omega}(u)-S_{\omega}(u_{\mu_{0}})=\frac{1}{2}\left(1-\mu_{0}^{\frac{2(N-b)-(N-2)(p+1)}{p+1}}\right)\|\nabla u\|^{2}_{L^{2}}+\frac{1}{2}\left(1-\mu_{0}^{\frac{2(N-b)-(N+2)(p+1)}{p+1}}\right)\gamma^{2}\|xu\|^{2}_{L^{2}}+\frac{1}{2}\left(1-\mu_{0}^{\frac{2(N-b)-N(p+1)}{p+1}}\right)\omega\|u\|^{2}_{L^{2}},$$ and $$\displaystyle I(u)-I(u_{\mu_{0}})=\left(1-\mu_{0}^{\frac{2(N-b)-(N-2)(p+1)}{p+1}}\right)\|\nabla u\|^{2}_{L^{2}}-\left(1-\mu_{0}^{\frac{2(N-b)-(N+2)(p+1)}{p+1}}\right)\gamma^{2}\|xu\|^{2}_{L^{2}}.$$ Since $\mu_{0}>1$ and $1+\frac{4-2b}{N}\leq p<2^{\circ}$, it follows that $$S_{\omega}(u)-S_{\omega}(u_{\mu_{0}})\geq\frac{1}{2}(I(u)-I(u_{\mu_{0}}))=\frac{1}{2}I(u).$$ For the case b), we have $K_{\omega}(u_{\mu_{1}})=0$ and $I(u_{\mu_{1}})\leq 0$. By the definition of $d_{\omega}$, we have $S_{\omega}(u_{\mu_{1}})\geq d_{\omega}\geq d$. By the same argument as above, we have $$S_{\omega}(u)-S_{\omega}(u_{\mu_{1}})\geq\frac{1}{2}(I(u)-I(u_{\mu_{1}}))\geq\frac{1}{2}I(u).$$ In both cases, we prove that $$I(u)<2(S_{\omega}(u)-d).$$ Since the above argument is independent of $t\in[0,T)$, we get $I(u(t))<-\delta$ for all $t\in[0,T)$, where $\delta=2(d-S_{\omega}(u_{0}))>0$. This proves statement i) of the theorem. Next we prove ii). In the case $1<p<1+\frac{4-2b}{N}$, the global existence follows from the sharp Gagliardo-Nirenberg inequality. Therefore, we only consider the case $1+\frac{4-2b}{N}\leq p<2^{\circ}$. 1) Let us consider the case $u_{0}\in R_{+}$. Since $R_{+}$ is invariant under the flow of (1.1), we have $S_{\omega}(u(t))<d$ and $K_{\omega}(u(t))>0$ for any $t\in[0,T)$. Since $K_{\omega}(u(t))>0$, it follows that $\|u(t)\|^{2}_{H_{\omega}}>P(u(t))$.
Thus, $$\left(\frac{1}{2}-\frac{1}{p+1}\right)\|u(t)\|^{2}_{H_{\omega}}<\frac{1}{2}\|u(t)\|^{2}_{H_{\omega}}-\frac{1}{p+1}P(u(t))=S_{\omega}(u(t))<d.$$ We get $\|u(t)\|^{2}_{H_{\omega}}<\frac{2d(p+1)}{p-1}$ for any $t\in[0,T)$. Since $\|u\|^{2}_{H_{\omega}}\sim\|u\|^{2}_{\Sigma}$, this implies that the solution exists globally in time. 2) Let us now consider the case $u_{0}\in K_{+}$. Since $K_{+}$ is invariant under the flow of (1.1), we have $S_{\omega}(u(t))<d,K_{\omega}(u(t))<0$ and $I(u(t))>0$. It follows that $$\displaystyle\left(\frac{1}{2}-\frac{2}{N(p-1)+2b}\right)\|\nabla u(t)\|^{2}_{L^{2}}+\left(\frac{1}{2}+\frac{2}{N(p-1)+2b}\right)\gamma^{2}\|xu(t)\|^{2}_{L^{2}}+\frac{\omega}{2}\|u(t)\|^{2}_{L^{2}}<\frac{1}{2}\|u(t)\|^{2}_{H_{\omega}}-\frac{1}{p+1}P(u(t))=S_{\omega}(u(t))<d.$$ In the $L^{2}$-supercritical case $1+\frac{4-2b}{N}<p<2^{\circ}$, it follows from the above inequality that $\|\nabla u(t)\|^{2}_{L^{2}}<C$ for some constant $C>0$ and for any $t\in[0,T)$. This shows that the solution exists globally in time. In the $L^{2}$-critical case $p=1+\frac{4-2b}{N}$, we have (5.8) $$\displaystyle\gamma^{2}\|xu(t)\|^{2}_{L^{2}}+\frac{\omega}{2}\|u(t)\|^{2}_{L^{2}}<d.$$ Fix $t\in[0,T)$ and denote $u=u(t)$. We set $u_{\mu}(x)=\mu^{\frac{N(N-b)}{2N+4-2b}}u(\mu x)$. A direct computation shows that $$I(u_{\mu})=\mu^{\frac{4-2b}{N+2-b}}\|\nabla u\|^{2}_{L^{2}}-\mu^{-\frac{4N+4-2b}{N+2-b}}\gamma^{2}\|xu\|^{2}_{L^{2}}-\frac{N(p-1)+2b}{2(p+1)}P(u).$$ Thus, $I(u)>0$ implies that there exists $0<\mu_{0}<1$ such that $I(u_{\mu_{0}})=0$.
It follows that $$\displaystyle S_{\omega}(u_{\mu_{0}})=\gamma^{2}\|xu_{\mu_{0}}\|^{2}_{L^{2}}+\frac{1}{2}\omega\|u_{\mu_{0}}\|^{2}_{L^{2}}=\mu_{0}^{-\frac{4N+4-2b}{N+2-b}}\gamma^{2}\|xu\|^{2}_{L^{2}}+\frac{1}{2}\mu_{0}^{-\frac{2N}{N+2-b}}\omega\|u\|^{2}_{L^{2}}.$$ It follows from (5.8) that (5.9) $$\displaystyle S_{\omega}(u_{\mu_{0}})<\mu_{0}^{-\frac{4N+4-2b}{N+2-b}}d.$$ We now consider $K_{\omega}(u_{\mu_{0}})$, for which there are two possibilities. The first one is $K_{\omega}(u_{\mu_{0}})<0$. By the definition of $d_{n}$ and the fact that $I(u_{\mu_{0}})=0$, we have $$S_{\omega}(u_{\mu_{0}})\geq d_{n}\geq d>S_{\omega}(u).$$ It follows that $$S_{\omega}(u)-S_{\omega}(u_{\mu_{0}})<0,$$ that is, $$\displaystyle\left(1-\mu_{0}^{\frac{4-2b}{N+2-b}}\right)\|\nabla u\|^{2}_{L^{2}}+\left(1-\mu_{0}^{-\frac{4N+4-2b}{N+2-b}}\right)\gamma^{2}\|xu\|^{2}_{L^{2}}+\left(1-\mu_{0}^{-\frac{2N}{N+2-b}}\right)\omega\|u\|^{2}_{L^{2}}<0.$$ This implies that $$\|\nabla u\|^{2}_{L^{2}}\lesssim\gamma^{2}\|xu\|^{2}_{L^{2}}+\omega\|u\|^{2}_{L^{2}}.$$ Thanks to (5.8), we get $\|\nabla u\|^{2}_{L^{2}}<C$ for some constant $C>0$. The second possibility is that $K_{\omega}(u_{\mu_{0}})\geq 0$. In this case, using (5.9), we have $$S_{\omega}(u_{\mu_{0}})-\frac{1}{p+1}K_{\omega}(u_{\mu_{0}})<\mu_{0}^{-\frac{4N+4-2b}{N+2-b}}d.$$ It follows that $$\displaystyle\frac{p-1}{2(p+1)}\left(\mu_{0}^{\frac{4-2b}{N+2-b}}\|\nabla u\|^{2}_{L^{2}}+\mu_{0}^{-\frac{4N+4-2b}{N+2-b}}\gamma^{2}\|xu\|^{2}_{L^{2}}+\mu_{0}^{-\frac{2N}{N+2-b}}\omega\|u\|^{2}_{L^{2}}\right)<\mu_{0}^{-\frac{4N+4-2b}{N+2-b}}d.$$ We thus get $\|\nabla u\|^{2}_{L^{2}}<C$ for some constant $C>0$. In both possibilities, we have the boundedness of $\|\nabla u\|^{2}_{L^{2}}$. Since the above argument is independent of $t\in[0,T)$, we obtain the boundedness of $\|\nabla u(t)\|^{2}_{L^{2}}$ for any $t\in[0,T)$. Therefore, the solution exists globally in time in the $L^{2}$-critical case $p=1+\frac{4-2b}{N}$.
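The scaling exponents used in the $L^{2}$-critical case above can be confirmed with exact rational arithmetic. The following check is ours and not part of the original argument; it only verifies the change-of-variables algebra.

```python
# Exact check (ours, not part of the proof) of the scaling exponents for
# u_mu(x) = mu^{N(N-b)/(2N+4-2b)} u(mu x): writing s for the prefactor exponent,
# a change of variables gives
#   ||u_mu||_{L^2}^2 ~ mu^{2s-N},      ||x u_mu||_{L^2}^2 ~ mu^{2s-N-2},
#   ||grad u_mu||_{L^2}^2 ~ mu^{2s-N+2}, P(u_mu) ~ mu^{(p+1)s+b-N},
# with p = 1 + (4-2b)/N.
from fractions import Fraction as F

def critical_exponents(N, b):
    s = N * (N - b) / (2 * N + 4 - 2 * b)   # b is a Fraction, so s is exact
    p = 1 + (4 - 2 * b) / F(N)              # L^2-critical exponent
    l2 = 2 * s - N                          # exponent in ||u_mu||^2
    x2 = 2 * s - N - 2                      # exponent in ||x u_mu||^2
    grad = 2 * s - N + 2                    # exponent in ||grad u_mu||^2
    pot = (p + 1) * s + b - N               # exponent in P(u_mu)
    return l2, x2, grad, pot

for N in (1, 2, 3, 5):
    for b in (F(1, 4), F(1, 2), F(3, 4)):   # sample values with b < min(2, N)
        l2, x2, grad, pot = critical_exponents(N, b)
        assert l2 == -2 * N / (N + 2 - b)                 # mu_0^{-2N/(N+2-b)} factor
        assert x2 == -(4 * N + 4 - 2 * b) / (N + 2 - b)   # mu_0^{-(4N+4-2b)/(N+2-b)} factor
        assert grad == (4 - 2 * b) / (N + 2 - b)          # mu^{(4-2b)/(N+2-b)} factor
        assert pot == 0                                   # P is invariant under this scaling
```

In particular, $P(u_{\mu})=P(u)$ under this scaling, which is why no power of $\mu$ multiplies $P(u)$ in the expression for $I(u_{\mu})$.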
This completes the proof of the theorem. ∎ 6. Normalized ground states This section is devoted to the proof of Theorems 1.8 and 1.9 stated in the introduction. Before giving the proof of Theorem 1.8 we recall that the embedding $\Sigma(\mathbb{R}^{N})\hookrightarrow L^{r+1}(\mathbb{R}^{N})$ is compact, where $1\leq r<1+4/(N-2)$ ($N\geq 3$), $1\leq r<\infty$ ($N=1,2$); see [29, Lemma 3.1]. Now we give the proof of Theorem 1.8. Proof of Theorem 1.8. Let $\left\{u_{n}\right\}$ be a minimizing sequence for the problem $I_{q}$; then $\left\{u_{n}\right\}$ is bounded in $\Sigma(\mathbb{R}^{N})$. Indeed, by the Gagliardo-Nirenberg inequality (1.3) and Young's inequality we see that $$\int_{\mathbb{R}^{N}}|x|^{-b}|u_{n}|^{p+1}dx\leq{\varepsilon}\|\nabla u_{n}\|^{\left(\frac{N(p-1)}{2}+b\right)\alpha}_{L^{2}}+C_{{\varepsilon}}\|u_{n}\|^{\left(p+1-\frac{N(p-1)}{2}-b\right)\beta}_{L^{2}},$$ where $C_{{\varepsilon}}>0$ and $1/\alpha+1/\beta=1$. Now choosing $\alpha=\frac{4}{N(p-1)+2b}>1$ (which is possible due to the assumption $1<p<1+\frac{4-2b}{N}$), it follows that $$\int_{\mathbb{R}^{N}}|x|^{-b}|u_{n}|^{p+1}dx\leq{\varepsilon}\|\nabla u_{n}\|^{2}_{L^{2}}+C_{{\varepsilon}}q^{\left(p+1-\frac{N(p-1)}{2}-b\right)\beta}.$$ Eventually, we get $$E(u_{n})\geq\left(\frac{1}{2}-\frac{{\varepsilon}}{p+1}\right)\|\nabla u_{n}\|^{2}_{L^{2}}+\frac{\gamma^{2}}{2}\|x\,u_{n}\|^{2}_{L^{2}}-\frac{C_{{\varepsilon}}}{p+1}q^{\left(p+1-\frac{N(p-1)}{2}-b\right)\beta}.$$ Taking ${\varepsilon}>0$ sufficiently small, this implies that $\left\{u_{n}\right\}$ is bounded in $\Sigma(\mathbb{R}^{N})$. Therefore, there exists $u\in\Sigma(\mathbb{R}^{N})$ such that, up to a subsequence, we can suppose that $u_{n}$ converges to $u$ weakly in $\Sigma(\mathbb{R}^{N})$. Since $\Sigma(\mathbb{R}^{N})\hookrightarrow L^{r+1}(\mathbb{R}^{N})$ is compact, it follows that $u_{n}\rightarrow u$ in $L^{r+1}$ for $1\leq r<1+4/(N-2)$ if $N\geq 3$ and $1\leq r<\infty$ if $N=1,2$.
By (4.1), we have $P(u_{n})\rightarrow P(u)$ as $n\rightarrow\infty$. From the lower semi-continuity we have $$\|\nabla u\|^{2}_{L^{2}}+\frac{\gamma^{2}}{2}\|x\,u\|^{2}_{L^{2}}\leq\liminf_{n\rightarrow\infty}\left\{\|\nabla u_{n}\|^{2}_{L^{2}}+\frac{\gamma^{2}}{2}\|x\,u_{n}\|^{2}_{L^{2}}\right\}.$$ It follows that $E(u)\leq\liminf_{n\rightarrow\infty}E(u_{n})$ and $\|u\|^{2}_{L^{2}}=q$, which implies that $u$ is a minimizer of $I_{q}$ and $E(u)=\lim_{n\rightarrow\infty}E(u_{n})$; consequently $u_{n}\rightarrow u$ in $\Sigma(\mathbb{R}^{N})$ as $n\rightarrow+\infty$ and $u\in\mathcal{G}_{q}$, which completes the proof of Item (i). By the same argument as in the proof of Theorem 1.5 we get that there exist a positive and spherically symmetric function $\varphi$ and $\theta_{0}\in\mathbb{R}$ such that $u(x)=e^{i\theta_{0}}\varphi(x)$. This concludes the proof. ∎ Proof of Theorem 1.9. Let $p=1+\frac{4-2b}{N}$. Assume that $\left\{u_{n}\right\}$ is a minimizing sequence for $I_{q}$ with $q<\|Q\|^{2}_{L^{2}}$. Then $\left\{u_{n}\right\}$ is bounded in $\Sigma(\mathbb{R}^{N})$. Indeed, since $E(u_{n})\leq I_{q}+1$ for $n$ sufficiently large, by (2.5) we infer that $$\frac{1}{2}\|\nabla u_{n}\|^{2}_{L^{2}}\left(1-\left(\frac{q}{\|Q\|_{L^{2}}}\right)^{\frac{4-2b}{N}}\right)+\frac{\gamma^{2}}{2}\int_{\mathbb{R}^{N}}|x|^{2}|u_{n}|^{2}dx\leq I_{q}+1.$$ Therefore, we have that $\|u_{n}\|_{\Sigma(\mathbb{R}^{N})}$ is bounded. Thus there exists $u\in\Sigma(\mathbb{R}^{N})$ such that $u_{n}\rightharpoonup u$ in $\Sigma(\mathbb{R}^{N})$ and $u_{n}\rightarrow u$ in $L^{r+1}$ for $1\leq r<1+4/(N-2)$ if $N\geq 3$ and $1\leq r<\infty$ if $N=1,2$, as $n$ goes to $+\infty$. From here, the proof of Theorem 1.9 is completed exactly as the proof of Theorem 1.8. ∎ 7. The Supercritical Case In this section, we prove Theorem 1.10. We first give the following lemma. Lemma 7.1. Let $1+\frac{4-2b}{N}<p<1+\frac{4-2b}{N-2}$. The following facts hold: (i) $S_{q}\cap B_{r}$ is non-empty if and only if $q\leq\frac{r}{\gamma N}$.
(ii) For any $r>0$, there exists $q_{0}=q_{0}(r)$ such that, for every $q<q_{0}$, (7.1) $$\inf\left\{E(u),\quad u\in S_{q}\cap B_{rq/2}\right\}<\inf\left\{E(u),\quad u\in S_{q}\cap(B_{r}\setminus B_{rq})\right\}.$$ Proof. We set $\zeta(x):=\sqrt{q}\Phi(x)$, where $\Phi$ is given in (1.8). For any $r>0$, if $q\leq\frac{r}{\gamma N}$, it is clear that $$\|\zeta\|^{2}_{L^{2}}=q\quad\text{and}\quad\|\zeta\|^{2}_{H}=\|\nabla\zeta\|^{2}_{L^{2}}+\gamma^{2}\|x\,\zeta\|^{2}_{L^{2}}=\gamma N\|\zeta\|^{2}_{L^{2}}\leq r.$$ Here, the norm $\|\cdot\|^{2}_{H}$ is defined in (1.13). Therefore $\zeta\in S_{q}\cap B_{r}$. On the other hand, if $u\in S_{q}\cap B_{r}$, then from (1.9) we infer $$r\geq\|u\|^{2}_{H}=\|\nabla u\|^{2}_{L^{2}}+\gamma^{2}\|x\,u\|^{2}_{L^{2}}\geq\gamma Nq,$$ which completes the proof of statement (i). Our proof of statement (ii) is inspired by that of [3, Lemma 3.1]. From the Gagliardo-Nirenberg inequality (1.3) we get (7.2) $$\begin{cases}E(u)\geq\frac{1}{2}\|u\|^{2}_{H}-Cq^{\frac{p+1}{2}-\frac{N(p-1)}{4}-\frac{b}{2}}\|u\|^{\frac{N(p-1)}{2}+{b}}_{H}=\alpha_{q}(\|u\|^{2}_{H}),\\ E(u)\leq\frac{1}{2}\|u\|^{2}_{H}=\beta_{q}(\|u\|^{2}_{H}),\end{cases}$$ where $$\begin{cases}\alpha_{q}(t)=\frac{1}{2}t(1-2Cq^{\chi}t^{\delta}),\\ \beta_{q}(t)=\frac{1}{2}t,\end{cases}$$ and $$\chi=\frac{1}{2}\left(p+1-\frac{N(p-1)}{2}-b\right)>0,\quad\delta=\frac{N(p-1)+2b-4}{4}>0.$$ Note that, by (7.2), to prove (7.1) it suffices to show that there exists $0<q_{0}=q_{0}(r)\ll 1$ such that, for every $q<q_{0}$, $$\beta_{q}(qr/2)<\inf_{t\in(rq,r)}\alpha_{q}(t).$$ Now since $\alpha_{q}(t)>\frac{5}{16}t$ for $t\in(0,r)$ and $q<q_{0}(r)\ll 1$, we get $$\beta_{q}(qr/2)=\frac{1}{4}qr<\frac{5}{16}qr\leq\inf_{t\in(rq,r)}\alpha_{q}(t),$$ which completes the proof of the lemma. ∎ Proof of Theorem 1.10. Suppose that $\left\{u_{n}\right\}$ is a minimizing sequence for $I^{r}_{q}$.
Since $\left\{u_{n}\right\}\subset S_{q}\cap B_{r}$, it follows that $\left\{u_{n}\right\}$ is bounded in $\Sigma(\mathbb{R}^{N})$. Then there exists $u\in\Sigma(\mathbb{R}^{N})$ such that $u_{n}\rightharpoonup u$ in $\Sigma(\mathbb{R}^{N})$ and $u_{n}\rightarrow u$ in $L^{2}$ as $n\rightarrow\infty$. By the lower semi-continuity $$\|\nabla u\|^{2}_{L^{2}}+\gamma^{2}\|x\,u\|^{2}_{L^{2}}\leq\liminf_{n\rightarrow\infty}\left\{\|\nabla u_{n}\|^{2}_{L^{2}}+\gamma^{2}\|x\,u_{n}\|^{2}_{L^{2}}\right\},$$ we infer that $u\in S_{q}\cap B_{r}$ and $E(u)\leq\lim_{n\rightarrow\infty}E(u_{n})=I^{r}_{q}$. Thus, $u\in\mathcal{G}^{r}_{q}$ and $u_{n}\rightarrow u$ in $\Sigma(\mathbb{R}^{N})$. Moreover, by the same argument as in the proof of Theorem 1.8, we see that there exist a real-valued positive function $\varphi$ and $\theta\in\mathbb{R}$ such that $u=e^{i\theta}\varphi$. Now, since $\|\varphi^{\ast}\|^{2}_{L^{2}}=\|\varphi\|^{2}_{L^{2}}$ and $\|\nabla\varphi^{\ast}\|^{2}_{L^{2}}\leq\|\nabla\varphi\|^{2}_{L^{2}}$, from (4.5) we have that $\varphi^{\ast}\in S_{q}\cap B_{r}$. In addition, if we suppose that $\varphi$ is not radial, then by (4.5)-(4.6) we infer that $E(\varphi^{\ast})<E(\varphi)$, which is a contradiction. Therefore $\varphi$ is radial and radially decreasing, which completes the proof of statements (i) and (iii). Now we prove statement (ii). From Lemma 7.1 we infer that $\varphi\in B_{rq}$. This implies that $\varphi$ does not belong to the boundary of $S_{q}\cap B_{r}$. Then, we have that $\varphi$ is a critical point of $E$ on $S_{q}$, and there exists a Lagrange multiplier $\omega\in\mathbb{R}$ such that the Euler-Lagrange equation (7.3) $$-\Delta\varphi+\omega\varphi+\gamma^{2}|x|^{2}\varphi-|x|^{-b}|\varphi|^{p-1}\varphi=0$$ holds. Let $\zeta$ be the eigenfunction defined in (1.8) such that $\|\zeta\|^{2}_{L^{2}}=q$.
Then $\zeta\in S_{q}\cap B_{r}$ and $$I^{r}_{q}\leq E(\zeta)=\frac{1}{2}\gamma Nq-\frac{1}{p+1}\int_{\mathbb{R}^{N}}|x|^{-b}|\zeta|^{p+1}dx<\frac{1}{2}\gamma Nq.$$ Thus, from (7.3) we see that (7.4) $$\displaystyle\omega\|\varphi\|^{2}_{L^{2}}=-\|\nabla\varphi\|^{2}_{L^{2}}-\gamma^{2}\|x\,\varphi\|^{2}_{L^{2}}+\int_{\mathbb{R}^{N}}|x|^{-b}|\varphi|^{p+1}dx=-2I^{r}_{q}+\frac{p-1}{p+1}\int_{\mathbb{R}^{N}}|x|^{-b}|\varphi|^{p+1}dx>-\gamma Nq.$$ Therefore $\omega>-\gamma N$. Now, from (1.3) we obtain $$\displaystyle\omega\|\varphi\|^{2}_{L^{2}}=-\|\nabla\varphi\|^{2}_{L^{2}}-\gamma^{2}\|x\,\varphi\|^{2}_{L^{2}}+\int_{\mathbb{R}^{N}}|x|^{-b}|\varphi|^{p+1}dx\leq-\|\varphi\|^{2}_{H}+C\|\varphi\|^{\frac{N(p-1)}{2}+b}_{H}q^{\frac{p+1}{2}-\frac{N(p-1)}{4}-\frac{b}{2}}=-\|\varphi\|^{2}_{H}\left(1-C\|\varphi\|^{\frac{N(p-1)}{2}+b-2}_{H}q^{\frac{p+1}{2}-\frac{N(p-1)}{4}-\frac{b}{2}}\right)\leq-\|\varphi\|^{2}_{H}\left(1-C(rq)^{\frac{N(p-1)}{4}+\frac{b}{2}-1}q^{\frac{p+1}{2}-\frac{N(p-1)}{4}-\frac{b}{2}}\right)\leq-\|\varphi\|^{2}_{H}\left(1-Cq^{\frac{p-1}{2}}\right),$$ and with (1.9) we obtain $$\omega\leq-\gamma N\left(1-Cq^{\frac{p-1}{2}}\right).$$ This completes the proof of the theorem. ∎ 8. Orbital stability This section is devoted to the proof of Theorems 1.11 and 1.12. Proof of Theorem 1.11. We only consider the supercritical case $1+\frac{4-2b}{N}<p<1+\frac{4-2b}{N-2}$; the proof in the other cases, when $1<p\leq 1+\frac{4-2b}{N}$, is similar. We verify the statement of Theorem 1.11 (iii) by contradiction.
Then there exist ${\varepsilon}>0$ and two sequences $\left\{u_{0,n}\right\}\subset\Sigma(\mathbb{R}^{N})$ and $\left\{t_{n}\right\}\subset\mathbb{R}$ such that (8.1) $$\displaystyle\inf_{\varphi\in\mathcal{G}^{r}_{q}}\|u_{0,n}-\varphi\|_{\Sigma(\mathbb{R}^{N})}\rightarrow 0\quad\text{as $n\rightarrow+\infty$},$$ and (8.2) $$\displaystyle\inf_{\varphi\in\mathcal{G}^{r}_{q}}\|u_{n}(t_{n})-\varphi\|_{\Sigma(\mathbb{R}^{N})}\geq{\varepsilon}\quad\text{for every $n\in\mathbb{N}$.}$$ Here $u_{n}(t)$ is the maximal solution of (1.1) with initial datum $u_{0,n}$. Without loss of generality, we may assume that $\|u_{0,n}\|^{2}_{L^{2}}=q$. From (8.1) and the conservation of charge and energy we infer that $$\displaystyle\|u_{n}(t_{n})\|^{2}_{L^{2}}=\|u_{0,n}\|^{2}_{L^{2}}=q\quad\text{for every $n$,}$$ $$\displaystyle E(u_{n}(t_{n}))=E(u_{0,n})\rightarrow I^{r}_{q}\quad\text{as $n\rightarrow+\infty$.}$$ We claim that there exists a subsequence $\left\{u_{n_{k}}(t_{n_{k}})\right\}$ of $\left\{u_{n}(t_{n})\right\}$ such that $\|u_{n_{k}}(t_{n_{k}})\|^{2}_{H}\leq r$. Indeed, suppose that there exists $K\geq 1$ such that $\|u_{n}(t_{n})\|^{2}_{H}>r$ for every $n\geq K$. By continuity, there exists $t_{n}^{\ast}\in(0,t_{n})$ such that $\|u_{n}(t^{\ast}_{n})\|^{2}_{H}=r$. Since $\|u_{n}(t^{\ast}_{n})\|^{2}_{L^{2}}=q$, $\|u_{n}(t^{\ast}_{n})\|^{2}_{H}=r$ and $E(u_{n}(t^{\ast}_{n}))=E(u_{0,n})\rightarrow I^{r}_{q}$ as $n\rightarrow+\infty$, it follows that $\left\{u_{n}(t^{\ast}_{n})\right\}$ is a minimizing sequence of $I^{r}_{q}$. From Theorem 1.10, we infer that there exists $\psi\in\Sigma(\mathbb{R}^{N})$ such that $\|\psi\|^{2}_{L^{2}}=q$, $\|\psi\|^{2}_{H}=r$ and $E(\psi)=I^{r}_{q}$, which contradicts Lemma 7.1 (ii), because a minimizer such as $\psi$ cannot belong to the boundary of $S_{q}\cap B_{r}$.
Therefore, there exists a subsequence $\left\{u_{n_{k}}(t_{n_{k}})\right\}$ such that $\|u_{n_{k}}(t_{n_{k}})\|^{2}_{H}\leq r$ for all $k\geq 1$. In particular, $\left\{u_{n_{k}}(t_{n_{k}})\right\}$ is a minimizing sequence for $I^{r}_{q}$. Again from Theorem 1.10 we obtain, passing to a subsequence if necessary, $$\inf_{\varphi\in\mathcal{G}^{r}_{q}}\|u_{n_{k}}(t_{n_{k}})-\varphi\|_{\Sigma(\mathbb{R}^{N})}\rightarrow 0\quad\text{as $k\rightarrow+\infty$},$$ which contradicts (8.2) and finishes the proof. ∎ Next we study the instability of standing waves for (1.1) in the $L^{2}$-critical and $L^{2}$-supercritical cases. Proof of Theorem 1.12. Since $d_{n}\geq d_{\omega}$, we have $d=d_{\omega}$. From Lemma 5.1, $K_{\omega}(\varphi)=I(\varphi)=0$. Set $\varphi^{\lambda}(x)=\lambda\varphi(x)$. Since $$\displaystyle K_{\omega}(\varphi^{\lambda})=\lambda^{2}\|\varphi\|^{2}_{H_{\omega}}-\lambda^{p+1}P(\varphi),$$ $$\displaystyle I(\varphi^{\lambda})=\lambda^{2}(\|\nabla\varphi\|^{2}_{L^{2}}-\gamma^{2}\|x\varphi\|^{2}_{L^{2}})-\frac{N(p-1)+2b}{2(p+1)}\lambda^{p+1}P(\varphi),$$ it is easy to see that the equations $K_{\omega}(\varphi^{\lambda})=0$ and $I(\varphi^{\lambda})=0$ have the unique non-zero solution $\lambda=1$. It follows that for any $\lambda>1$, $$K_{\omega}(\varphi^{\lambda})<0,\quad I(\varphi^{\lambda})<0.$$ On the other hand, we notice that $\frac{d}{d\lambda}S_{\omega}(\varphi^{\lambda})=\lambda^{-1}K_{\omega}(\varphi^{\lambda})$. Thus, $S_{\omega}(\varphi^{\lambda})<S_{\omega}(\varphi)$ for any $\lambda>1$. Since $S_{\omega}(\varphi)=d_{\omega}=d$, we see that for any $\lambda>1$, $S_{\omega}(\varphi^{\lambda})<d,K_{\omega}(\varphi^{\lambda})<0,I(\varphi^{\lambda})<0$. This implies that $\varphi^{\lambda}\in K_{-}$ for any $\lambda>1$. Now let ${\varepsilon}>0$.
We take $\lambda_{1}>1$ sufficiently close to 1 such that $$\|\varphi^{\lambda_{1}}-\varphi\|_{\Sigma}=(\lambda_{1}-1)\|\varphi\|_{\Sigma}<{\varepsilon}.$$ Set $u_{0}=\varphi^{\lambda_{1}}$; we see that $u_{0}\in K_{-}$. By Theorem 1.7, the corresponding solution blows up in finite time. This proves statement i) of the theorem. Next we prove ii). In this case $d=d_{n}<d_{\omega}$. Since $\varphi\in\mathcal{M}_{\omega}$, we have $$S_{\omega}(\varphi^{\lambda})<S_{\omega}(\varphi)=d_{\omega}$$ for any $\lambda>1$. Since $\frac{d}{d\lambda}S_{\omega}(\varphi^{\lambda})=\lambda^{-1}K_{\omega}(\varphi^{\lambda})$ and since $K_{\omega}(\varphi)=0$, we have $\frac{d}{d\lambda}S_{\omega}(\varphi^{\lambda})<0$ for any $\lambda>1$. On the other hand, $S_{\omega}(\varphi)=d_{\omega}$ and $S_{\omega}(\varphi^{\lambda})\rightarrow-\infty$ as $\lambda\rightarrow\infty$. Thus, there exists $\lambda_{0}>1$ such that $S_{\omega}(\varphi^{\lambda})<S_{\omega}(\varphi^{\lambda_{0}})=d$ for $\lambda>\lambda_{0}$. It follows that $S_{\omega}(\varphi^{\lambda})<d,K_{\omega}(\varphi^{\lambda})<0,I(\varphi^{\lambda})<0$ for any $\lambda>\lambda_{0}$; that is, $\varphi^{\lambda}\in K_{-}$ for any $\lambda>\lambda_{0}$. Taking $\delta=(\lambda_{0}-1)\|\varphi\|_{\Sigma}$ and choosing $u_{0}=\varphi^{\lambda_{1}}$ for some $\lambda_{1}>\lambda_{0}$, the result follows. ∎ 9. Appendix In this appendix we show the uniqueness result for (1.6). More specifically, if $N\geq 3$, $0<b<1$ and $1<p<1+\frac{4-2b}{N-2}$, then for any $\omega>-\gamma\,N$ there exists a unique positive radial solution of (1.6). Throughout this appendix we assume that $N\geq 3$, $0<b<1$ and $1<p<1+\frac{4-2b}{N-2}$. In [26, Theorem 1], Shioji and Watanabe give a uniqueness result for positive radial solutions of $$\varphi^{\prime\prime}+\frac{N-1}{r}\varphi^{\prime}+g(r)\varphi+h(r)\varphi^{p}=0\quad\text{in $(0,+\infty)$,}$$ under appropriate assumptions on $g(r)$ and $h(r)$.
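For orientation, the reduction of (1.6) to this radial ODE rests on the identity $\Delta\varphi=\varphi''(r)+\frac{N-1}{r}\varphi'(r)$ for radial functions. The following numerical spot-check is ours and not part of the paper; the test function $\varphi(r)=e^{-r^{2}}$ and the sample point are arbitrary.

```python
# Numerical spot-check (ours; phi(r) = exp(-r^2) and the sample point are
# illustrative): for a radial function, the N-dimensional Laplacian equals
# phi''(r) + (N-1)/r * phi'(r).
import math

N = 3
phi = lambda r: math.exp(-r * r)

def lap_cartesian(x, eps=1e-4):
    """Laplacian of phi(|x|) via second central differences along each axis."""
    f = lambda y: phi(math.sqrt(sum(t * t for t in y)))
    out = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        out += (f(xp) - 2 * f(x) + f(xm)) / eps ** 2
    return out

def lap_radial(r, eps=1e-4):
    """phi''(r) + (N-1)/r * phi'(r) via central differences."""
    d2 = (phi(r + eps) - 2 * phi(r) + phi(r - eps)) / eps ** 2
    d1 = (phi(r + eps) - phi(r - eps)) / (2 * eps)
    return d2 + (N - 1) / r * d1

x = [0.3, -0.4, 0.5]
r0 = math.sqrt(sum(t * t for t in x))
# Exact value for comparison: the Laplacian of exp(-r^2) is (4 r^2 - 2 N) exp(-r^2).
exact = (4 * r0 ** 2 - 2 * N) * math.exp(-r0 ** 2)
assert abs(lap_cartesian(x) - exact) < 1e-5
assert abs(lap_radial(r0) - exact) < 1e-5
```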
Note that in our case, Eq. (1.6), we have $g(r)=-(\omega+\gamma^{2}r^{2})$ and $h(r)=r^{-b}$. The conditions required in [26, Theorem 1] are the following. (I) $g\in C^{1}((0,+\infty))$, $h\in C^{3}((0,+\infty))$; $g(r)>0$, $h(r)>0$ for every $r\in(0,+\infty)$. (II) $\lim_{r\rightarrow 0}\frac{1}{r^{N-1}}\int^{r}_{0}\tau^{N-1}\left[g(\tau)+h(\tau)\right]\,d\tau=0.$ (III) There exists $r^{\ast}\in(0,+\infty)$ such that (i) $r^{N-1}g(r)\in L^{1}((0,r^{\ast}))$, $r^{N-1}h(r)\in L^{1}((0,r^{\ast}))$; (ii) $\tau^{N-1}\left(g(\tau)+h(\tau)\right)\left(\frac{(r^{\ast})^{2-N}-\tau^{2-N}}{2-N}\right)\in L^{1}((0,r^{\ast}))$. (IV) $\lim_{r\rightarrow 0}a(r)<+\infty$, $\lim_{r\rightarrow 0}|\beta(r)|<+\infty$, $\lim_{r\rightarrow 0}c(r)\in[0,+\infty]$, $\lim_{r\rightarrow 0}a(r)g(r)=0$ and $\lim_{r\rightarrow 0}a(r)h(r)=0$, where $$\displaystyle a(r)=r^{\frac{2(N-1)(p+1)}{p+3}}h(r)^{-\frac{2}{p+3}},\quad\beta(r)=-\frac{1}{2}a^{\prime}(r)+\frac{N-1}{r}a(r),\quad c(r)=-\beta^{\prime}(r)+\frac{N-1}{r}\beta(r).$$ (V) There exists $k\in[0,+\infty)$ such that $$\displaystyle G(r)>0\,\,\,\text{on $(0,k)$ and}\,\,\,G(r)<0\,\,\text{on $(k,+\infty)$,}$$ where $$\displaystyle G(r)=-\beta(r)g(r)+\frac{1}{2}c^{\prime}(r)+\frac{1}{2}(ag)^{\prime}(r).$$ (VI) $G^{-}\neq 0$, where $G^{-}=\min\left\{G(r),0\right\}$ for $r\in(0,+\infty)$. Next we check the conditions (I)-(VI) to prove the uniqueness of a solution of (1.6). Since $N\geq 3$ and $0<b<1$, it is clear that the conditions (I)-(III) hold true. For simplicity, we assume that $\gamma=1$.
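Condition (V) can be illustrated numerically from the definitions above. The following check is ours, not part of the paper: it takes $\gamma=1$, one arbitrary admissible choice of $N$, $b$, $p$, $\omega$, approximates the derivatives in $\beta$, $c$ and $G$ by central finite differences, and confirms the single sign change of $G$ on a grid. This is a spot-check, not a proof.

```python
# Numerical illustration (ours): with gamma = 1 and sample admissible
# parameters, G(r) from condition (V) is positive near 0 and changes sign
# exactly once on the sampled interval. Derivatives are taken by central
# finite differences.
N, b, p, w = 3, 0.5, 2.0, 1.0        # requires 1 < p < 1 + (4-2b)/(N-2)

g = lambda r: -(w + r * r)           # g(r) = -(omega + gamma^2 r^2), gamma = 1
h = lambda r: r ** (-b)              # h(r) = r^{-b}
a = lambda r: r ** (2 * (N - 1) * (p + 1) / (p + 3)) * h(r) ** (-2 / (p + 3))

def d(f, r, eps=1e-3):               # central finite difference
    return (f(r + eps) - f(r - eps)) / (2 * eps)

beta = lambda r: -0.5 * d(a, r) + (N - 1) / r * a(r)
c = lambda r: -d(beta, r) + (N - 1) / r * beta(r)
G = lambda r: -beta(r) * g(r) + 0.5 * d(c, r) + 0.5 * d(lambda s: a(s) * g(s), r)

signs = [G(0.05 * k) > 0 for k in range(1, 200)]   # grid on (0, 10)
changes = sum(s0 != s1 for s0, s1 in zip(signs, signs[1:]))
assert signs[0] and not signs[-1] and changes == 1
```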
Recalling that $g(r)=-(\omega+r^{2})$ and $h(r)=r^{-b}$, straightforward calculations give $$\displaystyle a(r)=r^{2\frac{b+(N-1)(p+1)}{p+3}},\quad\beta(r)=\frac{1}{p+3}r^{\frac{2b+2N(p+1)-3p-5}{p+3}}\left(2(N-1)-b\right),$$ $$\displaystyle c(r)=\frac{1}{(p+3)^{2}}r^{\frac{2(b+N(p+1)-2(p+2))}{p+3}}\left(b-2(N-1)\right)\left(2b+N(p-1)-2(p+1)\right),$$ and $$\displaystyle G(r)=A\,r^{2}+B\,r+C,$$ where $$\displaystyle A=-(p+3)^{2}(2b+N(p-1)+4),\quad B=\omega(p+3)^{2}(2N-(2+b)),\quad C=(b-2N+2)(p(N-2)+b+N-4)(p(N-2)+2b-N-2).$$ Since $N\geq 3$, $0<b<1$ and $1<p<1+\frac{4-2b}{N-2}$, it is not hard to show that (IV)-(VI) hold true. In particular, we obtain $A<0$ and $C\geq 0$, and thus there exists $k\in[0,+\infty)$ such that $G(r)>0$ on $(0,k)$ and $G(r)<0$ on $(k,+\infty)$. Hence by [26, Theorem 1] we see that there exists a unique positive radial solution of (1.6). References [1] G. P. Agrawal. Nonlinear Fiber Optics. Academic Press, 2007. [2] G. Baym and C. J. Pethick. Ground state properties of magnetically trapped Bose-Einstein condensate rubidium gas. Phys. Rev. Lett., 76:6–9, 1996. [3] J. Bellazzini, N. Boussaid, L. Jeanjean, and N. Visciglia. Existence and stability of standing waves for supercritical NLS with a partial confinement. Comm. Math. Phys., 353(1):229–339, 2017. [4] R. Carles. Critical nonlinear Schrödinger equations with and without harmonic potential. Math. Models Methods Appl. Sci., 12(10):1513–1523, 2002. [5] R. Carles. Remarks on the nonlinear Schrödinger equation with harmonic potential. Ann. Henri Poincaré, 3:757–772, 2002. [6] T. Cazenave. Semilinear Schrödinger Equations. Courant Lecture Notes in Mathematics, 10. American Mathematical Society, Courant Institute of Mathematical Sciences, 2003. [7] J. Chen. On a class of nonlinear inhomogeneous Schrödinger equations. J. Appl. Math.
Comput., 32:237–253, 2010. [8] J. Chen and B. Guo. Sharp global existence and blowing up results for inhomogeneous Schrödinger equations. Discrete Contin. Dyn. Syst. Ser. B, 8:357–367, 2007. [9] J. Chen. On the inhomogeneous nonlinear Schrödinger equation with harmonic potential and unbounded coefficient. Czechos. Math. J., 60(3):715–736, 2010. [10] V. Combet and F. Genoud. Classification of minimal mass blow-up solutions for an ${L}^{2}$ critical inhomogeneous NLS. J. Evol. Equ., 16:483–500, 2016. [11] A. de Bouard and R. Fukuizumi. Stability of standing waves for nonlinear Schrödinger equations with inhomogeneous nonlinearities. Ann. Henri Poincaré, 6:1157–1177, 2005. [12] V. D. Dinh. Blowup of ${H}^{1}$ solutions for a class of the focusing inhomogeneous nonlinear Schrödinger equation. Nonlinear Analysis, 174:169–188, 2018. [13] L. G. Farah. Global well-posedness and blow-up on the energy space for the inhomogeneous nonlinear Schrödinger equation. J. Evol. Equ., 16(1):193–208, 2016. [14] B. Feng. Sharp threshold of global existence and instability of standing wave for the Schrödinger-Hartree equation with harmonic potential. Nonlinear Anal. Real World Appl., 31:132–145, 2016. [15] R. Fukuizumi. Stability and instability of standing waves for the Schrödinger equation with harmonic potential. Discrete Contin. Dynam. Systems, 7:525–544, 2000. [16] R. Fukuizumi. Stability of standing waves for nonlinear Schrödinger equations with critical power nonlinearity and potentials. Advances in Differential Equations, 10(2):259–276, 2005. [17] R. Fukuizumi and M. Ohta. Stability of standing waves for nonlinear Schrödinger equations with potentials. Differential and Integral Equations, 16(1):111–128, 2003. [18] R. Fukuizumi and M. Ohta. Instability of standing waves for nonlinear Schrödinger equations with inhomogeneous nonlinearities. J. Math. Kyoto University, 45:145–158, 2005. [19] F. Genoud. An inhomogeneous, ${L}^{2}$-critical, nonlinear Schrödinger equation. Z.
Anal. Anwend., 31(3):283–290, 2012. [20] F. Genoud and C. Stuart. Schrödinger equations with a spatially decaying nonlinearity: existence and stability of standing waves. Discrete Contin. Dyn. Syst., 21:137–186, 2008. [21] C. M. Guzmán. On well-posedness for the inhomogeneous nonlinear Schrödinger equation. Nonlinear Anal., 37:249–286, 2017. [22] H. Hajaiej. Cases of equality and strict inequality in the extended Hardy-Littlewood inequalities. Proc. Roy. Soc. Edinburgh Sect. A, 135(3):643–661, 2005. [23] X. Luo. Stability and multiplicity of standing waves for the inhomogeneous NLS equation with a harmonic potential. Nonlinear Anal. Real World Appl., 45:688–703, 2019. [24] T. Saanouni. Global well-posedness and instability of an inhomogeneous nonlinear Schrödinger equation. Med. J. Math., 12(2):387–417, 2015. [25] T. Saanouni. Remarks on the inhomogeneous fractional nonlinear Schrödinger equation. J. Math. Phys., 57(8):081503, 2016. [26] N. Shioji and K. Watanabe. A generalized Pohozaev identity and uniqueness of positive radial solutions of ${\Delta}u+g(r)u+h(r)u^{p}=0$. J. Differential Equations, 255:4448–4475, 2013. [27] T. Tao. A pseudoconformal compactification of the nonlinear Schrödinger equation and applications. New York J. Math., 15:265–282, 2009. [28] J. Zhang. Stability of attractive Bose-Einstein condensates. J. Statist. Phys., 101(3/4):731–746, 2000. [29] J. Zhang. Stability of standing waves for nonlinear Schrödinger equations with unbounded potentials. Z. Angew. Math. Phys., 51:498–503, 2000. [30] J. Zhang. Sharp threshold for global existence and blowup in nonlinear Schrödinger equation with harmonic potential. Commun. Partial Differ. Equ., 30:1429–1443, 2005. [31] S. Zhu. Blow-up solutions for the inhomogeneous Schrödinger equation with ${L}^{2}$ supercritical nonlinearity. J. Math. Anal. Appl., 409:760–776, 2014.
Baryon formation and dissociation in dense hadronic and quark matter Jin-cheng Wang Interdisciplinary Center for Theoretical Study and Department of Modern Physics, University of Science and Technology of China, Anhui 230026, People’s Republic of China Institute for Theoretical Physics, Johann Wolfgang Goethe University, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, Germany    Qun Wang Interdisciplinary Center for Theoretical Study and Department of Modern Physics, University of Science and Technology of China, Anhui 230026, People’s Republic of China Theoretical Physics Center for Science Facilities, Chinese Academy of Sciences, Beijing 100049, People’s Republic of China    Dirk H. Rischke Institute for Theoretical Physics, Johann Wolfgang Goethe University, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, Germany Frankfurt Institute for Advanced Studies, Ruth-Moufang-Str. 1, D-60438 Frankfurt am Main, Germany Abstract We study the formation of baryons as composed of quarks and diquarks in hot and dense hadronic matter in a Nambu–Jona-Lasinio (NJL)–type model. We first solve the Dyson-Schwinger equation for the diquark propagator and then use this to solve the Dyson-Schwinger equation for the baryon propagator. We find that stable baryon resonances exist only in the phase of broken chiral symmetry. In the chirally symmetric phase, we do not find a pole in the baryon propagator. In the color-superconducting phase, there is a pole, but it has a large decay width. The diquark does not need to be stable in order to form a stable baryon, a feature typical for so-called Borromean states. Varying the strength of the diquark coupling constant, we also find similarities to the properties of an Efimov state. A baryon is a color-singlet bound state of three constituent quarks.
Since the interaction between two quarks is attractive in the color-antitriplet channel, baryon formation can be regarded as a two-step process: first, two quarks combine to form a diquark with color-antitriplet quantum numbers, and then this diquark combines with another color-triplet quark to form a color-singlet bound state GellMann:1964nj ; Ida:1966ev ; Lichtenberg:1967zz ; Efimov:1990uz ; Anselmino:1992vg ; Buck:1992wz ; Ishii:1993rt ; AbuRaddad:2002pw ; Zou:2005xy . At extremely high baryonic densities and low temperatures quarks form Cooper pairs in the attractive color-antitriplet channel, leading to the phenomenon of color superconductivity Alford:1997zt ; Rapp:1997zu ; Pisarski:1999bf ; Hong:1999fh [for recent reviews, see e.g. Refs. Alford:2007xm ; Wang:2009xf ]. Because of asymptotic freedom, the interaction is weak and, just like in BCS theory, the Cooper pair wave function has a correlation length that exceeds the interparticle distance. However, as the density is lowered, the interaction strength increases and the Cooper pair becomes more and more localized Abuki:2001be ; Abuki:2006dv . Eventually, Cooper pairs will form tightly bound molecular diquark states Kitazawa:2007zs . These may pick up another quark with the right color to form a color-singlet baryon. This is what must happen across the deconfinement transition into the hadronic phase. Understanding the nature of the transition between dense hadronic and quark matter is one of the scientific goals of the Compressed Baryonic Matter (CBM) experiment planned at the Facility for Antiproton and Ion Research (FAIR) Senger:2009zz . 
In this paper we investigate the formation and dissociation of baryons in different regions of the phase diagram of strongly interacting matter: the phase of broken chiral symmetry (hadronic phase), the phase of restored chiral symmetry (the quark-gluon plasma) above and below the dissociation boundary for diquarks, and the phase where quark matter is a color superconductor. We use an NJL-type model Hatsuda:1994pi ; Buballa:2003qv for two quark flavors and employ the following strategy. First, we compute the full propagator for the scalar diquark state by solving a Dyson-Schwinger equation. With the diquark propagator and an additional quark propagator, we then solve a Dyson-Schwinger equation for the baryon propagator. Our approach bears some similarities to previous studies of diquark and baryon formation Ishii:1995bu ; Pepin:1999hs ; Bentz:2001vc ; Bentz:2002um ; Gastineau:2005wm . These works also considered an NJL-type model, but they solved the full Faddeev equation instead of a (simpler) Dyson-Schwinger equation to obtain baryon states. The difference is that in the Faddeev equation the coupling between quark and diquark is not assumed to be local: a non-static quark can be exchanged between them. Our work is based on the cruder approximation of a local quark-diquark coupling. These works also considered the axial-vector diquark state, not only the scalar one, and thus were also able to investigate excited baryon states. On the other hand, in those works only the zero-temperature case was studied, while we also consider non-zero temperature. Moreover, we do not assume the diquark to be a well-defined quasi-particle in order to solve the Dyson-Schwinger equation (an approximation employed in the aforementioned works in order to solve the Faddeev equation). We shall see that diquarks can also be unstable, but still give rise to stable baryons, a typical feature of a Borromean state also encountered in atomic and nuclear physics.
Varying the diquark coupling strength, we also find that our baryon has properties which bear similarities to those of an Efimov state. We use natural units $\hbar=c=k_{B}=1$; the metric tensor is $g_{\mu\nu}={\rm diag}(+,-,-,-)$. The Lagrangian of the two-flavor NJL model with diquark-diquark interactions reads $$\displaystyle\mathcal{L}_{NJL}$$ $$\displaystyle=$$ $$\displaystyle\overline{\psi}(i\gamma_{\mu}\partial^{\mu}-\hat{m}_{0}+\hat{\mu}\gamma_{0})\psi$$ (1) $$\displaystyle+G_{S}[(\overline{\psi}\psi)^{2}+(\overline{\psi}i\gamma_{5}\boldsymbol{\tau}\psi)^{2}]$$ $$\displaystyle+G_{D}[\overline{\psi}i\gamma_{5}\tau_{2}J_{a}\psi_{C}][\overline{\psi}_{C}i\gamma_{5}\tau_{2}J_{a}\psi]\;.$$ Here, we have suppressed the color indices in the fundamental representation, $a=1,2,3$, and the flavor indices, $\alpha=u,d$, in the quark spinors $\psi\equiv\psi_{a\alpha}$. The bare mass matrix is $\hat{m}_{0}=\mathrm{diag}(m_{u}^{(0)},m_{d}^{(0)})$ and the chemical potential matrix is $\hat{\mu}=\mathrm{diag}(\mu_{u},\mu_{d})$; $\tau_{s}$ ($s=1,2,3$) are the Pauli matrices in flavor space, $(J_{a})_{bc}=-i\epsilon_{abc}$ are the antisymmetric color matrices, and $G_{S}$ and $G_{D}$ are coupling constants for quark-antiquark and quark-quark interactions, respectively. In principle, $G_{D}$ can be related to $G_{S}$ via a Fierz transformation, but we choose to keep it as a free parameter, allowing us to explore a wider range of potentially interesting phenomena within our effective model for the strong interaction. In the following, we neglect the contribution from the isovector quark-antiquark channel, $\overline{\psi}i\gamma_{5}\boldsymbol{\tau}\psi=0$.
We also decompose the scalar quark current in terms of a condensate part and a fluctuation, $\bar{\psi}_{\alpha}\psi_{\alpha}=\sigma_{\alpha}+\delta_{\alpha}$, where $\sigma_{\alpha}=\left\langle\overline{\psi}_{\alpha}\psi_{\alpha}\right\rangle$ is the chiral condensate, and we work in the mean-field approximation, i.e., we neglect terms of order $O(\delta_{\alpha}^{2})$. Similarly, we decompose the diquark current as $\overline{\psi}i\gamma_{5}\tau_{2}J_{a}\psi_{C}=(\Delta_{a}+\delta_{a})/(2G_{D})$ and drop the quadratic term in $\delta_{a}$, where the diquark condensate is $\Delta_{a}=2G_{D}\left\langle\overline{\psi}i\gamma_{5}\tau_{2}J_{a}\psi_{C}\right\rangle$. The diquark condensate fluctuation can be introduced by the replacement $\Delta_{a}\rightarrow\Delta_{a}+\varphi_{a}$, keeping quadratic terms in the fluctuation $\varphi_{a}$. This operation is equivalent to performing a Hubbard-Stratonovich transformation in the diquark sector. The Lagrangian (1) now becomes $$\displaystyle\mathcal{L}_{NJL}$$ $$\displaystyle\approx$$ $$\displaystyle-\frac{1}{2}\overline{\Psi}S^{-1}\Psi-\frac{1}{4G_{D}}\sum_{a}|\Delta_{a}|^{2}-G_{S}(\sigma_{u}+\sigma_{d})^{2}$$ (2) $$\displaystyle-\frac{1}{8G_{D}}(\varphi_{aR}^{2}+\varphi_{aI}^{2})+\frac{1}{2}\overline{\Psi}\varphi_{ai}\widehat{\Gamma}_{ai}\Psi\;.$$ Here $\Psi=(\psi,\psi_{C})^{T}$ and $\overline{\Psi}=(\overline{\psi},\overline{\psi}_{C})$ are quark spinors in the Nambu-Gorkov (NG) basis. The charge-conjugate spinors are defined by $\psi_{C}=C\overline{\psi}^{T}$ and $\overline{\psi}_{C}=\psi^{T}C$ with $C=i\gamma^{2}\gamma^{0}$. The complex diquark fluctuation $\varphi_{a}$ has been decomposed in terms of its real and imaginary parts, $\varphi_{a}=(\varphi_{aR}+i\varphi_{aI})/\sqrt{2}$, with color indices $a=1,2,3$.
The inverse fermion propagator $S^{-1}$ in the NG basis is given by $$S^{-1}(P)=-\left(\begin{array}[]{cc}P_{\mu}\gamma^{\mu}+\hat{\mu}\gamma^{0}-\hat{m}&i\gamma_{5}\tau_{2}J_{a}\Delta_{a}^{\dagger}\\ i\gamma_{5}\tau_{2}J_{a}\Delta_{a}&P_{\mu}\gamma^{\mu}-\hat{\mu}\gamma^{0}-\hat{m}\end{array}\right),$$ (3) where $\hat{m}=\mathrm{diag}(m_{u},m_{d})$ is the quark mass matrix with corrections from the chiral condensates, $m_{i}=m_{i}^{(0)}-2G_{S}(\sigma_{u}+\sigma_{d})$ with $i=u,d$. The quark-quark-diquark vertices $\widehat{\Gamma}_{ai}$ are given by $\widehat{\Gamma}_{aR}=\frac{i}{\sqrt{2}}\gamma_{5}\tau_{2}J_{a}\tau_{1}^{NG}$ and $\widehat{\Gamma}_{aI}=\frac{i}{\sqrt{2}}\gamma_{5}\tau_{2}J_{a}\tau_{2}^{NG}$, where $\tau_{s}^{NG}$ ($s=1,2,3$) are Pauli matrices in NG space. In the following, without loss of generality we choose the diquark condensate to be $\Delta_{a}=\delta_{a3}\Delta_{3}$. Note that we only consider the scalar channel for the diquark condensate, as we are only interested in the lowest baryon state, not the higher-lying excited ones. Including the axial-vector channel is straightforward, but will not modify our results qualitatively. Finally, we remark that the tadpole term $\varphi_{a}\Delta_{a}^{*}+\varphi_{a}^{*}\Delta_{a}$, which in principle also appears in Eq. (2), is cancelled by the term $\varphi_{a}\overline{\psi}_{C}i\gamma_{5}\tau_{2}J_{a}\psi+\varphi_{a}^{*}\overline{\psi}i\gamma_{5}\tau_{2}J_{a}\psi_{C}$ at the one-loop level, where $\overline{\psi}_{C}\psi+\overline{\psi}\psi_{C}$ contracts and forms a quark loop in the NG basis. The cancellation condition is just the gap equation for $\Delta$. We now add the baryon field to our Lagrangian.
We assume the baryon to be generated by an interaction term between two quark and two diquark fields, $$\displaystyle{\cal L}_{B}$$ $$\displaystyle=$$ $$\displaystyle G_{B}\varphi_{a}^{\dagger}\bar{\psi}_{a}\psi_{b}\varphi_{b}$$ (4) $$\displaystyle\simeq$$ $$\displaystyle-\frac{1}{2G_{B}}\overline{\mathbf{B}}\mathbf{B}+\frac{1}{2}\overline{\mathbf{B}}\widehat{\Gamma}_{Bi}\Psi_{a}\varphi_{ai}+\frac{1}{2}\varphi_{ai}\overline{\Psi}_{a}\widehat{\Gamma}_{Bi}^{*}\mathbf{B}\;.$$ Here, we decomposed $\psi_{a}\varphi_{a}=\left\langle\psi_{a}\varphi_{a}\right\rangle+\beta_{a}$, defined the baryonic field as $B=G_{B}\left\langle\psi_{a}\varphi_{a}\right\rangle$, and neglected terms of order $O(\beta_{a}^{2})$. The baryonic fields in the NG basis are then denoted by $\mathbf{B}=(B,B_{c})^{T}$ and $\overline{\mathbf{B}}=(\overline{B},\overline{B}_{c})$. The baryon-quark-diquark vertices are $\widehat{\Gamma}_{BR}=\frac{1}{\sqrt{2}}1_{NG}$ and $\widehat{\Gamma}_{BI}=i\frac{1}{\sqrt{2}}\tau_{3}^{NG}$, respectively. The sum of the Lagrangians (2) and (4) is the starting point for our further treatment. In the following, for the sake of simplicity we assume exact isospin symmetry and work in the chiral limit, i.e., $\sigma_{u}=\sigma_{d}\equiv\sigma$, thus $m_{u}=m_{d}=m_{q}$, $\mu_{u}=\mu_{d}=\mu_{q}$, and $m_{u}^{(0)}=m_{d}^{(0)}\equiv 0$. We now derive the full diquark propagator via the Dyson-Schwinger equation, $$\displaystyle D_{i,a}^{-1}(p_{0},\mathbf{p})$$ $$\displaystyle=$$ $$\displaystyle-\frac{1}{4G_{D}}-\Pi_{i,a}(p_{0},\mathbf{p})\;,$$ (5) where $p_{0}=i2\pi nT$ are the bosonic Matsubara frequencies ($n=0,\pm 1,\pm 2,\ldots$), $i,j=R,I$, and $a,b=1,2,3$ are fundamental colors. The full propagator $D_{i,a}$ and the self-energy $\Pi_{i,a}$ only carry one index $i=R,I$ and one color index $a$, because they are diagonal in the space of $R,I$ and in color space.
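As a small illustration of the frequency bookkeeping above, the bosonic Matsubara frequencies $p_{0}=i2\pi nT$ (and, for contrast, the fermionic ones $i(2n+1)\pi T$ that enter the quark loops) can be tabulated directly. A minimal sketch; the temperature value is an illustrative placeholder, not one of the parameter sets used in this paper.

```python
import math

def bosonic_matsubara(n, T):
    """Bosonic Matsubara frequency p0 = i * 2*pi*n*T, returned as a complex number."""
    return complex(0.0, 2.0 * math.pi * n * T)

def fermionic_matsubara(n, T):
    """Fermionic Matsubara frequency i*(2n+1)*pi*T, shown for comparison."""
    return complex(0.0, (2 * n + 1) * math.pi * T)

T = 0.03  # temperature in GeV (illustrative value only)
print([bosonic_matsubara(n, T).imag for n in (-1, 0, 1)])
print([fermionic_matsubara(n, T).imag for n in (-1, 0)])
```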
The self-energy has the property $\Pi_{R/I,a}=\frac{1}{2}\left(\Pi_{0}^{a}\pm\Pi_{1}^{a}\right)$, where $\Pi_{0}^{a}$ and $\Pi_{1}^{a}$ depend on the diagonal and the off-diagonal parts of the quark propagator, respectively, and $\Pi_{1}^{a}=\delta_{a3}\Pi_{1}^{3}$. The expressions for $\Pi_{0}^{a}(p_{0},\mathbf{p})$ and $\Pi_{1}^{a}(p_{0},\mathbf{p})$ are $$\displaystyle\Pi_{0}^{1,2}$$ $$\displaystyle=$$ $$\displaystyle 2\int\frac{d^{3}k}{(2\pi)^{3}}\,c_{k,p+k}$$ $$\displaystyle\times\left[\frac{e_{1}^{\prime}\epsilon_{k}^{e^{\prime}}+\xi_{k}^{e^{\prime}}}{2e_{1}^{\prime}\epsilon_{k}^{e^{\prime}}}\frac{1-f(e_{1}^{\prime}\epsilon_{k}^{e^{\prime}})-f(\xi_{p+k}^{e})}{p_{0}-e_{1}^{\prime}\epsilon_{k}^{e^{\prime}}-\xi_{p+k}^{e}}\right.$$ $$\displaystyle+\left.\frac{e_{1}\epsilon_{p+k}^{e}+\xi_{p+k}^{e}}{2e_{1}\epsilon_{p+k}^{e}}\frac{1-f(\xi_{k}^{e^{\prime}})-f(e_{1}\epsilon_{p+k}^{e})}{p_{0}-\xi_{k}^{e^{\prime}}-e_{1}\epsilon_{p+k}^{e}}\right]\;,$$ $$\displaystyle\Pi_{0}^{3}$$ $$\displaystyle=$$ $$\displaystyle 4\int\frac{d^{3}k}{(2\pi)^{3}}\frac{e_{1}^{\prime}\epsilon_{k}^{e^{\prime}}+\xi_{k}^{e^{\prime}}}{2e_{1}^{\prime}\epsilon_{k}^{e^{\prime}}}\frac{e_{1}\epsilon_{p+k}^{e}+\xi_{p+k}^{e}}{2e_{1}\epsilon_{p+k}^{e}}$$ $$\displaystyle\times\frac{1-f(e_{1}^{\prime}\epsilon_{k}^{e^{\prime}})-f(e_{1}\epsilon_{p+k}^{e})}{p_{0}-e_{1}^{\prime}\epsilon_{k}^{e^{\prime}}-e_{1}\epsilon_{p+k}^{e}}c_{k,p+k},$$ $$\displaystyle\Pi_{1}^{1,2}$$ $$\displaystyle=$$ $$\displaystyle 0,$$ $$\displaystyle\Pi_{1}^{3}$$ $$\displaystyle=$$ $$\displaystyle-\int\frac{d^{3}k}{(2\pi)^{3}}\frac{\Delta_{3}^{2}}{e_{1}e_{1}^{\prime}\epsilon_{k}^{e}\epsilon_{p+k}^{e^{\prime}}}$$ (6) $$\displaystyle\times\frac{1-f(e_{1}\epsilon_{k}^{e})-f(e_{1}^{\prime}\epsilon_{p+k}^{e^{\prime}})}{p_{0}-e_{1}\epsilon_{k}^{e}-e_{1}^{\prime}\epsilon_{p+k}^{e^{\prime}}}c_{k,p+k},$$ where summations over $e,e^{\prime},e_{1},e_{1}^{\prime}=\pm 1$ are implied, $f(x)=1/(e^{x/T}+1)$ is the Fermi-Dirac
distribution, $E_{k}=\sqrt{k^{2}+m_{q}^{2}}$, $\xi_{k}^{e}=eE_{k}-\mu$, $\epsilon_{k}^{e}=\sqrt{(\xi_{k}^{e})^{2}+\Delta^{2}}$, and $c_{k,p+k}=1+ee^{\prime}\frac{\mathbf{k}\cdot(\mathbf{p+k})+m_{q}^{2}}{E_{k}E_{p+k}}$. Some simple properties of $D_{i,a}^{-1}$ are: (1) $D_{R,3}^{-1}\neq D_{I,3}^{-1}$ when $\Delta_{3}\neq 0$; (2) $D_{i,1}^{-1}=D_{j,2}^{-1}$ for any $i,j=R,I$; (3) $D_{i,1}^{-1}=D_{i,2}^{-1}=D_{i,3}^{-1}=D^{-1}$ when $\Delta_{3}=0$ for any $i=R,I$. We also have $\Pi_{1}^{a}=0$ when $\Delta_{a}=0$. The spectral density for diquarks is then given by $$\displaystyle\rho_{i,a}(\omega,\mathbf{p})=\frac{1}{\pi}$$ (7) $$\displaystyle\times$$ $$\displaystyle\frac{\mathrm{Im}D_{i,a}^{-1}(\omega+i\eta,\mathbf{p})}{\left[\mathrm{Re}D_{i,a}^{-1}(\omega+i\eta,\mathbf{p})\right]^{2}+\left[\mathrm{Im}D_{i,a}^{-1}(\omega+i\eta,\mathbf{p})\right]^{2}}\;,$$ where we analytically continued $p_{0}\rightarrow\omega+i\eta$ with real $\omega$ and $\eta$ a small positive number. We have similar properties for the spectral densities as for $D_{i,a}^{-1}$. With the spectral density, we can obtain the full propagator via the dispersion relation $$D_{i,a}(p_{0},\mathbf{p})=\int_{-\infty}^{\infty}d\omega\frac{\rho_{i,a}(\omega,\mathbf{p})}{\omega-p_{0}}\;.$$ (8) From the Lagrangian (4) the 11-component in NG space of the inverse baryon propagator is $S_{B}^{-1}=-1/(2G_{B})-\Sigma$, where $$\Sigma(P)=-\frac{1}{4}\sum_{a}\int_{K}S_{11}^{a}(P-K)[D_{R,a}(K)+D_{I,a}(K)]$$ (9) is the 11-component of the baryon self-energy. The quark propagator in NG space, $S_{11}^{a}$, is diagonal in color space. In the presence of a non-vanishing diquark condensate, $S_{11}^{1}=S_{11}^{2}\neq S_{11}^{3}$. If the diquark condensate vanishes, $S_{11}^{1}=S_{11}^{2}=S_{11}^{3}$ and $D_{R,a}=D_{I,b}$ for any $a,b$. In order to evaluate $\Sigma$, we insert Eq. (8) into Eq. (9).
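The building blocks of Eqs. (6)–(8) are straightforward to prototype numerically. The sketch below is a rough illustration, not the actual code behind the figures: it implements the Fermi-Dirac distribution, the free and gapped dispersions $\xi_{k}^{e}$ and $\epsilon_{k}^{e}$, and the spectral-density formula of Eq. (7) for a given complex inverse propagator; the numerical parameter values are made-up placeholders.

```python
import math

def fermi(x, T):
    """Fermi-Dirac distribution f(x) = 1/(exp(x/T) + 1)."""
    return 1.0 / (math.exp(x / T) + 1.0)

def xi(k, e, m_q, mu):
    """Free-quark dispersion xi_k^e = e*E_k - mu with E_k = sqrt(k^2 + m_q^2)."""
    return e * math.sqrt(k * k + m_q * m_q) - mu

def eps(k, e, m_q, mu, Delta):
    """Gapped quasiparticle dispersion eps_k^e = sqrt((xi_k^e)^2 + Delta^2)."""
    x = xi(k, e, m_q, mu)
    return math.sqrt(x * x + Delta * Delta)

def spectral_density(inv_prop):
    """Eq. (7): rho = (1/pi) * Im D^{-1} / (|Re D^{-1}|^2 + |Im D^{-1}|^2),
    for an analytically continued (complex) inverse propagator value."""
    return inv_prop.imag / (math.pi * abs(inv_prop) ** 2)

# Illustrative numbers in GeV: m_q = 0.3, mu = 0.25, Delta = 0.05
print(eps(0.4, +1, 0.3, 0.25, 0.05))          # gapped dispersion at k = 0.4 GeV
print(spectral_density(complex(0.02, 0.01)))  # rho for a sample D^{-1} value
```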
Since we are interested in baryons at rest, we shall take the $\mathbf{p}=\mathbf{0}$ limit of the positive energy component of $S_{B}^{-1}$, $S_{B,+}^{-1}(p_{0},\mathbf{p}=\mathbf{0})=\frac{1}{2}\mathrm{Tr}\left[S_{B}^{-1}\Lambda_{\mathbf{p=0}}^{+}\gamma^{0}\right]$, where $\Lambda_{\mathbf{p}}^{s}$ is the energy projector $\Lambda_{\mathbf{p}}^{s}=\frac{1}{2}\left[1+s\left(\gamma_{0}\gamma\cdot\mathbf{p}+\gamma_{0}M_{B}\right)/E_{p}\right]$, with $E_{p}=\sqrt{p^{2}+M_{B}^{2}}$ and $s=\pm 1$. In the homogeneous limit, $\mathbf{p}=\mathbf{0}$, the energy projector assumes a simple form, $\Lambda_{\mathbf{p=0}}^{s}=\frac{1}{2}(1+s\gamma_{0})$, which is independent of $M_{B}$. Then, we obtain the spectral density as $$\displaystyle\rho_{B}(\omega,\mathbf{p})=\frac{1}{\pi}$$ (10) $$\displaystyle\times$$ $$\displaystyle\frac{\mathrm{Im}S_{B,+}^{-1}(\omega+i\eta,\mathbf{0})}{\left[\mathrm{Re}S_{B,+}^{-1}(\omega+i\eta,\mathbf{0})\right]^{2}+\left[\mathrm{Im}S_{B,+}^{-1}(\omega+i\eta,\mathbf{0})\right]^{2}}\;,$$ where we have again analytically continued $p_{0}\rightarrow\omega+i\eta$. In our calculations for Figs. 1–6, we choose the following parameters: $G_{S}=5.1\;\mathrm{GeV}^{-2}$, $\Lambda=0.65\;\mathrm{GeV}$ (momentum cutoff). For Figs. 1 and 6, we vary $G_{D}$ in order to investigate the effect of the diquark coupling constant on the boundaries of the diquark dissociation and the color-superconducting (CSC) phase and on the baryon formation. For Figs. 2–5, we set $G_{D}=3.11\;\mathrm{GeV}^{-2}$. This value is in the weak-coupling region, so the diquark is unstable in the phase of broken chiral symmetry. Nevertheless, we shall show that a quark and an unstable diquark can form a stable baryon in this phase. For Figs. 4–6, we choose $G_{B}=10.04$ GeV${}^{-1}$. The baryon coupling constant $G_{B}$ is actually the static approximation for an intermediate quark propagator in the Faddeev equation.
This approximation allows us to investigate baryon properties also at nonzero temperature and density. We fix $G_{B}$ to obtain a baryon mass of 940 MeV in the vacuum. In the phase diagram of Fig. 1, we choose four sets of values for temperature and quark chemical potential, $(T,\mu_{q})=$ (0.03,0.25), (0.03,0.33), (0.03,0.36), and (0.15,0.36), all in GeV. They correspond to points A, B, C, and D. The red solid line separates the regions (indicated by $\chi$SB/$\chi$SR) where chiral symmetry is broken/restored; CSC denotes the color-superconducting phase. The blue dashed lines show the diquark dissociation boundaries for three values of the diquark coupling constant, $G_{D}=3.11,3.8,4.025$ (in units of GeV${}^{-2}$). Below a diquark dissociation line, the equation $\mathrm{Re}D^{-1}(\omega,\mathbf{p}=\mathbf{0})=0$ has a real solution $\omega$, the so-called diquark pole. The corresponding regions in Fig. 1 are filled with light blue, green, and magenta color, respectively. These poles also exist in the CSC phases; however, for the sake of clarity we choose not to color the respective regions. The CSC phases are bounded by the red solid line from the left and by the dash-dotted lines from above (from bottom to top for $G_{D}=3.11,3.8,4.025$, respectively). Note that the diquark coupling constants we have chosen here are in the weak-coupling or BCS regime. As we increase $G_{D}$, Bose-Einstein condensation of diquarks could take place in the region below the dissociation lines, provided the bare quark mass is nonzero Nishida:2005ds ; Deng:2006ed ; Sun:2007fc ; Kitazawa:2007zs ; Brauner:2008td ; Abuki:2010jq ; Basler:2010xy . Note that in Ref. Kitazawa:2007zs , a vanishing decay width was imposed as an additional criterion for the location of the dissociation boundary. The numerical results for the spectral densities are presented in Figs. 2–3. The upper panel of Fig. 2 shows the diquark spectral densities in the phase of broken chiral symmetry (point A of Fig. 1).
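Locating a diquark (or baryon) pole amounts to a one-dimensional root search: find a real $\omega$ with $\mathrm{Re}\,D^{-1}(\omega,\mathbf{p}=\mathbf{0})=0$ once the real part is available as a function of $\omega$. A minimal bisection sketch; the toy inverse propagator below is purely illustrative and merely stands in for the actual NJL expression.

```python
def bisect_pole(re_inv_prop, w_lo, w_hi, tol=1e-10):
    """Find omega with re_inv_prop(omega) = 0 by bisection.
    Assumes re_inv_prop changes sign on [w_lo, w_hi]."""
    f_lo = re_inv_prop(w_lo)
    if f_lo * re_inv_prop(w_hi) > 0:
        raise ValueError("no sign change: no pole bracketed in this interval")
    while w_hi - w_lo > tol:
        mid = 0.5 * (w_lo + w_hi)
        if f_lo * re_inv_prop(mid) <= 0:
            w_hi = mid
        else:
            w_lo, f_lo = mid, re_inv_prop(mid)
    return 0.5 * (w_lo + w_hi)

# Toy Re D^{-1} with a single root at omega = 0.94 GeV (illustration only)
omega_pole = bisect_pole(lambda w: w - 0.94, 0.0, 2.0)
print(omega_pole)
```

In practice one would pass in a routine that evaluates the real part of Eq. (5) after the analytic continuation $p_{0}\rightarrow\omega+i\eta$, and scan brackets along the $\omega$ axis.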
In the homogeneous limit ($\mathbf{p}=\mathbf{0}$, red solid line), no diquark poles exist (since $G_{D}=3.11\;\mathrm{GeV}^{-2}$ is too small), and the curves are smooth. The middle panel shows the diquark spectral densities in the phase of restored chiral symmetry, below the dissociation boundary, but above the CSC phase (point B in Fig. 1). In the homogeneous limit, there is one sharp peak at $\omega=0$. The non-zero width of this peak implies that the diquark is unstable. As the temperature grows, the diquarks dissociate, so the peak is replaced by a broad bump shown in the lower panel (corresponding to point D in Fig. 1). In the three panels (from top to bottom) of Fig. 3 we show $\rho_{R,3}$, $\rho_{I,3}$, and $\rho_{i,1/2}$ in the CSC phase (point C in Fig. 1), respectively. For $\rho_{I,3}$ there are $\delta$-function-like peaks in the range $|\omega|<2\Delta$, indicating stable diquarks. For $\rho_{R,3}$ and $\rho_{i,1/2}$, these peaks attain a small width. Also, as $\mathbf{p}$ increases, all peaks become wider. Note that the spectral densities are not odd functions of $\omega$, because $\mu_{q}$ is non-zero. We see that stable diquarks only exist in the CSC region. Unstable diquark poles outside the CSC region are actually the diquark fluctuations discussed in Ref. Kitazawa:2005vr . One can also see from the lower two panels that there are five Nambu-Goldstone (NG) modes which have poles at $\omega=0$ for zero momenta. In the lowest panel, there are four NG modes, i.e., the real and imaginary scalar fields with red and green color. In the middle panel, there is one NG mode for the imaginary scalar field with blue color. The existence of these NG modes follows from the relations $\frac{1}{2}\Pi_{I/R,1/2}(0,\mathbf{0})+\frac{1}{4G_{D}}=0$ and $\frac{1}{2}\Pi_{I,3}(0,\mathbf{0})+\frac{1}{4G_{D}}=0$. In Fig.
4 we show the real and imaginary parts of the inverse retarded Green's function for baryons (positive energy component), again at points A, B, C, and D in the phase diagram of Fig. 1. In the phase of broken chiral symmetry with $m_{q}\neq 0$ and $\Delta=0$ (point A), there are no diquark condensates or resonances, but there are stable baryon resonances: in the first panel (from top to bottom), we see that $\mathrm{Re}S_{B,+}^{-1}(\omega_{B},\mathbf{0})=0$ has a solution at $\omega_{B}+3\mu_{q}\approx 0.94$ GeV, i.e., close to the rest mass of the nucleon. There is a region $\omega_{B}\in[-3(m_{q}+\mu_{q}),3(m_{q}-\mu_{q})]$, or $M_{B}\in[-3m_{q},3m_{q}]$, where the imaginary part $\mathrm{Im}S_{B,+}^{-1}(\omega_{B},\mathbf{0})$ is very small (smaller than $10^{-6}$ GeV) in the homogeneous limit. The pole position lies just inside this region, i.e., $M_{B}<3m_{q}$: the baryon weighs less than its constituents. It is therefore stable, although its constituents by themselves are unbound, like in a Borromean state in atomic or nuclear physics. The second panel shows the case with diquark resonances but outside the CSC phase (point B). There is no positive energy baryon pole in this case. In the region of higher temperatures and quark chemical potentials where chiral symmetry is restored and where there are neither diquark condensates nor resonances (point D), there are also no baryon resonances and the absolute value of $\mathrm{Im}S_{B,+}^{-1}$ is very large. This case is shown in the third panel. In the CSC phase (point C), there are baryon poles but with large imaginary parts, indicating unstable baryon resonances, as shown in the fourth panel. This is confirmed by a broad bump in the baryon spectral density in the fourth panel of Fig. 5. The results for the baryon spectral density at different values of $T$ and $\mu_{q}$ are presented in Fig. 5.
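The Borromean stability criterion used above is simple bookkeeping: shift the pole energy $\omega_{B}$ by $3\mu_{q}$ and test whether the resulting mass $M_{B}$ falls inside the zero-width window $[-3m_{q},3m_{q}]$. A sketch with illustrative numbers; the constituent mass value below is an assumption for demonstration, not a parameter quoted in the paper.

```python
def baryon_is_borromean_stable(omega_B, mu_q, m_q):
    """Baryon is stable (Borromean) if the pole mass M_B = omega_B + 3*mu_q
    lies inside the window [-3*m_q, 3*m_q], where Im S_{B,+}^{-1} vanishes."""
    M_B = omega_B + 3.0 * mu_q
    return -3.0 * m_q < M_B < 3.0 * m_q

# Illustrative values in GeV: pole at omega_B + 3*mu_q ~ 0.94 (point A),
# assumed constituent mass m_q = 0.35, so 3*m_q = 1.05 > 0.94
mu_q = 0.25
omega_B = 0.94 - 3.0 * mu_q
print(baryon_is_borromean_stable(omega_B, mu_q, m_q=0.35))
```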
In the first and second panels (from top to bottom), where $T=0.01,0.03$ GeV, we observe that the baryon spectral density hardly changes with respect to its width or peak position when varying the chemical potential from 0.29 to 0.32 GeV. In the third panel, with $T=0.05$ GeV, the peak position shows a small increase with increasing $\mu_{q}$. For these larger temperatures, however, the width shows a dramatic increase: the curves for $(T,\mu_{q})=(0.05,0.31),(0.05,0.32)$ GeV are not even visible on the current scale, implying the disappearance of the baryon resonances. For the curves still visible at $T=0.05$ GeV, the widths are very large, indicating highly unstable baryon resonances. In the CSC phase with $(T,\mu_{q})=(0.03,0.36)$ GeV (the fourth panel) the baryon resonance is also quite unstable, since the peak is very low and broad on the scale of the other panels in this figure. In Fig. 6 we vary the diquark coupling constant in order to investigate where the baryon is stable at $T=\mu_{q}=0$. We choose as $x$-axis the renormalized coupling $G_{r}$ defined in Eq. (40) of Ref. Abuki:2006dv . The advantage of using $G_{r}$ instead of $G_{D}$ is that the existence of stable diquark bound states is determined by the sign of $G_{r}$: for $G_{r}>0$ diquark bound states exist, for $G_{r}<0$ they do not. The shaded region in Fig. 6 indicates where baryons are stable, i.e., where the imaginary part of the inverse baryon propagator vanishes, or where the spectral density may exhibit a $\delta$-function-like peak (provided the real part also vanishes inside this region). Above the blue curve the system is in a three-quark state (for weak diquark coupling) or in a quark-diquark state (for strong diquark coupling).
At moderately weak negative $G_{r}$ the diquark is not stable, but, as indicated by the red curve, we obtain a stable baryonic bound state with mass $M_{B,phys}<3m_{q}$, where $M_{B,phys}$ is defined as the location of the peak of the baryon spectral density. If we increase $G_{r}$ towards positive values, i.e., into the range where diquarks are stable, the pole energy of a stable baryonic bound state must lie in the range $[-(\omega_{D}+2\mu_{q})-m_{q},\omega_{D}+2\mu_{q}+m_{q}]$ ($\omega_{D}$ is the energy of the diquark at $\mathbf{p}=\mathbf{0}$). The upper boundary of this range corresponds to the blue curve, which is consequently given by $\omega_{D}+2\mu_{q}-2m_{q}$. The threshold for stable baryons shown in Fig. 6 by the blue curve is similar to the boundary for Efimov states in non-relativistic cold-atom physics: there, the boundary is proportional to $-1/a_{s}^{2}$, where $a_{s}$ is the scattering length. In our case, $G_{r}\sim a_{s}$, cf. Eq. (39) of Ref. Abuki:2006dv . The curvature of the boundary in Fig. 6 indeed indicates a quadratic behavior as a function of $G_{r}$. The red curve for the baryon bound state was computed with a fixed coupling constant $G_{B}$. There are some similarities between this state and an Efimov state. An Efimov state, too, cannot form if the two-body coupling constant is too weak, i.e., for small negative $G_{r}$. On the other hand, for a very strong two-body coupling, i.e., for small positive $G_{r}$, there may be a competition between the two-body bound state and the three-body bound state. There are also differences to an Efimov state; for instance, in Fig. 6 the baryon bound state does not cross the decay threshold for positive $G_{r}$. We perceive this to be an artifact of a fixed quark-diquark coupling $G_{B}$ in our model. In a full calculation the quark-diquark coupling $G_{B}$ should vary in proportion to the inverse (dressed) mass of the quark exchanged between quark and diquark Gastineau:2005wm .
Then, $G_{B}$ will also become a function of the diquark coupling constant $G_{D}$. A characteristic of Efimov physics is an infinite tower of higher-lying excited states. In order to show that these also occur in our case, we would have to solve an eigenvalue equation for baryonic bound states. This is a subject for future investigations. In order to see the interplay between the stable diquark and the baryon more explicitly, we present in Fig. 7 the diquark spectral density and the imaginary part of the inverse baryon propagator for $T=\mu_{q}=0$. For a strong diquark coupling (red solid line), one finds two components in the spectral density, a continuous component $\rho_{c}$ and a pole component, $$\rho_{\delta}(\omega,\mathbf{p})=A(p)\delta[\omega-\omega_{p}(p)]-A(p)\delta[\omega+\omega_{p}(p)],$$ (11) where the amplitude is given by $A(p)=(\partial\mathrm{Re}\Pi/\partial\omega)^{-1}|_{\omega=\omega_{p}(p)}$, and $\omega_{p}(p)$ is the energy of the pole with $p=|\mathbf{p}|$. If the diquark coupling is weak (blue dashed line), only the continuous component remains, indicating an unstable diquark. Both components are taken into account in calculating the baryon self-energy. From the imaginary part of the inverse baryon propagator, one finds a region $M_{B}\in[-3m_{q},3m_{q}]$ where $\mathrm{Im}S_{B,+}^{-1}=0$ in the weak-coupling case $G_{D}=3.11\;\mathrm{GeV}^{-2}$ (blue dashed line), where a stable baryon can be formed. In the strong-coupling case $G_{D}=5.95\;\mathrm{GeV}^{-2}$ (red solid line), two additional bumps appear which overlap with the window $M_{B}\in[-3m_{q},3m_{q}]$. Since nonzero $\mathrm{Im}S_{B,+}^{-1}$ indicates unstable baryons, the region for stable baryons is reduced. This shows that the interplay between the pole and continuum parts of the diquark spectral density is an important ingredient in the formation of baryons.
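The pole amplitude $A(p)=(\partial\,\mathrm{Re}\Pi/\partial\omega)^{-1}|_{\omega=\omega_{p}(p)}$ in Eq. (11) can be estimated with a central finite difference once $\mathrm{Re}\,\Pi$ is available as a callable. The linear toy self-energy below only exercises the formula and is not the NJL expression.

```python
def pole_amplitude(re_Pi, omega_p, h=1e-6):
    """A(p) = (d Re Pi / d omega)^{-1} evaluated at omega = omega_p,
    estimated via a central finite difference of step h."""
    slope = (re_Pi(omega_p + h) - re_Pi(omega_p - h)) / (2.0 * h)
    return 1.0 / slope

# Toy self-energy Re Pi(omega) = 2*omega (illustration only), so A = 1/2
print(pole_amplitude(lambda w: 2.0 * w, 0.1))
```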
Neglecting the latter and taking only the pole part into account misses important physics (such as the formation of a Borromean-type stable baryon from an unstable diquark and a quark). Finally, we would like to make some comparison to previous works. In the Faddeev approach Ishii:1995bu ; Pepin:1999hs , it is assumed that the baryon is stable and that the baryonic $T$-matrix has a separable form, which reduces the full Faddeev equation to the Bethe-Salpeter equation (BSE) for the baryonic vertex. Furthermore, for numerical simplicity it is also assumed that the diquark is stable. Thus, the baryon mass can be obtained by solving an eigenvalue equation (i.e., the BSE) for the baryonic vertex. In this approach, the effect of temperature has so far been neglected due to the increase in numerical complexity. An unstable diquark would also make the equation numerically hard to solve. Thus, so far the baryon was only treated as a stable bound state of a quark and a stable diquark. Since the baryon is stable by assumption, the properties of baryon resonances cannot be obtained in the Faddeev approach, where baryon dissociation is simply identified with the condition that the baryon mass exceeds the sum of the quark and diquark masses. However, this baryon dissociation condition is not correct in the CSC region, where quarks are gapped. The correct way is to check whether there are $\delta$-function-like peaks in the baryon spectral density, as done in this paper. Our static approximation simplifies the Faddeev equation to an RPA-type quasi-fermion BSE, so that baryon formation and dissociation at nonzero temperature and chemical potential become tractable. We have calculated the full baryonic spectral densities in different phases, from which the baryon dissociation condition is correctly obtained. The authors of Refs. Bentz:2001vc ; Bentz:2002um also used the static approximation in order to simplify the Faddeev equation, but they focus on different issues. 
For the diquark propagator, they used the proper-time regularization method, which introduces an effective confinement, but this method is not applicable at nonzero temperature. The diquark $T$-matrix is approximated by a constant term $1/4G_{D}$ plus pole terms, which is equivalent to assuming a stable diquark, while we employ the full spectral density of the diquark. There are some differences between our results and theirs. At low temperatures we also calculated the baryon mass as a function of chemical potential: we find only a slight decrease of the baryon mass with chemical potential, while they obtain a significant decrease. The reason is that we did not include vector mesons and thus do not obtain large baryon number densities. We also did not find a way of introducing confinement at nonzero temperature. We plan to address these issues in a future study. In Ref. Gastineau:2005wm , the static approximation and a stable diquark are used. The authors considered a three-flavor NJL model, and the baryon mass is found to decrease by 25% at normal nuclear matter density. We also plan to extend our model to the three-flavor case and to study the properties of nuclear matter in the future. In conclusion, we used an NJL-type model to compute the full diquark propagator and its spectral density in different regions of the phase diagram of strongly interacting matter. Baryon formation and dissociation in dense nuclear and quark matter is then studied via the baryon poles and spectral densities, incorporating the previously obtained diquark propagator. We find that stable baryon states with zero width are present in the phase of broken chiral symmetry. There are no baryon poles in the chirally symmetric phase. In the CSC phase, baryon poles exist, but they are found to be unstable due to a sizable width. We also pointed out that the stable baryon states found by us have some similarities to Borromean and Efimov states in atomic or nuclear physics. Acknowledgement. 
JCW and QW thank Jian Deng for many insightful discussions especially in the technique of principal value integration, and thank Lian-yi He for helpful discussions. QW is supported in part by the ’100 talents’ project of Chinese Academy of Sciences (CAS) and by the National Natural Science Foundation of China (NSFC) under Grant Nos. 10675109 and 10735040. JCW is supported in part by China Scholarship Council. References (1) M. Gell-Mann, Phys. Lett. 8, 214 (1964). (2) M. Ida and R. Kobayashi, Prog. Theor. Phys.  36, 846 (1966). (3) D. B. Lichtenberg and L. J. Tassie, Phys. Rev.  155, 1601 (1967). (4) G. V. Efimov, M. A. Ivanov and V. E. Lyubovitskij, Z. Phys.  C 47, 583 (1990). (5) M. Anselmino, E. Predazzi, S. Ekelin, S. Fredriksson and D. B. Lichtenberg, Rev. Mod. Phys.  65, 1199 (1993). (6) A. Buck, R. Alkofer and H. Reinhardt, Phys. Lett.  B 286, 29 (1992). (7) N. Ishii, W. Bentz and K. Yazaki, Phys. Lett.  B 318, 26 (1993). (8) L. J. Abu-Raddad, A. Hosaka, D. Ebert and H. Toki, Phys. Rev.  C 66, 025206 (2002) [arXiv:nucl-th/0206002]. (9) B. S. Zou and D. O. Riska, Phys. Rev. Lett.  95, 072001 (2005) [arXiv:hep-ph/0502225]. (10) M. G. Alford, K. Rajagopal and F. Wilczek, Phys. Lett. B 422, 247 (1998) [arXiv:hep-ph/9711395]. (11) R. Rapp, T. Schafer, E. V. Shuryak and M. Velkovsky, Phys. Rev. Lett.  81, 53 (1998) [arXiv:hep-ph/9711396]. (12) R. D. Pisarski and D. H. Rischke, Phys. Rev. D 61, 051501 (2000) [arXiv:nucl-th/9907041]. (13) D. K. Hong, V. A. Miransky, I. A. Shovkovy and L. C. R. Wijewardhana, Phys. Rev. D 61, 056001 (2000) [Erratum-ibid. D 62, 059903 (2000)] [arXiv:hep-ph/9906478]. (14) M. G. Alford, A. Schmitt, K. Rajagopal and T. Schafer, Rev. Mod. Phys.  80, 1455 (2008) [arXiv:0709.4635 [hep-ph]]. (15) Q. Wang, Prog. Phys.  30, 173 (2010) [arXiv:0912.2485 [nucl-th]]. (16) H. Abuki, T. Hatsuda and K. Itakura, Phys. Rev.  D 65, 074014 (2002) [arXiv:hep-ph/0109013]. (17) H. Abuki, Nucl. Phys.  A 791, 117 (2007) [arXiv:hep-ph/0605081]. (18) M. 
Kitazawa, D. H. Rischke and I. A. Shovkovy, Phys. Lett.  B 663, 228 (2008) [arXiv:0709.2235 [hep-ph]]. (19) P. Senger, T. Galatyuk, A. Kiseleva, D. Kresan, A. Lebedev, S. Lebedev and A. Lymanets, J. Phys. G 36, 064037 (2009). (20) T. Hatsuda and T. Kunihiro, Phys. Rept.  247, 221 (1994) [arXiv:hep-ph/9401310]. (21) M. Buballa, Phys. Rept.  407, 205 (2005) [arXiv:hep-ph/0402234]. (22) N. Ishii, W. Bentz and K. Yazaki, Nucl. Phys.  A 587, 617 (1995). (23) S. Pepin, M. C. Birse, J. A. McGovern and N. R. Walet, Phys. Rev.  C 61, 055209 (2000) [arXiv:hep-ph/9912475]. (24) W. Bentz and A. W. Thomas, Nucl. Phys. A 696, 138 (2001) [arXiv:nucl-th/0105022]. (25) W. Bentz, T. Horikawa, N. Ishii and A. W. Thomas, Nucl. Phys. A 720, 95 (2003) [arXiv:nucl-th/0210067]. (26) F. Gastineau and J. Aichelin, AIP Conf. Proc. 739, 398 (2005). (27) Y. Nishida and H. Abuki, Phys. Rev.  D 72, 096004 (2005) [arXiv:hep-ph/0504083]. (28) J. Deng, A. Schmitt and Q. Wang, Phys. Rev.  D 76, 034013 (2007) [arXiv:nucl-th/0611097]. (29) G. f. Sun, L. He and P. Zhuang, Phys. Rev.  D 75, 096004 (2007) [arXiv:hep-ph/0703159]. (30) T. Brauner, Phys. Rev.  D 77, 096006 (2008) [arXiv:0803.2422 [hep-ph]]. (31) H. Abuki, G. Baym, T. Hatsuda and N. Yamamoto, Phys. Rev. D 81, 125010 (2010). (32) H. Basler and M. Buballa, Phys. Rev.  D 82, 094004 (2010) [arXiv:1007.5198 [hep-ph]]. (33) M. Kitazawa, T. Koide, T. Kunihiro and Y. Nemoto, Prog. Theor. Phys.  114, 117 (2005) [arXiv:hep-ph/0502035].
The MalSource Dataset: Quantifying Complexity and Code Reuse in Malware Development Alejandro Calleja, Juan Tapiador, and Juan Caballero A. Calleja and J. Tapiador are with the Department of Computer Science, Universidad Carlos III de Madrid, 28911 Leganes, Madrid, Spain. E-mail: {accortin, jestevez}@inf.uc3m.es. J. Caballero is with the IMDEA Software Institute, Madrid, Spain. E-mail: [email protected]. Abstract During the last decades, the problem of malicious and unwanted software (malware) has surged in number and sophistication. Malware plays a key role in most of today’s cyber attacks and has consolidated as a commodity in the underground economy. In this work, we analyze the evolution of malware from 1975 to date from a software engineering perspective. We analyze the source code of 456 samples from 428 unique families and obtain measures of their size, code quality, and estimates of the development costs (effort, time, and number of people). Our results suggest an exponential increment of nearly one order of magnitude per decade in aspects such as size and estimated effort, with code quality metrics similar to those of benign software. We also study the extent to which code reuse is present in our dataset. We detect a significant number of code clones across malware families and report which features and functionalities are more commonly shared. Overall, our results support claims about the increasing complexity of malware and its production progressively becoming an industry. I Introduction The malware industry seems to be in better shape than ever. In their 2015 Internet Security Threat Report [1], Symantec reports that the total number of known malware samples in 2014 amounted to 1.7 billion, with 317 million (26%) new samples discovered just in the preceding year. This translates into nearly 1 million new samples created every day. 
A recent statement by Panda Security [2] provides a proportionally similar aggregate: out of the 304 million malware samples detected by their engines throughout 2015, 84 million (27%) were new. These impressive figures can be partially explained by the adoption of reuse-oriented development methodologies that make it exceedingly easy for malware writers to produce new samples, and also by the increasing use of packers with polymorphic capabilities. Another key reason is the fact that over the last decade malware has become a profitable industry, thereby acquiring the status of a commodity [3, 4] in the flourishing underground economy of cyber crime [5, 6]. From a purely technical point of view, malware has experienced a remarkable evolutionary process since the 1980s, moving from simple file-infection viruses to stand-alone programs with network propagation capabilities, support for distributed architectures based on rich command and control protocols, and a variety of modules to execute malicious actions in the victim. Malware writers have also rapidly adapted to new platforms as soon as these acquired a substantial user base, such as the recent case of smartphones [7]. The surge in number, sophistication, and repercussion of malware attacks has gone hand in hand with much research, both industrial and academic, on defense and analysis techniques. The majority of such investigations have focused on binary analysis, since most malware samples are distributed in this form. Only very rarely do researchers have access to the source code and can report insights gained from its inspection. (Notable exceptions include the analysis of the source code of 4 IRC bots by Barford and Yegneswaran [8] and the work of Kotov and Massacci on 30 exploit kits [9].) One consequence of the limited availability of malware source code is a poor understanding of the malware development process, its properties as a software artifact, and how these properties have changed in the last decades. 
In this paper, we present a study of malware evolution from a software engineering perspective. Our analysis is based on a dataset collected by the authors over two years and composed of the source code of 456 malware samples ranging from 1975 to 2016. Our dataset includes, among others, early viruses, worms, botnets, exploit kits, and remote access trojans (RATs). This is the largest dataset of malware source code presented in the literature. We perform two separate analyses on this dataset. First, we provide quantitative measurements on the evolution of malware over the last four decades. Second, we study the prevalence of source code reuse among these malware samples. To measure the evolution of malware complexity over time we use several metrics proposed in the software engineering community. Such metrics are grouped into three main categories: (i) measures of size: number of source lines of code (SLOC), number of source files, number of different programming languages used, and number of function points (FP); (ii) estimates of the development cost: effort (man-months), required time, and number of programmers; and (iii) measures of code quality: comments-to-code ratio, complexity of the control flow logic, and maintainability of the code. We use these metrics to compare malware source code to a selection of benign programs. We also study the prevalence of source code reuse in our dataset. Code reuse (or code clone) detection is an important problem for detecting plagiarism and copyright violations, and for preserving the cleanness and simplicity of big software projects [10]. Several authors have suggested that code cloning is a fairly common practice in large code bases, even if it also leads to bug propagation and poorly maintainable code [11]. 
Given the large amount of malware discovered on a daily basis, it is a common belief that most malware is not developed from scratch, but by reusing previously written code that is slightly modified according to the attacker’s needs [12]. Detecting clones in malware source code enables a better understanding of the mechanisms used by malware and of their evolution over time, and may reveal relations among malware families. This paper builds on our previous work that studied malware evolution using software metrics on a dataset of 151 malware samples covering 30 years [13]. In this work, we present our updated dataset, which triples the number of original samples and extends the covered time frame to four decades. We redo the analysis on malware evolution to cover the new samples, and also provide a new analysis on malware source code reuse. The main findings of our work include: 1. We observe an exponential increase of roughly one order of magnitude per decade in the number of source code files and in the SLOC and FP counts per sample. Malware samples from the 1980s and 1990s contain just one or a few source code files, are generally programmed in one language, and have SLOC counts of a few thousands at most. Contrarily, samples from the late 2000s and later often contain hundreds of source code files spanning various languages, with an overall SLOC count of tens or even hundreds of thousands. 2. In terms of development costs, our estimates indicate that malware writing has evolved from small projects of just one developer working no more than 1-2 months full time, to larger programming teams investing up to 6-8 months and, in some cases, possibly more. 3. A comparison with selected benign software projects reveals that the largest malware samples in our dataset present software metrics akin to those of products such as Snort or Bash, but are still quite far from larger software solutions. 4. 
The code quality metrics analyzed do not suggest significant differences between malware and benign software. Malware has slightly higher values of code complexity and also better maintainability, though the differences are not remarkable. 5. We find quite a large number of code reuse instances in our dataset, specifically in C/C++ and Assembly code, ranging from a few lines to several thousand lines of code in length. An analysis of such clones reveals that commonly shared functionality belongs to one of four groups: (a) Anti-analysis capabilities such as unpacking routines, polymorphic engines, and code to kill antivirus (AV) processes. (b) Core malware artifacts, including shellcodes for initial infection, spreading routines, and code for various actions on the victim. (c) Data clones such as arrays of passwords, process names, and IP addresses. (d) Data structures and associated functions, such as those needed to interact with PE or ELF files, popular communication protocols, or the operating system kernel through documented and undocumented APIs. The remainder of this paper is organized as follows. In Section II we describe our dataset of malware source code. Section III presents our quantitative measurements on the evolution of malware development. In Section IV we detail our code clone detection approach and results. Section V discusses the suitability of our approach, its limitations, and additional conclusions. Finally, Section VII concludes the paper. II Dataset Our work is based on a dataset of malware source code collected by the authors over two years (2015–2016). Collecting malware source code is a challenging endeavor because malware is typically released in binary form. Only occasionally is its source code released or leaked, with its availability being strongly biased towards classical viruses and early specimens. When leaked, the source code may be difficult to access in underground forums. 
These challenges make it impossible to be complete. While we try to collect as many samples as possible, the goal is to acquire representative examples of the malware ecosystem during the last 40+ years, constrained by the limited availability. Samples were obtained from a variety of sources, including virus collection sites such as VX Heaven [14], code repositories such as GitHub, classical e-zines published by historically prominent malware writing groups such as 29A, malware exchange forums, and various P2P networks. We expanded our list of sources by using a snowballing methodology, exploring previously unknown sources that were referenced in sites under examination. A sample in our dataset corresponds to a specific version of a malware project, where a malware project is most often referred to as a malware family. A sample may comprise one or more source code files, typically bundled as an archive (e.g., a ZIP file). Those files may be arranged in an arbitrarily complex directory structure and may be written in one or multiple programming languages (see Section III). Most often only one version of a family has been leaked, but occasionally we collect multiple versions, e.g., Cairuh.A and Cairuh.B. For the vast majority of samples we do not know the author(s). Our initial collection contained 516 samples. Each sample was first quickly verified through manual inspection and then compiled, executed and, whenever possible, functionally tested. At this point, 11.6% of the obtained samples were discarded, either because testing them was infeasible (e.g., due to nontrivial compilation errors or unavailability of a proper testing environment), or simply because they turned out to be fake. The 456 successfully tested samples that comprise our final dataset have been tagged with a year and a loose category. The year corresponds to their development when stated by the source, and otherwise to the year they were first spotted in the wild. 
They are also tagged with a coarse-grained malware type: Virus (V), Worm (W), Macro virus (M), Trojan (T), Botnet (B), RAT (R), Exploit kit (E), or Rootkit (K). We are aware that this classification is rather imprecise. For instance, nearly all Botnets and RATs can easily be considered Trojans and, in some cases, show Worm features too. We chose not to use a more fine-grained malware type because it is not essential to our study and, furthermore, such classifications are problematic for many modern malware examples that feature multiple capabilities. Figure 1 shows the distribution by year of the final dataset of 456 samples. Approximately 61% of the samples (281) correspond to the period 1995-2005. The second biggest set of samples (139) corresponds to the period 2006-2016. Finally, the remaining 36 samples correspond to the period 1975-1994. The largest category is Virus (318 samples), followed by Worm (58), Botnet (26), Trojan (26), RAT (12), Exploit kit (11), Macro virus (4), and Rootkit (1). III Malware Evolution Analysis This section describes our analysis of the evolution of malware source code using software metrics. It first quantifies the evolution in code size (Section III-A), then estimates development cost (Section III-B), next measures code quality (Section III-C), and finally compares malware to benign code (Section III-D). In each section, we briefly introduce the software metrics used, and refer the reader to our original paper for more details [13]. III-A Code Size We use 3 different metrics to measure code size: number of files, number of source code lines, and function point estimates. We also measure the use of different programming languages in malware development. Number of files. Figure 1(a) shows the distribution over time of the number of files comprising the source code of each sample in the dataset. 
Except for a few exceptions, until the mid 1990s there is a prevalence of malicious code consisting of just one file. Nearly all such samples are viruses written in assembly that, as discussed later, rarely span more than 1,000 lines of code. This reflects a relatively common practice of the 1980s and 1990s of writing short assembly programs. From the late 1990s to date there is an exponential growth in the number of files per malware sample. The code of viruses and worms developed in the early 2000s is generally distributed across a small number ($<$10) of files, while some Botnets and RATs from 2005 on comprise substantially more. For instance, Back Orifice 2000, GhostRAT, and Zeus, all from 2007, contain 206, 201, and 249 source code files, respectively. After 2010, no sample comprises a single file. Examples of this time period include KINS (2011), SpyNet (2014), and the RIG exploit kit (2014) with 267, 324, and 838 files, respectively. This increase reveals a more modular design, which also correlates with the use of higher-level programming languages discussed later, and with the inclusion of more complex malicious functionalities (e.g., network communications and support for small databases). Simple least squares linear regression over the data points shown in Figure 1(a) yields a regression coefficient (slope) of 1.14. (Note that the y-axis is in logarithmic scale and, therefore, such linear regression actually corresponds to an exponential fit.) This means that the number of files has grown at an approximate yearly ratio of 14%, i.e., it has doubled every 5 years. Number of lines. Traditionally, the number of lines in the source code of a program, excluding comment and blank lines (SLOCs), has been employed as the most common metric for measuring its size. In our analysis we use cloc [15], an open-source tool that counts SLOCs, blank lines, and comment lines, and reports them broken down by programming language. 
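The growth ratios and doubling times quoted in this section are related by a one-line computation; a minimal sketch, using the regression coefficients reported in the text:

```python
import math

def doubling_time(yearly_ratio):
    """Years needed for a quantity to double at a fixed yearly growth ratio."""
    return math.log(2.0) / math.log(yearly_ratio)

# Regression coefficients reported in the text (y-axis in log scale,
# so the linear fit corresponds to an exponential trend).
for metric, ratio in [("files", 1.14), ("SLOC", 1.11), ("function points", 1.13)]:
    print(f"{metric}: +{(ratio - 1) * 100:.0f}%/year, "
          f"doubles every {doubling_time(ratio):.1f} years")
```

For the three coefficients this gives doubling times of about 5.3, 6.6, and 5.7 years, matching the rounded figures (5, 6.5, and 5.5 years) quoted in the text.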
Figure 1(b) shows the SLOCs for each sample, obtained by simply aggregating the SLOCs of all source code files of the sample, irrespective of the programming language in which they were written. Again, the growth over the last 40 years is clearly exponential. Up to the mid 1990s viruses and early worms rarely exceeded 1,000 SLOCs. Between 1997 and 2005 most samples contain several thousand SLOCs, with a few exceptions above that figure, e.g., Simile (10,917 SLOCs) or Troodon (14,729 SLOCs). The increase in SLOCs during this period correlates positively with the number of source code files and the number of different programming languages used. Finally, a significant number of samples from 2007 on exhibit SLOCs in the range of tens of thousands. For instance, GhostRAT (33,170), Zeus (61,752), KINS (89,460), Pony2 (89,758), or SpyNet (179,682). Most such samples correspond to moderately complex malware comprising more than one executable. Typical examples include Botnets or RATs featuring a web-based C&C server, support libraries, and various types of bots/trojans. There are exceptions, too. For instance, Point-of-Sale (POS) trojans such as Dexter (2012) and Alina (2013) show relatively low SLOCs (2,701 and 3,143, respectively). In this case the linear regression coefficient over the data points is 1.11, i.e., the SLOCs per malware sample have increased approximately 11% per year; or, equivalently, the SLOC count doubles every 6.5 years, resulting in an increase of nearly an order of magnitude each decade. Function point estimates. Although SLOCs is the most popular metric for measuring project size, it has a number of shortcomings [16]. Most notably, when comparing the sizes of projects developed using different programming languages, SLOCs may lead to misleading conclusions, since this metric does not take into account the programming language expressiveness. 
To address this issue, we leverage the function-point count [17, 18] (FPC) metric, which aims to capture the overall functionality of the software. The function-point count is measured using four program features: external inputs and outputs, user interactions, external interfaces, and files used. The expected size in SLOCs of a software project can be estimated (before it is coded) from function-point counts through a process known as backfiring [19]. This process uses empirical programming language tables (PLTs) that provide the average SLOCs per function point for different programming languages. In our analysis, we use a reversed backfiring process that uses PLT v8.2 [20] to obtain function-point counts from SLOCs. We use those function-point counts as a normalized size measure for malware written in different languages. Figure 3 shows, as expected, a clear correlation between FPC and SLOCs, and the conclusions in terms of sustained growth are similar. Starting in 1990, there is roughly an increase of one order of magnitude per decade. Thus, in the 1990s most early viruses and worms contain just a few ($<10$) function points. From 2000 to 2010 FPCs concentrate between 10 and 100, with Trojans, Botnets, and RATs accounting for the higher counts. Since 2007, many samples exhibit FPCs of 1,000 and higher; examples include Pony2 (2013), with 1,240, SpyNet (2014), with 2,028, and the RIG exploit kit (2014), with 4,762. Linear regression over the data points yields a coefficient of 1.13, i.e., FPCs per malware sample have grown by approximately 13% per year; or, equivalently, FPCs double every 5.5 years. Programming languages. Figure 3(a) shows the distribution over time of the number of different languages used to develop each malware sample. 
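The reversed backfiring step can be sketched as follows; note that the SLOC-per-function-point gearing factors below are illustrative placeholders, not the actual PLT v8.2 values used in the paper:

```python
# Illustrative SLOC-per-function-point gearing factors (placeholders;
# the paper uses programming language table PLT v8.2, whose values differ).
SLOC_PER_FP = {"assembly": 320, "c": 128, "cpp": 55, "php": 67}

def function_points(sloc_by_language):
    """Reversed backfiring: normalize per-language SLOC counts into a
    single, language-independent function-point count."""
    return sum(sloc / SLOC_PER_FP[lang]
               for lang, sloc in sloc_by_language.items())

# A hypothetical sample: a trojan with a C core and a PHP control panel.
fp = function_points({"c": 12800, "php": 6700})
print(round(fp, 1))  # 12800/128 + 6700/67 = 200.0
```

The division by a per-language factor is what makes FPC comparable across, say, an assembly virus and a PHP-heavy exploit kit, which raw SLOC counts are not.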
This includes not only compiled and interpreted languages such as assembly, C/C++, Java, Pascal, PHP, Python, or Javascript, but also others used to construct resources that are part of the final software project (e.g., HTML, XML, CSS) and scripts used to build it (e.g., BAT or Make files). Figure 3(b) shows the usage of different programming languages to code malware over time in our dataset. The pattern reveals the prevalent use of assembly until the late 2000s. From 2000 on, C/C++ become increasingly popular, as well as other “visual” development environments such as Visual Basic and Delphi (Pascal). Botnets and RATs from 2005 on also make extensive use of web interfaces and include numerous HTML/CSS elements, pieces of Javascript, and also server-side functionality developed in PHP or Python. Since 2012 the distribution of languages is approximately uniform, revealing the heterogeneity of technologies used to develop modern malware. III-B Development Cost An important problem in software engineering is to make an accurate estimate of the cost required to develop a software system [21]. A prominent approach to this problem is algorithmic cost modeling, which provides cost figures using as input various project properties such as code size and organizational practices. Probably the best known of these methods is the Constructive Cost Model (COCOMO) [22]. COCOMO is an empirical model derived from analyzing data collected from a large number of software projects. COCOMO provides the following equations for estimating three metrics related to the cost of a project: effort (in man-months), development time (in months), and number of people required. $$E=a_{b}(\mathrm{KLOC})^{b_{b}}$$ (1) $$D=c_{b}E^{d_{b}}$$ (2) $$P=\frac{E}{D}$$ (3) In the equations above, KLOC represents the estimated SLOCs in thousands, and $a_{b}$, $b_{b}$, $c_{b}$, $d_{b}$ are empirically obtained regression coefficients provided by the model. 
The value of these coefficients depends on the nature of the project. COCOMO considers three different types of projects: (i) Organic projects (small programming team, good experience, and flexible software requirements); (ii) Semi-detached projects (medium-sized teams, mixed experience, and a combination of rigid and flexible requirements); and (iii) Embedded projects (organic or semi-detached projects developed under tight constraints). For our analysis, we decided to consider all samples as organic for two reasons. First, it is reasonable to assume that, with the exception of a few cases, malware development has so far been led by small teams of experienced programmers. Second, we favor a conservative estimate of development cost, which is achieved using the lowest COCOMO coefficients (i.e., those of organic projects). Thus, our estimates can be seen as an estimated lower bound of development cost. Figure 5 shows the COCOMO estimates for the effort, time, and team size required to develop the malware samples in our dataset. Figure 4(a) shows the COCOMO estimation of effort in man-months. The evolution over time is clearly exponential, with values roughly growing one order of magnitude each decade. While in the 1990s most samples required approximately one man-month, this value rapidly escalates to 10–20 man-months in the mid 2000s, and to hundreds for a few samples in the last years. Linear regression confirms this, yielding a regression coefficient of 1.11; i.e., the effort growth ratio per year is approximately 11%; or, equivalently, effort doubles every 6.5 years. The estimated time to develop the malware samples (Figure 4(b)) experiences a linear increase up to 2010, rising from 2-3 months in the 1990s to 7-10 months in the late 2000s. The linear regression coefficient in this case is 0.255, which translates into an additional month every 4 years. 
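The organic-mode estimates can be reproduced from Eqs. (1)-(3) with the standard basic-COCOMO organic coefficients ($a_{b}=2.4$, $b_{b}=1.05$, $c_{b}=2.5$, $d_{b}=0.38$); a minimal sketch:

```python
def cocomo_organic(kloc):
    """Basic COCOMO estimates for an organic-mode project (Eqs. (1)-(3)).

    a_b=2.4, b_b=1.05, c_b=2.5, d_b=0.38 are the standard basic-COCOMO
    organic coefficients."""
    effort = 2.4 * kloc ** 1.05   # E: effort in man-months
    time = 2.5 * effort ** 0.38   # D: development time in months
    people = effort / time        # P: average team size
    return effort, time, people

# Example: a sample of 61.752 KSLOC (the SLOC count reported for Zeus).
e, t, p = cocomo_organic(61.752)
print(f"effort={e:.1f} man-months, time={t:.1f} months, team={p:.1f} people")
```

For this input the model yields a development time of about 18.1 months, consistent with the figure reported for Zeus in the text.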
Note that a few samples from the last 10 years report a considerably higher number of months, such as Zeus (2007) or SpyNet (2014), with 18.1 and 27.7 months, respectively. The number of people required to develop each sample (Figure 4(c)) grows similarly. Most early viruses and worms require less than one person (full time). From 2000 on, the figure increases to 3-4 persons for some samples. Since 2010, a few samples report substantially higher estimates. For these data, the linear regression coefficient is 0.143, which roughly translates into an additional team member every 7 years. Finally, the table in Figure 4(d) provides some numerical examples for a selected subset of samples. III-C Code Quality We measure 2 aspects of code quality: source code complexity and software maintainability. Complexity. To measure software complexity we use McCabe’s cyclomatic complexity [23], one of the earliest, and still most widely used, software complexity metrics. Despite having been introduced more than 40 years ago, it is still regarded as a useful metric to predict how defect-prone a software piece is [24], hence its use in many software measurement studies. For instance, Warren et al. [25] characterized the evolution of modern websites by measuring different parameters, including the complexity of their source code. More recently, Hecht et al. included McCabe’s complexity in their analysis of the complexity evolution of Android applications [26]. The cyclomatic complexity (CC) of a piece of source code is computed from its control flow graph (CFG) and measures the number of linearly independent paths within the CFG. Mathematically, the cyclomatic complexity is given by: $$CC=E-N+2P$$ (4) where $E$ is the number of edges in the CFG, $N$ the number of nodes, and $P$ the number of connected components. There are many available tools for measuring this metric [27, 28, 29], but most of them only support a small subset of programming languages. 
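Equation (4) can be evaluated directly on a control flow graph; a minimal sketch:

```python
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """McCabe's CC = E - N + 2P for a CFG given as an edge list."""
    return len(edges) - num_nodes + 2 * num_components

# CFG of a function with a single if/else: entry -> (then | else) -> exit.
edges = [("entry", "then"), ("entry", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, num_nodes=4))  # 4 - 4 + 2 = 2
```

A straight-line function has CC 1, and each additional branching construct (if, loop, case arm) adds one independent path, which is why per-function values of 3 to 8, as measured in our dataset, indicate moderately branched but still simple routines.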
To compute the cyclomatic complexity we use the Universal Code Count (UCC) [30]. UCC supports C/C++, C#, Java, SQL, Ada, Perl, ASP.NET, JSP, CSS, HTML, XML, JavaScript, VB, PHP, VBScript, Bash, C Shell Script, Fortran, Pascal, Ruby, and Python. Since our dataset contains source code written in different languages, UCC best suits our analysis. Still, it limited our experiments since it is not compatible with assembly source code, which appears in many projects in our dataset (see Figure 3(b)). Filtering out samples that contain at least one source file in assembly left 144 samples for this analysis, i.e., 32% of the dataset. Figure 6(a) shows the distribution of the average cyclomatic complexity per function for each analyzed sample. Most of the samples have functions with complexities between 3 and 8, with values in the interval $[5,6]$ being the most common. Overall, this can be seen as supporting evidence of a generally modular design with a good breakdown into fairly simple functions and class methods. There are, however, some counterexamples. We observed a number of functions with complexity higher than 10, which exceeds McCabe’s recommended complexity threshold. Maintainability. A concept often linked to software quality is source code maintainability. Maintainability is connected to complexity, since high complexity translates into poor maintainability [31]. Maintaining a software product generally involves fixing bugs and adding new features. The documentation found in the source code as code comments can have a great impact on facilitating this process. Thus, the comments-to-code ratio (or simply “comments ratio”) has traditionally been the metric used to measure documentation quality [32, 33]. Figure 6 shows the comments-to-code ratio for each sample, computed as the number of comment lines divided by the SLOCs. There is no clear pattern in the data, which exhibits an average of 17.2%, a standard deviation of 21.5%, and a median value of 10.7%.
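As an illustration of how such a ratio can be obtained, the following deliberately naive sketch counts comment lines in C-like source. A real counter such as UCC must also handle trailing comments, comment markers inside string literals, and similar corner cases; this version only recognizes whole-line comments.

```python
def comments_to_code_ratio(source):
    """Naive comments-to-code ratio: comment lines / SLOC.
    A line counts as a comment if it starts with // or lies inside a
    /* ... */ block; blank lines are ignored."""
    comment = code = 0
    in_block = False
    for line in source.splitlines():
        s = line.strip()
        if not s:
            continue
        if in_block:
            comment += 1
            if "*/" in s:
                in_block = False
        elif s.startswith("//"):
            comment += 1
        elif s.startswith("/*"):
            comment += 1
            in_block = "*/" not in s
        else:
            code += 1
    return comment / code if code else 0.0
```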
There are a few notable outliers, though. For example, W2KInstaller (2000) and OmegaRAT (2014) show ratios of 99.6% and 139.1%, respectively. Conversely, some samples have an unusually low comments ratio. We do not know whether they were originally developed this way or whether the comments were removed before the code was leaked/released. A more direct metric for measuring the maintainability of a software project is the maintainability index ($MI$) [32]. This metric is a quantitative estimator of how easy it is to understand, support, and modify the source code of a project. A popular definition of $MI$ is: $$MI=100\frac{171-5.2\ln{(\overline{V})}-0.23\overline{M}-16.2\ln{(\overline{SLOC})}}{171}$$ (5) where $\overline{V}$ is Halstead’s average volume per module (another classic complexity metric; see [34] for details), $\overline{M}$ is the average cyclomatic complexity per module, and $\overline{SLOC}$ is the average number of source code lines per module. $MI$ has been included in Visual Studio since 2007 [35]. Visual Studio flags modules with $MI<20$ as difficult to maintain. We use Equation (5) to compute an $MI$ upper bound for each sample in our dataset. Note that we cannot estimate $MI$ exactly since we do not have the average Halstead’s volume for each sample. Since this is a negative term in Equation (5), the actual $MI$ value would be lower than our estimate. Nevertheless, this term contributes the least, so we expect our estimate to provide a fair comparison among samples. Figure 6(b) shows the distribution of $MI$ values grouped in quartiles. Most samples have an $MI$ in the third quartile, and only 15 samples fall below the recommended threshold ($MI<20$), mainly because of a higher-than-expected cyclomatic complexity. III-D Comparison with Regular Software In order to better appreciate the significance of the figures presented so far, we next discuss how they compare to those of a selected number of open source projects.
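The $MI$ upper bound derived from Equation (5), with the unknown Halstead-volume term dropped as described above, can be sketched as follows (illustrative code, not the exact tooling we used):

```python
import math

def mi_upper_bound(avg_cc, avg_sloc):
    """Upper bound on MI from Equation (5), dropping the (unknown)
    Halstead-volume term 5.2*ln(V); the true MI is therefore lower."""
    return 100 * (171 - 0.23 * avg_cc - 16.2 * math.log(avg_sloc)) / 171

# A sample with an average complexity of 5 and 50 SLOC per module
# scores well above Visual Studio's "hard to maintain" flag (MI < 20).
```

Because the dropped term enters Equation (5) with a negative sign, this bound can only overestimate maintainability, so a sample flagged as hard to maintain here is certainly hard to maintain.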
We selected 9 software projects: 3 security products (the IPTables firewall, the Snort IDS, and the ClamAV antivirus); a compiler (gcc); a web server (Apache); a version control tool (Git); a numerical computation suite (Octave); a graphic engine (Cocos2d-x); and a Unix shell (Bash). The source code was downloaded from the web page of each project. For each project we then computed the metrics discussed above for malware. As in the case of malware, we use the COCOMO coefficients for organic projects. The results are shown in Table I in increasing order of SLOC count. The first natural comparison refers to the size of the source code. Various malware samples from 2007 on (e.g., Zeus, KINS, Pony2, or SpyNet) have SLOC counts larger than those of Snort and Bash. This automatically translates, according to the COCOMO model, into similar or greater development costs. The comparison of function point counts is similar, with cases such as Rovnix and KINS having an FPC greater than 1000, or SpyNet, with an FPC comparable to that of Bash. In general, only complex malware projects are comparable in size and effort to these two software projects, and they are still far away from the remaining ones. In terms of comments-to-code ratio, the figures are very similar and there is no noticeable difference. This seems to be the case for the cyclomatic complexity, too. To further investigate this point, we computed the cyclomatic complexities at the function level, i.e., for all functions of all samples in both datasets. The histograms of the obtained values are shown in Figure 8. Both distributions are very similar, with a clear positive skewness. Chi-squared and two-sample Kolmogorov–Smirnov tests corroborate their similarity at a significance level of $\alpha=0.05$. Regarding the maintainability index, no malware sample in our dataset shows an $MI$ value higher than the highest for regular software (Snort, with $MI=81.26$).
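For reference, the two-sample Kolmogorov–Smirnov comparison used above can be sketched in pure Python; the asymptotic critical coefficient 1.36 corresponds to $\alpha=0.05$, and the function names are ours.

```python
import bisect

def ks_2sample(a, b):
    """Two-sample KS statistic D = sup_x |F_a(x) - F_b(x)|,
    with F the empirical distribution functions."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in set(a) | set(b):
        fa = bisect.bisect_right(a, x) / len(a)
        fb = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(fa - fb))
    return d

def similar_at_05(a, b):
    """True if equality of the distributions cannot be rejected at
    alpha = 0.05, using the asymptotic critical value c(0.05) ~= 1.36."""
    n, m = len(a), len(b)
    return ks_2sample(a, b) <= 1.36 * ((n + m) / (n * m)) ** 0.5
```

Applied to the two per-function complexity samples, failing to reject at this level is what we report as the distributions being statistically indistinguishable.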
However, Figure 6(b) shows that most $MI$ values for malware source code fall within the second and third quartiles, which also holds for traditional software. Two notable exceptions are Cairuh and the Fragus exploit kit, which exhibit surprisingly low values (29.99 and 14.1, respectively). IV Source Code Reuse This section presents the analysis of malware source code reuse in our dataset. Section IV-A first introduces the two techniques we use for clone detection. We present the clone detection results in Section IV-B. Finally, Section IV-C analyzes some of the clones found. IV-A Detecting Code Clones One challenge in detecting code clones in our dataset is the diversity of programming languages used by the samples (Figure 4). Since samples written in C/C++ and Assembly constitute 92% of our dataset (115 projects contain at least one file fully written in C/C++ and 304 projects contain at least one file fully written in Assembly), we need clone detection techniques that can at least cover these two languages. To achieve this goal, we use the two clone detection techniques detailed next. Deckard. Our first clone detection technique uses Deckard [36], a tool for detecting source code clones that was specifically designed to scale to large code bases such as the Java JDK and the Linux kernel, which comprise thousands of files. Deckard computes an Abstract Syntax Tree (AST) for each input source file. For each AST it produces a set of vectors of fixed length, each representing the structure of the AST subtree rooted at a specific node. These vectors are then clustered, and each output cluster comprises a set of similar ASTs, each a clone of the others. One advantage of AST-based clone detection techniques is that they can detect code clones with the same structure despite some code changes, such as variable renaming or different values assigned to a variable.
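This renaming invariance can be illustrated with a toy analogue of Deckard's characteristic vectors, here built with Python's ast module over Python snippets. Deckard itself targets C/C++ and Java and uses a more refined per-subtree vector construction and clustering; this sketch only counts node types per parse tree.

```python
import ast
from collections import Counter

def node_vector(source):
    """Counter of AST node-type occurrences: a crude analogue of
    Deckard's characteristic vectors."""
    return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

def similarity(v1, v2):
    """1 minus the normalized L1 distance between two vectors."""
    diff = sum(abs(v1[k] - v2[k]) for k in set(v1) | set(v2))
    return 1 - diff / (sum(v1.values()) + sum(v2.values()))

# Structurally identical functions with renamed identifiers
# produce identical vectors, i.e., similarity 1.0.
f1 = "def f(a, b):\n    return a + b"
f2 = "def g(x, y):\n    return x + y"
```

The flip side, discussed next, is that vectors built purely from structure also declare unrelated code with the same shape to be a clone.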
Figure 9 shows an example of a code clone detected by Deckard despite changes in the names of the function, function parameters, and function variables. On the other hand, AST-based techniques can produce many false positives, since code snippets with the same AST structure may not necessarily be clones. To limit the false positives, Deckard allows specifying the minimum clone size (as a number of AST tokens). This enables removing short code sequences that appear in multiple samples simply because they are commonly used, but may not imply that the code was copied from one project into another. For example, sequences of C/C++ #include directives and loop statements, e.g., for (i=0; i<n; i++), are not real clones. Deckard allows the user to set two additional parameters: the stride, which controls the size of the sliding window used during the serialization of ASTs, and the clone similarity, used to determine if two code fragments are clones. In our experiments we tried different settings for these parameters. We obtained the best results using a minimum clone size of 100, a stride of 2, and a similarity of 1.0. By default, Deckard offers support for the following languages: C/C++, Java, PHP, and dot. It can also support other languages if a grammar is available. Since our dataset contains a diversity of Assembly instruction sets (PPC, x86) and syntax specifications (Intel, AT&T), we would need to generate a grammar for each instruction set and syntax. That would require a significant effort to support Assembly samples (in some cases with a low return given the small number of samples for some combinations). Thus, we only use Deckard for finding clones among samples developed in C/C++. We apply Deckard to all projects of the same language (C/C++ or Assembly) simultaneously, which leverages Deckard’s design for efficiency. Pairwise comparison. Our second clone detection technique compares two source code files using the Ratcliff-Obershelp algorithm for string similarity [37].
This technique measures how similar two sequences of characters are by computing the ratio between the number of matching characters and the total number of characters in the two sequences. The algorithm outputs matching blocks of characters containing the longest common subsequence (LCS) and characters neighboring the LCS that are similar in both strings. We consider every matching block satisfying a minimum length to be a code clone. We experimented with different minimum length values, achieving the best results with a minimum length of 10 SLOC for Assembly and 5 SLOC for C/C++. The higher threshold for Assembly is due to its lower abstraction level compared to C/C++. Since this technique operates on two input files, for each pair of samples we compare every pair of source code files, one file from each sample. To avoid missing clones because of simple changes to the copied code, we preprocess the files using these rules: remove blank spaces, tabs, and newline characters; convert to lower case; and translate the character set to UTF-8. The main advantages of this technique are its simplicity, its ability to handle any text-based language, and its very low false positive rate. The main disadvantages are potentially high false negatives and low efficiency. False negatives can happen because this technique only detects reuse of identical code fragments; it will not detect a clone if the code is modified (e.g., by variable renaming). The low efficiency is due to the quadratic number of comparisons needed. IV-B Clone Detection Results This section presents the clone detection results using Deckard and the pairwise comparison technique. We manually examine the clones detected by both techniques to determine whether they are true positives or false positives. During our initial manual analysis we found that a large number of clones were different instances of the same cases.
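The pairwise technique of Section IV-A can be sketched with Python's difflib.SequenceMatcher, which implements a Ratcliff-Obershelp-style matching algorithm. This is a simplified analogue of our tooling: it matches normalized lines rather than characters, and the function names are ours.

```python
from difflib import SequenceMatcher  # Ratcliff-Obershelp-style matching

def normalize(source):
    """Paper-style preprocessing: drop blanks and tabs, lower-case,
    and ignore empty lines."""
    lines = ("".join(line.split()).lower() for line in source.splitlines())
    return [s for s in lines if s]

def find_clones(src_a, src_b, min_lines=5):
    """Return matching blocks of at least min_lines normalized lines."""
    a, b = normalize(src_a), normalize(src_b)
    sm = SequenceMatcher(None, a, b, autojunk=False)
    return [a[m.a:m.a + m.size]
            for m in sm.get_matching_blocks() if m.size >= min_lines]
```

With min_lines set to 5 for C/C++ (10 for Assembly), every returned block corresponds to what we report as a clone; each file pair from the two samples is fed to find_clones, which is where the quadratic cost arises.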
To speed up the manual analysis of the more than 10K detected clones, we use a clustering approach based on regular expressions and fuzzy hashing [38] to automatically group nearly identical clones. The analyst then labels each cluster, which considerably speeds up the manual analysis since the number of clusters is nearly two orders of magnitude smaller than the number of clones. Table II summarizes the code clone detection results. For each language and detection technique, it shows the number of detected clones, the split of those clones into true and false positives, the total and per-pair runtime, and statistics on the SLOC size of the detected clones. The C/C++ results show that Deckard detects 7,655 clones compared to 959–1,040 using the pairwise comparison technique. However, of the 7,655 Deckard clones, 87% are false positives, compared to 6.4% (raw) and 5.7% (normalized) using pairwise comparison. The very high false positive rate of Deckard is due to its AST representation, which ignores type information and constant values. For example, an array definition like static unsigned long SP1[64] = { 0x01010400L, $\dots$ } is considered by Deckard a clone of static unsigned char PADDING[64] = {0x80, $\dots$ }, even though the two arrays are of different types and are initialized with different values. As another example, the function invocation CopyFile(wormpath, "gratis.mp3.exe", 0) is considered a clone of add_entry(TABLE_KILLER_FD, "\x0D\x44\x46\x22", 4). Clone lengths. The average clone size is 112.3 SLOC using Deckard, 52.7 using normalized pairwise comparison, and 25.9 using raw pairwise comparison. Thus, while the number of TPs is similar using Deckard and raw pairwise comparison, Deckard is able to find longer (i.e., higher quality) clones. Surprisingly, the number of TPs found by the raw pairwise comparison is higher than that found by the normalized pairwise comparison.
This happens because the raw pairwise comparison breaks longer clones into multiple smaller clones, which increases the number of detected clones but produces shorter (i.e., lower quality) clones. Thus, normalization helps to find larger clones. For example, in the C/C++ code, normalization allowed us to discover a large true clone consisting of 22K SLOC (detailed in Section IV-C). Figure 10 shows the distributions of length values for raw and normalized clones in both languages. In both distributions the number of clones becomes smaller as the size grows, which translates into positively skewed distributions. Noteworthy exceptions to this trend are clones in the range of 50–99 and 100–199 lines in Assembly code, and also clones larger than 100 lines in C/C++. These peaks are related to the nature of the detected clones, discussed in Section IV-C. Small clones (i.e., shorter than 20 lines) are fairly common in both C/C++ and Assembly. Manual inspection revealed different explanations for each language. In the case of Assembly, such small cloned fragments are usually related to control flow structures such as loops, which employ the same sequence of instructions regardless of the actual data values. In addition, we also observed reuse of the names used to label Assembly code segments. In the case of C/C++ projects, we found that small clones are generally associated with preprocessor directives and declarations such as typedef, #define, and #include. These short clones also include sequences of instructions to initialize data structures (e.g., arrays), and generic sequences often found at the end of functions that release allocated variables before returning a value. These clones are often exact copies of each other and are common in malware samples from the same family. On the other hand, clones larger than 20 lines represent less than 50% of the detected clones in both languages. In particular, 350 Assembly clones are larger than 20 lines.
The number of Assembly clones is also notably smaller than the total number of Assembly source code files in our dataset, which is close to 800. Comparing the average lengths of Assembly source code files and large clones provides similar results: large Assembly clones are 102.87 SLOC on average, while Assembly source files in the dataset are 498.18 SLOC. In the case of C/C++ clones, the figures are comparable. We identified 450 C/C++ clones out of 984 that are larger than 20 lines. The total number of C/C++ files in the dataset is 2,981, which almost doubles the total number of clones found. The average length of large C/C++ clones depends greatly on the normalization process: it is just 102.01 SLOC without normalization and 231.28 SLOC after normalization. We also analyzed the size of clones found in samples belonging to the same family or developed by the same author. To do so, we selected 4 pairs of projects developed in C/C++ and Assembly for which we had ground truth, i.e., they are known to share authorship or are variants of the same family. Specifically, we selected two banking trojans (Zeus & Kins) and two worms (Cairuh.A & Hunatch.a) for the C/C++ case, and two pairs of viruses written in Assembly: (methaphor.1.d & Simile) and (EfishNC & JunkMail). The average clone sizes for the C/C++ pairs are 57.40 lines (37 clones) for the banking trojans and 13.86 lines (60 clones) for the worms. In the case of the Assembly samples, the average clone lengths are 179.72 lines for methaphor.1.d & Simile (54 clones) and 47.06 lines for EfishNC & JunkMail (30 clones). Clone file distribution. Figure 11 shows the distribution of the number of files in which a clone appears. As can be seen, in approximately 80% of the cases clones are found in just two files. The number of clones appearing in 3 or more files decreases considerably for both languages.
In the case of C/C++, the fraction of clones appearing in 3, 4, 5, and more than 5 files is 0.11, 0.04, 0.008, and 0.005, respectively. The pattern for Assembly clones is similar, though clones are slightly more widespread, as they appear in more than 3 files more often than C/C++ clones. Runtime. Deckard is almost two orders of magnitude faster than the pairwise comparison technique, finding clones across all 115 C/C++ samples in 1.4 hours, compared to 8 days for the pairwise comparison. This efficiency difference is due to Deckard parsing each file only once and to its clustering approach. We observe that the pairwise comparison ran much faster on Assembly than on C/C++. This is due to the C/C++ projects containing more, and longer, files. To conclude, our clone detection results show a significant amount of code reuse despite the relatively small number of samples, e.g., 984 clones of 112 SLOC on average across 115 C/C++ projects. This constitutes evidence that malware authors copy functionality useful to their goals from leaked (or purchased) malware source code. We detail the types of clones found in Section IV-C. Of the two techniques evaluated, Deckard runs very fast and finds more and longer clones, but produces a very high false positive rate. On the other hand, the pairwise comparison technique produces a much lower (but non-negligible, 6%) false positive rate, but runs two orders of magnitude slower. IV-C Clone Analysis We next discuss the purpose of the reused code fragments we found. Through manual analysis, we classified clones into four main groups according to the functionality they provide. For additional details, Tables IV and V summarize the main features of a selection of representative clones of these categories for both C/C++ and Assembly. Operational data structures and functions.
One large group of code clones consists of libraries of data structures and the associated functions to manipulate system and networking artifacts, such as executable file formats (PE and ELF), communication protocols (TCP, HTTP), and services (SMTP, DNS). We also observe a number of clones consisting of headers for several API functions needed to interact with the Windows kernel, such as the 3,054-line clone shared by W32.Remhead and W32.Rovnix. Core malware artifacts. The second category of clones consists of code that implements core malicious capabilities, such as infection, spreading, or actions on the victim. For instance, the W32.Dopebot botnet contains shellcode to exploit the CVE-2003-0533 vulnerability, and the same shellcode is found in the W32.Sasser worm. Another good example of this practice is the network sniffer shared by W32.NullBot and W32.LoexBot. Data clones. Some of the clones are not code, but rather data structures that appear in multiple samples. An example is the array of frequent passwords present in both W32.Rbot and W32.LoexBot. Another example is the list of strings found in W32.Hunatchab and W32.Branko, containing the process names associated with different commercial AV (antivirus) software, which both bots try to disable. Furthermore, some samples also share strings containing IP addresses, for example the Sasser worm and the Dopebot botnet. Anti-analysis capabilities. One of the most noticeable examples of this category is the packer included in the W32.Cairuh worm, which is shared by the W32.Hexbot botnet. Its size is 22,709 lines, making it the biggest clone we found in our dataset. Another remarkable example is the metamorphic engine shared by the Simile and Metaphor.1d viruses, consisting of more than 10,900 lines of assembly code.
Other examples of reused anti-analysis modules can be found in W32.Antares and W32.Vampiro, which share the same polymorphic engine, and also in W95.Babyloni and W32.Ramlide, which share the same packing engine. Finally, we also found a number of reused instances of code to kill running AV processes, such as the clone found in Hunatchab.c and Branko.c. In order to estimate the number of clones in each category, we randomly sampled the set of found clones and selected 100 per language. The 200 clones were then manually labeled according to the four semantic categories described above. Table III shows the distribution of clones together with their average length. As can be seen, most of the cases belong to types A (operational data structures and functions) and B (core malware artifacts). In the case of Assembly, both categories account for 84% of all clones, while in the case of C/C++, core malware artifacts alone constitute 55% of the clones. In both cases, data clones and anti-analysis capabilities are considerably less frequent. With respect to their lengths, Type D Assembly clones are noticeably larger than clones in other categories. This is due to the presence of polymorphic and packing engines in this category, which are relatively complex pieces of code. In contrast, data clones (Type C) are generally shorter, which is reasonable given their nature. In general, Assembly clones are bigger than their C/C++ counterparts, which is in line with the general results described above. The data in Table III suggest that clone size highly depends on the nature of the shared features. This is especially evident for clones labeled as Type C. In addition, the results reveal that the inclusion of evasion and anti-analysis capabilities has a noticeable impact on the size of malicious codebases.
Last but not least, we observed that code reuse usually takes place over short time spans, i.e., the samples sharing a particular fragment of code were developed within 1–4 years of each other. This suggests either that the same author participated in the development of those samples, or that collaborating groups share previously developed artifacts that can be easily reused. IV-D Code Sharing with Benign Source Code We also explored whether code cloning between malicious and benign source code happens to the same extent as it does among malware samples. For this purpose, we analyzed the set of major open source projects used in Section III-D and extended this set with the Bitcoin cryptocurrency and Linux kernel source code master branches. We followed a similar approach as for the main cloning experiment. However, we relied exclusively on Deckard since it is faster, especially when dealing with large codebases. We ran Deckard ten times, once per project, each time combining the open source project with the whole malicious source code dataset. Then, we processed the output as outlined in Section IV-A. Despite the high FP ratios obtained, we found code cloning cases in 4 out of 10 source code projects. The Snort, IPTables, Bash, Apache, Cocos2d, and Bitcoin projects do not share any source code snippet with any of the samples included in our dataset. We did find up to $210$ relevant code clones (larger than 5 lines) in gcc, the Linux kernel, Git, and ClamAV. Surprisingly, all the cloned fragments found in gcc, ClamAV, and the Linux kernel are part of the Zlib compression library. In particular, the cloned fragments appear in a C header (defutil.h) and 3 C source files (infbak.h, inflate64.c, and inflate.c) in the open source project.
In the malicious source code dataset, the same fragments are contained in the files deflate.h, infback.c, inflate.c, and inftrees.c included in the server source code of the XtremeRAT botnet. Git shares a set of data structures with the samples w32.Rovnix and w32.Carberp. The content of these data structures is used as padding in the implementation of the SHA1 and MD5 hashing algorithms. The shared code is located in the file sha1.c in the Git source code tree and in the files md5.c and md5.cpp included in the code of Rovnix and Carberp, respectively. The average size of the cloned fragments is 102 lines. V Discussion We next discuss the suitability of our approach and the potential limitations of our results, and draw some general conclusions. Suitability of our approach. Software metrics have a long-standing tradition in software engineering and have been an important part of the discipline since its early days. Still, they have been subject to much debate, largely because of frequent misinterpretations (e.g., as performance indicators) and misuse (e.g., to drive management) [21]. In this work, our use of certain software metrics pursues a different goal, namely to quantify how different properties of malware as a software artifact have evolved over time. Thus, our focus here is not on the accuracy of the absolute values (e.g., effort estimates given by COCOMO), but rather on the relative comparison of values between malware samples, as well as with benign programs, and the trends that the analysis suggests. The use of comments as an efficient documentation method has been questioned by several experts. Among the stated reasons, it has been argued that comments often add redundant descriptions of code functionality instead of clarifying design decisions and the underlying algorithmic workflow. However, other authors argue that good-quality comments are still valuable and necessary, especially in large collaborative projects [41].
The validity of the comments-to-code ratio nowadays could also be questioned, given the trend toward developing source code with automatic documentation-generation frameworks. This trend may have reduced over time the reliability of the comments-to-code ratio as a maintainability metric. Nevertheless, during our analysis we did not find any samples using such approaches: the only documentation delivered with the (recent) samples are the comments written by the authors. Thus, comments seem to still play an important role in the development of malware. As for the case of detecting code reuse, the techniques we used represent standard approaches to the problem. By using two different approaches, we obtain complementary and more robust results. For example, we can use the pairwise comparison technique to analyze Assembly samples not supported by Deckard, while Deckard’s AST-based approach resists certain classes of evasion attacks, e.g., variable and function renaming, which affect the pairwise comparison technique. Limitations. Our analysis may suffer from several limitations. Perhaps the most salient is the small number of samples in our dataset. However, as discussed in Section II, obtaining the source code of malware is hard. Still, we analyze 456 samples, which to the best of our knowledge is the largest dataset of malware source code analyzed in the literature. While the exact coverage of our dataset cannot be known, we believe it is fairly representative in terms of the different types of malware. It should also be noted that in our study the notion of a sample refers to a malware family. Thus, we are not only covering 456 binary samples but a wider set of potential variants. The wide gap between the number of binary samples found in the wild and the number of malware families has been previously discussed in the community. A recent study [42] examined 23.9M samples and classified them into 17.7K families (i.e., three orders of magnitude smaller).
While this phenomenon is due to different reasons, the most prominent one is the use of polymorphism and other advanced obfuscation methods by malware authors. We note that 428 out of 17.7K is a respectable 2.4% coverage. In particular, we believe the coverage of our dataset is enough to quantify and analyze the trends in malware evolution (size, development cost, complexity), but we do not attempt to analyze the evolution of malware code reuse. Since we only have one (or a few) versions of each malware family and a limited number of families, our dataset may miss important instances of malware code reuse. Thus, we have focused on analyzing what type of code we observe being reused in our dataset. As we collect more samples, we should be able to obtain a more representative picture of the code sharing phenomenon in malware creation, going beyond the findings we have reported. Another limitation is selection bias. Collection is particularly difficult for the newest samples, and more sophisticated samples (e.g., those used in targeted attacks) have not become publicly available. We believe those samples would emphasize the increasing complexity trends that we observe. Finally, even though the pairwise comparison clone detection technique we employed is very simple and scales poorly, it performed remarkably well in terms of false positives compared with Deckard, a more sophisticated tool based on comparing syntactic structures. The large number of false positives obtained with Deckard can be partially explained by the way in which malware writers reuse code. As discussed in Section IV, cloned code fragments are often core artifacts such as shellcodes or obfuscation engines. Given the nature of these artifacts, malware authors are forced to reuse them in a copy-and-paste fashion rather than rewriting some of their content. This makes it very uncommon to find partial clones consisting of slightly modified code fragments.
For this reason, and despite the great variety of code-clone detection techniques available in the literature [43, 44], it is unclear whether employing more sophisticated approaches would lead to finding significantly more clones when dealing with plain malware source code. In addition, clone detection tools based on syntactic structures depend greatly on the set of selected features. In the case of Deckard, leaving out data types and literals clearly contributes to its inaccurate results, especially in our use case, which differs from the standard use cases for this kind of tool. Deckard could be improved in many ways to obtain more precise results. Two natural ideas would be combining syntactic and semantic features, and introducing a similarity metric after the clustering step. However, in this paper we just aimed at comparing the performance of a naive approach (diff-based clone detection) against an already proposed tool, and therefore we decided to use Deckard out of the box, leaving out any improvements. Main conclusions and open questions. In the last 40 years the complexity of malware, considered as a software product, has increased considerably. We observe increments of nearly one order of magnitude per decade in aspects such as the number of source code files, source code lines, and function point counts. This growth in size can be attributed to various interconnected reasons. On the one hand, malicious code has progressively adapted to the increasing complexity of the victim platforms it targets. Thus, as operating systems evolved to offer richer application programming interfaces (APIs), malware authors rapidly leveraged them to achieve their purposes. This translated into larger and more complex samples with a variety of computing and networking capabilities. On the other hand, malware authors have clearly benefited from newer and richer integrated development environments (IDEs), frameworks, and libraries.
This explains the increasing modularity seen in the most recent samples and, especially, the rise of complex, multi-language malware projects that would otherwise be unmanageable. One interesting question is whether this trend will hold over time. If so, we could soon see malware specimens with more than 1 million SLOC. To translate these numbers into real-world examples, in the near future we could witness malware samples three times the size of open source projects like Git or the Apache web server (see Table I). However, evolving into large pieces of software will surely bring a larger number of vulnerabilities and defects. This has already been observed (and exploited), e.g., in [45] and [46]. In addition, such evolution requires larger efforts and thus possibly larger development teams. While we observe this trend, we have not examined those development teams in detail. For this, we could apply authorship attribution techniques for source code [47, 48]. More generally, the results shown in this paper provide quantified evidence of how malware development has been progressively transforming into a fully fledged industry. VI Related Work While malware typically propagates as binary code, some malware families have distributed themselves as source code. Arce and Levy performed an analysis of the Slapper worm source code [49], which upon compromising a host would upload its source code, compile it using gcc, and run the compiled executable. In 2005, Holz [50] performed an analysis of the botnet landscape that describes how the source code availability of the Agobot and SDBot families led to numerous variants of those families being created. Barford and Yegneswaran [8] argue that we should develop a foundational understanding of the mechanisms used by malware and that this can be achieved by analyzing malware source code available on the Internet. 
They analyze the source code of 4 IRC botnets (Agobot, SDBot, SpyBot, and GTBot) along 7 dimensions: botnet control mechanisms, host control mechanisms, propagation, exploits, delivery mechanisms, obfuscation, and deception mechanisms. Other works have explored the source code of exploit kits collected from underground forums and markets. Exploit kits are software packages installed on Web servers (called exploit servers) that try to compromise their visitors by exploiting vulnerabilities in Web browsers and their plugins. Different from client malware, exploit kits are distributed as (possibly obfuscated) source code. Kotov and Massacci [9] analyzed the source code of 30 exploit kits collected from underground markets, finding that they make use of a limited number of vulnerabilities. They evaluated characteristics such as evasion, traffic statistics, and exploit management. Allodi et al. [51] followed up on this research by building a malware lab to experiment with the exploit kits. Eshete and Venkatakrishnan describe WebWinnow [52], a detector for URLs hosting an exploit kit, which uses features drawn from 40 exploit kits they installed in their lab. Eshete et al. follow up this research line with EKHunter [45], a tool that, given an exploit kit, finds vulnerabilities it may contain and tries to automatically synthesize exploits for them. EKHunter finds 180 vulnerabilities in 16 exploit kits (out of 30 surveyed), and synthesizes exploits for 6 of them. Exploitation of malicious software was previously demonstrated by Caballero et al. [46] directly on the binary code of malware samples installed in client machines. The problem of detecting duplicated or cloned code was first approached using simple text-matching solutions. The technique described in [53] consists of a pairwise comparison among source code files looking for coincidences. While this allows finding exact copies of source code, it does not scale well and may incur performance issues. 
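At its simplest, text-matching clone detection of the kind described in [53] reduces to a normalization pass followed by exact comparison of the normalized text. The sketch below is illustrative only; the actual regexes and normalization rules used in [53] and in this work may differ:

```python
# Illustrative sketch of text-matching clone detection with a
# preliminary normalization pass. Rules here are assumptions.
import re


def normalize(source: str) -> str:
    """Strip C-style comments and collapse whitespace so that purely
    cosmetic edits no longer defeat exact matching."""
    source = re.sub(r"/\*.*?\*/", " ", source, flags=re.DOTALL)  # block comments
    source = re.sub(r"//[^\n]*", " ", source)                    # line comments
    return " ".join(source.split())                              # collapse whitespace


def exact_copies(files: dict) -> list:
    """Group files whose normalized text is identical (exact clones)."""
    groups = {}
    for name, text in files.items():
        groups.setdefault(normalize(text), []).append(name)
    return [sorted(g) for g in groups.values() if len(g) > 1]
```

Grouping by normalized text makes the exact-copy case linear in total input size, but it still finds only whole-file duplicates, which is precisely the scalability/granularity limitation discussed above.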
In any case, note that text-matching approaches require a preliminary normalization step such as the one used in this work. A second group of techniques relies on data structures such as graphs or trees to represent the syntactic structure of the programs [54, 55], together with an appropriate similarity measure over them. Other works have proposed solutions based on a lexical analysis of source files. These techniques convert the source code sentences into lists of tokens, which are then compared to detect duplicated subsequences [11, 56]. In the case of code sharing in malware, most existing work has focused on binary objects [57, 58]. Even though the results reported are reasonable, one potential limitation of such works is that modern compilers perform various optimization and cleaning tasks (e.g., loop unrolling, symbol stripping, etc.) to generate binary objects that are optimal in terms of size and memory consumption. This can alter the structure of the original code and delete many valuable and meaningful artifacts [59]. In contrast, working directly with the original source code gives more precise insights into the functionality that is most frequently reused across samples. VII Conclusion In this paper, we have presented a study of the evolution of malware source code over the last four decades, as well as a study of source code reuse among malware families. We have gathered and analyzed a dataset of 456 samples, which to our knowledge is the largest of this kind studied in the literature. Our focus on software metrics is an attempt to quantify properties both of the code itself and of its development process. The results discussed throughout the paper provide numerical evidence of the increase in complexity of malicious code in recent years and the ongoing transformation of malware production into an engineering discipline. 
Acknowledgments This work was supported by the Spanish Government through MINECO grants SMOG-DEV (TIN2016-79095-C2-2-R) and DEDETIS (TIN2015-7013-R), and by the Regional Government of Madrid through grants CIBERDINE (S2013/ICE-3095) and N-GREENS (S2013/ICE-2731). References [1] “Symantec’s 2015 internet security threat report,” https://www.symantec.com/security-center/archived-publications, accessed: 2017-12-6. [2] Panda Security, “27% of all recorded malware appeared in 2015,” http://www.pandasecurity.com/mediacenter/press-releases/all-recorded-malware-appeared-in-2015, accessed: 2016-04-6. [3] J. Caballero, C. Grier, C. Kreibich, and V. Paxson, “Measuring pay-per-install: The commoditization of malware distribution,” in Proceedings of the 20th USENIX Conference on Security, ser. SEC’11. Berkeley, CA, USA: USENIX Association, 2011. [4] Grier et al., “Manufacturing compromise: The emergence of exploit-as-a-service,” in Proceedings of the ACM Conference on Computer and Communications Security. ACM, 2012. [5] G. Stringhini, O. Hohlfeld, C. Kruegel, and G. Vigna, “The harvester, the botmaster, and the spammer: On the relations between the different actors in the spam landscape,” in Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, ser. ASIA CCS ’14. New York, NY, USA: ACM, 2014. [6] K. Thomas, D. Huang, D. Wang, E. Bursztein, C. Grier, T. J. Holt, C. Kruegel, D. McCoy, S. Savage, and G. Vigna, “Framing dependencies introduced by underground commoditization,” in Workshop on the Economics of Information Security, 2015. [7] G. Suarez-Tangil, J. E. Tapiador, P. Peris-Lopez, and A. Ribagorda, “Evolution, detection and analysis of malware for smart devices,” IEEE Communications Surveys and Tutorials, vol. 16, no. 2, 2014. [8] P. Barford and V. Yegneswaran, Malware Detection. Springer, 2007, ch. An Inside Look at Botnets. [9] V. Kotov and F. 
Massacci, “Anatomy of Exploit Kits: Preliminary Analysis of Exploit Kits As Software Artefacts,” in International Conference on Engineering Secure Software and Systems, 2013. [10] C. K. Roy, J. R. Cordy, and R. Koschke, “Comparison and evaluation of code clone detection techniques and tools: A qualitative approach,” Science of Computer Programming, vol. 74, no. 7, 2009. [11] T. Kamiya, S. Kusumoto, and K. Inoue, “Ccfinder: a multilinguistic token-based code clone detection system for large scale source code,” IEEE Transactions on Software Engineering, vol. 28, no. 7, 2002. [12] A. Rahimian, R. Ziarati, S. Preda, and M. Debbabi, “On the reverse engineering of the citadel botnet,” in Foundations and Practice of Security.   Springer, 2014. [13] A. Calleja, J. Tapiador, and J. Caballero, “A Look into 30 Years of Malware Development from a Software Metrics Perspective,” in Proceedings of the 19th International Symposium on Research in Attacks, Intrusions and Defenses, Evry, France, September 2016. [14] “Vx heaven,” http://vxheaven.org/, accessed: 2017-06-20. [15] “cloc,” http://github.com/AlDanial/cloc, accessed: 2015-09-22. [16] V. Nguyen, S. Deeds-rubin, T. Tan, and B. Boehm, “A sloc counting standard,” in COCOMO II Forum 2007, 2007. [17] A. J. Albrecht, “Measuring Application Development Productivity,” in IBM Application Development Symp., I. B. M. Press, Ed., Oct. 1979. [18] A. J. Albrecht and J. E. Gaffney, “Software function, source lines of code, and development effort prediction: A software science validation,” IEEE Trans. Softw. Eng., vol. 9, no. 6, Nov. 1983. [19] C. Jones, “Backfiring: Converting lines-of-code to function points,” Computer, vol. 28, no. 11, Nov. 1995. [20] ——, “Programming languages table, version 8.2,” 1996. [21] I. Sommerville, Software Engineering: (Update) (8th Edition) (International Computer Science).   Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 2006. [22] B. W. Boehm, Software Engineering Economics.   
Prentice-Hall, 1981. [23] T. J. McCabe, “A complexity measure,” in Proceedings of the 2Nd International Conference on Software Engineering, ser. ICSE ’76.   Los Alamitos, CA, USA: IEEE Computer Society Press, 1976. [24] C. Ebert and J. Cain, “Cyclomatic complexity,” IEEE Software, vol. 33, no. 6, pp. 27–29, 2016. [25] P. Warren, C. Boldyreff, and M. Munro, “The evolution of websites,” in Program Comprehension, 1999. Proceedings. Seventh International Workshop on.   IEEE, 1999, pp. 178–185. [26] G. Hecht, O. Benomar, R. Rouvoy, N. Moha, and L. Duchien, “Tracking the software quality of android applications along their evolution (t),” in Automated Software Engineering (ASE), 2015 30th IEEE/ACM International Conference on.   IEEE, 2015, pp. 236–247. [27] “Jhawk,” http://www.virtualmachinery.com/jhawkprod.htm, accessed: 2016-04-4. [28] “Radon,” https://pypi.python.org/pypi/radon, accessed: 2016-04-4. [29] “Eclipse metrics plugin,” https://marketplace.eclipse.org/content/eclipse-metrics, accessed: 2016-04-4. [30] “UCC,” http://csse.usc.edu/ucc_wp/, accessed: 2016-04-4. [31] M. M. Lehman, “Laws of software evolution revisited,” in Proceedings of the 5th European Workshop on Software Process Technology, ser. EWSPT ’96.   London, UK, UK: Springer-Verlag, 1996. [32] P. Oman and J. Hagemeister, “Metrics for assessing a software system’s maintainability,” in Proc. Conf. on Software Maintenance, 1992. [33] M. J. B. Garcia and J. C. G. Alvarez, “Maintainability as a key factor in maintenance productivity: a case study,” in 1996 Proceedings of International Conference on Software Maintenance, Nov 1996, pp. 87–93. [34] M. H. Halstead, Elements of Software Science (Operating and Programming Systems Series).   Elsevier Science Inc., 1977. [35] “Code metrics values in microsoft’s visual studio,” https://msdn.microsoft.com/en-us/library/bb385914.aspx, accessed: 2018-03-19. [36] L. Jiang, G. Misherghi, Z. Su, and S. 
Glondu, “Deckard: Scalable and accurate tree-based detection of code clones,” in Proceedings of the 29th international conference on Software Engineering. IEEE Computer Society, 2007. [37] J. W. Ratcliff and D. E. Metzener, “Pattern-matching-the gestalt approach,” Dr Dobbs Journal, vol. 13, no. 7, 1988. [38] “ssdeep,” https://ssdeep-project.github.io/, accessed: 2017-11-01. [39] “Cve-2003-0533 vulnerability,” https://www.cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2003-0533, accessed: 2017-06-20. [40] “Virtual code obfuscation by roi g biv,” http://vxheaven.org/lib/vrg19.html, accessed: 2017-06-20. [41] E. Torres, “Why code comments still matter,” https://cacm.acm.org/blogs/blog-cacm/225574-why-code-comments-still-matter/fulltext, accessed: 2018-10-2. [42] C. Lever, P. Kotzias, D. Balzarotti, J. Caballero, and M. Antonakakis, “A lustrum of malware network communication: Evolution and insights,” in Security and Privacy (SP), 2017 IEEE Symposium on. IEEE, 2017, pp. 788–804. [43] C. K. Roy and J. R. Cordy, “A survey on software clone detection research,” 2007. [44] A. Sheneamer and J. Kalita, “A survey of software clone detection techniques,” International Journal of Computer Applications, 2016. [45] B. Eshete, A. Alhuzali, M. Monshizadeh, P. Porras, V. Venkatakrishnan, and V. Yegneswaran, “EKHunter: A Counter-Offensive Toolkit for Exploit Kit Infiltration,” in Network and Distributed System Security Symposium, February 2015. [46] J. Caballero, P. Poosankam, S. McCamant, D. Babic, and D. Song, “Input Generation Via Decomposition and Re-Stitching: Finding Bugs in Malware,” in ACM Conference on Computer and Communications Security, Chicago, IL, October 2010. [47] G. Frantzeskou, S. MacDonell, E. Stamatatos, and S. Gritzalis, “Examining the significance of high-level programming features in source code author classification,” J. Syst. Softw., vol. 81, no. 3, Mar. 2008. [48] A. Caliskan-Islam, R. Harang, A. Liu, A. Narayanan, C. Voss, F. Yamaguchi, and R. 
Greenstadt, “De-anonymizing Programmers via Code Stylometry,” in USENIX Security Symposium, 2015. [49] I. Arce and E. Levy, “An analysis of the slapper worm,” IEEE Security & Privacy, vol. 1, no. 1, 2003. [50] T. Holz, “A short visit to the bot zoo,” IEEE Security & Privacy, vol. 3, no. 3, 2005. [51] L. Allodi, V. Kotov, and F. Massacci, “MalwareLab: Experimentation with Cybercrime Attack Tools,” in USENIX Workshop on Cyber Security Experimentation and Test, Washington DC, August 2013. [52] B. Eshete and V. N. Venkatakrishnan, “WebWinnow: Leveraging Exploit Kit Workflows to Detect Malicious Urls,” in ACM Conference on Data and Application Security and Privacy, 2014. [53] B. S. Baker, “On finding duplication and near-duplication in large software systems,” in Reverse Engineering, 1995., Proceedings of 2nd Working Conference on.   IEEE, 1995. [54] I. D. Baxter, A. Yahin, L. Moura, M. Sant’Anna, and L. Bier, “Clone detection using abstract syntax trees,” in Software Maintenance, 1998. Proceedings., International Conference on.   IEEE, 1998. [55] J. Mayrand, C. Leblanc, and E. Merlo, “Experiment on the automatic detection of function clones in a software system using metrics.” in icsm, vol. 96, 1996. [56] Z. Li, S. Lu, S. Myagmar, and Y. Zhou, “Cp-miner: Finding copy-paste and related bugs in large-scale software code,” IEEE Transactions on software Engineering, vol. 32, no. 3, 2006. [57] H. Huang, A. M. Youssef, and M. Debbabi, “Binsequence: Fast, accurate and scalable binary code reuse detection,” in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security.   ACM, 2017. [58] J. Jang and D. Brumley, “Bitshred: Fast, scalable code reuse detection in binary code (cmu-cylab-10-006),” CyLab, 2009. [59] N. E. Rosenblum, B. P. Miller, and X. Zhu, “Extracting compiler provenance from program binaries,” in Proceedings of the 9th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering.   ACM, 2010.
Abstract States beyond those expected in the simple constituent quark model are now emerging. I focus on the scalar glueball and its mixing with states in the $q\bar{q}$ nonet, and also on correlations in Strong QCD that may form diquarks and seed $qq\bar{q}\bar{q}$ states. Some models of the pentaquark candidate $\Theta(1540)$ are critically discussed. Hadron Spectroscopy (theory): Diquarks, Tetraquarks, Pentaquarks and no quarks F.E. CLOSE111e-mail: [email protected] Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Oxford OX1 3NP, United Kingdom The meson landscape This year we have seen several hadrons announced that do not fit easily with the simple valence picture of $q\bar{q}$ or $qqq$ mesons and baryons. With hindsight one might wonder why it took so long. This simple picture exploits degrees of freedom that transform like the fields of $L_{QCD}$ but are not identical to them. Two quarks attract one another in $\bf{\bar{3}_{c}}$ with about the strength of $q\bar{q}$ coupled to a colour singlet and so should play a significant role in generating the colour degrees of freedom in Strong QCD. For light flavours there is even an old calculation[1] suggesting that the effective mass of the antisymmetric $[ud]$ pair, the “scalar diquark”, is comparable to that of a single $q\equiv u,d$. There is even some phenomenological support for this[2, 3]. If so, it is energetically as easy to make colour singlets from $[qq][\bar{q}\bar{q}]$ as from $q\bar{q}$. The low lying scalar mesons fit well with this idea[4, 5, 6]. This strong attraction of flavour antisymmetric scalar diquarks should even imply exotic combinations made from $[cs][\bar{u}\bar{d}]$, $[cd][\bar{u}\bar{s}]$ and $[cu][\bar{u}\bar{s}]$[7]. The idea that baryons may emerge naturally as excitations of a quasi-two centred system has been resurrected[3]. In turn this raises questions about other energetically favoured examples of such correlations. 
Two $[ud][ud]$ couple attractively to $\bf{3_{c}}$ (probably forced into $L=1$ by Bose symmetry[8]) and so need a further $\bf{\bar{3}_{c}}$ to saturate. One way would be to add a third $[ud]$ (and another $L=1$, but even so the diquark mass cannot be too low if we are not to end up with a state more stable than the deuteron!) or a $\bar{q}$. If the latter is $\bar{s}$ we have a manifestly exotic strange baryon with the quantum numbers of the $\Theta$, evidence for and against which has been extensively reviewed here[9]. But why stop at the diquark? A $[ud\bar{s}]$ combination also is strongly attractive and, with different flavours, does not suffer annihilation via gluons. This enables one to construct the $\Theta$ quantum numbers with a quasi-two centred system, $[ud\bar{s}][ud]$, with $L=1$ needed to keep would-be repulsive correlations apart[10]. Completing the simple quasi-two centred states are attractive combinations such as $[ud\bar{s}](\bar{s})$. The phenomenology of these includes flavour $\bf{10}$ and $\bf{\bar{10}}$ mesons, which might relate to exotic mesons with $J^{PC}=1^{-+}$ at 1.4-1.6 GeV[11], but the detailed similarities and differences with the $[qq][\bar{q}\bar{q}]$ states remain to be investigated. Most attention has focussed on the $\Theta$ and its implications for correlations as above. I shall not review this literature, due to limitations of space and because it is well known, but I shall raise some questions that remain to be answered. While the above remarks may turn out to be critical in understanding the degrees of freedom for light flavours, the heavy flavours are better understood. Their phenomenology gives hints as to how to begin unravelling the code of the light flavoured sector. So I shall begin with the $b\bar{b}$ and $c\bar{c}$ traditional realm where the non relativistic model works well, at least as far as the S and P wave combinations are concerned. 
In particular note the scalar mesons are canonical: they are in the right place, the E1 radiative transitions from $2^{3}S_{1}$ and their decays to $1^{3}S_{1}$ appear to be in accord with theory[12]. The ${}^{1}P_{1}$ charmonium state has been reported[13]. A novel entree to light hadrons is emerging with the decays of these $\chi$ states[14]. $\chi_{0}\to\pi^{+}\pi^{-}\pi^{+}\pi^{-}$ shows $f_{0}(980)f_{0}(980)$ pair production at a strength similar to that of $K\bar{K}$; this is a surprise if the $f_{0}(980)$ is purely a $qq\bar{q}\bar{q}$ state and suggests that in production by hard gluons the “simplest” $q\bar{q}$ or $gg$ configuration dominates the dynamics (I shall contrast this with its appearance in $\phi\to\gamma f_{0}(980)$ later). In the present data $\chi_{0}\to\pi+(3\pi)$ should be studied to see if the $\pi(1800)$ is prominent: this is a potential candidate for a gluonic excitation of the $\pi$ and is degenerate with the $D$. This is an example of how light hadron dynamics needs to be understood as it can affect $D$ decays: the Cabibbo suppressed decays of the $D$ can be affected by mixing with this $\pi(1800)$[15]. We now await data on $\chi_{1}\to\pi+X$; this is intriguing because in S-wave $X\equiv 1^{-+}$, predicted to be the lightest exotic gluonic hybrid channel. Production by the short distance gluons in $\chi$ decays is thus eagerly awaited. What do we expect to find in the spectroscopy of light flavours? The $Q\bar{Q}$ pattern of S P D states that was apparent for heavy quarkonium seems to survive for light flavours, though there is no fundamental a priori reason why we should have expected this. There is however a clear indication of where it does not work. In the P-wave states the $2^{+},1^{+}$ nonets, each containing two isoscalars representing the $s\bar{s}$ and $n\bar{n}$ flavour combinations, are clearly seen even though these states are now above threshold for decays into mesons. 
The rule seems to be that the spectroscopy seeded by a short range $q\bar{q}$ remains visible so long as there are no open S-wave hadron channels, which obscure the underlying short range structure. This is particularly obvious in the $0^{+}$ sector. Above 1 GeV we find three $I=0$ states (1370; 1500; 1710) - or even a fourth, $f_{0}(1790)$[9] - in place of two, and below 1 GeV there are certainly two further states, $f_{0}(980)$ and $a_{0}(980)$, and attractive channels hinting at a full nonet including a further $I=0$ $\sigma(600)$. Intriguingly, this proliferation is in accord with simple ideas from QCD. Above 1 GeV Lattice QCD predicts a scalar glueball, mass $\sim 1.6$ GeV[16, 17], which mixes[18] with the isoscalar $q\bar{q}$ in its vicinity. In the limit of large mixing, the flavour eigenstates tend towards 1+G, 8, 1-G[17]. Fits to the pseudoscalar meson decays from WA102 and LEAR give independent support to such relative phases[19]. The fact that mass mixing and also meson decays are consistent with this set of relative phases is interesting. The numerical values should not be taken seriously; the errors on them are probably considerable, but the relative phases and the separation into “large, medium, small” are probably reliable. These independent analyses give a consistent interpretation of the glueball-$q\bar{q}$ mixing in the scalar channel. The challenge is how to test this. BES and CLEO-c will soon provide over a billion $\psi$ decays, giving over a thousand events per channel in the radiative decays $0^{++}\to\gamma(\rho;\omega;\phi)$. The ideal flavour mixing of the vector mesons will thus “weigh” the flavour contents of any C=+ meson produced in $\psi\to\gamma R$. Some preliminary hints that there is such mixing come from the anomalous pattern of meson states $M_{2}$ in $\psi\to\omega/\phi+M_{2}$[9]. 
For an ideal flavour combination, such as $\phi=s\bar{s}$, the folklore is that $M_{2}$ will be produced via its $s\bar{s}$ content, as this leads to a flavour connected diagram. Similarly the $\omega$ selects out $n\bar{n}$ for the $M_{2}$. The test of this hypothesis has been the case $M_{2}\equiv 2^{++}$; this nonet consists of the ideal states $a_{2}(1320);f_{2}(1270)\equiv n\bar{n};f_{2}(1525)\equiv s\bar{s}$ and therefore is rather clean, and it confirms the dominance of the “hairpin” diagram. However, the case $M_{2}=0^{++}$ has no simple solution. Indeed, some channels which ought to have been dominant appear even to be absent. For example: $f_{0}(1370)$ has strong affinity for $\pi\pi$ and hence $n\bar{n}$ in its wavefunction, yet is not seen in $\psi\to\omega\pi\pi$. This being anomalous does not require one to suppose that $f_{0}(1370)$ is $n\bar{n}$ alone; multiquark components containing non-strange flavours ought to be enough to highlight the paradox. One explanation could be that some other contribution leads to destructive interference. The $G$ component in the $f_{0}(1370)$ wavefunction is a natural candidate for this, and it has even been predicted[20] that the strength of $b(\psi\to\phi G)\sim 10^{-3}$; if this also applies to $b(\psi\to\omega G)\sim 10^{-3}$ then the destructive interference becomes plausible. The test will be to see if the relative phases of $G$ and flavoured components are in line with the observed pattern of suppressed and observed decays $\psi\to\omega/\phi f_{0}$. Below 1 GeV the dynamics are controlled by the strong attractive QCD forces between colour-spin symmetric $qq$ (or $\bar{q}\bar{q}$) pairs, for example $S=0$ in colour $\bar{\bf 3}_{c}$. In flavour this equates to attraction in $\bar{\bf 3}_{F}$. This leads to a nonet of low lying scalars[5]. 
Recent data from KLOE on $\phi\to\gamma f_{0}/a_{0}(980)$ support this picture[6, 21], though the role of the $K\bar{K}$ threshold in disturbing the short distance diquark clustering dynamics relative to the looser molecular picture[22] remains to be determined[6, 23]. The dominant production is via the $\phi\to K\bar{K}\to K\bar{K}\gamma\to 0^{++}\gamma$ loop: this produces the $f_{0}/a_{0}$ via their long range wavefunction and does not teach much about the short range QCD structure. These tentative ideas on diquark or molecular clustering may now be receiving support from the heavy flavour sectors. The X(3872): anomalous charmonium $B$ decays have turned out to be a novel and rich source of charmonium. Among these is a narrow state $X(3872)\to\psi\pi\pi$. Immediately above $D\bar{D}$ threshold, states can remain narrow if they are forbidden to decay into $D\bar{D}$. Examples are $2^{-\pm},3^{--}$ and radially excited $1^{++}$ within the $c\bar{c}$; also, hybrid charmonium or a $DD^{*}$ molecular state. However, each of these has problems[24]. Compared to predictions in charmonium potential models: $2^{--}$ and $3^{--}$ have the wrong mass and the experimental $\Gamma(\gamma 1^{+})$ is too small; for $2^{-+}$ the $b(\psi\pi\pi)$ is expected to be small, in contrast to its visibility there; the radial $1^{++}$ is expected to have a larger $\Gamma(\gamma\psi)$ than seen; the $1^{+-}$ has a different $\cos\theta$ distribution. Either standard $c\bar{c}$ theory is wrong or the $X(3872)$ is not a simple charmonium state. The latter is suspected to be the case, in part driven by the remarkable coincidence between its mass and that of the $D^{0}D^{0*}$ threshold, which agree to better than one part in 10,000. Refs.[25] suggest that it is a molecular or tetraquark bound state of these mesons in S-wave; thus $1^{++}$. A particular model realisation is due to Swanson[26]. 
Observation[27] of the $\psi\omega$ decay supports $C=+$, and the hint that the decay to $\psi\pi\pi$ has the $\pi\pi\equiv\rho$ and not $\sigma$ supports the isospin violation that the $D^{0}D^{0*}$ constitution would imply. Further tests include verifying that there is no $\psi\pi^{0}\pi^{0}$: forbidden for the $\rho$ but allowed for the $\sigma$. Also the hadronic decays into e.g. $K\bar{K}\pi$ will be dominated by the neutral $K^{0}\bar{K}^{0}\pi$ relative to $K^{+}K^{-}\pi$. The $DD^{*},\psi\omega;\psi\rho$ are all effectively mass degenerate. So a mixing via quark exchange $D^{0}D^{0*}\to\psi u\bar{u}$ is driven by the energy coincidence, which is probably more generally true than the details of any particular model. The $u\bar{u}$ maps equally onto $\rho,\omega$ and so one expects $\psi\omega\sim\psi\rho$, any deviations from equality being a pointer to dynamical effects. Decays are driven by the meson components of the wavefunction, while production will proceed by the easiest route; thus seeding by the short range $c\bar{c}$ component will cause the X to be produced like conventional charmonium states. There may be analogues of this dynamics in $\psi\to(K\Lambda)\bar{p}$, where the $K\Lambda$ appear to have $S_{11}$ baryon quantum numbers. This is another example of S-wave hadron channels overriding the P-wave quark structure (in this case $qqq$): quark exchange links $N\eta\to K\Lambda$. The $\psi\to\gamma p\bar{p}$ also may be showing S-wave enhancements; whether these are evidence of a bound state or of above-threshold S-wave attractions remains to be determined, though comparison with the LEAR data on $p\bar{p}$ annihilation just above threshold supports the bound state interpretation[9]. Strange strange-charmed states: 2317, 2460, 2635 MeV The $0^{+},1^{+}$ at 2317 and 2460 MeV are lighter than the quark model had predicted, even though it had been successful hitherto in this sector. 
One interpretation is that this is evidence for a chiral symmetry where the mass gap of $0^{-}:1^{-}$ equates with $0^{+}:1^{+}$[28]. Why does this show up in $c\bar{s}$, where no $u,d$ chiral-friendly flavours are involved? And the axial ought to be the $j=1/2$ member, whereas the physical states are mixtures of $j=1/2,3/2$. Thus unless one can argue that the 2460 is the $j=1/2$ member, the identity in the mass gaps appears a tantalising coincidence. The chiral relation may be applicable in the $M_{Q}\to\infty$ limit when $u,d$ accompany the heavy quark; its application in the finite mass case with $s$ is less clear. The coincidence of the masses lying just below the $DK$ and $D^{*}K$ thresholds has led to suggestions that their masses are lowered from the naive $c\bar{s}$ values by a mechanism similar to that responsible for lowering the $f_{0}(980)$ and $a_{0}(980)$ to the vicinity of the $K\bar{K}$ threshold. The challenge now is to distinguish between the $c\bar{s}$ and molecule interpretations. One suggestion is the radiative transition $1^{+}\to 0^{+}\gamma$. In the molecule interpretation this is driven by $D^{*}\to D\gamma$, which is known. In the $c\bar{s}$ case this branching ratio is predicted to be $1.2\times 10^{-3}$[29]. $D_{s}(2632)$ has anomalous branching ratios favouring $D_{s}\eta$ over $DK$. Attempts to accommodate these within the $c\bar{s}$ picture (as a radial excitation of the $1^{-}$) exploiting nodes in the radial wavefunction are unable to do so without choosing unrealistic values of established parameters. Suggestions that it is a tetraquark ($cu\bar{s}\bar{u}$) which feeds $D_{s}\eta$ and not $DK$ run into problems, as they also imply $D_{s}\pi^{0}$ decays. These would feed $D_{s}\gamma\gamma$, but there is no sign of an enhancement at 2632 in these data. Ref.[30] concludes that either our understanding of hadron decays is wrong or this state is an artefact. Its non-observation in other experiments adds weight to the latter interpretation. 
Some reflections on pentaquarks If narrow width pentaquarks exist with positive parity, powerful correlations must arise in Strong QCD. In QCD attractions are predicted between distinct flavoured pairs in net spin zero, which is the starting point of two particular models[8, 10]. It has not been demonstrated how scalar diquarks form with the ultra-light masses required to accommodate a 1540 MeV state; their stability is an open question; their effective boson nature and consistency with hadron spectroscopy also are not well understood. But first we need to establish whether this state is real. I shall now review various features. Mass The original prediction[31] assumed that the 1710 $N^{*}$ is in the $\bar{10}$ and used this to set the mass scale. However $\gamma p\to p^{*}(\bar{10})$ is forbidden by U-spin, which argues against this[12]. The mass gap of 180 MeV per unit of strangeness is also suspect in a quark model interpretation, as it leads to a 540 MeV spread across the $\Theta-\Xi$ multiplet even though there is only one extra strange mass in going from $(udud\bar{s})$ to $(usus\bar{d})$, and so a much smaller gap would be anticipated[8]. Beware also naive application of Gell-Mann-Okubo mass formulae, which do not distinguish between $|S|$ and $S$ as one goes from $\Theta(S=+1)$ to $\Xi(S=-2)$. If the $\Theta$ should prove to be real, then no simple mapping from a chiral soliton onto a pentaquark description seems feasible. The relation between these is more profound. Nonetheless a narrow state of mass $\sim$1540 MeV has been claimed. But when one compares the masses reported in $K^{+}n$ versus $K^{0}p$ there appears to be a tantalising trend towards a difference[32]. Is this a hint of an explanation (see later) or are we being fooled by poor statistics? No models successfully predict the mass; in all cases it is fitted relative to some other assumed measure. The original chiral soliton calculation normalised to the 1710, as already discussed. 
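The 540 MeV figure quoted above is simply the assumed 180 MeV-per-unit-of-strangeness gap applied across the full multiplet, since the $\Theta$ carries $S=+1$ and the $\Xi$ carries $S=-2$:

```latex
% Three units of strangeness separate the ends of the antidecuplet
\Delta M_{\Theta \to \Xi}
  = \left| S_{\Theta} - S_{\Xi} \right| \times 180\,\mathrm{MeV}
  = \left| (+1) - (-2) \right| \times 180\,\mathrm{MeV}
  = 540\,\mathrm{MeV}
```

The quark model objection is that the constituent content changes by only one strange mass between $(udud\bar{s})$ and $(usus\bar{d})$, so a spread far below 540 MeV would be expected.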
Ref.[8] assumes that the Roper 1440 is the $udud\bar{d}$ (but this state is partnered by $\Delta(1660)$ which, along with its electromagnetic and other properties, is in accord with it being a radial $qqq$ excitation of the nucleon). Ref.[10] noted the kinematic similarity between reduced masses in their diquark-triquark model and the $c\bar{s}$ system. They adopted a 200 MeV orbital excitation energy from the $1^{-}-0^{+}(2317)$ mass gap to realise a 1540 MeV mass for the $\Theta$. However, if one makes a spin averaged mass for the $L=0,1$ levels, notwithstanding the questions about the low mass of the 2317, one gets nearer to a 450-480 MeV energy gap and hence a $\Theta\sim$ 1800 MeV. In summary, all models appear to normalise to some feature and do not naturally explain the low mass of an orbitally excited pentaquark. Width The chiral soliton model Lagrangian contains three terms with arbitrary strengths, $A,B,C$. Linear combinations of these can be related to the observable transition $\Delta N\pi$ and the $F/D$ ratio for the $NN\pi$ vertex. The $\Theta NK$ vertex is then given by $g(\bar{10})=1-B-C$. We thus have one unknown, $g(\Theta NK)$, described by another unknown, $C$. Ref.[33] shows the coupling is relatively insensitive to $F/D$ and that it is $C$ that controls $g(\Theta NK)$. In the non relativistic quark model it is argued[31, 33] that $F/D=2/3$ and the absence of $s\bar{s}$ in the nucleon lead to $B=1/5;C=4/5$. This has the remarkable implication that $g(\Theta NK)=0$. If the $\Theta$ phenomenon survives, then a deeper understanding of this result and its implications would be welcome. It would also raise the challenge of how the $\Theta$ is strongly produced. Phenomenologically $\Gamma(\Lambda(1520)\to KN)\sim 7$ MeV has been suggested as a measure for narrow widths. However this is D-wave and phase space limited: the P-wave $\Lambda(1660)$ width is $\sim 100$ MeV. 
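For concreteness, the spin-averaging can be written out as a back-of-the-envelope check (approximate PDG-era $c\bar{s}$ masses in MeV are assumed here; which $L=1$ states to include in the average is itself a choice):

$$\bar{M}_{L=0}=\frac{M(0^{-})+3M(1^{-})}{4}\approx\frac{1968+3\times 2112}{4}\simeq 2076\ {\rm MeV}\,,$$

$$\bar{M}_{L=1}=\frac{M(0^{+})+3M(2460)+3M(2536)+5M(2^{+})}{12}\approx\frac{2317+3\times 2460+3\times 2536+5\times 2573}{12}\simeq 2514\ {\rm MeV}\,,$$

giving a gap of roughly 440 MeV, which rises towards 480 MeV if the disputed 2317 and 2460 are excluded from the $L=1$ average. Either way the spin-averaged gap is far from the 200 MeV adopted in Ref.[10].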
Furthermore these decays require creation of a $q\bar{q}$; for the pentaquark one has $qqqqq\bar{q}$ and the challenge is to stop its decay. There are no indications in conventional spectroscopy underpinning a narrow width of $\sim 1$ MeV for the $\Theta$. Colour, spin and flavour mismatches between $\Theta$ and $NK$ wavefunctions have been proposed to suppress the natural width by large factors[34]. However it is easy to override these: soft gluon exchange defeats the colour; spin flip costs little; and flavour rearrangement can occur. Further, there is a colour singlet $q\bar{q}$ in relative S-wave within the correlated models of JW and KL[35], and their dissociation into $NK$ seems hard to prevent. Ref.[36] suggested that overlaps of spatial wavefunctions between pentaquark and nucleon may lead to a suppression. However it has not been demonstrated that such a suppression is generated dynamically. Dudek has shown[35] that such an effect can arise, but this involves taking a non relativistic picture rather literally. It is also unclear how a colour $\bar{3}$ diquark is attracted into a tighter (smaller?) configuration than a colour singlet meson. We almost have a paradox here. The small width implies a feeble coupling to $KN$, yet something must couple to the $\Theta$ strongly to give a normal hadronic production rate[9]. This is an enigma which we must confront. Production We have heard several experimental limits on the hadroproduction of the $\Theta$. Some are not yet restrictive, e.g. the limit in $\psi\to\Theta\bar{\Theta}$, which is phase space limited, or that in $\psi^{\prime}$ decay, where one can claim that there is a big price to pay for creating ten $q$ and $\bar{q}$. So it is possible to wriggle. However on balance the limits in high statistics hadroproduction appear impressive. The onus is on supporters to explain them away or find a loophole. An example of such a loophole suggested here[37] asks why signals appear in photoproduction but not in hadroproduction. 
The photon contains $s\bar{s}$ and so may be able to feed the $\bar{s}$ needed to make $\Theta(udud\bar{s})$ in a way not so readily accessible in hadroproduction. Further appeal is made to a CLAS observation that suggests that a narrow $N^{*}$ at $\sim 2.4$ GeV may be the source of $\Theta+K$. While such a dynamics can be tested by searching for other decay modes, forced by SU(3)[37], there remain problems. CLAS see this (statistically insignificant) $N^{*}$ in $\pi$ exchange, and so the photon does not appear to be essential: why is this object (and its progeny, the $\Theta$) not also made in hadroproduction if it is made by $\pi N$? Second, while a 2.4 GeV $N^{*}$ may be produced in the 3-5 GeV CLAS experiment, it is kinematically inaccessible in the original SPring-8 experiment and in the earlier CLAS $\gamma d$. So the source of the $\Theta$ in this latter pair would still remain to be explained. Ref.[38] have noted that the relative photoproduction strengths of the $\Theta$ and the related $\Sigma^{+}_{5}$ should be similar, even though the scale of each individually is highly model dependent. As either of these can decay into $K_{s}p$, the absence of any $\Sigma^{+}_{5}$ signal (even after mixing with known $\Sigma^{*}$) accompanying the claimed $\Theta$ in the HERMES data, for example, raises questions. Photoproduction has also been suggested as a source of kinematic peaks that fake a $\Theta$[39]. $\gamma N\to a_{2}/\rho_{3}N$ followed by the $K\bar{K}$ decays of these mesons in D/F waves gives a forward-backward peaking in the c.m. along the direction of the recoil nucleon and a spurious $KN$ peak. At first sight the experimental absence of such peaks in $K^{-}n$ supports the reality of the peak in $K^{+}n$, but it is not necessarily so simple. 
Charge exchange and D/F interference can introduce a charge asymmetry, and it is claimed to be possible to choose phases such that a narrow peak can arise in $K^{+}n$ (after feeding through Monte Carlo) whereas a broad structure would arise in $K^{-}n$. It has been suggested in the discussion sessions here that the different Q-values could cause a mass shift in the kinematic peak in $K^{+}n$ versus $K^{0}p$, in accord with the trend of the data[40]. Whether this kinematic effect is responsible may be settled when higher statistics data and significant Dalitz plots become available. Conclusion Precision and variety in experiments are taking us beyond the 40-year-old simple $q\bar{q}$ quark model of mesons. The role of strong glue in QCD is tantalising: $\psi\to\gamma\gamma V$ is a novel opportunity that can test the current interpretation of the mixing between the scalar glueball and $q\bar{q}$ above 1 GeV. Evidence for exotic hybrid mesons is emerging; $\chi_{1}\to\pi X$ in S-wave immediately accesses the exotic $X=1^{-+}$ channel. The analogous $\chi_{0}\to\pi X$ probes $X=0^{-+}$, where production of $\pi(1800)$ (a potential hybrid partner of the pion, and interesting due to its mass degeneracy with the charmed $D$) may be studied. Multiquark molecules are appearing. I suggest that $X(3872)$ is $1^{++}$; the $D_{s}(2317/2460)$ are $0^{+},1^{+}$ shifted to below the $DK/D^{*}K$ thresholds by dynamics analogous to those that pull the $f_{0}(980)$ and $a_{0}(980)$ to below the $K\bar{K}$ threshold. Ways of testing this need to be clarified. I suggest that the $D_{s}(2632)$ is an artefact: data can easily prove me wrong. The $\Theta$, and the question of narrow width pentaquark(s), is rightly at the centre of attention. Either the $\Theta$ is some artefact (if so, what?) or, if real, the behaviour of Strong QCD is profound and our current model attempts will turn out to be mere tinkering. 
For future historians: of the vote taken at this conference from around 1000 physicists, $\sim 60\%$ believed the evidence remains inconclusive, $\sim 40\%$ believed that the $\Theta$ is not a resonance, and in the dark of the hall only a handful were convinced that a genuine narrow resonance has been found. A vote taken a year ago at Hadron03 scored $\sim 50\%$, $25\%$ and $25\%$ respectively. Time will tell. References [1] U Vogl and W Weise, Prog. Part. Nucl. Phys. 27 (1991) 195 [2] F E Close, in proceedings of Scottish Universities Summer School SUSSP04 (to be published) [3] F Wilczek hep-ph/0409168 [4] L Maiani et al, hep-ph/0407017 [5] R L Jaffe, Physical Review D15 (1977) 281; R L Jaffe and F E Low, Physical Review D19 (1979) 2105 [6] F E Close and N Tornqvist, Journal of Physics G 28 (2002) R249 [7] H J Lipkin Phys Letters 70B (1977) 113 [8] R L Jaffe and F Wilczek, hep-ph/0307341 [9] S. Jin, plenary talk on Hadron Spectroscopy. [10] M Karliner and H J Lipkin, hep-ph/0307243 [11] T Burns and F E Close (in preparation); S Chung, E Klempt and J Korner, hep-ph/0211100 [12] S Eidelman et al (Particle Data Group) Phys. Letters B592 (2004) 1 [13] J Rosen, parallel session on Hadron Spectroscopy [14] F A Harris, parallel session on Hadron Spectroscopy [15] F E Close and H J Lipkin Phys Letters B372 (1996) 306; D V Amelin et al. Phys. Letters B356 (1995) 595 [16] G Bali et al Physics Letters B309 (1993) 378; C J Morningstar and M Peardon, Physical Review D56 (1997) 4043; D Weingarten, Nucl.Phys.Proc.Suppl. 73 (1999) 249 [17] F E Close and M J Teper, On the lightest Scalar Glueball, Rutherford Appleton Laboratory RAL-96-040; Oxford University OUTP-96-35P (1996) [18] C Amsler and F E Close, Physics Letters B353 (1995) 385 [19] F E Close and A Kirk, Eur.Phys.J. 
C21 (2001) 531; F E Close and A Kirk, Physics Letters B483 (2000) 345 [20] F E Close and Qiang Zhao, hep-ph/0402090, Physics Letters B586 (2004) 332 [21] N N Achasov and G N Shestakov, Physical Review D56 (1997) 212 [22] J Weinstein and N Isgur, Physical Review Letters 48 (1982) 659; Physical Review D27 (1983) 588 [23] F E Close, Proceedings of Hadron03, hep-ph/0311087 [24] S Olsen (Belle Collaboration) hep-ex/0407033 [25] F E Close and P R Page, Physics Letters B578 (2004) 119-123; N A Tornqvist hep-ph/0402237 [26] E Swanson, parallel session on Hadron Spectroscopy [27] S L Olsen, parallel session on Hadron Spectroscopy; Belle collaboration ICHEP04 8-0685 [28] W Bardeen, E Eichten and C Hill, hep-ph/0305049, Phys Rev D68 (2003) 054024 [29] S Godfrey hep-ph/0305122 [30] T Barnes, F E Close, J J Dudek, S Godfrey and E Swanson, hep-ph/0407120 [31] D Diakonov, V Petrov and M Polyakov hep-ph/9703373; Zeit. fur Physik A359 (1997) 305 [32] F E Close and Q Zhao hep-ph/0404075 [33] J Ellis, M Karliner and M Praszalowicz hep-ph/0401127, JHEP 0405 (2004) 002 [34] B Jennings and K Maltman hep-ph/0308286, Phys Rev D69 (2004) 094020; F E Close and J J Dudek hep-ph/0401192, Phys. Lett. B586 (2004) 75; C Carlson et al hep-ph/0312325 [35] J J Dudek hep-ph/0403235; H Hogaasen and P Sorba, hep-ph/0406078 [36] D Melikhov, S Simula and B Stech hep-ph/0405037, Phys Lett B594 (2004) 265 [37] H J Lipkin, comments in parallel session at ICHEP04; H J Lipkin and M Karliner, hep-ph/0405002 [38] F E Close and Qiang Zhao hep-ph/0403159, Phys Lett B590 (2004) 176 [39] A Dzierba et al hep-ph/0311125, Phys Rev D69 (2004) 051901 [40] F E Close and Qiang Zhao, hep-ph/0404075
Observational constraints on the inflaton potential combined with flow-equations in inflaton space Steen H. Hansen, Martin Kunz Department of Physics, Nuclear & Astrophysics Laboratory, University of Oxford, Keble Road, Oxford OX1 3RH, U.K. (Draft version November 26, 2020) Abstract Direct observations provide constraints on the first two derivatives of the inflaton potential in slow roll models. We discuss how present day observations, combined with the flow equations in slow roll parameter space, provide a non-trivial constraint on the third derivative of the inflaton potential. We find a lower bound on the third derivative of the inflaton potential, $V^{\prime\prime\prime}/V>-0.2$. We also show that unless the third derivative of the inflaton potential is unreasonably large, the tensor to scalar ratio, $r$, is predicted to be bounded from below: $r>3\times 10^{-6}$. 1 Introduction Inflation is today considered a natural and necessary part of the cosmological standard model, providing the initial conditions for the cosmic microwave background radiation and large scale structure formation. Our knowledge of the fundamental physics responsible for inflation is, however, very limited, and only recent observations of the cosmic microwave background [Netterfield et al. 2002, Lee et al. 2001, Halverson et al. 2002] and large scale structure [Croft et al. 2000, Saunders et al. 2000, Percival et al. 2001] have provided the first glimpse of the underlying physics. This has been achieved (and is still only possible) in slow roll inflation (see [Lyth & Riotto 1999] for a review of slow roll and a list of references). For any given inflationary model one can find the power spectrum of primordial curvature perturbations, ${\cal P}(k)$, which is a function of the wavenumber $k$. 
This power spectrum can be Taylor-expanded about some wavenumber $k_{0}$ and truncated after a few terms [Lidsey et al. 1997] $$\ln{\cal P}(k)=\ln{\cal P}(k_{0})+(n_{S}-1)\,\ln\frac{k}{k_{0}}+\frac{1}{2}\left.\frac{d\,n_{S}}{d\,\ln k}\right|_{k_{0}}\,\ln^{2}\frac{k}{k_{0}}+\cdots\quad(1)$$ where the first term is a normalization constant, the second is the power-law approximation, with the case $n_{S}=1$ corresponding to a scale invariant (Harrison-Zel’dovich) spectrum, and the third term is the running of the spectral index. Early data analyses [Wang, Tegmark & Zaldarriaga 2002, Kinney, Melchiorri & Riotto 2001] truncated this expansion after the first two terms, hence assuming that the bend of the spectrum is zero, $\partial_{{\rm ln}k}\equiv dn_{S}/d{\rm ln}k|_{k=k_{0}}=0$. However, as shown in [Copeland, Grivell & Liddle 1997, Hannestad, Hansen & Villante 2001], this early truncation gives overly strong constraints on both scalar and tensor indices, and the analysis must allow for a bend of the spectrum. In most slow roll (SR) models $\partial_{{\rm ln}k}$ is expected to be very small, since it is second order in small parameters [Kosowsky & Turner 1995], but there are very interesting models where this need not be the case [Stewart 1997, Stewart 1997b, Kinney & Riotto 1998, Dodelson & Stewart 2002], and $\partial_{{\rm ln}k}$ may assume values large enough to be observable (see e.g. refs. [Copeland, Grivell & Liddle 1997, Covi & Lyth 1999]). The more general SR models are constrained through the expansion (1), which can provide constraints on the first two derivatives of the inflaton potential [Liddle & Turner 1994, Hannestad et al. 2002]. 
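As a minimal illustration of how expansion (1) is evaluated in practice, the following sketch computes $\ln{\cal P}(k)$ from a normalization, a spectral index and a running (the function name and the numerical parameter choices are illustrative only, not taken from the paper):

```python
import numpy as np

def ln_power(k, k0, lnP0, nS, run):
    """Truncated expansion (1) of the primordial power spectrum:
    ln P(k) = ln P(k0) + (nS - 1) ln(k/k0) + (1/2) run ln^2(k/k0)."""
    x = np.log(k / k0)
    return lnP0 + (nS - 1.0) * x + 0.5 * run * x * x

# At the pivot scale the expansion reduces to the normalization constant;
# a negative running bends the spectrum down away from the pivot.
p_pivot = ln_power(1.0, 1.0, 0.0, 0.96, -0.04)
p_away = ln_power(np.e, 1.0, 0.0, 0.96, -0.04)
```

Truncating after the second term (`run = 0`) recovers the pure power-law analyses discussed above.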
In SR it is straightforward to find the derivatives of the scalar and tensor spectral indices [Kosowsky & Turner 1995, Liddle & Lyth 1992], $dn_{S}/d{\rm ln}k$ and $dn_{T}/d{\rm ln}k$, and these two provide the flow equations in SR space [Hoffman & Turner 2001]. We discuss below how one can combine present day observations with these flow equations to obtain a non-trivial bound on the third derivative of the inflaton potential, $V^{\prime\prime\prime}/V$ (or combinations like $V^{\prime}V^{\prime\prime\prime}/V^{2}$), under the assumption that $V^{\prime\prime\prime}/V$ (or $V^{\prime}V^{\prime\prime\prime}/V^{2}$) can be treated as approximately constant. 2 Slow roll models The flow equations. Slow roll models are traditionally defined through the 3 parameters $\epsilon,\eta$ and $\xi^{2}$, which roughly correspond to the first, second and third derivatives of the inflaton potential. We will use the notation [Lyth & Riotto 1999] $$\epsilon\equiv\frac{M^{2}}{2}\left(\frac{V^{\prime}}{V}\right)^{2}\,\,\,,\,\,\,\eta\equiv M^{2}\frac{V^{\prime\prime}}{V}\,\,\,,\,\,\,\xi^{2}\equiv M^{4}\frac{V^{\prime}V^{\prime\prime\prime}}{V^{2}}\,,\quad(2)$$ where $M$ is the reduced Planck mass, $M=2.4\times 10^{18}$ GeV, from which one can express the SR parameters using the directly observable quantities $n_{S},r$ and $\partial_{{\rm ln}k}$ $$2\xi^{2}=-\partial_{{\rm ln}k}-24\epsilon^{2}+16\epsilon\eta\,,\quad(3)$$ $$2\eta=n_{S}-1+6\epsilon\,,\quad(4)$$ $$2\epsilon=\frac{r}{\kappa}=-n_{T}\,,\quad(5)$$ where $r$ is the tensor to scalar ratio at the quadrupole. Eqs. (4,5) are truncated at order $\xi^{2}$ and eq. (3) at order $V^{\prime 2}V^{\prime\prime\prime\prime}/V^{3}$, and are thus correct to leading order in the slow roll expansion. The factor $\kappa$ in eq. 
(5) depends on the given cosmology [Knox 1995, Turner & White 1996], in particular on the value of $\Omega_{\Lambda}$ and $\Omega_{M}$, and in this paper we will use the value $\kappa=5$, corresponding to $\Omega_{\Lambda}=0.65$ and $\Omega_{M}=0.35$. As the inflaton rolls down the potential, the values of $n_{S}$ and $n_{T}$ will change, and this variation is governed by the flow equations [Liddle & Lyth 1992, Kosowsky & Turner 1995] $$\frac{d\,n_{S}}{d\,N}=-4\frac{r}{\kappa}\left[\left(n_{S}-1\right)+\frac{3}{2}\frac{r}{\kappa}\right]+2\xi^{2}\,,\quad(6)$$ $$\frac{d\,n_{T}}{d\,N}=-\frac{r}{\kappa}\left[\left(n_{S}-1\right)+\frac{r}{\kappa}\right]\,,\quad(7)$$ where we have used $d\,{\rm ln}k=-dN$ with $N$ the number of Hubble times (e-folds) until the end of inflation. These equations, too, are correct at leading order in slow roll, since one has $d/dN=-(1-\epsilon)d/d{\rm ln}k$ [Liddle & Turner 1994]. The connection between $\xi^{2}$ and $M^{3}V^{\prime\prime\prime}/V$ through $\epsilon$ is given by equations (2): $\xi^{2}=\sqrt{r/\kappa}M^{3}V^{\prime\prime\prime}/V$. Certainly one can find good inflationary models that do not obey this slow-roll description. This could occur, for example, because the derivation of the slow-roll equations is based on the assumption of a slowly varying Hubble parameter, which could be violated in particular models. As $N$ decreases, the inflaton rolls down its potential, and the observable parameters are determined when the relevant scales cross outside the horizon, approximately 50-60 e-folds before the end of inflation [Kolb & Turner 1990]. Single field inflation will end when the SR conditions are violated [Kolb & Turner 1990, Hoffman & Turner 2001] $$r<6\kappa\,\,\,\,\,\mbox{or}\,\,\,\,\,\left|\left(n_{S}-1\right)+\frac{3}{\kappa}r\right|<6\,.\quad(8)$$ The area in $(n_{S},r)$ space inside this boundary is denoted the SR “validity-region”. 
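As an illustration only, the flow equations (6)-(7) at fixed $\xi^{2}$ can be integrated numerically with a standard Runge-Kutta scheme, using $r/\kappa=-n_{T}$ from eq. (5) to close the system. The function and variable names below are ours, not from the paper; the sanity check at the end merely exercises the trivial fixed point at $r=0$, $\xi^{2}=0$, where both derivatives vanish:

```python
import numpy as np

KAPPA = 5.0  # value of kappa used in the text (Omega_Lambda = 0.65)

def flow_rhs(state, xi2):
    """Right-hand sides of the flow equations (6)-(7);
    r/kappa = -n_T follows from eq. (5)."""
    nS, nT = state
    rk = -nT  # r / kappa
    dnS = -4.0 * rk * ((nS - 1.0) + 1.5 * rk) + 2.0 * xi2
    dnT = -rk * ((nS - 1.0) + rk)
    return np.array([dnS, dnT])

def evolve(state, xi2, n_efolds, steps=5000):
    """Integrate the flow at fixed xi^2 over n_efolds with classical RK4."""
    h = n_efolds / steps
    s = np.array(state, dtype=float)
    for _ in range(steps):
        k1 = flow_rhs(s, xi2)
        k2 = flow_rhs(s + 0.5 * h * k1, xi2)
        k3 = flow_rhs(s + 0.5 * h * k2, xi2)
        k4 = flow_rhs(s + h * k3, xi2)
        s = s + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return s

# Sanity check: with r = 0 (n_T = 0) and xi^2 = 0 both derivatives
# vanish, so the state is a fixed point of the flow.
final = evolve([0.95, 0.0], xi2=0.0, n_efolds=50.0)
```

Running the SR violating boundary (8) back in time, as done in Section 3, amounts to evolving such boundary points over 50 e-folds with this kind of integrator.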
The solid lines in fig. 1 show this region, and also examples of the flow of two models (dotted lines). An almost trivial observation is that the region allowed by current observations, given in (10-12), lies inside the SR validity-region. In order to close the set of equations, so that the flow equations uniquely define the time evolution of $n_{S}$ and $n_{T}$, we need to introduce an additional constraint, and there are various possibilities. In ref. [Hoffman & Turner 2001] the assumption was made that $x^{\prime\prime}=0$, where $x=V^{\prime}/V$. Another possibility would be to assume that either $V^{\prime\prime\prime}/V$ or $\xi^{2}$ can be treated as constant. Different choices will lead to different fixed points and different time evolution of $n_{S}$ and $n_{T}$, and general conclusions are therefore only credible if they are reached for any choice of this additional constraint. Observational constraints. The COBE observations [Bunn, Liddle & White 1996] gave the first constraint on the first derivative $$\frac{V^{3/2}}{M^{3}V^{\prime}}\approx 5\times 10^{-4}\,,\quad(9)$$ and the present day constraints on slow roll parameters are improved when combining CMB data with data from the Lyman-$\alpha$ forest, because the error-ellipses for CMB and Lyman-$\alpha$ are almost perpendicular [Hannestad et al. 2002]. The reason for using Lyman-$\alpha$ data [Croft et al. 2000] instead of “standard” LSS data (such as PSCz [Saunders et al. 2000] or 2dFGRS [Percival et al. 2001]) is that the Lyman-$\alpha$ data are obtained at high red-shift, where small scales are still linear. One should naturally keep in mind that neither CMB nor Ly-$\alpha$ data include all the possible systematic errors. The bounds obtained are [Hannestad et al. 
2002] (all at $2\sigma$) $$0.8<n_{S}<1.0\,,\quad(10)$$ $$0<r<0.3\,,\quad(11)$$ $$-0.05<\partial_{{\rm ln}k}<0.02\,,\quad(12)$$ where $n_{S}$ is the scalar spectral index, $r$ is the tensor to scalar ratio, and $\partial_{{\rm ln}k}=dn_{S}/d{\rm ln}k$ is the bend defined through eq. (1). These bounds directly provide constraints on the first and second derivatives of the potential, $$M\left|\frac{V^{\prime}}{V}\right|<0.25\,,\quad(13)$$ $$M^{2}\left|\frac{V^{\prime\prime}}{V}\right|<0.1\,,\quad(14)$$ however, the third derivative is not directly constrained. Instead, eqs. (10-12) only limit $\xi^{2}$ to be smaller than about $|\xi^{2}|<0.036$, when assuming independent errors on $n_{S},r$ and $dn_{S}/d{\rm ln}k$; in reality one could obtain a slightly stronger bound. To obtain a bound on $V^{\prime\prime\prime}$ we must combine the observational constraints (10-12) with the flow equations in SR parameter space, eqs. (6,7). This is because one has $dn_{S}/d{\rm ln}k\approx\sqrt{r}\,(V^{\prime\prime\prime}/V)\,(-2M^{3}/\sqrt{\kappa})+4r/\kappa\,[n_{S}-1+3/2\,r/\kappa]$, and since we do not have a lower bound on $r$ we cannot get any direct constraint on $V^{\prime\prime\prime}/V$. 3 Discussion We are going to consider the case where slow-roll inflation ends because the slow-roll conditions are violated, eq. (8). Another possibility would be to allow for other fields coupled to the inflaton field which could end inflation. The parameters observable with CMB and LSS are determined approximately 50 e-folds before the end of inflation, and we therefore run the SR violating boundary back in time 50 e-folds. This is done for various values of fixed $\xi^{2}$ (or fixed $V^{\prime\prime\prime}/V$). 
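The $|\xi^{2}|\lesssim 0.036$ estimate quoted in Section 2 can be reproduced by a brute-force scan of eqs. (3)-(5) over the $2\sigma$ ranges (10)-(12), treating the errors as independent. The sketch below is ours (grid resolution and names are assumptions, not from the paper):

```python
import numpy as np

KAPPA = 5.0  # as used in eq. (5) for Omega_Lambda = 0.65

def xi2_from_observables(nS, r, d_lnk):
    """Invert eqs. (3)-(5): eps = r/(2 kappa), eta = (nS - 1 + 6 eps)/2,
    and then 2 xi^2 = -d_lnk - 24 eps^2 + 16 eps eta."""
    eps = r / (2.0 * KAPPA)
    eta = 0.5 * (nS - 1.0 + 6.0 * eps)
    return 0.5 * (-d_lnk - 24.0 * eps ** 2 + 16.0 * eps * eta)

# Scan the 2-sigma ranges (10)-(12), treating the errors as independent.
nS_vals = np.linspace(0.8, 1.0, 41)
r_vals = np.linspace(0.0, 0.3, 41)
d_vals = np.linspace(-0.05, 0.02, 41)
max_abs_xi2 = max(abs(xi2_from_observables(n, r, d))
                  for n in nS_vals for r in r_vals for d in d_vals)
```

The maximum occurs at the corner $n_{S}=1.0$, $r=0.3$, $\partial_{{\rm ln}k}=-0.05$, giving $|\xi^{2}|\approx 0.036$ in agreement with the bound quoted in the text.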
Now we demand that the observable parameters be in agreement with eq. (10), and if no point on the SR violating boundary lands inside the observed parameter-range, then we can exclude this value of $\xi^{2}$ (or $V^{\prime\prime\prime}/V$). Let us first consider the case where $\xi^{2}$ can be treated as a constant during the 50 e-folds. If $\xi^{2}=0$ one finds that 50 e-folds before the crossing of the SR violating boundary one has $$2\times 10^{-5}<r<0.5\,,\quad(15)$$ when we demand that $n_{S}$ complies with eq. (10). This can also be seen in fig. 2, where the thicker solid line is for $\xi^{2}=0$. If $\xi^{2}$ is positive then $r$ must be even smaller, since the region in fig. 2 moves to the right (larger $n_{S}$), and for $\xi^{2}>0.06$ there are no more points in agreement with observations. For negative $\xi^{2}$ the acceptable values of $r$ are larger than $2\times 10^{-5}$, and for $\xi^{2}<-0.06$ there are no points in agreement with observations. We hence conclude that in the case where $\xi^{2}$ can be considered constant throughout the 50 e-folds and inflation ends by violating the slow-roll conditions, one must have $|\xi^{2}|<0.06$. We therefore find a constraint on $\partial_{{\rm ln}k}$ similar to (12) from the observational constraints on $n_{S}$ and the flow equations alone. This bound on $\xi^{2}$ can be converted into a bound on $V^{\prime\prime\prime}/V$ using the predicted $r$ and the relation $\xi^{2}=\sqrt{r/\kappa}M^{3}V^{\prime\prime\prime}/V$. We find $M^{3}V^{\prime\prime\prime}/V>-0.2$. If one instead considers the case where $V^{\prime\prime\prime}/V$ can be treated as a constant during the 50 e-folds, then the conclusions are slightly different. Again, in the case with $V^{\prime\prime\prime}/V=0$ one finds $2\times 10^{-5}<r<0.5$. When $V^{\prime\prime\prime}/V$ is negative, no points agree with observations if $V^{\prime\prime\prime}/V<-0.05$. 
Only for positive $V^{\prime\prime\prime}/V$ will there always be acceptable points; however, the allowed range for $r$ will decrease, e.g. for $M^{3}V^{\prime\prime\prime}/V=1$ one finds $10^{-7}<r<10^{-5}$. As discussed above, the most credible results must hold independently of the additional constraint (fixed $\xi^{2}$ or fixed $V^{\prime\prime\prime}/V$). We have seen that one always finds a lower bound $$\frac{V^{\prime\prime\prime}}{V}>-0.2\,.\quad(16)$$ This is the first constraint found on the third derivative of the inflaton potential, and it is valid under the assumptions specified above. It is important to note that the approach adopted here differs from the results of ref. [Liddle & Turner 1994], where it was pointed out that an observation of $\partial_{{\rm ln}k}$ would provide knowledge about $V^{\prime\prime\prime}$. The difference is that in our case we have only an observational upper bound on $r$, and without the use of the flow equations this leaves $V^{\prime\prime\prime}$ completely unknown. No strong predictions can be made on the magnitude of $r$, simply because if $V^{\prime\prime\prime}/V$ is large then $r$ is allowed to be smaller; however, one would often expect $M^{3}V^{\prime\prime\prime}/V$ to be smaller than both $M^{2}V^{\prime\prime}/V$ and $MV^{\prime}/V$, eqs. (13,14), in which case one predicts $r>3\times 10^{-6}$. The number of e-folds $N$ depends on the detailed mechanism of inflation, such as the reheat temperature and the energy scale of inflation, and can be somewhat different from 50 (see e.g. [Lyth & Riotto 1999]). If $N=60$ our direct bound, $V^{\prime\prime\prime}/V>-0.05$, remains unchanged; however, the inferred bound from the case of constant $\xi^{2}$ is weakened by approximately a factor of 2. The lower bound on $r$ discussed above becomes $r>3\times 10^{-7}$. Naturally, for a lower value of $N$ the bounds are correspondingly stronger. 
4 Conclusion COBE gave us the first clear information on the first derivative of the inflaton potential, and the combination of CMB observations with data from the Lyman-$\alpha$ forest has given us information on the first two derivatives of the inflaton potential. Here we have combined the present observations with the flow equations in slow roll space, and found a lower bound on the third derivative of the inflaton potential, $V^{\prime\prime\prime}/V>-0.2$. We have also shown that unless $V^{\prime\prime\prime}/V$ is unreasonably large, the tensor to scalar ratio, $r$, is predicted to be bounded from below: $r>3\times 10^{-6}$. Acknowledgements It is a pleasure to thank Pedro Ferreira and Francesco Villante for comments and discussions, and Massimo Hansen for inspiration. SHH is supported by a Marie Curie Fellowship of the European Community under the contract HPMFCT-2000-00607. MK is supported by a Marie Curie Fellowship of the Swiss National Science Foundation under the contract 83EU-062445. References [Bunn, Liddle & White 1996] Bunn E. F., Liddle A. R., White M. J., 1996, Phys. Rev. D 54, 5917. [Copeland, Grivell & Liddle 1997] Copeland E. J., Grivell I. J., Liddle A. R., 1997, astro-ph/9712028. [Covi & Lyth 1999] Covi L., Lyth D. H., 1999, Phys. Rev. D59, 063515. [Croft et al. 2000] Croft R. A. et al., 2000, astro-ph/0012324. [Dodelson, Kinney & Kolb 1997] Dodelson S., Kinney W. H., Kolb E. W., 1997, Phys. Rev. D56, 3207. [Dodelson & Stewart 2002] Dodelson S., Stewart E., 2002, Phys. Rev. D65, 101301. [Halverson et al. 2002] Halverson N. W. et al., 2002, Astrophys. J. 568, 38. [Hannestad, Hansen & Villante 2001] Hannestad S., Hansen S. H., Villante F. L., 2001, Astropart. Phys. 16, 137. [Hannestad et al. 2002] Hannestad S., Hansen S. H., Villante F. L., Hamilton A. J., 2002, Astropart. Phys. 17, 375. [Hoffman & Turner 2001] Hoffman M. B., Turner M. S., 2001, Phys. Rev. D 64, 023506. [Kinney & Riotto 1998] Kinney W. H., Riotto A., 1998, Phys. Lett. 
 B435, 272. [Kinney, Melchiorri & Riotto 2001] Kinney W. H., Melchiorri A., Riotto A., 2001, Phys. Rev. D 63, 023505. [Knox 1995] Knox L., 1995, Phys. Rev. D 52, 4307. [Kolb & Turner 1990] Kolb E. W., Turner M. S., 1990, Redwood City, USA: Addison-Wesley (1990) 547 p. (Frontiers in physics, 69). [Kosowsky & Turner 1995] Kosowsky A., Turner M. S., 1995, Phys. Rev. D 52, 1739. [Lee et al. 2001] Lee A. T.  et al., 2001, Astrophys. J.  561, L1. [Liddle & Lyth 1992] Liddle A. R., Lyth D. H., 1992, Phys. Lett.  B291, 391. [Liddle & Turner 1994] Liddle A. R., Turner M. S., 1994, Phys. Rev. D50, 758 [Erratum-ibid. D54, 2980]. [Lidsey et al. 1997] Lidsey J. E., Liddle A. R., Kolb E. W., Copeland E. J., Barreiro T., Abney M., 1997, Rev. Mod. Phys.  69, 373. [Lyth & Riotto 1999] Lyth D. H., Riotto A., 1999, Phys. Rept.  314, 1. [Netterfield et al. 2002] Netterfield C. B. et al., 2002, Astrophys. J. 571, 604. [Percival et al. 2001] Percival W. J. et al., 2001,Mon. Not. R. Astron. Soc., 327, 1297. [Saunders et al. 2000] Saunders W. et al., 2000, Mon. Not. R. Astron. Soc., 317, 55. [Stewart 1997] Stewart E. D., 1997, Phys. Lett.  B391, 34. [Stewart 1997b] Stewart E. D., 1997, Phys. Rev.  D56, 2019. [Terrero-Escalante & Garcia 2002] Terrero-Escalante C. A., Garcia A. A., 2002, Phys. Rev. D 65, 023515. [Turner & White 1996] Turner M. S., White M., 1996, Phys. Rev.  D53, 6822. [Wang, Tegmark & Zaldarriaga 2002] Wang X., Tegmark M., Zaldarriaga M., 2002, Phys. Rev. D65, 123001.
On-line and On-board Planning and Perception for Quadrupedal Locomotion Carlos Mastalli, Ioannis Havoutis, Alexander W. Winkler, Darwin G. Caldwell, Claudio Semini Department of Advanced Robotics, Istituto Italiano di Tecnologia, via Morego, 30, 16163 Genova, Italy email: {carlos.mastalli, ioannis.havoutis, darwin.caldwell, claudio.semini}@iit.it, [email protected] Abstract We present a legged motion planning approach for quadrupedal locomotion over challenging terrain. We decompose the problem into body action planning and footstep planning. We use a lattice representation together with a set of defined body movement primitives for computing a body action plan. The lattice representation allows us to plan versatile movements while ensuring the feasibility of every possible plan. To this end, we propose a set of rules that define the footstep search regions and the footstep sequence given a body action. We use Anytime Repairing A* (ARA*) search, which guarantees bounded sub-optimal plans. Our main contribution is a planning approach that generates versatile movements on-line. Experimental trials demonstrate the performance of our planning approach in a set of challenging terrain conditions. The terrain information and plans are computed on-line and on-board. I Introduction Legged motion planning over rough terrain involves making careful decisions about the footstep sequence, body movements, and locomotion behaviors. Moreover, it should consider whole-body dynamics, locomotion stability, the kinematic and dynamic capabilities of the robot, and the mechanical properties and irregularities of the terrain. Frequently, locomotion over rough terrain is decomposed into: (a) perception and planning, which reason about terrain conditions and compute a plan that allows the legged system to traverse the terrain toward a goal, and (b) control, which executes the plan while compensating for uncertainties in perception, modelling errors, etc. 
In this work, we focus on generating on-line and versatile plans for quadruped locomotion over challenging terrain. In legged motion planning one can compute contacts and body movements simultaneously, leading to a coupled motion planning approach [1][2][3][4]. This can be posed as a hybrid system or a mode-invariant problem. Such approaches tend to compute richer motion plans than decoupled motion planners, especially when employing mode-invariant strategies. Nevertheless, these approaches are often hard to use in a practical setting. They are usually posed as non-linear optimization problems, such as Mathematical Programming with Complementarity Constraints [3], and are computationally expensive. On the other hand, the legged motion planning problem can be posed as a decoupled approach that is naturally divided into motion and contact planning [5][6][7]. These approaches avoid the combinatorial search space at the expense of the complexity of the achievable locomotion. A decoupled motion planner has to explore different plans in the space of feasible movements (state space), which is often defined by physical, stability, dynamic and task constraints. Nevertheless, the feasibility space is variable, since the stability constraints depend on the kind of movement, e.g. static or dynamic walking. The challenge of decoupled planners lies primarily in reducing the computation time while increasing the complexity of motion generation. To the best of our knowledge, up to now decoupled approaches have been limited in the versatility of movements and in computation time; for instance, [5] reduces the computation time but is still limited to small changes of the robot’s yaw (heading). Therefore, our main contribution is a planning approach that increases the versatility of plans, based on the definition of footstep search regions and footstep sequence according to a body action plan. 
Our method computes on-line and on-board plans (around 1 Hz) using the incoming perception information on commodity hardware. We evaluate our planning approach on the Hydraulic Quadruped robot HyQ [8] shown in Fig. 1. The rest of the paper is structured as follows: after discussing previous research in the field of legged motion planning (Section II), Section III explains how on-line and on-board versatile plans are generated based on a decoupled planning approach (body action and footstep sequence planners). In Section IV we evaluate the performance of our planning approach in real-world trials, before Section V summarizes this work and presents ideas for future work. II Related Work Motion planning is an important problem for successful legged locomotion in challenging environments. Legged systems can utilize a variety of dynamic gaits (e.g. trotting and galloping) in environments with smooth and continuous support such as flats, fields, roads, etc. Such cyclic gaits often assume that contact will be available within the reachable workspace at every step. However, for more complex environments, reactive cyclic gait generation approaches quickly reach their limits, as foot placement becomes crucial for the success of the behavior. Natural locomotion over rough terrain requires simultaneous computation of footstep sequences, body movements and locomotion behaviors (coupled planning) [1][2][3][4]. One of the main problems with such approaches is that the search space quickly grows and searching becomes infeasible, especially for systems that require on-line solutions. In contrast, we can decompose the planning and control problem into a set of sub-problems, following a decoupled planning strategy. For example, the body path and the footstep planners can be separated, thus reducing the search space for each component [5][6][7]. This reduces the computation time at the expense of limiting the planning capabilities, which are sometimes required for extremely rough terrain.
There are two main approaches to decoupled planning: contact-before-motion [9][10][6] and motion-before-contact [11][5]. These approaches find a solution in the motion space, which defines the possible motion of the robot. The motion space of a contact-before-motion planner lies in a sub-manifold of the configuration space111A configuration space defines all the possible configurations of the robot, e.g. joint positions. with certain motion constraints; thus, a footstep constraint switches the motion space to a lower-dimensional sub-manifold $Q_{\sigma}$222Configuration space of a certain stance $\sigma$. [12][13]. Therefore, these planners have to find a path that connects the initial $F_{\sigma}$ and goal $F_{\sigma^{\prime}}$ feasible regions. Note that the planner must find transitions between feasible regions and then compute paths between all the possible stances (feasible regions) that connect $F_{\sigma}$ and $F_{\sigma^{\prime}}$, which is often computationally expensive. On the other hand, motion-before-contact approaches allow us to shrink the search space using an intrinsic hierarchical decomposition of the problem into high-level planning (body-path planning and footstep planning) and low-level planning (CoG trajectory and foot trajectory generation). These approaches considerably reduce the search space but can also limit the complexity of possible movements; nevertheless, [5], [14] and [15] have demonstrated that this can be a successful approach for rough terrain locomotion. Nonetheless, these approaches were tailored for specific types of trials where goal states are mostly placed in front of the robot. As a result, most of these frameworks developed planners that use a fixed yaw and only allow forward expansion of the planned paths. Our approach takes a step forward in versatility by allowing motion in all directions and changes in the robot's yaw (heading), as in real environments this is almost always required.
The planning approach described in this paper decouples the problem into body action planning and footstep sequence planning (motion-before-contact). The decoupling strategy allows us to reduce the computation time, ensures bounded sub-optimal plans, and computes plans on-line using the incoming information about the terrain. Our body action planner uses a lattice representation based on body movement primitives. Compared to previous approaches, our planning approach generates more versatile movements on-line (i.e. forward and backward, diagonal, lateral and yaw-changing body movements) while also ensuring feasibility for every body action. Both planning and perception are computed on-line and on-board. III Technical Approach Our overall task is to plan on-line an appropriate sequence of footholds $\mathbf{F}$ that allows a quadruped robot to traverse challenging terrain toward a body goal state $(x,y,\theta)$. To accomplish this, we decouple the planning problem into body action planning and footstep sequence planning. The body action planner searches for a bounded sub-optimal solution over a growing body-state graph. Then, the footstep sequence planner selects a foothold sequence from an action-specific foothold region for each planned body action. Clearly, decoupled approaches reduce the combinatorial search space at the expense of the complexity of locomotion. This limits the versatility of the body movement (e.g. discretized yaw-changing movements instead of continuous changes). Nevertheless, our approach manages this trade-off by employing a lattice representation of potentially feasible body movements and action-specific foothold regions. III-A Terrain Reward Map The terrain reward map quantifies how desirable it is to place a foot at a specific location. The reward value for each cell in the map is computed using geometric terrain features as in [15].
Namely, we use the standard deviation of height values, the slope and the curvature, computed through regression in a 6cm$\times$6cm window around the cell in question; the features are computed from a voxel model (2 cm resolution) of the terrain. We incrementally compute the reward map based on the aforementioned features and recompute it locally whenever a change in the map is detected. For this we define an area of interest around the robot of 2.5m$\times$5.5m that uses a cell grid resolution of 4 cm. The reward value for each cell of the map is computed as a weighted linear combination of the individual features, $R(x,y)=\mathbf{w}^{T}\mathbf{r}(x,y)$, where $\mathbf{w}$ and $\mathbf{r}(x,y)$ are the weights and feature values, respectively. Figure 2 shows the generation of the reward map from the mounted RGBD sensor. The reward values are represented using a color scale, where blue is the maximum reward and red is the minimum one. III-B Body Action Planning A body action plan over challenging terrain must take into consideration the wide range of difficulties of the terrain: obstacles, type of action, potential shin collisions, potential body orientations and kinematic reachability. Thus, given a reward map of the terrain, the body action planner computes a sequence of body actions that maximizes the cross-ability of the sub-space of candidate footsteps. The body action plans are computed by searching over a body-state graph that is built using a set of predefined body movement primitives. The body action planning is described in Algorithm 1. III-B1 Graph construction We construct the body-state graph using a lattice-based adjacency model. Our lattice representation is a discretization of the configuration space into a set of feasible body movements, in which the feasibility depends on the selected sequence of footholds around the movement.
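The per-cell reward $R(x,y)=\mathbf{w}^{T}\mathbf{r}(x,y)$ can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature models (height standard deviation, a mean-gradient slope proxy, a second-difference curvature proxy) and the window size are simplified stand-ins for the regression-based features described above.

```python
import numpy as np

def terrain_reward_map(heights, weights, window=3):
    """Toy reward map: R(x, y) = w^T r(x, y), with features computed
    in a sliding window around each cell. Feature models are
    illustrative proxies, not the paper's regression features."""
    h, w = heights.shape
    reward = np.zeros_like(heights, dtype=float)
    half = window // 2
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = heights[i - half:i + half + 1, j - half:j + half + 1]
            std = patch.std()                              # roughness
            slope = np.abs(np.gradient(patch)).mean()      # slope proxy
            curv = np.abs(np.diff(patch, n=2, axis=0)).mean()  # curvature proxy
            reward[i, j] = weights @ np.array([std, slope, curv])
    return reward
```

With negative weights, flat terrain receives zero reward and rough terrain a negative one, matching the color-scale interpretation above.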
The body-state graph represents the transitions between different body-states (nodes) and is defined as a tuple $\mathcal{G}=(\mathcal{S},\mathcal{E})$, where each node $\mathbf{s}\in\mathcal{S}$ represents a body-state and each edge $\mathbf{e}\in\mathcal{E}\subseteq\mathcal{S}\times\mathcal{S}$ defines a potentially feasible transition from $\mathbf{s}$ to $\mathbf{s^{\prime}}$. A sequence of body-states (or body poses $(x,y,\theta)$) approximates the body trajectory that the controller will execute. An edge defines a feasible transition (body action) according to a set of body movement primitives. The body movement primitives are defined as body displacements (or body actions), which ensure feasibility together with a feasible footstep region. A feasible footstep region is defined according to the body action. Given a body-state query, a set of successor states is computed using the set of predefined body movement primitives. A predefined body movement primitive connects the current body state $\mathbf{s}=(x,y,\theta)$ with the successor body state $\mathbf{s^{\prime}}=(x^{\prime},y^{\prime},\theta^{\prime})$. The graph $\mathcal{G}$ is constructed dynamically because the associated transition cost $c_{body}(\mathbf{s},\mathbf{s^{\prime}})$ depends on the current and next states (or current state and action). In fact, the feasible foothold regions change according to each body action, which affects the value of the transition cost. Moreover, constructing the entire graph up front could require more memory pre-allocation and computation time than is available (on-board computation). Figure 3 shows the graph construction for the body action planner. The associated cost of every transition $c_{body}(\mathbf{s},\mathbf{s^{\prime}})$ is computed using the footstep regions. These footstep regions depend on the body action in such a way that they ensure feasibility of the plan, as explained in subsection III-B4.
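The dynamic expansion of a lattice node can be sketched as below. The primitive displacements are hypothetical placeholders (only the 1.8° yaw resolution is taken from Section IV); the point is that each successor is obtained by rotating a body-frame displacement into the world frame, so the graph never needs to be pre-allocated.

```python
import math

# Hypothetical body movement primitives: (dx, dy, dtheta) displacements
# in the body frame. Values are illustrative, not HyQ's tuned set.
PRIMITIVES = [
    (0.08, 0.0, 0.0),                  # forward
    (-0.08, 0.0, 0.0),                 # backward
    (0.0, 0.06, 0.0),                  # lateral
    (0.06, 0.06, 0.0),                 # diagonal
    (0.0, 0.0, math.radians(1.8)),     # yaw change (lattice resolution)
    (0.0, 0.0, -math.radians(1.8)),
]

def successors(state, primitives=PRIMITIVES):
    """Expand a body state (x, y, theta) with lattice primitives,
    rotating each displacement into the world frame."""
    x, y, th = state
    out = []
    for dx, dy, dth in primitives:
        wx = x + dx * math.cos(th) - dy * math.sin(th)
        wy = y + dx * math.sin(th) + dy * math.cos(th)
        out.append((wx, wy, th + dth))
    return out
```

Each successor would then be collision-checked against the obstacle map and scored with $c_{body}$ before being pushed onto the search frontier.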
For every expansion, the footstep regions of the Left-Front LF (brown squares), Right-Front RF (yellow squares), Left-Hind LH (green squares) and Right-Hind RH (blue squares) legs are computed given a body action. The resulting states $\mathbf{s^{\prime}}$ are checked for body collision with obstacles, using a predefined area of the robot, and invalid states are discarded. For legged locomotion over challenging terrain, obstacles are defined as infeasible regions to cross, e.g. a wall or a tree. In our case, obstacles are detected when the height deviation w.r.t. the estimated ground plane is larger than the kinematic capabilities of the system in question (HyQ). We build the obstacle map with 8 cm resolution; a finer resolution would be computationally expensive, since it would increase the number of collision checks. The body collision checker evaluates whether any body cell in the resulting state collides with an obstacle. III-B2 Body cost The body cost describes how desirable it is to cross the terrain with a certain body path (or sequence of body actions). This cost function is designed to maximize the cross-ability of the terrain while minimizing the path length. The body cost function $c_{body}$ is a linear combination of the terrain cost $c_{t}$, action cost $c_{a}$, potential shin collision cost $c_{pc}$, and potential body orientation cost $c_{po}$. The cost of traversing any transition between the body states $\mathbf{s}$ and $\mathbf{s^{\prime}}$ is defined as follows: $$c_{body}(\mathbf{s},\mathbf{s^{\prime}})=w^{b}_{t}c_{t}(\mathbf{s})+w_{a}c_{a}(\mathbf{s},\mathbf{s^{\prime}})+w_{pc}c_{pc}(\mathbf{s})+w_{po}c_{po}(\mathbf{s})$$ (1) where the $w_{i}$ are the respective weights of the different costs.
For a given current body state $(x,y,\theta)$, the terrain cost $c_{t}$ is evaluated by averaging the best $n$ terrain reward values over the foothold regions of a nominal stance (nominal foothold positions). The action cost $c_{a}$ is defined by the user according to the desirable actions of the robot, e.g. it may be preferable to make diagonal body movements rather than lateral ones. The potential shin collision cost $c_{pc}$ is computed by searching for potential obstacles in the predefined workspace region of the foothold, e.g. near the shin of the robot. A potential shin collision is detected within a predefined shin collision region, which depends on the configuration of each leg, as shown in Figure 4. This cost is proportional to the height above the footstep plane, where red bars represent collision elements. Finally, the potential body orientation cost $c_{po}$ is estimated by fitting a plane to the possible footsteps around the nominal stance of each leg. III-B3 Reducing the search space Decoupling the planning problem into body action and footstep planning avoids the combinatorial search space. This allows us to compute plans on-line and, moreover, to develop a closed-loop planning approach that can deal with changes in the environment. Nevertheless, the reduced motion space might span infeasible regions due to the strong coupling between the body action and footstep plans. This strong coupling means that the feasibility of a movement depends on the whole plan (body action and footstep sequence). In contrast to previous approaches [5][6][14][11][15], our planner uses a lattice-based representation of the configuration space. The lattice representation uses a set of predefined body movement primitives that allows us to apply a set of rules that ensure feasibility in the generation of on-line plans. The body movement primitives are defined as 3D motion-actions $(x,y,\theta)$ of the body that discretize the search space.
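The $c_{t}$ term of Eq. (1), averaging the best $n$ reward values around the nominal footholds, can be sketched as follows. Region size and $n$ are illustrative, and footholds are given in map-cell coordinates.

```python
import numpy as np

def terrain_cost(reward_map, nominal_footholds, region=2, n_best=3):
    """Sketch of the c_t term in Eq. (1): average the n best reward
    values in a square region around each nominal foothold, then
    negate (high reward -> low cost). Parameters are illustrative."""
    costs = []
    for (i, j) in nominal_footholds:
        patch = reward_map[i - region:i + region + 1,
                           j - region:j + region + 1].ravel()
        best = np.sort(patch)[-n_best:]   # n highest rewards
        costs.append(-best.mean())
    return float(np.mean(costs))
```

The negation converts the reward map into a cost so that the planner, which minimizes $c_{body}$, prefers well-supported footholds.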
We implement a set of different movements (body movement primitives) such as forward and backward, diagonal, lateral and yaw-changing movements. III-B4 Ensuring feasibility Our lattice representation allows the robot to search for a body action plan using a set of predefined body movement primitives. A predefined body movement primitive alone cannot guarantee a feasible plan, due to the mutual dependency of movements and footsteps. For instance, the feasibility of clockwise movements depends on the selected sequence of footsteps, and can only be guaranteed in a specific footstep region. We exploit this characteristic in order to ensure feasibility for each body movement primitive. These footstep regions increase the support triangle areas for a given body action, improving the execution of the whole-body plan. This strategy also guarantees that the body trajectory generator can connect the support triangles (support triangle switching) [14]. Figure 5 illustrates the footstep regions according to the type of body movement primitive. III-B5 Heuristic function The heuristic function guides the search toward promising solutions, improving the efficiency of the search. Heuristic-based search algorithms require that the heuristic function is admissible and consistent. Most heuristic functions estimate the cost-to-go by computing the Euclidean distance to the goal. However, such heuristic functions do not consider the terrain conditions. Therefore, we implemented a terrain-aware heuristic function that does.
The terrain-aware heuristic function computes the estimated cost-to-go by averaging the terrain cost locally and estimating the number of footsteps to the goal: $$h(\mathbf{s})=-\bar{R}\,\mathcal{F}(\|\mathbf{g}-\mathbf{s}\|)$$ (2) where $\bar{R}$ is the average reward and $\mathcal{F}(\|\mathbf{g}-\mathbf{s}\|)$ is a function that estimates the number of footsteps to the goal. III-B6 Ensuring on-line planning Open-loop planning approaches fail to deal adequately with changes in the environment. In real scenarios, the robot has a limited range of perception, which makes open-loop planning approaches unreliable. Closed-loop planning considers changes in the terrain conditions and uses predicted terrain conditions for non-perceived regions, improving the robustness of the plan. Dealing with re-planning and with updates of the environment information requires efficient management of that information during the search exploration of the state space. We reduce the computation time of building a reward map by computing the reward values from a voxelized map of the environment. Additionally, we (re-)plan and update the information using ARA${}^{*}$ [16]. ARA${}^{*}$ ensures provable bounds on sub-optimality, given a proper heuristic function, and the terrain-aware heuristic function guides the search according to the terrain conditions. III-C Footstep Sequence Planning Given the desired body action plan, $\mathbf{Q}=(\mathbf{x}_{d},\mathbf{y}_{d},\boldsymbol{\theta}_{d})$, the footstep sequence planner computes the sequence of footholds that reflects the intention of the body action. A local greedy search procedure selects the optimal footstep target: a (locally optimal) footstep is selected from the footstep search regions defined by the body action planner. For every body action, the footstep planner finds a sequence of the next four footsteps. The footstep sequence planning is described in Algorithm 2.
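Eq. (2) can be sketched as below, with a simple distance-over-stride model standing in for $\mathcal{F}$; the nominal stride length is an assumption, not a value from the paper.

```python
import math

def terrain_aware_heuristic(state, goal, avg_reward, step_length=0.08):
    """Sketch of Eq. (2): h(s) = -R_bar * F(||g - s||). F is modeled
    here as distance / nominal stride; step_length is an assumed value."""
    dist = math.hypot(goal[0] - state[0], goal[1] - state[1])
    n_steps = dist / step_length   # F(||g - s||)
    return -avg_reward * n_steps
```

Since $\bar{R}$ is negative over difficult terrain, the heuristic grows with both distance and local terrain difficulty, steering ARA* toward easier routes.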
III-C1 Footstep cost The footstep cost describes how desirable a foothold target is, given a body state $\mathbf{s}$. The purpose of this cost function is to maximize the locomotion stability given a candidate set of footsteps. The footstep cost $c_{footstep}$ is a linear combination of the terrain cost $c_{t}$, support triangle cost $c_{st}$, shin collision cost $c_{c}$ and body orientation cost $c_{o}$. The cost of a certain footstep $\mathbf{f}^{e}$ is defined as follows: $$c_{footstep}(\mathbf{f}^{e})=w^{f}_{t}c_{t}(\mathbf{f}^{e})+w_{st}c_{st}(\mathbf{f}^{e})+w_{c}c_{c}(\mathbf{f}^{e})+w_{o}c_{o}(\mathbf{f}^{e})$$ (3) where $\mathbf{f}^{e}$ defines the Cartesian position of the foothold target $e$ (foot index). We consider as end-effectors the LF, RF, LH and RH feet of HyQ. The terrain cost $c_{t}$ is computed from the terrain reward value of the candidate foothold, i.e. using the terrain reward map (see Figure 2). The support triangle cost $c_{st}$ depends on the inradius of the triangle formed by the current footholds and the candidate one. As in the body cost computation, we use the same predefined collision region around the candidate foothold. Finally, the body orientation cost $c_{o}$ is computed using the plane formed by the current footsteps and the candidate one; we calculate the orientation of the robot from this plane. IV Experimental Results The following section describes the experiments conducted to validate the effectiveness and quantify the performance of our planning approach. IV-A Experimental Setup We implemented and tested our approach on HyQ, a Hydraulically actuated Quadruped robot developed by the Dynamic Legged Systems (DLS) Lab [8]. HyQ has roughly the dimensions of a medium-sized dog, weighs 90 kg, is fully torque-controlled and is equipped with precision joint encoders, a depth camera (Asus Xtion) and an Inertial Measurement Unit (IMU).
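The greedy selection over Eq. (3) can be sketched as follows. The term models (negated terrain reward, inverse inradius, raw collision height, absolute tilt) and the weights are hypothetical stand-ins for the paper's tuned terms, passed in as callables so the sketch stays self-contained.

```python
def footstep_cost(f, terrain_reward, inradius, collision_height, tilt,
                  w=(1.0, 1.0, 2.0, 1.0)):
    """Sketch of Eq. (3): weighted sum of terrain, support-triangle,
    collision and orientation terms. Term models and weights are
    illustrative, not the paper's tuned values."""
    w_t, w_st, w_c, w_o = w
    c_t = -terrain_reward(f)              # high reward -> low cost
    c_st = 1.0 / max(inradius(f), 1e-6)   # small inradius -> unstable
    c_c = collision_height(f)
    c_o = abs(tilt(f))
    return w_t * c_t + w_st * c_st + w_c * c_c + w_o * c_o

def select_footstep(candidates, cost_fn):
    """Local greedy search over the action-specific footstep region."""
    return min(candidates, key=cost_fn)
```

This mirrors the planner's local greedy procedure: each leg picks the lowest-cost candidate inside the region defined by the current body action.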
We used an external state measurement system, a Vicon marker-based tracking system. The perception and planning are done on-board on an i7/2.8 GHz PC, and all computations of the real-time controller are done on a PC104 stack. The first set of experiments is designed to test the capabilities of our motion planner. We use a set of benchmarks of realistic locomotion scenarios (see Figure 6): stepping stones, pallet, stair and gap. In these experiments, we compared the cost, number of expansions and computation time of ARA${}^{*}$ against A${}^{*}$ [16] using our lattice representation. The results are based on 9 predefined goal locations. In the second experiment, the robot must plan on-line with dynamic changes in the terrain. In the third experiment, we show the on-line planning and perception results. Finally, we validate the performance of our planning approach by executing plans on HyQ. IV-B Results and Discussion IV-B1 Initial plan results The stepping stones, pallet, stair and gap experiments (see Figure 6) show the initial plan quality (see Table I) of our approach using A${}^{*}$ and ARA${}^{*}$. To this end, we plan a set of body actions and footstep sequences for 9 predefined goal locations, approximately 2.0 m away from the starting position, and compare the cost and number of expansions of the body action path, and the planning time, of ARA${}^{*}$ against A${}^{*}$. Three main factors contribute to the decreased planning time while maintaining the quality of the plan. First, the lattice-based representation (using body movement primitives) supports versatile movements in the sense that it allows us to reduce the search space to feasible regions (feasible motions) according to a certain body action, in contrast to grid-based approaches that ensure feasibility by applying rules that are agnostic to body actions.
Second, our terrain-aware heuristic function guides the tree expansion according to terrain conditions, in contrast to a simple Euclidean heuristic. Finally, the ARA* algorithm implements a search procedure that guarantees bounded sub-optimality of the solution given a proper heuristic function [16]. Figure 7 shows the initial plans of A* and ARA*. IV-B2 On-line planning and perception Using a movement-primitive-based lattice search reduces the size of the search space significantly, leading to responsive planning and replanning. In our experimental trials, we chose a lattice graph resolution (discretization) of 4 cm for $x/y$ and 1.8${}^{\circ}$ for $\theta$, and the goal state is never more than 5 m away from the robot. In these trials, the plan/replan frequency is approximately 0.5 Hz. The efficient occupancy grid-based mapping allows us to incrementally build up the model of the environment and focus our computations on the area of interest around the robot body, generating plans quickly. This allows us to locally update the computed reward map and incrementally build the reward map as the robot moves, with an average response frequency of 2 Hz, as shown in Figure 9. IV-B3 Planning and execution We generate swift and natural dynamic whole-body motions from an n-step lookahead optimization of the body trajectory that uses a dynamic stability metric, the Zero Moment Point (ZMP). A combination of floating-base inverse dynamics and virtual model control accurately executes such dynamic whole-body motions with an actively compliant system [17]. Figure 8 shows the execution of an initial plan. Our locomotion system is robust due to a combination of on-line planning and compliance control. V Conclusion We presented a planning approach that allows us to plan versatile movements on-line. Our approach plans a sequence of footsteps given a body action plan.
The body action planner guarantees a bounded sub-optimal plan given a set of predefined body movement primitives (lattice representation). In general, to generate versatile movement one has to explore a considerable region of the motion space, making on-line planning a computationally challenging task, especially in practical applications. Here, we reduce the exploration by ensuring the feasibility of every possible body action. In fact, we define the footstep search regions and footstep sequence based on the body action. We showed how our planning approach computes on-line plans given incoming terrain information. Various real-world experimental trials demonstrate the capability of our planning approach to traverse challenging terrain. References [1] Y. Tassa and E. Todorov, “Stochastic Complementarity for Local Control of Discontinuous Dynamics,” in Proceedings of Robotics: Science and Systems, 2010. [2] I. Mordatch, E. Todorov, and Z. Popović, “Discovery of complex behaviors through contact-invariant optimization,” ACM Transactions on Graphics, vol. 31, no. 4, pp. 1–8, July 2012. [3] M. Posa, C. Cantu, and R. Tedrake, “A direct method for trajectory optimization of rigid bodies through contact,” The International Journal of Robotics Research, Oct. 2013. [4] H. Dai, A. Valenzuela, and R. Tedrake, “Whole-body motion planning with simple dynamics and full kinematics,” in 14th IEEE-RAS International Conference on Humanoid Robots, 2014. [5] J. Z. Kolter, M. P. Rodgers, and A. Y. Ng, “A control architecture for quadruped locomotion over rough terrain,” in IEEE International Conference on Robotics and Automation (ICRA), 2008, pp. 811–818. [6] P. Vernaza, M. Likhachev, S. Bhattacharya, A. Kushleyev, and D. D. Lee, “Search-based planning for a legged robot over rough terrain,” in IEEE International Conference on Robotics and Automation, 2009. [7] M. Kalakrishnan, J. Buchli, P. Pastor, and S.
Schaal, “Learning locomotion over rough terrain using terrain templates,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009, pp. 167–172. [8] C. Semini, N. G. Tsagarakis, E. Guglielmino, M. Focchi, F. Cannella, and D. G. Caldwell, “Design of HyQ – a hydraulically and electrically actuated quadruped robot,” Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 2011. [9] A. Escande, A. Kheddar, and S. Miossec, “Planning support contact-points for humanoid robots and experiments on HRP-2,” in IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, Oct. 2006, pp. 2974–2979. [10] K. Hauser and J. C. Latombe, “Multi-modal Motion Planning in Non-expansive Spaces,” The International Journal of Robotics Research, vol. 29, no. 7, pp. 897–915, Oct. 2009. [11] M. Zucker, J. A. Bagnell, C. Atkeson, and J. Kuffner, “An Optimization Approach to Rough Terrain Locomotion,” in IEEE International Conference on Robotics and Automation (ICRA), 2010. [12] K. Hauser and V. Ng-Thow-Hing, “Randomized multi-modal motion planning for a humanoid robot manipulation task,” The International Journal of Robotics Research, vol. 30, no. 6, pp. 678–698, Dec. 2010. [13] A. Escande, A. Kheddar, and S. Miossec, “Planning contact points for humanoid robots,” Robotics and Autonomous Systems, vol. 61, no. 5, pp. 428–442, May 2013. [14] M. Kalakrishnan, J. Buchli, P. Pastor, M. Mistry, and S. Schaal, “Learning, planning, and control for quadruped locomotion over challenging terrain,” The International Journal of Robotics Research, vol. 30, no. 2, pp. 236–258, Nov. 2010. [15] A. W. Winkler, I. Havoutis, S. Bazeille, J. Ortiz, M. Focchi, D. Caldwell, and C. Semini, “Path planning with force-based foothold adaptation and virtual model control for torque controlled quadruped robots,” in IEEE International Conference on Robotics and Automation (ICRA), 2014. [16] M. Likhachev, G. Gordon, and S.
Thrun, “ARA*: Anytime A* with Provable Bounds on Sub-Optimality,” in Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference (NIPS-03). MIT Press, 2004. [17] A. W. Winkler, C. Mastalli, I. Havoutis, M. Focchi, D. Caldwell, and C. Semini, “Planning and Execution of Dynamic Whole-Body Locomotion for a Hydraulic Quadruped on Challenging Terrain,” in IEEE International Conference on Robotics and Automation (ICRA), 2015.
Threshold effects in electron-positron pair creation from the vacuum: Stabilization and longitudinal vs transverse momentum sharing K. Krajewska [email protected]    J. Z. Kamiński Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warszawa, Poland (November 26, 2020) Abstract Momentum distributions of electron-positron pairs created from the vacuum by an electric field oscillating in time are calculated in the framework of quantum field theory. A pronounced enhancement of those distributions is observed as the frequency of the electric field passes across the one-photon threshold. Below that threshold the pairs preferentially carry longitudinal momentum, while above the threshold they tend to carry transverse momentum. Such momentum sharing has an impact on the number of produced pairs: it grows rapidly with increasing field frequency below the threshold, but saturates at a roughly constant value above it. On the other hand, at a fixed frequency above the one-photon threshold, the number of pairs scales quadratically with the field strength. This typically perturbative scaling holds even for large electric fields. Thus, the validity of perturbation theory is extended here to processes which result in the creation of particles with substantial transverse momenta. I Introduction The vacuum instability in the presence of a static electric field, which results in electron-positron ($e^{-}e^{+}$) pair creation, was predicted decades ago Sauter ; Heisenberg-Euler ; Schwinger . Breaking the vacuum requires an enormous electric field strength, ${\cal E}_{S}=m_{\rm e}^{2}c^{3}/|e|=1.32\times 10^{18}$ V/m, where $m_{\rm e}$ is the electron rest mass and $e=-|e|<0$ is its charge (here and in what follows, we keep $\hbar=1$) Fradkin . Here, ${\cal E}_{S}$ is the Sauter-Schwinger critical field.
Since such an electric field cannot be achieved in laboratory settings, the Sauter-Schwinger mechanism of electron-positron pair production has not yet been verified experimentally. Another disadvantage is that the process is very weak. Hence, various proposals have been put forward aiming at enhancing the signal of electron-positron pairs Schutzhold ; Dunne1 ; Orthaber ; Fey ; Jansen ; Akal ; Otto1 ; Otto2 ; Jiang ; Sitiwaldi ; Akkermans ; Li1 ; Li2 ; Dumlu1 ; Dumlu2 ; Xie ; KTK . This raises the question of optimal control of the process Hebenstreit ; Fillion . In this paper, we study the Sauter-Schwinger pair production by a pulsed electric field, with a frequency tuned so that it passes across the one-photon threshold. This region is particularly interesting, due to a threshold-related enhancement of the probability distributions of created pairs. Similar enhancements have been observed before in the context of multiphoton pair production and channel-closing effects CC2 ; CC1 ; CC3 . It is, however, the one-photon process that is the most efficient of all, in agreement with the results presented in CC1 ; CC3 . At this point, let us also note that similar threshold-related enhancements have been observed in other multiphoton processes such as strong-field detachment or ionization XX ; AA ; YY ; ZZ and, as such, they are a universal strong-field phenomenon. Closely related to threshold enhancements is the concept of an effective mass acquired by particles in strong fields. Such a concept, depending explicitly on the electric field parameters, has been discussed in CC1 . Its limitation was also noted, as the effective mass should vary between different parameter regimes. While CC1 ; CC3 are related to the nonperturbative regime of pair creation, most of our results concern the opposite one. It is commonly believed that in the perturbative regime, the effective mass approaches the rest mass of the particles.
As we demonstrate in this paper, this holds provided that the transverse momentum of the created particles is negligible. In the current paper, we provide a detailed study of the Sauter-Schwinger process in the vicinity of the one-photon threshold. This threshold marks the border between multiphoton and one-photon pair production, and leads to qualitatively different behavior of the resulting momentum distributions and of the number of produced pairs in those two regimes. Our analysis shows, for instance, that the process is dominated either by the longitudinal or by the transverse motion of the created particles, below and above the one-photon threshold, respectively. This also affects the marginal momentum distributions of the created particles, as presented in the paper. The overall result for the total number of produced pairs is that, while it increases rapidly with the electric field frequency below the threshold, it stabilizes above it. While our conclusions follow from exact numerical calculations, they are also confirmed by analytical results derived from a perturbative treatment of pair production. Note that stabilization by external fields has been observed in other quantum-mechanical problems as well; see, for instance, the stabilization of atoms or molecules in intense laser fields Gav and the stabilization of electron states in semiconductor heterostructures by crossed magnetic and electric fields KK . Such an effect has also been observed in the nonlinear Bethe-Heitler process of pair production KKK . This counter-intuitive effect seems, therefore, to occur quite frequently in quantum physics. The paper is organized as follows. In Sec. II, for the convenience of the reader, we briefly present the theoretical formulation of the Sauter-Schwinger pair production by a time-dependent electric field. This is followed by a detailed analysis of the resulting momentum distributions and the number of created pairs in Sec. III. We summarize our results in Sec. IV.
II Theoretical formulation We consider electron-positron pair creation from vacuum by a spatially homogeneous, time-dependent electric field, ${\mathbfcal{E}}(t)={\cal E}(t){\bm{e}}_{z}$. In addition, we assume that the field is generated by lasers, meaning that the condition $$\int_{-\infty}^{+\infty}\mathrm{d}t\,{\cal E}(t)=0,$$ (1) is satisfied KTK . In the presence of the time-dependent electric field, the fermionic field operator can be decomposed into eigenmodes labeled by the linear momentum ${\bm{p}}$. In the following, we will refer to its longitudinal $p_{\|}$ and transverse ${\bm{p}}_{\perp}$ components, defined with respect to the electric field direction, $$p_{\|}={\bm{p}}\cdot{\bm{e}}_{z},\quad{\bm{p}}_{\perp}={\bm{p}}-p_{\|}{\bm{e}}_{z}.$$ (2) As demonstrated in Grib1 ; Grib2 ; KTK , the momentum distribution $f({\bm{p}})$ of particles generated in a given eigenmode ${\bm{p}}$ of the fermionic field is determined by solving the system of equations $$\mathrm{i}\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}c_{\bm{p}}^{(1)}(t)\\ c_{\bm{p}}^{(2)}(t)\\ A(t)\end{bmatrix}=\begin{pmatrix}\omega_{\bm{p}}(t)&\mathrm{i}\Omega_{\bm{p}}(t)&0\cr-\mathrm{i}\Omega_{\bm{p}}(t)&-\omega_{\bm{p}}(t)&0\cr 0&0&0\end{pmatrix}\begin{bmatrix}c_{\bm{p}}^{(1)}(t)\\ c_{\bm{p}}^{(2)}(t)\\ A(t)\end{bmatrix}-\begin{bmatrix}0\\ 0\\ \mathrm{i}{\cal E}(t)\end{bmatrix},$$ (3) with the initial conditions $$\lim_{t\rightarrow-\infty}c_{\bm{p}}^{(1)}(t)=1,\quad\lim_{t\rightarrow-\infty}c_{\bm{p}}^{(2)}(t)=0,\quad\lim_{t\rightarrow-\infty}A(t)=0.$$ (4) Namely, $$f({\bm{p}})=\lim_{t\rightarrow+\infty}|c_{\bm{p}}^{(2)}(t)|^{2},$$ (5) where $$|c_{\bm{p}}^{(1)}(t)|^{2}+|c_{\bm{p}}^{(2)}(t)|^{2}=1.$$ (6) Moreover, $A(t)$ in (3) defines the vector potential, ${\bm{A}}(t)=A(t){\bm{e}}_{z}$, where ${\cal E}(t)=-\dot{A}(t)$, and, according to Eq.
(1), it has to satisfy the condition $$\underset{t\rightarrow-\infty}{\lim}A(t)=\underset{t\rightarrow+\infty}{\lim}A(t)=0.$$ (7) Here, we took into account Eq. (4). Moreover, in Eq. (3), we have introduced $$\omega_{\bm{p}}(t)=\sqrt{(m_{\rm e}({\bm{p}}_{\perp})c^{2})^{2}+c^{2}(p_{\|}-eA(t))^{2}},$$ (8) with the electron effective mass $$m_{\rm e}({\bm{p}}_{\perp})=\frac{1}{c}\sqrt{(m_{\rm e}c)^{2}+{\bm{p}}_{\perp}^{2}}.$$ (9) This is to emphasize that the transverse momentum of the electron always enters the formulas through the coupling to $m_{\rm e}c$. This is confirmed by the definition of $$\Omega_{\bm{p}}(t)=-ce{\cal E}(t)\frac{m_{\rm e}({\bm{p}}_{\perp})c^{2}}{2\omega_{\bm{p}}^{2}(t)},$$ (10) which appears in (3) along with $\omega_{\bm{p}}(t)$. Note that, for negligibly small transverse momenta of the produced particles, their effective mass (9) reduces to the rest mass. As we will show in the next section, this characterizes the perturbative regime of few-photon pair production. In contrast, one-photon processes are typically accompanied by a large transverse momentum gain and, hence, by an effective mass well above the rest mass. The first two equations of (3) are structurally identical to the Schrödinger equation of a two-level system. They have to be solved for the functions $c_{\bm{p}}^{(i)}(t)$, $i=1,2$, which determine the momentum distributions of created particles [Eqs. (5) and (6)]. Due to the axial symmetry of the problem, those distributions depend only on $p_{\|}$ and $p_{\perp}^{2}={\bm{p}}^{2}-p_{\|}^{2}$, i.e., $f({\bm{p}})=f(p_{\|},p_{\perp})$.
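As an illustration, the system (3)-(4) with the definitions (8)-(10) can be integrated directly with a standard ODE solver. The sketch below works in units $m_{\rm e}=c=\hbar=e=1$; the sin/cosh pulse and all parameter values are illustrative assumptions (the odd pulse shape automatically satisfies condition (1)), not the paper's production setup:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, units m_e = c = hbar = e = 1)
E0, w, beta, N0 = -0.1, 2.5, 1.0, 1.0

def E_field(t):
    # Odd pulse, so that the integral of E(t) over all t vanishes, cf. Eq. (1)
    return E0 * N0 * np.sin(w * t) / np.cosh(beta * t)

def rhs(t, y, p_par, p_perp):
    c1, c2, A = y
    m_eff = np.sqrt(1.0 + p_perp**2)                  # effective mass, Eq. (9)
    omega = np.sqrt(m_eff**2 + (p_par - A.real)**2)   # Eq. (8)
    Omega = -E_field(t) * m_eff / (2.0 * omega**2)    # Eq. (10)
    # First two rows of Eq. (3); last row gives dA/dt = -E(t)
    return [-1j * (omega * c1 + 1j * Omega * c2),
            -1j * (-1j * Omega * c1 - omega * c2),
            -E_field(t)]

T = 40.0
sol = solve_ivp(rhs, (-T, T), [1 + 0j, 0j, 0j], args=(0.0, 0.5),
                rtol=1e-10, atol=1e-12)
c1, c2 = sol.y[0, -1], sol.y[1, -1]
f_p = abs(c2)**2                       # momentum distribution, Eq. (5)
print(f_p, abs(c1)**2 + abs(c2)**2)    # second number checks Eq. (6)
```

The constraint (6) is preserved by the exact dynamics, so the printed norm serves as an accuracy check of the integration.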
For our further purposes, we also introduce the marginal momentum distributions, $$f(p_{\|})=2\pi\int_{0}^{+\infty}\mathrm{d}p_{\bot}\,p_{\bot}f(p_{\|},p_{\bot})$$ (11) and $$f(p_{\bot}^{2})=\pi\int_{-\infty}^{+\infty}\mathrm{d}p_{\|}\,f(p_{\|},p_{\bot}).$$ (12) Then, the total number of pairs created in the relativistic unit volume becomes $$\displaystyle f$$ $$\displaystyle=\int\mathrm{d}^{3}pf(p_{\|},p_{\bot})=2\pi\int_{-\infty}^{+\infty}\mathrm{d}p_{\|}\int_{0}^{+\infty}p_{\bot}\mathrm{d}p_{\bot}\,f(p_{\|},p_{\bot})$$ $$\displaystyle=\int_{-\infty}^{+\infty}\mathrm{d}p_{\|}\,f(p_{\|})=\int_{0}^{+\infty}\mathrm{d}p_{\bot}^{2}\,f(p_{\bot}^{2}).$$ (13) Note that these definitions allow us to determine the momentum probability distributions, $$\mathcal{P}(p_{\|},p_{\bot})=f(p_{\|},p_{\bot})/f$$ (14) and $$\mathcal{P}(p_{\|})=f(p_{\|})/f,\quad\mathcal{P}(p_{\bot}^{2})=f(p_{\bot}^{2})/f.$$ (15) While in the following we will calculate $f(p_{\|},p_{\perp})$ along with the longitudinal and transverse momentum distributions of created particles, $f(p_{\|})$ and $f(p_{\perp}^{2})$, their functional dependence on $p_{\|}$ and $p_{\perp}^{2}$ remains the same as for the probability distributions mentioned above. Thus, it is justified to talk about the particle momenta for which pair creation is the most/least probable, even though formally we will not calculate the probability distributions given by Eqs. (14) and (15). We will perform calculations for the following model of a pulsed electric field, $$\mathcal{E}(t)=\mathcal{E}_{0}F(t),$$ (16) with $$F(t)=\frac{{\cal N}_{0}}{\cosh(\beta t)}\sin(\omega t).$$ (17) Here, ${\cal E}_{0}$ is the amplitude while $\omega$ is the carrier frequency of the field oscillations. The parameter $\beta$ in (17) determines the bandwidth of the pulse. In the following, we will fix $\beta=m_{\rm e}c^{2}$ and set ${\cal E}_{0}$ to either ${\cal E}_{0}=-0.1{\cal E}_{S}$ or ${\cal E}_{0}=-{\cal E}_{S}$.
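The internal consistency of the marginal definitions (11)-(13) — both marginals must integrate to the same total $f$ — can be checked numerically on any tabulated distribution. In the sketch below, the Gaussian is a mock stand-in for $f(p_{\|},p_{\perp})$, used only to exercise the quadratures; it is not a computed pair distribution:

```python
import numpy as np
from scipy.integrate import trapezoid

# Mock f(p_par, p_perp) on a grid (assumed shape, for the check only)
p_par = np.linspace(-5.0, 5.0, 401)
p_perp = np.linspace(0.0, 5.0, 401)
PP, PT = np.meshgrid(p_par, p_perp, indexing="ij")
f2d = np.exp(-PP**2 - (PT - 1.0)**2)

f_par = 2.0 * np.pi * trapezoid(f2d * PT, p_perp, axis=1)   # Eq. (11)
f_perp2 = np.pi * trapezoid(f2d, p_par, axis=0)             # Eq. (12)

f_total_a = trapezoid(f_par, p_par)         # Eq. (13), via f(p_par)
f_total_b = trapezoid(f_perp2, p_perp**2)   # Eq. (13), via f(p_perp^2)
print(f_total_a, f_total_b)
```

Up to quadrature error, the two totals agree, as required by the chain of equalities in Eq. (13).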
On the other hand, we will change the frequency $\omega$ such that $\omega=N\omega_{0}$, with $\omega_{0}=0.2\pi\,m_{\rm e}c^{2}$ and the value of $N$ varying continuously. To fix the amplitude of the electric field while changing its carrier frequency, we will adjust ${\cal N}_{0}$ such that $$\underset{t}{\max}|F(t)|=1.$$ (18) This means that the number ${\cal N}_{0}$ has to be changed appropriately for different values of $N$. For ${\cal E}_{0}=-0.1{\cal E}_{S}$, the time dependence of the electric field ${\cal E}(t)$ and the corresponding vector potential $A(t)$ are shown in Fig. 1. Note that the maximum of the vector potential scales like $1/N$, and so it decreases with increasing $N$. Similar behavior of ${\cal E}(t)$ and $A(t)$ is observed if we change the electric field amplitude to ${\cal E}_{0}=-{\cal E}_{S}$. As we will demonstrate in the next section, this will have an impact on the momentum distributions of created pairs. III Numerical illustrations In Fig. 2, we present the color mappings of the momentum distribution $f(p_{\|},p_{\perp})$ as a function of the longitudinal momentum $p_{\|}$ and a real number $N=\omega/\omega_{0}$. The results are for ${\cal E}_{0}=-0.1{\cal E}_{S}$ and ${\bm{p}}_{\perp}={\bm{0}}$, with the remaining parameters given in the previous section. While in the upper panel we plot the results in the linear scale, their details become more visible in the logarithmic scale, which is used in the lower panel. As one can see, there is a region in the $(p_{\|},N)$-plane where the $e^{-}e^{+}$ pair creation is most probable. This region is characterized by a roughly zero longitudinal momentum of created pairs, $p_{\|}\approx 0$, and by the value $N\approx 3$. For $\omega_{0}=0.2\pi\,m_{\rm e}c^{2}$, the latter corresponds to the electric field carrier frequency of roughly $\omega\approx 2m_{\rm e}c^{2}$. This is the energy necessary to be absorbed from the external field in order to produce an electron-positron pair at rest. 
This agrees with the fact that the corresponding momentum distribution peaks at ${\bm{p}}={\bm{0}}$. It also shows that the most probable process proceeds via absorption of a single photon. Higher-order processes, i.e., processes which occur via absorption of multiple photons, are possible but less likely. This is even clearer in Fig. 3 for ${\cal E}_{0}=-{\cal E}_{S}$. The additional stripes in the distribution, which are visible already in the linear scale in the upper panel, correspond to different-order processes. In these cases, the carrier frequency of the electric field is smaller than $2m_{\rm e}c^{2}$ and, hence, one-photon pair production is forbidden. Nevertheless, it is possible to absorb additional photons from the field, which leads to above-threshold pair production. With increasing $\omega$ (or, equivalently, $N$) we pass through the one-photon threshold at $\omega_{\rm th}=2m_{\rm e}c^{2}$. Since for the $n$-photon process the energy conservation law is $n\omega=2\sqrt{(m_{\rm e}({\bm{p}}_{\perp})c^{2})^{2}+c^{2}p_{\|}^{2}}$, one can anticipate that in the current case ($n=1$ and $\omega\approx\omega_{\rm th}$) it is energetically preferable to produce particles at rest. As $\omega$ increases further, there will be a portion of energy in excess of the threshold energy $\omega_{\rm th}$ that becomes available to the particles. As already indicated by Figs. 2 and 3, which are for ${\bm{p}}_{\perp}={\bm{0}}$, this portion of energy, which is equally redistributed between the electron and the positron, is hardly converted into their longitudinal motion. A more detailed analysis of the excess energy sharing between the longitudinal and the transverse motion of the particles can be based upon the color mappings presented in Fig. 4. The two-dimensional momentum mappings in Fig. 4 are for ${\cal E}_{0}=-{\cal E}_{S}$ and for different values of $N$ (i.e., for different carrier frequencies $\omega$).
Such a strong electric field has been chosen because the features of the momentum distributions we want to discuss next are very pronounced in this case. First, we note that for $N=1,2,$ and $3$, the corresponding frequencies of the field oscillations, $\omega=N\omega_{0}$, are below the one-photon energy threshold $\omega_{\rm th}$. In other words, these processes correspond to above-threshold pair production. In this regime, the pairs are mostly created with a zero transverse and a nonzero longitudinal momentum component. For this strong field, the multiphoton modulations in the pair-creation momentum distributions are visible already in the linear scale. More importantly, with increasing $N$ toward the one-photon threshold ($N_{\rm th}=10/\pi\approx 3.18$), the maximum of those distributions shifts toward zero longitudinal momentum. At the same time, judging by the magnitude of the distributions, these are the most pronounced of all. As $N$ increases further, pair creation remains a one-photon process. Such a process typically occurs with a nearly zero longitudinal momentum of the created particles, while the excess energy absorbed from the field is converted into the particles' transverse motion. Thus, with increasing $N$ (and, hence, with increasing available excess energy), the maximum of the momentum distribution is shifted toward larger transverse momenta. As follows from our analysis, the energy sharing between the transverse and longitudinal motion of created particles strongly depends on whether pair creation is due to a single- or to a multi-photon transition. In either case, the energy absorbed from the field seems to be redistributed such that the particles have minimal energy. Specifically, for small $N$, in which case the vector potential takes large values (see Fig. 1), it has to be balanced by a large longitudinal momentum. This follows from the definition of $\omega_{\bm{p}}(t)$ [Eq. (8)] which enters the system of equations (3).
It also explains why, with decreasing $N$, the longitudinal momentum for which we observe maximum pair creation shifts toward larger values. At the same time, to minimize the energy of the created particles, the transverse momentum remains roughly zero. Taking into account the definition (9), one can conclude that multi-photon pair creation occurs with no increase of the particles' effective mass. According to Eq. (8), the resulting momentum distributions should not be symmetric with respect to the momentum reflection $p_{\|}\rightarrow-p_{\|}$ or, consequently, ${\bm{p}}\rightarrow-{\bm{p}}$. This is confirmed by Fig. 4 for substantial values of the vector potential, i.e., for small values of $N$. However, for larger $N$ and, hence, for smaller values of the vector potential describing the external field, this asymmetry vanishes. As already discussed, close to the one-photon threshold the pairs are created with roughly zero momentum, ${\bm{p}}\approx{\bm{0}}$. As $N$ increases further, the coupling to the vector potential practically does not play a role. In this case, it is energetically preferable that the excess energy absorbed from the field contributes to the transverse motion of the created particles. Hence, one-photon pair creation is typically accompanied by an increase of the particles' effective mass (9). To illustrate the transition between pair creation via multiple vs. single photon absorption, we have used a rather strong electric field, ${\cal E}_{0}=-{\cal E}_{S}$. For weaker fields, such as ${\cal E}_{0}=-0.1{\cal E}_{S}$, the general features of the momentum distributions remain, however, the same (see Fig. 5). The difference is that in this case the multiphoton modulations of the momentum distributions are not as pronounced as in Fig. 4. This affects detailed features of the marginal momentum distributions presented below. In Fig.
6, we show the transverse momentum distribution $f(p_{\perp}^{2})$ of pairs generated from the vacuum by the electric field considered in this paper. This time, the amplitude of the electric field oscillations is ${\cal E}_{0}=-0.1{\cal E}_{S}$. Each curve corresponds to a different carrier frequency of the electric pulse, determined by $N$. As a result of integrating the two-dimensional momentum maps over the longitudinal momentum component, for those frequencies which are below the one-photon energy threshold, $f(p_{\perp}^{2})$ has a maximum at zero perpendicular momentum. With increasing $N$, the maximum of the distribution grows in magnitude, becoming the most pronounced at the threshold value $N_{\rm th}=10/\pi$. Above the threshold, however, which is illustrated in Fig. 6 for $N=4,5,$ and $6$, the distribution exhibits an off-axis maximum. With increasing $N$, this maximum shifts toward larger values of $p_{\perp}$. As it spreads in $p_{\perp}$, the magnitude of the distribution drops. The same behavior of $f(p_{\perp}^{2})$ is observed for stronger electric fields. Interestingly, as we have checked for ${\cal E}_{0}=-{\cal E}_{S}$, the off-axis maxima of the transverse momentum distributions occur at exactly the same values of $p_{\perp}$ for a given $N$. The respective values of $p_{\perp}$ can be estimated from the energy conservation relation. Namely, taking into account that the pairs produced in the one-photon process most likely have zero longitudinal momentum, we estimate that $p_{\perp}=\sqrt{(N\omega_{0}/2c)^{2}-(m_{\rm e}c)^{2}}$. This predicts quite accurately the positions of the off-axis maxima in Fig. 6 and is independent of the amplitude of the electric field oscillations ${\cal E}_{0}$. In Fig. 7, we plot the longitudinal momentum distribution $f(p_{\|})$ (upper panel) and the total number of created pairs $f$ per unit volume (lower panel) for ${\cal E}_{0}=-0.1{\cal E}_{S}$.
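The energy-conservation estimate for the off-axis maxima quoted above is easy to evaluate. In units $m_{\rm e}=c=\hbar=1$ (so that $\omega_{0}=0.2\pi$), the estimate reads $p_{\perp}=\sqrt{(N\omega_{0}/2)^{2}-1}$, valid above the threshold $N_{\rm th}=10/\pi\approx 3.18$; a minimal sketch:

```python
import numpy as np

# Predicted off-axis maxima of f(p_perp^2), units m_e = c = hbar = 1
w0 = 0.2 * np.pi           # omega_0
peaks = []
for N in (4, 5, 6):        # above the one-photon threshold N_th = 10/pi
    p_perp = np.sqrt((N * w0 / 2.0)**2 - 1.0)
    peaks.append(p_perp)
    print(N, p_perp)
```

Consistent with the shift of the maxima toward larger $p_{\perp}$ with increasing $N$, the predicted positions grow monotonically.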
While all curves in the upper panel are bell-shaped, their maximum is shifted to positive values of $p_{\|}$ for small $N$. This asymmetry is lifted with increasing $N$, as was already noted in relation to the two-dimensional distributions $f(p_{\|},p_{\perp})$. If we perform similar calculations for ${\cal E}_{0}=-{\cal E}_{S}$, the bell-shaped curves centered at $p_{\|}\approx 0$ represent the results for $N>N_{\rm th}$ only. For small $N$, the marginal momentum distribution $f(p_{\|})$ exhibits multiple maxima for $p_{\|}>0$, which result from the multiphoton modulations observed in Fig. 4. Apart from that, the overall behavior of $f(p_{\|})$ is the same, irrespective of the electric field strength. In the lower panel of Fig. 7, we present the total number of pairs $f$ calculated for discrete values of $N$ between 1 and 20. Below the one-photon threshold ($N<N_{\rm th}$), $f$ takes rather small values which, however, increase quickly around $N_{\rm th}$. Above the one-photon threshold ($N>N_{\rm th}$), on the other hand, the number of created particles seems to saturate and even slightly decreases with $N$. One can conclude that, while the excess energy absorbed from the field in the one-photon process increases the effective mass of created particles $m_{\rm e}({\bm{p}}_{\perp})$, it hardly affects their number. Similar stabilization is observed for ${\cal E}_{0}=-{\cal E}_{S}$, even though the number of pairs in this case is two orders of magnitude larger. In light of this result, it is interesting to study more closely the dependence of the number of created particles on the electric field strength. In Fig. 8, we plot the total number of created pairs $f$ [Eq. (13)] as a function of ${\cal E}_{0}$. The results are for different $N$, as denoted in the figure. We have checked that, above the one-photon threshold, the curves fit well the functional dependence $f=3.5({\cal E}_{0}/{\cal E}_{S})^{2}$.
In other words, for $N>N_{\rm th}$, the number of created pairs per relativistic unit volume depends quadratically on the external field strength, as observed already in relation to Fig. 7. This indicates the perturbative character of the process. The same conclusion is reached when considering the Keldysh parameter $\gamma=\omega{\cal E}_{S}/(m_{\rm e}c^{2}|{\cal E}_{0}|)$. Specifically, we note that $\gamma=2\pi N$ for ${\cal E}_{0}=-0.1{\cal E}_{S}$ and $\gamma=\pi N/5$ for ${\cal E}_{0}=-{\cal E}_{S}$, which are the cases thoroughly studied in this paper. This shows that most of our results relate to the perturbative regime of electron-positron pair creation, with $\gamma\gtrsim 1$. Let us also note that, for fixed ${\cal E}_{0}$, we move even deeper into this regime when increasing $N$. Going back to Fig. 8, we note that none of the curves can be fitted to the Schwinger tunneling formula $f\sim({\cal E}_{0}/{\cal E}_{S})^{2}\exp\bigl{(}-\pi{\cal E}_{S}/|{\cal E}_{0}|\bigr{)}$, as the latter is applicable for $\gamma\ll 1$. In light of these conclusions, let us go back to Eq. (3) and treat the first two equations perturbatively. This can be done assuming that $$\underset{t}{\rm max}\,|\Omega_{\bm{p}}(t)|\ll\underset{t}{\rm min}\,|\omega_{\bm{p}}(t)|,$$ (19) which means that $$\frac{{\cal E}_{0}}{2{\cal E}_{S}}\ll\Bigl{[}\frac{m_{\rm e}({\bm{p}}_{\perp})}{m_{\rm e}}\Bigr{]}^{2}.$$ (20) Therefore, perturbation theory can be applied when either the electric field is weak compared to the Sauter-Schwinger critical field, or the transverse momentum of created particles is substantial (i.e., $m_{\rm e}({\bm{p}}_{\perp})\gg m_{\rm e}$). The latter is particularly important from the point of view of this paper, as it characterizes the one-photon processes.
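The quoted Keldysh-parameter values and the perturbativity condition (20) can be checked with a few lines. The sketch below works in units $m_{\rm e}=c=\hbar=1$, so ${\cal E}_{S}=1$ and $\gamma=\omega/|{\cal E}_{0}|$; the sample $p_{\perp}$ values are arbitrary illustrations:

```python
import numpy as np

w0 = 0.2 * np.pi   # omega_0 in units m_e = c = hbar = 1 (E_S = 1)

def keldysh(N, E0):
    # gamma = omega * E_S / (m_e c^2 |E0|), with omega = N * w0
    return N * w0 / abs(E0)

print(keldysh(3, -0.1))   # E0 = -0.1 E_S, expected 2*pi*N with N = 3
print(keldysh(3, -1.0))   # E0 = -E_S, expected pi*N/5 with N = 3

# Condition (20): |E0|/(2 E_S) << (m_eff/m_e)^2 = 1 + p_perp^2
for E0, p_perp in ((-0.1, 0.0), (-1.0, 2.0)):
    print(E0, p_perp, abs(E0) / 2.0, 1.0 + p_perp**2)
```

The second loop illustrates the two routes into the perturbative regime: a weak field, or a substantial transverse momentum making the right-hand side of (20) large.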
As we have shown numerically, beyond the one-photon threshold the particles are produced with a significant transverse momentum irrespective of the electric field strength. This justifies the perturbative treatment of such processes, in agreement with (20). In this case, the original system of equations (3) reduces to $$\displaystyle\mathrm{i}\dot{c}_{\bm{p}}^{(1)}(t)$$ $$\displaystyle=\omega_{\bm{p}}(t)c_{\bm{p}}^{(1)}(t),$$ (21) $$\displaystyle\mathrm{i}\dot{c}_{\bm{p}}^{(2)}(t)$$ $$\displaystyle=-\mathrm{i}\Omega_{\bm{p}}(t)c_{\bm{p}}^{(1)}(t)-\omega_{\bm{p}}(t)c_{\bm{p}}^{(2)}(t),$$ (22) and it can be solved analytically. Accounting for the initial conditions (4), we obtain that in the lowest-order perturbation theory with respect to $\Omega_{\bm{p}}(t)$, the momentum distribution of created pairs (5) becomes $$f({\bm{p}})\approx\Bigl{|}\int_{-\infty}^{\infty}\mathrm{d}t\,\Omega_{\bm{p}}(t)\mathrm{e}^{-2\mathrm{i}\int_{-\infty}^{t}\mathrm{d}\tau\,\omega_{\bm{p}}(\tau)}\Bigr{|}^{2}.$$ (23) Note that the same formula can be derived using the quantum kinetic approach and the low-density approximation Schmidt ; Grib2 . Taking into account the definition of $\Omega_{\bm{p}}(t)$ [Eq. (10)], it becomes clear that the minima of $\omega_{\bm{p}}(t)$ contribute the most to the above integral. Those minima are determined by the conditions ${\bm{p}}_{\perp}={\bm{0}}$ and $p_{\|}=eA(t)$. The latter means that the instantaneous longitudinal momentum of the created particles oscillates in time following the vector potential and, hence, asymptotically becomes zero. A straightforward conclusion is that the most probable process results in the generation of particles at rest, which can be accomplished by absorbing a photon of energy $\omega_{\rm th}=2m_{\rm e}c^{2}$. This has been seen in our numerical results. In such a case, $\Omega_{\bm{p}}(t)=-e{\cal E}_{0}F(t)/(2m_{\rm e}c)$ and, consequently, the momentum distribution (23) scales like ${\cal E}_{0}^{2}$.
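The perturbative formula (23) lends itself to a direct quadrature. The sketch below evaluates it for a sin/cosh pulse in units $m_{\rm e}=c=\hbar=e=1$; the pulse shape and all parameter values are illustrative assumptions, not the values producing the figures:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

# Illustrative parameters (assumed, units m_e = c = hbar = e = 1, E_S = 1)
E0, w, beta, N0 = -0.1, 2.5, 1.0, 1.0
p_par, p_perp = 0.0, 0.5

t = np.linspace(-40.0, 40.0, 80001)
E = E0 * N0 * np.sin(w * t) / np.cosh(beta * t)
A = -cumulative_trapezoid(E, t, initial=0.0)        # E(t) = -dA/dt

m_eff = np.sqrt(1.0 + p_perp**2)                    # Eq. (9)
omega = np.sqrt(m_eff**2 + (p_par - A)**2)          # Eq. (8)
Omega = -E * m_eff / (2.0 * omega**2)               # Eq. (10)
phase = cumulative_trapezoid(omega, t, initial=0.0)
f_pert = abs(trapezoid(Omega * np.exp(-2j * phase), t))**2   # Eq. (23)
print(f_pert)
```

Since $\Omega_{\bm{p}}(t)\propto{\cal E}_{0}$, the result inherits the quadratic ${\cal E}_{0}^{2}$ scaling discussed in the text (up to the weak dependence of $\omega_{\bm{p}}$ on $A$).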
Going beyond the one-photon threshold, one can use the same argument. The difference is that the excess energy absorbed from the field contributes to the effective mass of the particles. In such a case, $\Omega_{\bm{p}}(t)=-e{\cal E}_{0}F(t)/(2m_{\rm e}({\bm{p}}_{\perp})c)$, which again indicates a quadratic scaling of the number of produced pairs $f$ [Eq. (23)] with the electric field strength ${\cal E}_{0}$. IV Summary Our results for the momentum distributions and the number of electron-positron pairs created in the dynamical Sauter-Schwinger process show dramatic changes as the frequency of the electric field oscillations is varied in the vicinity of the one-photon threshold. Specifically, this concerns the energy sharing between the longitudinal and the transverse motion of the created particles. While below the one-photon threshold the particles are created with a nearly zero transverse and a substantial longitudinal momentum, this tendency is reversed above the one-photon threshold. The latter is particularly surprising, as it is commonly believed that particles are mostly created along the electric field. As we have shown, with increasing field frequency above the one-photon threshold, the effect of the field is exclusively to increase the effective mass of the produced particles (9). Their total number, however, remains roughly the same. Moreover, it scales quadratically with the strength of the electric field, which is typical for the perturbative regime of $e^{-}e^{+}$ pair creation. Note that such perturbative scaling of the number of produced pairs with the electric field strength has been demonstrated in this paper for arbitrarily strong electric fields. Thus, the validity of perturbation theory has been extended in this paper to processes with substantial transverse momenta of created particles.
Our results are in line with the previous claim that the effective mass of produced particles has to be redefined in different regimes of pair creation CC1 . In this paper, we have proposed the effective mass description that is applicable in the perturbative regime. This involves the transverse momentum of created particles (9). While below the one-photon threshold it approaches the rest mass, it can be substantially larger than that above the one-photon threshold. One can conclude, therefore, that the transverse momentum of particles can be a direct signature of their effective mass. Acknowledgments This work is supported by the National Science Centre (Poland) under Grant No. 2014/15/B/ST2/02203. References (1) F. Sauter, Z. Phys. 69, 742 (1931). (2) W. Heisenberg and H. Euler, Z. Phys. 98, 714 (1936). (3) J. S. Schwinger, Phys. Rev. 82, 664 (1951). (4) E. S. Fradkin, D. M. Gitman, and Sh. M. Shvartsman, Quantum Electrodynamics with Unstable Vacuum, (Springer, Berlin, 1991). (5) R. Schützhold, H. Gies, and G. Dunne, Phys. Rev. Lett. 101, 130404 (2008). (6) G. V. Dunne, H. Gies, and R. Schützhold, Phys. Rev. D 80, 111301 (2009). (7) M. Orthaber, F. Hebenstreit, and R. Alkofer, Phys. Lett. B 698, 80 (2011). (8) C. Fey and R. Schültzhold, Phys. Rev. D 85, 025004 (2012). (9) M. J. A. Jansen and C. Müller, Phys. Rev. A 88, 052125 (2013). (10) I. Akal, S. Villalba-Chávez, and C. Müller, Phys. Rev. D 90, 113004 (2014). (11) A. Otto, D. Seipt, D. Blaschke, B. Kämpfer, S. A. Smolyansky, Phys. Lett. B 740, 335 (2014). (12) A. Otto, D. Seipt, D. Blaschke, S. A. Smolyansky, and B. Kämpfer, Phys. Rev. D 91, 105018 (2015). (13) M. Jiang, W. Su, Z. Q. Lv, X. Lu, Y. J. Li, R. Grobe, and Q. Su, Phys. Rev. A 85, 033408 (2012). (14) I. Sitiwaldi and B.-S. Xie, Phys. Lett. B 777, 406 (2018). (15) E. Akkermans and G. Dunne, Phys. Rev. Lett. 108, 030401 (2012). (16) Z. L. Li, D. Lu, and B. S. Xie, Phys. Rev. D 89, 067701 (2014). (17) Z. L. Li, D. Lu, B. S. Xie, L. B. Fu, J. 
Liu, and F. F. Schen, Phys. Rev. D 89, 093011 (2014). (18) C. K. Dumlu and G. V. Dunne, Phys. Rev. Lett. 104, 250402 (2010). (19) C. K. Dumlu and G. V. Dunne, Phys. Rev. D 83, 065028 (2011). (20) I. Sitiwaldi and B.-S. Xie, Phys. Lett. B 768, 174 (2017). (21) J. Z. Kamiński, M. Twardy, and K. Krajewska, Phys. Rev. D 98, 056009 (2018). (22) F. Hebenstreit and F. Fillion-Gourdeau, Phys. Lett. B 739, 189 (2014). (23) F. Fillion-Gourdeau, F. Hebenstreit, D. Gagnon, and S. MacLean, Phys. Rev. D 96, 016012 (2017). (24) K. Krajewska and J. Z. Kamiński, Phys. Rev. A 82, 013420 (2010). (25) C. Kohlfürst, H. Gies, and R. Alkofer, Phys. Rev. Lett. 112, 050402 (2014). (26) Z. L. Li, D. Lu, and B. S. Xie, Phys. Rev. D 92, 085001 (2015). (27) K. Krajewska, I. I. Fabrikant, and A. F. Starace, Phys. Rev. A 74, 053407 (2006). (28) K. Krajewska, I. I. Fabrikant, and A. F. Starace, Laser Phys. 17, 368 (2007). (29) K. Krajewska, I. I. Fabrikant, and A. F. Starace, Phys. Rev. A 78, 023407 (2008). (30) K. Krajewska, I. I. Fabrikant, and A. F. Starace, Phys. Rev. A 86, 053410 (2012). (31) M. Gavrila, J. Phys. B 35, R147 (2002) and references therein. (32) K. Krajewska, J. Z. Kamiński, and R. M. Potvliege, Ann. Phys. (N. Y.) 323, 2639 (2008). (33) J. Z. Kamiński, K. Krajewska, and F. Ehlotzky, Phys. Rev. A 74, 033402 (2006). (34) A. A. Grib, V. M. Mostepanenko, and V. M. Frolov, Teor. Mat. Fiz. 13, 377 (1972). (35) A. A. Grib, S. G. Mamaev, and V. M. Mostepanenko, Vacuum Quantum Effects in Strong External Fields, (Atomizdat, Moscow, 1988). (36) S. M. Schmidt, D. Blaschke, G. Röpke, S. A. Smolyansky, A. V. Prozorkevich, and V. D. Toneev, Int. J. Mod. Phys. E 07, 709 (1998).
Second harmonic generation: Goursat problem on the semi-strip and explicit solutions Alexander Sakhnovich Abstract A rigorous and complete solution of the initial-boundary-value (Goursat) problem for second harmonic generation (and its matrix analog) on the semi-strip is given in terms of the Weyl functions. A wide class of explicit solutions and their Weyl functions is obtained also. Short title. Second harmonic generation Branch of Hydroacoustics, Marine Institute of Hydrophysics, National Academy of Sciences of Ukraine e-mail address: al_[email protected] 1 Introduction Second harmonic generation [9] is one of the simplest nonlinear interactions and can be presented in the form $$\frac{\partial}{\partial x}u_{1}=-2\overline{u_{1}}u_{2},\quad\frac{\partial}{\partial t}u_{2}=u_{1}^{2},$$ (1.1) where $\overline{u_{1}}$ is the complex conjugate of $u_{1}$. Second harmonic generation (SHG) (1.1) is essential in the study of impulse propagation. The results related to the case of a purely amplitude-modulated fundamental wave go back to Liouville [19, 3, 37]. The SHG integrability was proved and a Lax pair was constructed in [12]. In spite of many important results (see [2, 14, 16, 37] and references therein), SHG has remained unsolved. One of the reasons for this situation is connected with the continuously interacting nature of SHG [15]. The case of the Goursat problem for small and intermediate values of $x$ and $t$ has been treated in [15], and it is this paper that attracted our attention to SHG. It is well known that initial-boundary value problems for integrable nonlinear equations are both important and difficult. Several interesting approaches have been suggested and various results have been obtained (see, for instance, [4, 7, 11, 13, 15, 17, 18, 23, 36]).
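As an aside, the local structure of the Goursat data for system (1.1) — $u_{1}$ prescribed at $x=0$, $u_{2}$ at $t=0$ — can be illustrated by a minimal first-order finite-difference marching sketch. The boundary data below are hypothetical, and the crude Euler scheme is only an illustration of the causal structure, not the analytic method developed in this paper:

```python
import numpy as np

def solve_shg(nx, nt, X=1.0, T=1.0):
    """Euler marching for (1.1): du1/dx = -2 conj(u1) u2, du2/dt = u1^2."""
    dx, dt = X / (nx - 1), T / (nt - 1)
    x = np.linspace(0.0, X, nx)
    t = np.linspace(0.0, T, nt)
    u1 = np.zeros((nx, nt), dtype=complex)
    u2 = np.zeros((nx, nt), dtype=complex)
    u1[0, :] = 0.5 * np.exp(1j * t)   # Goursat data at x = 0 (assumed)
    u2[:, 0] = 0.2 * np.exp(-x)       # Goursat data at t = 0 (assumed)
    for i in range(nx - 1):
        for k in range(nt - 1):       # integrate u2 along the t-row
            u2[i, k + 1] = u2[i, k] + dt * u1[i, k]**2
        # then advance u1 to the next x-level
        u1[i + 1, :] = u1[i, :] - dx * 2.0 * np.conj(u1[i, :]) * u2[i, :]
    for k in range(nt - 1):           # fill the last row of u2
        u2[-1, k + 1] = u2[-1, k] + dt * u1[-1, k]**2
    return u1, u2

u1a, _ = solve_shg(101, 101)
u1b, _ = solve_shg(201, 201)
# first-order convergence check at the far corner of the rectangle
print(abs(u1a[-1, -1] - u1b[-1, -1]))
```

The difference between the two grids shrinks with refinement, consistent with the first-order accuracy of the scheme.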
Here we give a complete solution of the initial-boundary-value (Goursat) problem for SHG (and its matrix analog) on the semi-strip using an approach that was developed in [32, 33, 34] (see also [24]). This approach consists of two stages: a description of the evolution of the Weyl function in terms of the Möbius transformation, and the solution of the inverse problem, that is, reconstruction of the potential from the Weyl function. To study explicit solutions, we apply to SHG a version of the binary iterated Bäcklund-Darboux transformation called GBDT in the terminology of [28]. The Bäcklund transformation is a well-known and fruitful approach in nonlinear equations and spectral theory (see, for instance, [1, 20, 21, 22, 39]). Bäcklund transformations and the construction of some self-similar solutions for SHG have been studied in [16, 38, 40]. GBDT was initially developed in [25, 26] (see [10, 28, 30, 31] and references therein for various applications) and provides an algebraic representation of the Darboux matrix in the form of the transfer matrix from system theory, which proves useful in the study of explicit SHG solutions. Some preliminary results and definitions that make the paper self-contained to a certain degree are given in Section 2. Section 3 is dedicated to the solution of the Goursat problem on the semi-strip. GBDT for SHG is constructed in Section 4. A wide class of explicit solutions and their Weyl functions is obtained in Section 5. We denote by ${\mathbb{R}}$, ${\mathbb{C}}$, and ${\mathbb{C}}_{+}$ the real axis, complex plane, and open upper half-plane, respectively. The matrix $\beta^{*}$ is the adjoint of $\beta$, diag $\{a_{1},\,a_{2},\ldots\}$ is a diagonal matrix with the entries $a_{1},\,a_{2},\ldots$ on the main diagonal, $\sigma$ denotes spectrum, and $I_{m}$ denotes the $m\times m$ identity matrix ($I_{\cal G}$ is the identity operator in the Hilbert space ${\cal G}$). 2 Preliminaries 2.1.
We shall use the zero curvature representation of our nonlinear equation: $$G_{t}(x,t,z)-F_{x}(x,t,z)+[G(x,t,z),F(x,t,z)]=0,$$ (2.1) where $G_{t}=\frac{\partial}{\partial t}G$ and $[G,F]=GF-FG$ (see [6] on this representation, its connections with Lax pairs, references, and historical remarks). Equation (2.1) is the compatibility condition of the auxiliary systems $w_{x}=Gw$ and $w_{t}=Fw$. Consider the case of the auxiliary systems $$\frac{\partial}{\partial x}w(x,t,z)=G(x,t,z)w(x,t,z),\quad G(x,t,z)=i\big{(}zj+jV(x,t)\big{)},$$ (2.2) $$\frac{\partial}{\partial t}w(x,t,z)=F(x,t,z)w(x,t,z),\quad F(x,t,z)=\frac{i}{z}jH(x,t),$$ (2.3) where $j$, $V$, and $H$ are $2m\times 2m$ matrix functions, $$j=\left[\begin{array}[]{lr}I_{m}&0\\ 0&-I_{m}\end{array}\right],\quad V=\left[\begin{array}[]{lr}0&v\\ v^{*}&0\end{array}\right],\quad H(x,t)\geq 0.$$ (2.4) Here system (2.2) is the well-known Dirac-type (AKNS or ZS) system, system (2.3) is the well-known canonical system, and the compatibility (zero curvature) equation (2.1) is equivalent to the system $$H_{x}(x,t)=i(V(x,t)jH(x,t)-H(x,t)jV(x,t)),\quad iv_{t}(x,t)=2\big{(}H(x,t)\big{)}_{12},$$ (2.5) where $\big{(}H\big{)}_{kl}$ are the $m\times m$ blocks of $H$. The case $$m=1,\quad H(x,t)=\beta(x,t)^{*}\beta(x,t),\quad\beta(x,t)=[\overline{u_{1}(x,t)}\quad u_{1}(x,t)]$$ (2.6) is of special interest. Notice that when $m=1$, rank $H\leq 1$, and $HjH\equiv 0$, the representation (2.6) follows automatically. In this case, putting $v=-2iu_{2}$, we see that the auxiliary systems (2.2) and (2.3) coincide with the auxiliary systems in [12]. If $u_{1}$ is continuously differentiable in $x$, then system (2.5) is equivalent to SHG (1.1). Indeed, the equivalence of the second equations in (1.1) and (2.5) is immediate. Consider the first equation in (2.5). As $H=\beta^{*}\beta$ and $\beta j\beta^{*}=0$, the equality $H_{x}j\beta^{*}=\beta^{*}\beta_{x}j\beta^{*}$ follows.
Therefore, according to (2.5), we have $\beta^{*}\beta_{x}j\beta^{*}=-i\beta^{*}\beta jVj\beta^{*}$, i.e., $\beta_{x}j\beta^{*}=-i\beta jVj\beta^{*}$. Hence, using once more $\beta j\beta^{*}=0$ (and supposing $\beta\not=0$), we obtain $\beta_{x}=-i\beta jV+f\beta$. Now, taking into account the last relation in (2.6), we see that $f=\overline{f}$, and thus $H_{x}=(\beta^{*}\beta)_{x}=i(VjH-HjV)+2fH$. Comparing this equality with (2.5) we derive $f=0$. In other words, we have $\beta_{x}=-i\beta jV$, which is equivalent to $(u_{1})_{x}=-2\overline{u_{1}}u_{2}$; vice versa, $\beta_{x}=-i\beta jV$ yields the first equation in (2.5). 2.2. Consider now the $2m\times 2m$ fundamental solution of the Dirac type auxiliary system with a fixed $t$: $$\frac{d}{dx}W(x,z)=i\big(zj+jV(x)\big)W(x,z),\quad W(0,z)=I_{2m},$$ (2.7) where $0\leq x<\infty$ and $V$ is locally summable. Definition 2.1 A holomorphic function $\varphi$ such that $$\int_{0}^{\infty}\left[\begin{array}{lr}I_{m}&i\varphi(z)^{*}\end{array}\right]KW(x,z)^{*}W(x,z)K^{*}\left[\begin{array}{c}I_{m}\\ -i\varphi(z)\end{array}\right]dx<\infty,$$ (2.8) where $z\in{{\mathbb{C}}}_{+}$ and $$K:=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}I_{m}&-I_{m}\\ I_{m}&I_{m}\end{array}\right],\qquad K^{*}=K^{-1},$$ (2.9) is called a Weyl function of system (2.7) on $[0,\,\infty)$. Weyl functions, also called Weyl-Titchmarsh or $m$-functions, are widely used in spectral theory. For the case of system (2.7) the Weyl function admits the Herglotz representation $$\varphi(z)=\mu z+\nu+\int_{-\infty}^{\infty}\Big(\frac{1}{s-z}-\frac{s}{1+s^{2}}\Big)d\tau(s),$$ where $\mu\geq 0$, $\nu=\nu^{*}$, and $\tau$ is a nondecreasing $m\times m$ distribution matrix function. The matrix function $\tau$ is the spectral function of the selfadjoint operator ${\mathcal{Q}}f:=\big(-ij\frac{d}{dx}-V(x)\big)f$ defined on the appropriate domain.
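The rank-one structure (2.6) and the reduction of (2.5) to SHG can be sanity-checked numerically. The following minimal sketch (Python/NumPy is our own choice, not part of the paper) uses the plane-wave family $u_{1}=be^{i(cx+dt)}$, $u_{2}=\frac{b^{2}}{2id}e^{2i(cx+dt)}$ with $cd=|b|^{2}$, which reappears in Section 5 as (5.1)-(5.2); all derivatives are coded in closed form:

```python
import numpy as np

# Plane-wave family from (5.1)/(5.2): u1 = b e^{i(cx+dt)}, v = -(b^2/d) e^{2i(cx+dt)},
# subject to the constraint c*d = |b|^2.  The values of b and c are sample choices.
b, c = 1.5 + 0.5j, 2.0
d = abs(b) ** 2 / c                      # enforce cd = |b|^2 (d is real)

j = np.diag([1.0, -1.0]).astype(complex)

def blocks(x, t):
    ph = np.exp(1j * (c * x + d * t))
    u1 = b * ph
    beta = np.array([[np.conj(u1), u1]])          # beta as in (2.6)
    H = beta.conj().T @ beta                      # H = beta* beta >= 0, rank 1
    v = -(b ** 2 / d) * ph ** 2                   # v = -2i u2
    V = np.array([[0, v], [np.conj(v), 0]])
    # exact x- and t-derivatives of H and v
    Hx = np.array([[0, 2j * c * u1 ** 2],
                   [-2j * c * np.conj(u1) ** 2, 0]])
    vt = -2j * b ** 2 * ph ** 2
    return H, Hx, V, v, vt

for x, t in [(0.0, 0.0), (0.3, -0.7), (1.1, 0.4)]:
    H, Hx, V, v, vt = blocks(x, t)
    assert np.allclose(H @ j @ H, 0)                          # H j H = 0
    assert np.allclose(Hx, 1j * (V @ j @ H - H @ j @ V))      # first equation of (2.5)
    assert np.isclose(1j * vt, 2 * H[0, 1])                   # second equation of (2.5)
print("plane wave satisfies (2.5)")
```

The check confirms, at several sample points, that this $H=\beta^{*}\beta$ is a rank-one solution of (2.5), in agreement with the equivalence to SHG derived above.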
We shall need some results on system (2.7) and its Weyl functions from [29] (see also [10] for rational Weyl functions and the corresponding explicit formulas). Put $${\cal{W}}(z)={\cal W}(l,z):=KW(l,\overline{z})^{*},\qquad{\cal W}(z)=:\{{\cal W}_{kp}(z)\}_{k,p=1}^{2},$$ (2.10) where ${\cal W}_{kp}$ are the $m\times m$ blocks of ${\cal W}$. Denote by ${\cal{N}}(V,l)$, or simply by ${\cal{N}}(l)$, the set of Möbius (linear fractional) transformations $$\varphi(z,l,{\cal P})=i({\cal W}_{11}(z){\cal P}_{1}(z)+{\cal W}_{12}(z){\cal P}_{2}(z))({\cal W}_{21}(z){\cal P}_{1}(z)+{\cal W}_{22}(z){\cal P}_{2}(z))^{-1},$$ (2.11) where the pairs ${\cal P}_{1}$ and ${\cal P}_{2}$ are taken from the frequently used parameter set of non-singular pairs of meromorphic $m\times m$ matrix functions with property-$j$, that is, pairs with the properties $${\cal P}_{1}(z)^{*}{\cal P}_{1}(z)+{\cal P}_{2}(z)^{*}{\cal P}_{2}(z)>0,\quad{\cal P}_{1}(z)^{*}{\cal P}_{1}(z)-{\cal P}_{2}(z)^{*}{\cal P}_{2}(z)\geq 0.$$ (2.12) Taking into account formula (1.4) in [29], one can see that up to a scalar constant ${\cal{N}}(l)$ here coincides with ${\cal{N}}(l)$ in [29]. From (2.12) it follows that $\det({\cal W}_{21}(z){\cal P}_{1}(z)+{\cal W}_{22}(z){\cal P}_{2}(z))\not=0$ and that $$i(\varphi(z,l,{\cal P})^{*}-\varphi(z,l,{\cal P}))>0\quad(z\in{\mathbb{C}}_{+})$$ (2.13) (see inequalities (2.18) and (2.19) in [29]). Moreover, the unique Weyl function $\varphi(z)$ exists and is given by the relations $$\displaystyle{\varphi(z)=\bigcap_{l<\infty}{\cal N}(l)=\lim_{l_{k}\to\infty}\varphi(z,l_{k})}$$ (2.14) for any sequence $\varphi_{k}(z)=\varphi(z,l_{k})\in{\cal N}(l_{k})$. (The first equality in (2.14) follows from Proposition 5.2 in [29], and the second follows from the proof of Theorem 2.7 in [29].) Finally, when $V$ is a locally bounded matrix function, Theorem 5.4 and Remark 5.6 in [29] give a well-defined procedure to recover $V$ from $\varphi$, which we adduce below.
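For the trivial potential $V\equiv 0$ the objects in (2.10)-(2.14) become completely explicit: $W(x,z)=e^{izjx}$, the transformations (2.11) are scalar Möbius maps for $m=1$, and one can check from Definition 2.1 that the Weyl function is $\varphi(z)\equiv i$. The sketch below (our own illustration; the scalar pair ${\cal P}_{1}={\cal P}_{2}=1$ satisfies (2.12), and Python/NumPy is assumed) exhibits the Herglotz property (2.13) and the convergence in (2.14):

```python
import numpy as np

K = np.array([[1, -1], [1, 1]]) / np.sqrt(2)      # K from (2.9); K* = K^{-1}
assert np.allclose(K.conj().T @ K, np.eye(2))

def W(x, z):
    # fundamental solution of (2.7) with V == 0 and m = 1: W(x,z) = exp(izjx)
    return np.diag([np.exp(1j * z * x), np.exp(-1j * z * x)])

def phi(z, l, P1=1.0, P2=1.0):
    # scalar Möbius transformation (2.11) built from the blocks of (2.10)
    cW = K @ W(l, np.conj(z)).conj().T
    return 1j * (cW[0, 0] * P1 + cW[0, 1] * P2) / (cW[1, 0] * P1 + cW[1, 1] * P2)

z = 1.0 + 1.0j                                    # a point in C_+
for l in (1.0, 3.0, 10.0):
    assert phi(z, l).imag > 0                     # Herglotz property (2.13)
assert np.isclose(phi(z, 10.0), 1j)               # limit (2.14): Weyl function of V == 0
print("phi(z, l) -> i as l grows")
```

Already for $l=10$ the Möbius image is numerically indistinguishable from the limit $i$, matching the exponentially shrinking Weyl disks behind (2.14).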
It is easy to see that $(\zeta+i\eta)^{-2}\varphi(\zeta+i\eta)\in L^{2}_{m\times m}(-\infty,\,\infty)$. Introduce a family of bounded and boundedly invertible operators $S(l)>0$ ($0<l<\infty$), where $S(l)$, of the form $$S(l)=\frac{d}{dx}\int_{0}^{l}s(x-t)\,\cdot\,dt,\quad s(x)=-s(-x)^{*},$$ (2.15) acts on $L^{2}_{m}(0,l)$, and the kernel matrix function $s(x)$ is defined via the Fourier transform $$s(x)=\left(\frac{d}{dx}\frac{i}{2\pi}e^{\eta x}\int_{-\infty}^{\infty}e^{-i\zeta x}(\zeta+i\eta)^{-2}\varphi(\zeta+i\eta)d\zeta\right)^{*},\quad\eta>0.$$ (2.16) Introduce further the $m\times 2m$ matrix function $\gamma$ by the relation $$\gamma(x)=\frac{1}{\sqrt{2}}\left([-I_{m}\quad I_{m}]-\int_{0}^{x}(s^{\prime}(t))^{*}S(x)^{-1}[s(t)\quad I_{m}]dt\right),$$ (2.17) where the operator $S^{-1}$ acts on $[s(t)\quad I_{m}]$ columnwise. Now the potential $v$ (and thus $V$) is recovered by the equality $$v(x)=2i\chi_{x}(2x)J\gamma(2x)^{*}\quad\left(J=\left[\begin{array}{lr}0&I_{m}\\ I_{m}&0\end{array}\right]\right).$$ (2.18) Here the $m\times 2m$ matrix function $\chi$ is uniquely and easily obtained via $\gamma$ from the properties $$\chi(0)=\frac{1}{\sqrt{2}}[I_{m}\quad I_{m}],\quad\chi_{x}(x)J\chi(x)^{*}\equiv 0,\quad\chi(x)J\gamma(x)^{*}\equiv 0.$$ (2.19) One can recover the potential of system (2.7) in a more traditional way, via the spectral function, but the direct recovery via the Weyl function is a more general method, applicable also in the case of the skew-self-adjoint analog of (2.7) as in [24]. Procedure (2.15)-(2.19) is closely related to the study of the high energy asymptotics of the Weyl functions in [5, 8, 35] as well. 2.3. Suppose now that the matrix functions $G(x,t,z)$ and $F(x,t,z)$ given in (2.2) and (2.3) are continuously differentiable in the domain $0\leq x<l$, $0\leq t<T$ ($-T<t\leq 0$) and that (2.1) holds.
Similar to (2.7), we shall use the notation $W(x,t,z)$ for the fundamental solution of (2.2) normalized by the condition $W(0,t,z)=I_{2m}$, and we shall denote by $R(x,t,z)$ the fundamental solution of (2.3): $$\frac{\partial}{\partial t}R(x,t,z)=\frac{i}{z}jH(x,t)R(x,t,z),\quad R(x,0,z)=I_{2m}.$$ (2.20) It easily follows (see [34], p. 168) that $$W(x,t,z)=R(x,t,z)W(x,0,z)R(0,t,z)^{-1}.$$ (2.21) 3 Goursat problem Denote now by $\varphi(t,z)$ the Weyl function of system (2.7) with $V(x)=V(x,t)$ and put $${\cal{R}}(t,z)=\{{\cal R}_{kp}(t,z)\}_{k,p=1}^{2}:=K\big(R(0,t,\overline{z})^{*}\big)^{-1}K^{*},$$ (3.1) where ${\cal R}_{kp}$ are the $m\times m$ blocks of ${\cal R}$. Theorem 3.1 Suppose the $2m\times 2m$ matrix function $H(x,t)\geq 0$ and the $m\times m$ matrix function $v(x,t)$ are continuously differentiable in the semi-strip $D=\{(x,t):\,0\leq x<\infty,\,0\leq t<T\}$ and satisfy equations (2.5) with $V$ given by the second relation in (2.4). Then the evolution $\varphi(t,z)$ of the Weyl function is given by the formula $$\varphi(t,z)=i\big(-i{\cal R}_{11}(t,z)\varphi(0,z)+{\cal R}_{12}(t,z)\big)\big(-i{\cal R}_{21}(t,z)\varphi(0,z)+{\cal R}_{22}(t,z)\big)^{-1}.$$ (3.2) Moreover, $H$ and $v$ in the semi-strip $D$ are uniquely recovered from the initial-boundary values $v(x,0)$ and $H(0,t)$. Here $\varphi(0,z)$ is defined via $v(x,0)$ by formula (2.14), ${\cal R}(t,z)$ is defined via $H(0,t)$ by formulas (2.20) and (3.1), the evolution $\varphi(t,z)$ of the Weyl function now follows from (3.2), and, finally, the potential $v(x,t)$ is obtained via $\varphi(t,z)$ by the procedure (2.15)-(2.19). After $v$ is recovered, we get $H(x,t)$ as the unique solution with the initial value $H(0,t)$ of the linear system $H_{x}=i(VjH-HjV)$. P r o o f . In view of the results cited in Section 2 (subsection 2.2), we need to prove only the evolution formula (3.2). For that purpose we slightly modify the proof of Theorem 1.1 in Ch. 12 of [34].
Rewrite (2.21) in the form $$R(l,t,z)^{-1}W(l,t,z)K^{*}=W(l,0,z)K^{*}KR(0,t,z)^{-1}K^{*}.$$ (3.3) Replace $z$ by $\overline{z}$, take adjoints of both sides in (3.3), and use definitions (2.10) and (3.1) to get $${\cal U}(l,t,z)=\{{\cal U}_{kp}(l,t,z)\}_{k,p=1}^{2}:={\cal W}(l,t,z)\big(R(l,t,\overline{z})^{-1}\big)^{*}={\cal R}(t,z){\cal W}(l,0,z),$$ (3.4) where ${\cal U}_{kp}$ are $m\times m$ blocks. Choose now a non-singular pair $\widehat{\cal P}_{1}$, $\widehat{\cal P}_{2}$ with property-$j$ and consider the Möbius transformation $$\psi(l,t,z):=({\cal U}_{11}(l,t,z)\widehat{\cal P}_{1}(z)+{\cal U}_{12}(l,t,z)\widehat{\cal P}_{2}(z))\times$$ $$({\cal U}_{21}(l,t,z)\widehat{\cal P}_{1}(z)+{\cal U}_{22}(l,t,z)\widehat{\cal P}_{2}(z))^{-1}.$$ (3.5) Notice that by (2.20) for $z\in{\mathbb{C}}_{+}$ we have $$\frac{\partial}{\partial t}\big(R(x,t,\overline{z})^{*}jR(x,t,\overline{z})\big)=\frac{i(z-\overline{z})}{|z|^{2}}R(x,t,\overline{z})^{*}H(x,t)R(x,t,\overline{z})\leq 0.$$ (3.6) Taking into account that $R(x,0,\overline{z})^{*}jR(x,0,\overline{z})=j$, we derive from (3.6) the inequality $$R(l,t,\overline{z})^{*}jR(l,t,\overline{z})\leq j\quad(z\in{\mathbb{C}}_{+}).$$ (3.7) From (3.7) it follows that $$R(l,t,\overline{z})^{-1}j\big(R(l,t,\overline{z})^{-1}\big)^{*}\geq j\quad(z\in{\mathbb{C}}_{+}),$$ (3.8) and therefore the pair $$\left[\begin{array}{c}{\cal P}_{1}(z)\\ {\cal P}_{2}(z)\end{array}\right]:=\big(R(l,t,\overline{z})^{-1}\big)^{*}\left[\begin{array}{c}\widehat{\cal P}_{1}(z)\\ \widehat{\cal P}_{2}(z)\end{array}\right]$$ (3.9) satisfies (2.12).
According to the definitions of ${\cal U}$ and $\psi$ and formula (3.9), we obtain $$\psi(l,t,z)=({\cal W}_{11}(l,t,z){\cal P}_{1}(z)+{\cal W}_{12}(l,t,z){\cal P}_{2}(z))\times$$ $$({\cal W}_{21}(l,t,z){\cal P}_{1}(z)+{\cal W}_{22}(l,t,z){\cal P}_{2}(z))^{-1}.$$ (3.10) Since the pair ${\cal P}_{1}$, ${\cal P}_{2}$ satisfies (2.12), by (2.14) and (3.10) we get $$\lim_{l\to\infty}\,\psi(l,t,z)=-i\varphi(t,z).$$ (3.11) On the other hand, from the last equality in (3.4) and from (3.5) we also get $$\psi(l,t,z)=\big(-i{\cal R}_{11}(t,z)\varphi_{l}(z)+{\cal R}_{12}(t,z)\big)\big(-i{\cal R}_{21}(t,z)\varphi_{l}(z)+{\cal R}_{22}(t,z)\big)^{-1},$$ (3.12) where $\varphi_{l}\in{\cal N}(l,0)$ is given by the formula $$\varphi_{l}(z)=i({\cal W}_{11}(l,0,z)\widehat{\cal P}_{1}(z)+{\cal W}_{12}(l,0,z)\widehat{\cal P}_{2}(z))\times$$ $$({\cal W}_{21}(l,0,z)\widehat{\cal P}_{1}(z)+{\cal W}_{22}(l,0,z)\widehat{\cal P}_{2}(z))^{-1}.$$ (3.13) By (2.14) and (3.13) we have $$\lim_{l\to\infty}\varphi_{l}(z)=\varphi(0,z).$$ (3.14) Supposing $$\det\,\big(-i{\cal R}_{21}(t,z)\varphi(0,z)+{\cal R}_{22}(t,z)\big)\not=0,$$ (3.15) and taking into account (3.12) and (3.14), we see that $$\lim_{l\to\infty}\psi(l,t,z)=\big(-i{\cal R}_{11}(t,z)\varphi(0,z)+{\cal R}_{12}(t,z)\big)\times$$ $$\big(-i{\cal R}_{21}(t,z)\varphi(0,z)+{\cal R}_{22}(t,z)\big)^{-1}.$$ (3.16) Comparing (3.11) and (3.16) we obtain (3.2). Finally, using the definitions in (2.4), (2.9), and (2.18), one can easily check that $$K^{*}JK=j,\quad KjK^{*}=J.$$ (3.17) Hence, in view of (3.1) and (3.8), it follows that ${\cal R}(t,z)^{*}J{\cal R}(t,z)\geq J$, i.e., $$[i\varphi(0,z)^{*}\quad I_{m}]{\cal R}(t,z)^{*}J{\cal R}(t,z)\left[\begin{array}{c}-i\varphi(0,z)\\ I_{m}\end{array}\right]\geq i(\varphi(0,z)^{*}-\varphi(0,z)),\quad z\in{\mathbb{C}}_{+}.$$ (3.18) Recall that $i(\varphi(0,z)^{*}-\varphi(0,z))>0$, and so (3.18) yields (3.15).
$\blacksquare$ In a similar way, the Goursat problem on the semi-strip $\widehat{D}=\{(x,t):\,0\leq x<\infty,\,-T<t\leq 0\}$ can be solved. Theorem 3.2 Suppose the $2m\times 2m$ matrix function $H(x,t)\geq 0$ and the $m\times m$ matrix function $v(x,t)$ are continuously differentiable in the semi-strip $\widehat{D}$ and satisfy equations (2.5) with $V$ given by the second relation in (2.4). Then $H$ and $v$ in $\widehat{D}$ are uniquely recovered from the initial-boundary values $v(x,0)$ and $H(0,t)$. Here $\varphi(0,z)$ is defined via $v(x,0)$ by formula (2.14), ${\cal R}(t,z)$ is defined via $H(0,t)$ by formulas (2.20) and (3.1), the Weyl functions $\varphi(t,z)$ are given by (3.2), and, finally, the potential $v(x,t)$ is obtained via $\varphi(t,z)$ by the procedure (2.15)-(2.19). After $v$ is recovered, we get $H(x,t)$ as the unique solution with the initial value $H(0,t)$ of the linear system $H_{x}=i(VjH-HjV)$. P r o o f . Notice that instead of (3.7), formula (3.6) now yields the inequality $$R(l,t,\overline{z})^{*}jR(l,t,\overline{z})\geq j\quad(z\in{\mathbb{C}}_{+},\quad t\leq 0),$$ (3.19) i.e., $R$ and $R^{*}$ are $j$-expanding. Analogously, we now have $$\big({\cal R}(t,z)^{-1}\big)^{*}J{\cal R}(t,z)^{-1}\geq J\quad(z\in{\mathbb{C}}_{+},\quad t\leq 0).$$ (3.20) Therefore we modify formula (3.4): $${\cal U}(l,t,z)=\{{\cal U}_{kp}(l,t,z)\}_{k,p=1}^{2}:={\cal R}(t,z)^{-1}{\cal W}(l,t,z)={\cal W}(l,0,z)R(l,t,\overline{z})^{*}.$$ (3.21) Consider again $\lim_{l\to\infty}\psi(l,t,z)$, where $\psi$ is given by (3.5), and use (3.21) to get $$i\big(-i{\cal T}_{11}(t,z)\varphi(t,z)+{\cal T}_{12}(t,z)\big)\big(-i{\cal T}_{21}(t,z)\varphi(t,z)+{\cal T}_{22}(t,z)\big)^{-1}=\varphi(0,z),$$ (3.22) where ${\cal T}(t,z)=\{{\cal T}_{kp}(t,z)\}_{k,p=1}^{2}:={\cal R}(t,z)^{-1}$.
Rewrite (3.22) as $${\cal T}(t,z)\left[\begin{array}{c}-i\varphi(t,z)\\ I_{m}\end{array}\right]=\left[\begin{array}{c}-i\varphi(0,z)\\ I_{m}\end{array}\right]c(z),\quad\det\,c(z)\not=0\quad({\cal T}={\cal R}^{-1}).$$ (3.23) Multiply both sides of (3.23) by ${\cal R}$ to obtain (3.15) and, finally, (3.2). $\blacksquare$ 4 GBDT for second harmonic generation A general result on the GBDT for systems rationally depending on $\lambda$ (with the spectral parameter $\lambda$ possibly depending on the variables $x$ and $t$) has been proved in [27]. Systems depending polynomially on $\lambda$ and $\lambda^{-1}$ have been treated in greater detail in [28]. Here we shall need a reduction of Theorem 1.1 in [28] for the $2m\times 2m$ first order system of the form $$\frac{d}{ds}w(s,\lambda)+(\lambda q_{1}(s)+q_{0}(s)+\lambda^{-1}q_{-1}(s))w(s,\lambda)=0,$$ (4.1) where the coefficients $q_{k}(s)$ are $2m\times 2m$ matrix functions, locally summable on $[0,\,c)$ $(c\leq\infty)$, and the equalities $$q_{p}(s)^{*}=-jq_{p}j\quad(p=1,\,0,\,-1)$$ (4.2) hold. After fixing an integer $n>0$, the GBDT of system (4.1) is determined by three parameter matrices: two $n\times n$ matrices $A$ and $S(0)$, and an $n\times 2m$ matrix $\Pi(0)$ such that $$AS(0)-S(0)A^{*}=i\Pi(0)j\Pi(0)^{*},\quad\det\,A\not=0,\quad S(0)=S(0)^{*}.$$ (4.3) Given these parameter matrices, we define the matrix function $\Pi(s)$ by its initial value $\Pi(0)$ and the linear differential system $$\frac{d}{ds}\Pi(s)=A\Pi(s)q_{1}(s)+\Pi(s)q_{0}(s)+A^{-1}\Pi(s)q_{-1}(s),$$ (4.4) i.e., $\Pi$ is a generalized eigenfunction of the system $\frac{d}{ds}w=\lambda wq_{1}+wq_{0}+\lambda^{-1}wq_{-1}$, dual to (4.1).
The matrix function $S(s)$ is defined by $S(0)$ and by its derivative $\frac{d}{ds}S$: $$\frac{d}{ds}S=i\Big(\Pi(s)q_{1}(s)j\Pi(s)^{*}-A^{-1}\Pi(s)q_{-1}(s)j\Pi(s)^{*}\big(A^{*}\big)^{-1}\Big).$$ (4.5) Notice that relations (4.3)-(4.5) are chosen so that $$AS(s)-S(s)A^{*}=i\Pi(s)j\Pi(s)^{*},\quad S(s)=S(s)^{*}.$$ (4.6) We introduce the Darboux matrix (gauge transformation) by the formula $$w_{A}(s,\lambda)=I_{2m}-ij\Pi(s)^{*}S(s)^{-1}(A-\lambda I_{n})^{-1}\Pi(s).$$ (4.7) Compare (4.5) with subsection 2.2, where the operators $S$ needed to solve the general-type inverse problem were defined differently: the transfer matrix function is used, and its elements are defined, in different ways for general-type problems and for the case of explicit solutions, i.e., the GBDT case. By Theorem 1.1 (see also Proposition 1.4) in [28] we have Theorem 4.1 Suppose the coefficients of system (4.1) satisfy (4.2) and the parameter matrices $A$, $S(0)$, and $\Pi(0)$ satisfy relations (4.3). Define the matrix functions $\Pi$ and $S$ by equations (4.4) and (4.5).
Then, at the points of invertibility of $S$, the matrix function $w_{A}$ satisfies the system $$\frac{d}{ds}w_{A}(s,\lambda)=w_{A}(s,\lambda)(\lambda q_{1}(s)+q_{0}(s)+\lambda^{-1}q_{-1}(s))-$$ $$(\lambda\widetilde{q}_{1}(s)+\widetilde{q}_{0}(s)+\lambda^{-1}\widetilde{q}_{-1}(s))w_{A}(s,\lambda),$$ (4.8) where $$\widetilde{q}_{1}\equiv q_{1},\quad\widetilde{q}_{0}(s)=q_{0}(s)-(q_{1}(s)Y_{0}(s)-X_{0}(s)q_{1}(s)),$$ (4.9) $$\widetilde{q}_{-1}(s)=q_{-1}(s)+q_{-1}(s)Y_{-1}(s)-X_{-1}(s)q_{-1}(s)-X_{-1}(s)q_{-1}(s)Y_{-1}(s),$$ (4.10) $$X_{k}(s)=ij\Pi(s)^{*}S(s)^{-1}A^{k}\Pi(s),\quad Y_{k}(s)=ij\Pi(s)^{*}\big(A^{*}\big)^{k}S(s)^{-1}\Pi(s).$$ (4.11) Moreover, we have $$\widetilde{q}_{p}(s)^{*}=-j\widetilde{q}_{p}j\quad(p=1,\,0,\,-1).$$ (4.12) If $w$ satisfies system (4.1), then the matrix function $$\widetilde{w}(s,\lambda):=w_{A}(s,\lambda)w(s,\lambda)$$ (4.13) satisfies the GBD-transformed system $$\frac{d}{ds}\widetilde{w}(s,\lambda)+(\lambda\widetilde{q}_{1}(s)+\widetilde{q}_{0}(s)+\lambda^{-1}\widetilde{q}_{-1}(s))\widetilde{w}(s,\lambda)=0.$$ (4.14) Transfer matrix functions of the form $$w_{A}(\lambda)=I_{\cal G}-\Pi_{2}^{*}S^{-1}(A_{1}-\lambda I_{\cal H})^{-1}\Pi_{1}\quad(A_{1}S-SA_{2}=\Pi_{1}\Pi_{2}^{*})$$ have been introduced and studied by L. Sakhnovich in the context of his method of operator identities (see [33, 34] and references therein) and have their roots in the characteristic matrix functions of M. S. Livšic. From (4.7) a useful equality easily follows [33, 34]: $$w_{A}(s,\overline{\lambda})^{*}jw_{A}(s,\lambda)=j.$$ (4.15) Notice that, according to (4.9), the identities $q_{1}\equiv 0$ and $q_{0}\equiv 0$ yield $\widetilde{q}_{1}\equiv 0$ and $\widetilde{q}_{0}\equiv 0$. By (4.7) and (4.11), identity (4.10) can be rewritten as $$\widetilde{q}_{-1}(s)=(I_{2m}-X_{-1}(s))q_{-1}(s)(I_{2m}+Y_{-1}(s))=w_{A}(s,0)q_{-1}(s)jw_{A}(s,0)^{*}j.$$ (4.16) If $q_{-1}\equiv 0$, then $\widetilde{q}_{-1}\equiv 0$ also.
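The $j$-unitarity (4.15) of the Darboux matrix depends only on the operator identity (4.6), so it can be checked numerically at a single point. In the sketch below (our own illustration, Python/NumPy assumed) $A$ is taken diagonal with non-real eigenvalues, and $S$ is built entrywise from the identity (4.6), anticipating the diagonal-$A$ formula (5.8) of Remark 5.1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
j = np.diag([1.0, -1.0]).astype(complex)          # m = 1, so 2m = 2

# parameter matrices as in (4.3): diagonal A with non-real eigenvalues, random Pi
a = np.array([1 + 1j, 2 + 0.5j, -1 + 2j])
A = np.diag(a)
Pi = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))

# S solving A S - S A* = i Pi j Pi* entrywise: S_kp = (i Pi j Pi*)_kp / (a_k - conj(a_p))
R = 1j * (Pi @ j @ Pi.conj().T)
S = R / (a[:, None] - np.conj(a)[None, :])
assert np.allclose(A @ S - S @ A.conj().T, R)     # identity (4.3)/(4.6)
assert np.allclose(S, S.conj().T)                 # S = S*

def wA(lam):
    # Darboux matrix (4.7): I - i j Pi* S^{-1} (A - lam I)^{-1} Pi
    return np.eye(2) - 1j * j @ Pi.conj().T @ np.linalg.solve(
        S, np.linalg.solve(A - lam * np.eye(n), Pi))

lam = 0.7 - 0.3j
assert np.allclose(wA(np.conj(lam)).conj().T @ j @ wA(lam), j)   # (4.15)
print("w_A is j-unitary")
```

The entrywise construction of $S$ is exact, so (4.15) holds to machine precision for any $\lambda$ outside $\sigma(A)\cup\sigma(A^{*})$.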
It is immediate that the auxiliary systems (2.2) and (2.3) satisfy the conditions of Theorem 4.1. To apply Theorem 4.1, we consider matrix functions $\Pi$, $S$, $w_{A}$, $X_{k}$, and $Y_{k}$ that depend on the variables $x$ and $t$ instead of the one variable $s$. Namely, we fix $n>0$ and parameter matrices $A$, $S(0,0)$, and $\Pi(0,0)$ such that $$AS(0,0)-S(0,0)A^{*}=i\Pi(0,0)j\Pi(0,0)^{*},\quad\det\,A\not=0,\quad S(0,0)=S(0,0)^{*},$$ (4.17) and introduce matrix functions $\Pi(x,t)$ and $S(x,t)$ by the equations $$\Pi_{x}(x,t)=-iA\Pi(x,t)j-i\Pi(x,t)jV(x,t),\quad\Pi_{t}(x,t)=-iA^{-1}\Pi(x,t)jH(x,t),$$ (4.18) $$S_{x}(x,t)=\Pi(x,t)\Pi(x,t)^{*},\quad S_{t}(x,t)=-A^{-1}\Pi(x,t)jH(x,t)j\Pi(x,t)^{*}\big(A^{*}\big)^{-1}.$$ (4.19) The compatibility of systems (4.18) and (4.19) follows from (2.5) (see [28]). Now Theorem 4.1 yields the following result. Theorem 4.2 Suppose continuously differentiable matrix functions $H$ and $v$ satisfy system (2.5). Choose $n>0$ and parameter matrices $A$, $S(0,0)$, and $\Pi(0,0)$ such that (4.17) holds. Then the matrix function $$\widetilde{H}(x,t):=jw_{A}(x,t,0)jH(x,t)jw_{A}(x,t,0)^{*}j,$$ (4.20) where $w_{A}(x,t,\lambda)=I_{2m}-ij\Pi(x,t)^{*}S(x,t)^{-1}(A-\lambda I_{n})^{-1}\Pi(x,t)$, and the matrix function $$\widetilde{v}(x,t)=v(x,t)-2\big(X_{0}(x,t)\big)_{12}\quad(X_{0}(x,t)=ij\Pi(x,t)^{*}S(x,t)^{-1}\Pi(x,t))$$ (4.21) satisfy system (2.5) as well. Moreover, if $$H(x,t)\geq 0,\quad H(x,t)jH(x,t)\equiv 0,$$ (4.22) then we have $$\widetilde{H}(x,t)\geq 0,\quad\widetilde{H}(x,t)j\widetilde{H}(x,t)\equiv 0.$$ (4.23) P r o o f . Since system (2.5) is equivalent to the compatibility condition (2.1), there is a non-degenerate $2m\times 2m$ matrix function $w$ that satisfies the equations $w_{x}=Gw$, $w_{t}=Fw$.
From the identities (4.6) and (4.17), in the case of two variables we get $$AS(x,t)-S(x,t)A^{*}=i\Pi(x,t)j\Pi(x,t)^{*}.$$ (4.24) Substitute now $s=x$ into Theorem 4.1 to get $\widetilde{w}_{x}=(izj-\widetilde{q}_{0})\widetilde{w}$ for $\widetilde{w}=w_{A}w$. Taking into account that $q_{0}=-ijV$ and $q_{1}=-ij$, we rewrite the second equality in (4.9) as $\widetilde{q}_{0}=-ijV+i(jX_{0}-X_{0}j)$, i.e., $$\widetilde{q}_{0}=-ij\widetilde{V},\quad\widetilde{V}=V+jX_{0}j-X_{0}=\left[\begin{array}{lr}0&\widetilde{v}\\ \widetilde{v}^{*}&0\end{array}\right].$$ Therefore we get $$\frac{\partial}{\partial x}\widetilde{w}(x,t,z)=i\big(zj+j\widetilde{V}(x,t)\big)\widetilde{w}(x,t,z),\quad\widetilde{v}(x,t)=v(x,t)-2\big(X_{0}(x,t)\big)_{12}.$$ (4.25) Substitute also $s=t$ into Theorem 4.1 to get $\widetilde{w}_{t}=-z^{-1}\widetilde{q}_{-1}\widetilde{w}$. Taking into account that $q_{-1}=-ijH$, from formula (4.16) it follows that $\widetilde{q}_{-1}=-ij\widetilde{H}$, where $\widetilde{H}$ is given by equality (4.20). In other words, we have $$\frac{\partial}{\partial t}\widetilde{w}(x,t,z)=\frac{i}{z}j\widetilde{H}(x,t)\widetilde{w}(x,t,z).$$ (4.26) The compatibility condition for systems (4.25) and (4.26) is equivalent to the system $$\widetilde{H}_{x}(x,t)=i(\widetilde{V}(x,t)j\widetilde{H}(x,t)-\widetilde{H}(x,t)j\widetilde{V}(x,t)),\quad i\widetilde{v}_{t}(x,t)=2\big(\widetilde{H}(x,t)\big)_{12}.$$ (4.27) Finally, relations (4.23) follow from (4.15), (4.20), and (4.22). $\blacksquare$ 5 Explicit solutions In this section we shall treat the case $m=1$, $H=\beta^{*}\beta$, $$\beta(x,t)=[\overline{b}e^{-i(cx+dt)}\quad be^{i(cx+dt)}],\quad v(x,t)=-\frac{b^{2}}{d}e^{2i(cx+dt)},\quad cd=|b|^{2},$$ (5.1) where $c,d\,\in{\mathbb{R}}$.
In other words, that is the case $$u_{1}(x,t)=be^{i(cx+dt)},\quad u_{2}(x,t)=\frac{b^{2}}{2id}e^{2i(cx+dt)}.$$ (5.2) It is immediate that these $H$ and $v$ satisfy system (2.5), i.e., $u_{1}$ and $u_{2}$ satisfy SHG. To construct $\Pi$ corresponding to the initial solution given by (5.1), we need $n\times n$ matrices $Q_{1}$ and $Q_{2}$ such that $$Q_{2}^{2}=d^{2}(I_{n}-2cA^{-1}),\quad AQ_{2}=Q_{2}A,\quad Q_{1}=-d^{-1}AQ_{2}.$$ (5.3) Define the columns $\Phi$ and $\Psi$ of $\Pi=[\Phi\quad\Psi]$ by the relations $$\Phi(x,t)=e^{-i(cx+dt)}\big(e(x,t)f_{1}+e(x,t)^{-1}f_{2}\big),\quad e(x,t):=\exp\{i(xQ_{1}+tQ_{2})\},$$ (5.4) $$\Psi(x,t)=-d\overline{b}^{-2}e^{i(cx+dt)}\big(e(x,t)(A-cI_{n}+Q_{1})f_{1}+e(x,t)^{-1}(A-cI_{n}-Q_{1})f_{2}\big),$$ (5.5) where $f_{1}$ and $f_{2}$ are $n\times 1$ parameter columns. Direct calculation shows that $\Pi$ satisfies (4.18). Now, in view of (4.19), we obtain $$S(x,t)=S(0,t)+\int_{0}^{x}\Pi(s,t)\Pi(s,t)^{*}ds,$$ (5.6) $$S(0,t)=S(0,0)-\int_{0}^{t}A^{-1}\Pi(0,s)jH(0,s)j\Pi(0,s)^{*}\big(A^{*}\big)^{-1}ds.$$ (5.7) Remark 5.1 If $A$ is diagonal (or similar to a diagonal matrix), the matrices $Q_{1}$ and $Q_{2}$ always exist and are easy to construct. Moreover, if we require additionally that $\sigma(A)\cap\sigma(A^{*})=\emptyset$, then $S(x,t)$ is uniquely recovered from the identity (4.24). Namely, for $A=U{\mathrm{diag}}\{a_{1},\,a_{2},\ldots\}U^{-1}$, formula (4.24) yields $$S(x,t)=iU\Big\{(a_{k}-\overline{a}_{p})^{-1}\big(U^{-1}\Pi(x,t)j\Pi(x,t)^{*}(U^{*})^{-1}\big)_{kp}\Big\}_{k,p=1}^{n}U^{*}.$$ (5.8) The next proposition on the explicit solutions is a corollary of Theorem 4.2. Proposition 5.2 Let the functions $H=\beta^{*}\beta$ and $v$ be given by (5.1), let the matrix function $\Pi=[\Phi\quad\Psi]$ be given by (5.3)-(5.5), and let $S$ be given by (5.6) and (5.7) (or by (5.8) if the conditions of Remark 5.1 hold). Let also relations (4.17) be valid.
Then, at the points of invertibility of $S$, the functions $\widetilde{H}=\widetilde{\beta}^{*}\widetilde{\beta}$ and $\widetilde{v}=v-2i\Phi^{*}S^{-1}\Psi$, where $$\widetilde{\beta}=[\widetilde{\beta}_{1}\quad\widetilde{\beta}_{2}]=\beta(I_{2}+ij\Pi^{*}(A^{*})^{-1}S^{-1}\Pi),$$ satisfy system (2.5). In particular, the functions $\widetilde{u}_{1}=\sqrt{\overline{\widetilde{\beta}_{1}}\widetilde{\beta}_{2}}$ and $\widetilde{u}_{2}=u_{2}+\Phi^{*}S^{-1}\Psi$ satisfy SHG in the domains where $S$ is invertible and the branch $\sqrt{\overline{\widetilde{\beta}_{1}}\widetilde{\beta}_{2}}$ is continuously differentiable. Notice further that, choosing $S(0,0)>0$, in view of (5.6) and (5.7) we get $$S(x,t)>0\quad(x\geq 0,\quad t\leq 0),$$ (5.9) i.e., for $x\geq 0$, $t\leq 0$ the matrix function $S$ is invertible. In this case we also obtain explicitly the Weyl functions $\varphi(t,z)$ of the systems $$\frac{d}{dx}\widetilde{W}(x,t,z)=i\big(zj+j\left[\begin{array}{lr}0&\widetilde{v}(x,t)\\ \widetilde{v}(x,t)^{*}&0\end{array}\right]\big)\widetilde{W}(x,t,z)\quad(x\geq 0).$$ (5.10) Proposition 5.3 Suppose the conditions of Proposition 5.2 are satisfied and $S(0,0)>0$. Then the Weyl functions of systems (5.10) are given by the equality $$\varphi(t,z)=i\frac{\Omega_{2}(t,z)}{\Omega_{1}(t,z)}\quad(t\leq 0),$$ (5.11) where the functions $\Omega_{1}$ and $\Omega_{2}$ have the form $$\Omega(t,z)=\left[\begin{array}{c}\Omega_{1}(t,z)\\ \Omega_{2}(t,z)\end{array}\right]:=Kw_{A}(0,t,z)e^{idtj}Z\left[\begin{array}{c}1\\ 0\end{array}\right],\quad Z:=\left[\begin{array}{lr}1&1\\ g_{1}(z)&g_{2}(z)\end{array}\right],$$ (5.12) $$g_{1}(z):=db^{-2}(z-c-h(z)),\quad g_{2}(z):=db^{-2}(z-c+h(z)),$$ (5.13) $$h(z):=\sqrt{z(z-2c)}\quad(z\in{\mathbb{C}}_{+},\quad\Im h>0).$$ (5.14) P r o o f . According to Definition 2.1, it suffices to show that $\widetilde{W}(x,t,z)K^{*}\Omega(t,z)\in L^{2}_{2}(0,\,\infty)$ (i.e., is square-summable) and that $\Omega_{1}\not=0$.
For this purpose, notice that by Theorem 4.1 the normalized fundamental solution $\widetilde{W}$ ($\widetilde{W}(0,t,z)=I_{2}$) admits the representation $$\widetilde{W}(x,t,z)=w_{A}(x,t,z)W(x,t,z)w_{A}(0,t,z)^{-1},$$ (5.15) where $W$ is the normalized solution of the initial system $W_{x}=i(zj+jV)W$ with $v$ as in (5.1). One can compute directly that $$W(x,t,z)=e^{i(cx+dt)j}Ze^{ih(z)xj}Z^{-1}e^{-idtj}.$$ (5.16) Therefore, in view of (5.12), (5.15), and (5.16), we have $$\widetilde{W}(x,t,z)K^{*}\Omega(t,z)=e^{ih(z)x}w_{A}(x,t,z)e^{i(cx+dt)j}\left[\begin{array}{c}1\\ g_{1}(z)\end{array}\right].$$ (5.17) From (4.19) and (5.9) it follows that $$\frac{d}{dx}S(x,t)^{-1}=-S(x,t)^{-1}\Pi(x,t)\Pi(x,t)^{*}S(x,t)^{-1}.$$ (5.18) Hence we obtain $$\int_{0}^{\infty}S(x,t)^{-1}\Pi(x,t)\Pi(x,t)^{*}S(x,t)^{-1}dx\leq S(0,t)^{-1},$$ (5.19) i.e., the columns of $\Pi^{*}S^{-1}$ belong to $L^{2}_{2}$. Since, according to (5.14), we have $$h(z)-(z-c)=-c^{2}\big(z-c+h(z)\big)^{-1},$$ taking (5.4) and (5.5) into account one can see that $$\lim_{x\to\infty}e^{ih(z)x}\Pi(x,t)=0,\quad\Im\,z>\max\,(c^{2},\,||Q_{1}||+1).$$ (5.20) Using the definition of $w_{A}$, (5.19), and (5.20), we derive that the column on the right-hand side of (5.17) is square-summable. Thus we have shown that $\widetilde{W}(x,t,z)K^{*}\Omega(t,z)\in L^{2}_{2}(0,\,\infty)$, and it remains to prove that $\Omega_{1}\not=0$. From (4.24) it easily follows [33, 34] that $$w_{A}(x,t,z)^{*}jw_{A}(x,t,z)=$$ $$j-i(z-\overline{z})\Pi(x,t)^{*}(A^{*}-\overline{z}I_{n})^{-1}S(x,t)^{-1}(A-zI_{n})^{-1}\Pi(x,t).$$ (5.21) By (5.9) and (5.21) we have $$w_{A}(x,t,z)^{*}jw_{A}(x,t,z)\geq j\quad(z\in{\mathbb{C}}_{+}).$$ (5.22) Notice now that for sufficiently large $\Im z$ the inequality $$[1\quad 0]Z^{*}e^{-idtj}je^{idtj}Z\left[\begin{array}{c}1\\ 0\end{array}\right]>0$$ (5.23) is true. Thus, in view of (5.12), (5.22), and (5.23), it follows that $\Omega^{*}J\Omega>0$, and so $\Omega_{1}\not=0$.
Therefore equality (5.11) is true for all $z$ with sufficiently large imaginary part and hence for all $z\in{\mathbb{C}}_{+}$. $\blacksquare$ References [1] Ablowitz, M.J., Segur, H.: Solitons and the inverse scattering transform. Philadelphia: SIAM Stud. Appl. Math. 4, 1981 [2] Akhmanov, S.A., Vysloukh, V.A., Chirkin, A.S.: Optics of femtosecond laser pulses. New York: American Institute of Physics, 1992 [3] Bass, F.G., Sinitsyn, V.G.: Ukr. Fiz. Zh. 17, 124 (1972) [4] Berezanskij, Yu.M.: The integration of semi-infinite Toda chain by means of inverse spectral problem. Rep. Math. Phys. 24:1, 21-47 (1986) [5] Clark, S., Gesztesy, F.: Weyl-Titchmarsh $M$-function asymptotics, local uniqueness results, trace formulas, and Borg-type theorems for Dirac operators. Trans. Amer. Math. Soc. 354, 3475-3534 (2002) [6] Faddeev, L.D., Takhtajan, L.A.: Hamiltonian methods in the theory of solitons. Springer, 1986 [7] Fokas, A.S., Its, A.R.: Soliton generation for initial-boundary-value problem. Phys. Rev. Letters 68, 3117-3120 (1992) [8] Gesztesy, F., Simon, B.: On local Borg-Marchenko uniqueness results. Commun. Math. Phys. 211, 273-287 (2000) [9] Glenn, W.H.: Second-harmonic generation by picosecond optical pulses. IEEE J. Quantum Electron. QE-5, 284-290 (1969) [10] Gohberg, I., Kaashoek, M.A., Sakhnovich, A.L.: Scattering problems for a canonical system with a pseudo-exponential potential. Asymptotic Analysis 29:1, 1-38 (2002) [11] Kac, M., van Moerbeke, P.: A complete solution of the periodic Toda problem. Proc. Natl. Acad. Sci. USA 72, 2879-2880 (1975) [12] Kaup, D.J.: Simple harmonic generation: an exact method of solution. Stud. Appl. Math. 59, 25-35 (1978) [13] Kaup, D.J., Newell, A.C.: The Goursat and Cauchy problems for the sine-Gordon equation. SIAM J. Appl. Math. 34, 37-54 (1978) [14] Kaup, D.J., Steudel, H.: Virtual solitons and the asymptotics of second harmonic generation. Inverse Probl.
17:4, 959-970 (2001) [15] Kaup, D.J., Steudel, H.: Recent results on second harmonic generation. Contemporary Math. 326, 33-48 (2003) [16] Khusnutdinova, K.R., Steudel, H.: Second harmonic generation: Hamiltonian structures and particular solutions. J. Math. Phys. 39, 3754-3764 (1998) [17] Kiselev, O.M.: Solution of the Goursat problem for the Maxwell-Bloch system. Theor. Math. Phys. 98:1, 20-26 (1994) [18] Krichever, I.M.: An analogue of d'Alembert's formula for the equations of the principal chiral field and for the sine-Gordon equation. Sov. Math. Dokl. 22, 79-84 (1980) [19] Liouville, J.: J. Math. Pure Appl. 18, 71 (1853) [20] Marchenko, V.A.: Nonlinear equations and operator algebras. Dordrecht: Reidel, 1988 [21] Matveev, V.B., Salle, M.A.: Darboux transformations and solitons. Springer, 1991 [22] Miura, R. (ed.): Bäcklund Transformations. Lecture Notes in Math. 515, Springer, 1976 [23] Sabatier, P.C.: Elbow scattering and inverse scattering applications to LKdV and KdV. J. Math. Phys. 41:1, 414-436 (2000) [24] Sakhnovich, A.L.: The Goursat problem for the sine-Gordon equation and the inverse spectral problem. Russ. Math. Iz. VUZ 36:11, 42-52 (1992) [25] Sakhnovich, A.L.: Exact solutions of nonlinear equations and the method of operator identities. Lin. Alg. Appl. 182, 109-126 (1993) [26] Sakhnovich, A.L.: Dressing procedure for solutions of nonlinear equations and the method of operator identities. Inverse Problems 10, 699-710 (1994) [27] Sakhnovich, A.L.: Iterated Bäcklund-Darboux transformation and transfer matrix-function (nonisospectral case). Chaos, Solitons and Fractals 7:8, 1251-1259 (1996) [28] Sakhnovich, A.L.: Generalized Bäcklund-Darboux transformation: spectral properties and nonlinear equations. J. Math. Anal. Appl. 262, 274-306 (2001) [29] Sakhnovich, A.L.: Dirac type and canonical systems: spectral and Weyl-Titchmarsh functions, direct and inverse problems. Inverse Probl.
18, 331-348 (2002) [30] Sakhnovich, A.L.: Dirac type system on the axis: explicit formulas for matrix potentials with singularities and soliton-positon interactions. Inverse Problems 19, 845-855 (2003) [31] Sakhnovich, A.L.: Non-Hermitian matrix Schrödinger equation: Bäcklund-Darboux transformation, Weyl functions, and ${\cal{PT}}$ symmetry. J. Phys. A 36, 7789-7803 (2003) [32] Sakhnovich, L.A.: Integrable nonlinear equations on the semi-axis. Ukrainian Math. J. 43:11, 1470-1476 (1991) [33] Sakhnovich, L.A.: Interpolation theory and its applications. Dordrecht: Kluwer Academic Publishers, 1997 [34] Sakhnovich, L.A.: Spectral theory of canonical differential systems, method of operator identities. OT 107, Basel: Birkhäuser, 1999 [35] Simon, B.: A new approach to inverse spectral theory, I. Fundamental formalism. Ann. of Math. 150, 1029-1057 (1999) [36] Sklyanin, E.K.: Boundary conditions for integrable equations. Funct. Anal. Appl. 21, 164-166 (1987) [37] Steudel, H., de Morisson Faria, C.F., Kamchatnov, A.M., Paris, M.: The inverse problem for second harmonic generation with an amplitude-modulated initial pulse. Phys. Lett. A 276, 267-271 (2000) [38] Steudel, H., Meinel, R.: Periodic solutions generated by Bäcklund transformations. Physica D 21, 155-162 (1986) [39] Zakharov, V.E., Mikhailov, A.V.: On the integrability of classical spinor models in two-dimensional space-time. Comm. Math. Phys. 74, 21-40 (1980) [40] Zhiber, A.V.: The complete description of the Lie-Bäcklund algebra for equations of the generation of the second harmonic (Russian). Dinamika Neodnorodnoj Zhidkosti, Din. Sploshnoj Sredy 44, 3-14 (1980)